
One Missing Consent Record Could Kill Your AI Avatar Business in China

A video posted to Weibo racked up 90 million views. In it, an elderly woman was deceived by an AI-generated avatar of her deceased son — convincing enough that she believed, at least for a moment, that he was still alive. That single clip did more to accelerate China's AI avatar regulations than years of policy debate. And on April 3, 2026, the Cyberspace Administration of China dropped its draft rules for AI-generated virtual humans — a document that doesn't just address deepfake abuse. It fundamentally reframes what the legal risk actually is.

TL;DR

China's draft AI avatar rules move the deepfake conversation from "Can we detect the fake?" to "Can you prove the person consented?" — and that single shift turns consent documentation into the highest-stakes compliance asset in identity work.

The question regulators are now asking isn't whether your AI avatar looks real. They don't particularly care how good the model is. What they want to see is a consent record — a documented chain of evidence proving that the real human whose face, voice, or likeness was used actually said yes. For investigators, compliance teams, and anyone working in identity verification, this is the shift that matters. The technical bottleneck in deepfake cases has moved. It used to be detection. Now it's documentation.


From "Is It Fake?" to "Did They Agree?"

Here's what the draft rules actually say, stripped of bureaucratic language: you cannot create a digital human using another person's personal information — face, voice, biometric data, any of it — without their explicit consent. Not implied consent. Not assumed consent because someone posted a photo publicly. Explicit, documented, separately obtained consent. Biometric Update reports that the draft specifically classifies biometric data as sensitive personal information, which under China's Personal Information Protection Law requires its own separate consent layer — not bundled into a terms-of-service checkbox somewhere.

That's a meaningful legal distinction. Biometric data isn't just "personal information" with a slightly different label — it's a protected category requiring affirmative, standalone authorization. If you trained an avatar on someone's face without obtaining that specific consent, you're not in a gray area. You're in violation. Full stop.

And the enforcement isn't hypothetical. A Shanghai CAC investigation — cited in legal analysis by the International Comparative Legal Guides — found that a website had cloned individuals' voiceprints and provided voice synthesis services without consent. The action resulted in enforcement under both privacy law and deepfake regulations simultaneously. Two frameworks, one case. The lesson for compliance teams is that these rules don't operate in isolation — they stack.

¥200,000
Maximum fine under China's draft AI avatar rules for consent violations — roughly $29,300 USD — with the real cost being mandatory erasure of all source material and avatar deregistration
Source: CAC Draft Regulations, April 2026 / TechJuice

The Compliance Trap Nobody Is Talking About

The fine range — 10,000 to 200,000 yuan ($1,460 to $29,300) — sounds manageable until you read the operational requirements attached to it. According to TechJuice, if consent is withdrawn after an avatar has been created, providers are legally required to erase all source material and deregister the avatar entirely. This isn't a one-time authorization situation. Consent is an ongoing obligation — and its revocation triggers an irreversible compliance action.

Think about what that means operationally. A company builds a customer service avatar using a real person's likeness. Eighteen months in, that person revokes consent. The company must now delete the training data, retire the avatar, and document that it did so. Every iteration, every refinement, every cached version of the model potentially needs to go. That's not a compliance checkbox — that's a workflow redesign.
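The withdrawal workflow described above can be sketched as a single transactional step: erase the source material, deregister the avatar, and document that both happened. This is a hypothetical illustration — the function and field names are assumptions, not terms from the regulation:

```python
from datetime import datetime, timezone

def handle_consent_withdrawal(avatar_id: str,
                              source_assets: set[str],
                              registry: dict[str, str],
                              audit_log: list[dict]) -> dict:
    """Erase source material, deregister the avatar, and document the action."""
    erased = sorted(source_assets)
    source_assets.clear()               # delete all training/source material
    registry.pop(avatar_id, None)       # deregister the avatar entirely
    entry = {
        "event": "consent_withdrawn",
        "avatar_id": avatar_id,
        "erased_assets": erased,
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)             # the proof that erasure actually happened
    return entry
```

The audit entry is the point: without a documented record of the erasure, the company cannot later prove it complied.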

"The draft regulations explicitly extend consent requirements to the use of sensitive personal information in digital-human modeling, image generation, and scene construction — meaning the consent obligation attaches not just to the final product, but to every stage of the creation process." — Analysis of China's layered consent architecture, China Meta Guide

Read that slowly. Consent doesn't just cover the finished avatar — it covers modeling, image generation, and scene construction. Which means if your avatar development process involves iterative testing, feedback loops, and model refinement (and of course it does), each of those stages theoretically falls under the consent umbrella. Legal teams are going to have heated arguments about how narrowly or broadly to interpret this language. Startups will try to interpret narrowly. Regulators, when it matters, will interpret broadly.

Why This Matters Beyond China

  • Consent as the universal metric — According to Ondato's analysis of global deepfake frameworks, consent has emerged as the common enforcement thread across EU, UK, China, and US jurisdictions — suggesting China isn't an outlier, it's ahead of the curve
  • 📊 Documentation becomes the critical evidence artifact — In fraud investigations involving AI avatars, investigators will now need to demonstrate consent chains — signed agreements, timestamps, audit logs — not just technical proof that a face was synthesized
  • 🔮 The liability shift hits legitimate operators hardest — Bad actors never had consent records to begin with; the operational burden falls on compliant businesses who now must build and maintain documentation infrastructure they never needed before
  • 🧩 US parallels are forming fast — GeoPolitechs notes that companion AI laws in New York and California are moving in the same direction, meaning multinational operators face a convergent consent standard, not jurisdiction-shopping opportunities
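One way the audit logs in the second bullet could be made tamper-evident is a hash chain, where each consent event is hashed together with the previous entry's hash. This is a common integrity pattern, sketched here as an assumption rather than anything the draft rules mandate:

```python
import hashlib
import json

def chain_hash(prev_hash: str, event: dict) -> str:
    # Canonical serialization so the same event always hashes identically
    payload = prev_hash + json.dumps(event, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_event(events: list[dict], event: dict) -> None:
    prev = events[-1]["hash"] if events else "genesis"
    events.append({"event": event, "hash": chain_hash(prev, event)})

def verify_chain(events: list[dict]) -> bool:
    """An investigator can recompute every link; editing any entry breaks the chain."""
    prev = "genesis"
    for entry in events:
        if entry["hash"] != chain_hash(prev, entry["event"]):
            return False
        prev = entry["hash"]
    return True
```

Any after-the-fact edit to an earlier consent event invalidates every subsequent hash, which is exactly the property a consent trail offered as evidence needs.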


What This Means If You're Investigating a Fraud Case

Here's where it gets genuinely interesting for identity professionals. Suppose an investigator uncovers a fraudulent AI avatar — say, a synthetic face used to deceive victims in a financial scam. Under the new framework, the absence of a consent record isn't just evidence that the avatar was unauthorized. It's the primary liability artifact in the case. The fraudster didn't have consent. That's provable. That's prosecutable under the draft rules, not just under general fraud statutes.

But here's the flip side that nobody's quite addressing yet: if a legitimate business built an avatar and kept sloppy consent records, an investigator reviewing that avatar can't distinguish between "no consent was ever obtained" and "consent was obtained but not documented." The avatar looks the same either way. The face-matching capability — confirming that the avatar corresponds to a real, identifiable person — tells you who was used. But only the consent trail tells you whether it was legal. Facial comparison tools can establish the identity link; they can't manufacture the paper trail that determines whether that link was authorized.

This is the operational inflection point. The technical question ("Is this a real person's face?") gets answered by matching technology. The legal question ("Did this person agree to this?") gets answered by compliance infrastructure. Those are two different problems, requiring two different systems, and most organizations have only built one of them.

Look, nobody's saying this is simple. The draft rules create real friction for legitimate use cases — customer service avatars, public figures authorizing promotional content, streamlined onboarding in e-commerce. China's AI avatar market exploded precisely because the technology lowered costs and expanded access. Regulating consent at this level of granularity will slow that down. Some of the friction is intentional. The 90-million-view Weibo video wasn't an edge case — it was a preview of where abuse was heading if no guardrails were set.


The Standard That's Coming for Everyone

China's public comment period runs through May 6, 2026. These are draft rules, not final law. Adjustments are likely — particularly around the iterative consent question, which compliance teams have already flagged as unworkable at scale. But the direction is set. Consent documentation is the metric regulators have landed on, and that won't change in revision. What will change are the operational specifics: how granular the documentation needs to be, what constitutes valid ongoing authorization, whether a single consent record covers downstream model refinements.

The broader pattern is clear when you zoom out. Ondato's analysis of global deepfake law frameworks shows that every major jurisdiction converging on this issue is landing in the same place: the question of whether an AI-generated likeness is legal turns not on its technical properties, but on the existence and adequacy of authorization records. The EU is heading there. The UK is heading there. New York and California are already there in draft form. China just published the most detailed version of what "there" actually looks like operationally.

Key Takeaway

The deepfake compliance problem is no longer primarily a detection problem — it's a documentation problem. Organizations that can prove consent was obtained, recorded, and maintained will survive regulatory scrutiny. Organizations that cannot, regardless of how technically sound their avatar systems are, will not. One missing record is all it takes.

For investigators and identity professionals, the practical implication is straightforward: build for both. Matching capability tells you what face was used. Consent infrastructure tells you whether using it was legal. The forensic question and the compliance question are now inseparable — and neither one answers the other.

The elderly woman on Weibo eventually realized the avatar wasn't her son. Ninety million people watched that moment. The question isn't whether China's regulators overreacted. The question is whether your consent records are solid enough that when an investigator comes looking, they find documentation — not a gap where authorization should have been.
