CaraComp Podcast

China Made Creating a Deepfake the Crime — Not Sharing It. U.S. Courts Are Already Following.


This episode is based on our article of the same title.

Full Episode Transcript


China's internet regulator just did something no Western government has tried. On 04-03-2026, Beijing published draft rules that make creating a digital copy of someone's face or voice — without their explicit consent — the violation. Not sharing it. Not using it to deceive. Just making it.



That's a fundamental inversion of how the U.S. and Europe have approached synthetic media. Western laws mostly chase the harm after it happens — the fraud, the nonconsensual image, the courtroom lie. China's framework says the harm starts the moment someone builds a digital version of you without asking. And if you've ever posted a photo online, ever been on a video call, ever walked past a security camera — your face is raw material for that kind of creation right now. The Cyberspace Administration of China's new draft requires what they call explicit informed consent before anyone generates an A.I. "digital human" using identifiable traits of a real person. The rules also specifically ban using these synthetic personas to bypass facial recognition, voice recognition, or any other biometric authentication system. So the question running through this story is straightforward. If consent at the point of creation becomes the global standard — and there are signs it's heading that way — what breaks in the systems we already rely on?

Start with what's happening in American courtrooms. Judges are now facing a problem from two directions at once. One side introduces a video as proof, and the other side says it's a deepfake. Or one side submits A.I.-generated evidence hoping nobody catches it. Both scenarios demand forensic validation, and both erode something courts can't function without — trust in what's presented as real. According to analysis from the Berkeley Technology Law Journal, judges are responding inconsistently to deepfake allegations. There's no uniform standard yet for how to handle it. And right now, no method exists that can definitively classify a piece of audio, video, or imagery as authentic or A.I.-generated. None. That's not a fringe opinion — it's the current state of the science.

Louisiana stepped into that gap. The state passed H.B. 178, which now requires attorneys to exercise what the statute calls "reasonable diligence" to verify that evidence is authentic before offering it in court. That shifts the burden. Before, you could introduce a video and let the other side challenge it. Now, the lawyer bringing the evidence has to do the verification work upfront. For anyone who's ever been part of a legal dispute — a custody case, an insurance claim, a workplace complaint — this changes what counts as proof. The video on your phone might not be enough anymore. Someone may need to prove it's real before a judge will even look at it.
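
The statute doesn't spell out what "reasonable diligence" looks like in practice, but one building block investigators already rely on is fixing a file's cryptographic fingerprint at intake, so there's proof that the file later offered to the court is byte-for-byte the file that was examined. Here's a minimal sketch in Python; the file name and record layout are hypothetical illustrations, not anything H.B. 178 prescribes:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def fingerprint_evidence(path: str) -> dict:
    """Record a SHA-256 hash and basic metadata for a media file.

    A hash taken at intake can't prove the footage is authentic, but it
    does prove the file shown in court is the same one that was examined.
    """
    file = Path(path)
    digest = hashlib.sha256(file.read_bytes()).hexdigest()
    return {
        "file": file.name,
        "sha256": digest,
        "size_bytes": file.stat().st_size,
        "hashed_at_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    print(json.dumps(fingerprint_evidence("incident_video.mp4"), indent=2))
```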



Now layer China's consent framework on top of that. The draft rules don't just regulate deception — they regulate likeness creation without permission. Both the E.U.'s G.D.P.R. and China's Personal Information Protection Law already classify biometric data — faces, voices — as sensitive personal information. That means using it requires legal justification or explicit consent. China's new rules extend that principle specifically to A.I.-generated digital humans. If someone builds a synthetic avatar that looks like you, sounds like you, and moves like you — and they didn't ask first — that's the violation. They don't have to do anything else with it.

For investigators who run facial comparisons as part of their work, this creates a documentation problem that didn't exist two years ago. It's no longer enough to record the result of a match — where the face showed up, what the similarity score was. The sourcing of every image matters now. Was the photo provided by the client? Pulled from a public database? Was consent obtained? That creates what amounts to a dual evidence trail — one proving the investigation was legitimate, and a second proving the evidence is admissible in court. For the rest of us, it means the next time someone's face gets used in a scam video or a fake endorsement, the legal question won't just be "who spread it?" It'll be "who made it — and did anyone say they could?"
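
One way to picture that dual trail is a single record that keeps the comparison result and the provenance of each image side by side. A minimal sketch, assuming a Python workflow; every field name and value below is a hypothetical illustration, not a CaraComp format or a statutory schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class ImageProvenance:
    """The second trail: where an image came from and what consent covers it."""
    image_id: str
    source: str            # e.g. "client-provided" or "public database"
    consent_obtained: bool
    consent_basis: str     # e.g. "written release", "statutory exemption", "none"
    notes: str = ""

@dataclass
class ComparisonRecord:
    """The first trail: the match itself, stored alongside its sourcing."""
    probe: ImageProvenance
    candidate: ImageProvenance
    similarity_score: float
    examiner: str

# Hypothetical example entries, for illustration only.
record = ComparisonRecord(
    probe=ImageProvenance("IMG-001", "client-provided", True, "written release"),
    candidate=ImageProvenance(
        "IMG-002", "public database", False, "none",
        notes="flag for counsel: consent status unresolved",
    ),
    similarity_score=0.87,
    examiner="J. Doe",
)
print(asdict(record))
```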

There's a wrinkle worth noting. According to reporting from Biometric Update, the same Chinese draft rules also prohibit using digital human services to generate content that "endangers national security" or "incites subversion of state sovereignty." That language reaches well past biometric consent into political speech control. Critics argue the entire framework may function more as a surveillance tool dressed up as privacy protection. And for investigators operating under Western legal systems, Chinese regulatory models don't transplant cleanly. U.S. courts still apply traditional authentication standards rooted in the Federal Rules of Evidence. The real compliance pressure in America is coming from state-level deepfake laws — like Louisiana's — not from Beijing.


The Bottom Line

But the direction is unmistakable. The regulatory world is moving toward treating creation — not distribution — as the moment liability begins. And most people working with facial comparison or digital evidence haven't updated their workflows to reflect that shift.

So — plain and simple. China just proposed rules that make building a digital copy of someone's face, without their permission, the crime. U.S. states like Louisiana are already pushing in the same direction by requiring lawyers to verify evidence is real before a court will hear it. And no technology can yet guarantee that verification. Whether you're documenting a case or just wondering if the video you saw online is real, the same question applies — who made this, and did anyone give them the right to? The full story's in the description if you want the deep dive.
