CaraComp

64 Deepfake Laws Passed — And Investigators Still Can't Prove What's Real in Court


This episode is based on our article:

Read the full article →

Full Episode Transcript


In January, the U.S. Senate passed the DEFIANCE Act unanimously. That law lets victims of nonconsensual deepfakes sue creators for a minimum of a hundred and fifty thousand dollars in damages. And yet — not a single provision in that law helps an investigator prove whether a piece of evidence is real.


If you work in law enforcement, legal compliance, or digital forensics, this gap lands squarely on your desk. Sixty-four deepfake laws were adopted across the U.S. in 2025 alone — up from fifty-two the year before. According to the Digital Watch Observatory, the vast majority of deepfake content online is explicit material that overwhelmingly targets women and girls. More than a hundred and fifty active channels on Telegram right now offer A.I.-generated nude images of both celebrities and ordinary people — often for a fee. Minnesota lawmakers are pushing a bill to outright ban nudification apps and websites that create deepfakes without consent. So the question running through all of this: if legislatures are criminalizing deepfake creation and distribution at record speed, who's responsible for proving which images in a case file are authentic — and how?

Start with what happened around Grok, the A.I. tool built into X. Users discovered they could prompt Grok to produce nonconsensual sexualized images — including images depicting minors. The U.K.'s media regulator, Ofcom, launched a formal investigation. Malaysia, Indonesia, and the Philippines went further — they blocked Grok entirely, citing child protection and obscenity concerns. That's three nations cutting off access to a major platform's A.I. tool because the safeguards failed.

Now zoom out. According to the Digital Watch Observatory, sixty-one global privacy authorities signed a joint declaration warning that nonconsensual A.I. images pose a worldwide risk. But enforcement hits a wall almost immediately. A deepfake app can be developed in one country, hosted on servers in a second country, and used by someone in a third. Each jurisdiction has different legal rules for takedowns and criminal prosecution. So even when a law exists, the content often sits outside any single authority's reach.



What about platforms themselves? The Take It Down Act sets a hard deadline — May 19, 2026 — for every covered platform to build a notice-and-removal process for nonconsensual intimate imagery, including deepfakes. Platforms get just forty-eight hours to remove flagged content once notified. That sounds decisive. But once a manipulated image gets published, it replicates across networks so quickly that removal becomes a game of whack-a-mole. The original may come down. The copies don't.

And that's where investigators feel the real pressure. If the DEFIANCE Act's damages can reach two hundred and fifty thousand dollars when linked to sexual assault, opposing counsel has every incentive to challenge the authenticity of facial comparison evidence. "How do you know this image wasn't generated by A.I.?" That question didn't come up in courtrooms five years ago. It comes up now. The laws criminalize making and sharing deepfakes — they don't give investigators a standard for certifying that a piece of digital evidence is genuine. Proof of origin, timestamp integrity, biometric grounding — those burdens fall on the person presenting the evidence, not on the legislature that wrote the statute.
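To make "proof of origin" and "timestamp integrity" concrete, here is a minimal sketch of what an evidence-intake record might look like, using only the Python standard library. The function names, field names, and record format are illustrative assumptions, not any specific forensic tool's API:

```python
# Illustrative sketch: fix a cryptographic digest and a capture timestamp
# at intake, so any later alteration of the evidence is detectable.
# Field names and the record layout are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def intake_record(data: bytes, source: str) -> dict:
    """Return a custody record for a piece of digital evidence."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),        # fixed at intake
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "source": source,
    }

def verify(data: bytes, record: dict) -> bool:
    """True only if the bytes still match the digest taken at intake."""
    return hashlib.sha256(data).hexdigest() == record["sha256"]

image = b"...raw image bytes..."
record = intake_record(image, source="case-file upload")
print(json.dumps(record, indent=2))
print(verify(image, record))         # unmodified bytes -> True
print(verify(image + b"x", record))  # any alteration   -> False
```

A record like this answers "has this file changed since we logged it?" — it cannot, by itself, answer the harder courtroom question of whether the image was authentic before intake.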

Meanwhile, biometric age verification is expanding fast — the U.K. requires it for adult websites, and Brazil and South Korea are rolling out similar systems. Those programs will collect millions of facial scans this year. Every one of those scans is both a verification tool and a potential training dataset for the next generation of deepfake models.


The Bottom Line

The legislation isn't solving the evidence problem. It's solving the liability problem. Investigators still have to answer a different question entirely — not "is this illegal?" but "is this real?"

So — plain and simple. Governments passed more than sixty deepfake laws last year, giving victims new ways to sue and forcing platforms to take content down fast. But none of those laws tell an investigator how to prove a photo or video is authentic when it lands on a judge's desk. That authentication gap is the next battleground — and it's already open. The written version goes deeper — link's below.

Ready for forensic-grade facial comparison?

2 free comparisons with full forensic reports. Results in seconds.

Run My First Search