CaraComp Podcast

Deepfake Laws Keep Failing in Court—And Your Image Evidence Faces New Scrutiny

This episode is based on our article: Deepfake Laws Keep Failing in Court—And Your Image Evidence Faces New Scrutiny

Full Episode Transcript


On March 31, 2026, the Eighth Circuit refused to rehear a challenge to Minnesota's deepfake election law. That makes three states in a row (Minnesota, Hawaii, and California) where federal courts have either struck down or blocked broad deepfake statutes. Nearly every state in the country has introduced at least one deepfake bill. And courts keep tearing them apart.



If you work in investigations, forensics, or any field where image evidence matters, this pattern hits your desk directly. The story centers on Minnesota State Representative Mary Franson and a digital creator named Christopher Kohls. Kohls made a parody video featuring former Vice President Kamala Harris and challenged Minnesota's law criminalizing A.I.-generated election deepfakes on First Amendment grounds. A three-judge panel of the Eighth Circuit tossed the suit, holding that Kohls lacked standing. Then the full court declined to revisit it. So what happens when legislatures keep passing deepfake laws and courts keep blocking them, and your evidence lands in that gap?

Start with what the courts are actually saying. In Hawaii, a federal judge permanently blocked the state's Act 191, which banned certain digitally altered election content. The judge ruled it violated the First Amendment outright and handed a sweeping win to satirists and political commentators. In California, Judge John Mendez struck down A.B. 2655 on August 5, 2025, finding it ran headfirst into Section 230 of the Communications Decency Act. In every case, courts reached the same conclusion: these laws failed what's called the narrow-tailoring requirement. Meaning: states didn't prove they'd chosen the least restrictive way to solve the problem.

Why does that keep happening? Because deepfakes are, at their core, falsehoods. And without separate criminal conduct attached, falsehoods get First Amendment protection. Courts have long held that letting the government decide what's true and what's false would gut free speech entirely. That doesn't mean deepfakes are untouchable. If a creator doesn't own the images, copyright law applies. State privacy and publicity statutes bar unauthorized use of someone's name or likeness. Defamation law still works. The constitutional line runs between what someone created and what harm it caused — not whether A.I. was involved.

Now shift to the evidence side, because this is where it gets practical. According to legal analysts at Jones Walker, no foolproof method currently exists to classify text, audio, video, or images as definitively authentic or A.I.-generated. Three approaches have emerged — technical forensic experts using machine learning, procedural review processes, and evolving court rules around authentication. Researchers at the University of Illinois Chicago Law Library have proposed a specific evidentiary framework. Under that framework, an opposing party can't just claim something's a deepfake and trigger a full inquiry. They'd need to present preliminary evidence suggesting manipulation first. And if they do clear that bar, the side offering the evidence must prove authenticity at a higher standard than the usual baseline.


The Bottom Line

What does that mean for an investigator walking into court? The old approach — manual comparison, professional judgment, "I could tell something looked off" — no longer insulates your evidence from attack. Any case involving alleged image alteration now needs a documented chain of forensic methodology. How did you establish authenticity? What technical basis supports your comparison? Can you distinguish alleged manipulation from parody or artistic expression? Courts are already holding pretrial hearings specifically to resolve authenticity disputes and requiring expert testimony for deepfake allegations.

The real gap isn't between real and fake images. It's between what existing law can already punish — fraud, defamation, copyright violation, privacy invasion — and what legislatures keep trying to ban, which is the content of speech itself. Courts have found that seam, and they're not backing off it.

So the short version: almost every state has tried to outlaw election deepfakes. Courts in Minnesota, Hawaii, and California have blocked those laws because they restrict speech too broadly. And for anyone who handles image evidence professionally, the authentication bar just got higher — documented methods, not gut calls. Some scholars argue a narrow federal statute could thread the constitutional needle where state laws haven't, especially for protecting elections before damage spreads. That debate is far from over. The full story's in the description if you want the deep dive.
