Deepfake Laws Keep Failing in Court—And Your Image Evidence Faces New Scrutiny
On March 31, 2026, a US appeals court quietly closed the door on Minnesota state Representative Mary Franson and conservative content creator Christopher Kohls. Their petition for a rehearing against Minnesota's law criminalizing AI-generated election deepfakes? Denied. But here's the part that should get your attention: the court didn't rule on the merits. A three-judge Eighth Circuit panel had dismissed the suit on standing grounds—which means the underlying constitutional fight is still wide open, and the legal ground beneath every image-heavy investigation is actively shifting.
Courts are systematically striking down broad deepfake statutes on First Amendment grounds—which means any case involving manipulated images now demands documented forensic methodology from day one, not gut instinct.
This Minnesota ruling doesn't stand alone. It's the latest data point in a pattern that should alarm anyone who builds cases around image authentication. Federal judges have already blocked deepfake laws in Hawaii and California in the past year alone. The legislative tide is real—according to a Scholarly Publishing Collective analysis of 319 state deepfake bills introduced between 2019 and 2024, 48 of 50 US states have introduced or enacted at least one deepfake bill. But passing a law and having it survive a First Amendment challenge are two very different things.
The Constitutional Seam Courts Are Cutting Along
Here's the core tension: a deepfake is, at its most basic level, a lie constructed with code. And as the First Amendment Encyclopedia at MTSU lays out plainly, lies are generally protected speech. The logic goes that handing government the authority to determine truth from fiction in public discourse would gut the First Amendment almost entirely.
So where's the line? Courts are drawing it between what was created and what harm it actually causes. A parody video of a politician—even a technically convincing one—is protected expression. A deepfake that meets the legal threshold for defamation, fraud, copyright infringement, or non-consensual intimate imagery can be pursued. But a sweeping content-based ban that says "you may not create synthetic political content"? That's where judges keep finding the constitutional flaw.
In January 2026, a federal judge permanently blocked Hawaii's Act 191, which had banned certain digitally altered election content, ruling it violated the First Amendment and delivering a significant win for satirists and political commentators. Before that, in August 2025, a California federal judge struck down AB 2655, citing both First Amendment concerns and federal preemption under Section 230 of the Communications Decency Act. Each ruling follows the same logic: states have not demonstrated they chose the least restrictive means to achieve their regulatory goals.
"Deepfakes are essentially lies, which, without criminal behavior, are protected as free speech. Falsehoods are protected in part because giving the government the authority to determine truth or falsity would largely negate freedom of speech." — First Amendment Encyclopedia, MTSU
Note that this isn't a fringe academic argument. It's the reasoning federal judges are actually applying. Kohls, you may recall, gained national attention for a parody video featuring Kamala Harris. His case—and Franson's legislative challenge—got tangled on standing before courts even reached the First Amendment merits. That's almost beside the point. The trajectory is clear.
What This Means for Investigators Right Now
Let's get practical—because the legal turbulence above has direct consequences for anyone building a case around image evidence. Courts are not just striking down laws; they're simultaneously tightening evidentiary standards for synthetic media challenges. And the two trends are colliding in a way that creates real professional exposure.
The University of Illinois Chicago Law Library's analysis of proposed deepfake evidentiary rules makes the shift explicit: an opposing party shouldn't be able to trigger an authentication inquiry simply by asserting that an image might be fake. There needs to be a preliminary evidentiary showing first. But—and this is the part that cuts both ways—if that threshold is met, the burden to prove authenticity rises above the standard prima facie requirement.
"An opponent should not be allowed to initiate an inquiry into whether an item is a deepfake simply by claiming it is one; a preliminary showing of evidence suggesting the item might be a deepfake should be required. If the opponent does provide evidence indicating that the item may indeed be a deepfake, the opponent must prove the authenticity of the item using a higher evidentiary standard than the usual prima facie standard." — University of Illinois Chicago Law Library, A Deepfake Evidentiary Rule, Just in Case
Translation: if your image evidence gets challenged, "I could tell it was manipulated because the lighting looked off" is not going to survive scrutiny. Courts are now expecting—and in some jurisdictions requiring—documented methodology, expert testimony, and clear delineation between what the analysis technically shows versus the identity claims you're drawing from it.
The Jones Walker LLP AI Law Blog has outlined exactly what courts are now expecting from synthetic media challenges: digital forensic experts using machine learning and multimodal analysis, pretrial evidentiary hearings to resolve authenticity disputes before trial, and heightened scrutiny for celebrity or high-profile content. Three distinct approaches have emerged—technical expert analysis, procedural review frameworks, and evolving court rules—and the expectation is that practitioners know which lane they're operating in.
At CaraComp, we see this play out in facial comparison work constantly. The difference between an analysis that survives a court challenge and one that doesn't usually comes down to whether the examiner can articulate their method in writing, not just in testimony. A facial recognition result that's supported by a documented process—image acquisition, preprocessing steps, comparison methodology, confidence thresholds, and a clear statement distinguishing similarity from identification—is a fundamentally different artifact than one that says "these images appear to show the same person." One holds. One doesn't.
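To make that difference concrete, here is a minimal, hypothetical sketch of what "a documented process" can look like in practice. The field names, scores, and thresholds below are illustrative assumptions, not a real CaraComp schema or any court-mandated format—the point is that every element of the analysis is written down, and the conclusion is phrased as similarity, never identification.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FacialComparisonReport:
    """Hypothetical report structure; field names are illustrative only."""
    case_id: str
    acquisition_notes: str        # where and how each image was obtained
    preprocessing_steps: list     # every transformation applied before comparison
    comparison_method: str        # named algorithm or manual protocol, with version
    similarity_score: float       # raw output of the comparison
    decision_threshold: float     # threshold used to call the images "similar"

    def conclusion(self) -> str:
        # State what the analysis technically shows, not who is depicted.
        verdict = ("meets the stated threshold"
                   if self.similarity_score >= self.decision_threshold
                   else "does not meet the stated threshold")
        return (f"Under method '{self.comparison_method}', the similarity score "
                f"{verdict} ({self.similarity_score:.2f} vs "
                f"{self.decision_threshold:.2f}). This is a statement of image "
                "similarity, not an identification.")

report = FacialComparisonReport(
    case_id="2026-0042",
    acquisition_notes="Image A: subpoenaed CCTV export; Image B: booking photo.",
    preprocessing_steps=["face detection", "alignment", "resolution normalization"],
    comparison_method="embedding cosine similarity (model and version recorded)",
    similarity_score=0.81,
    decision_threshold=0.75,
)
print(json.dumps(asdict(report), indent=2))  # the written, reviewable trail
print(report.conclusion())
```

Because every parameter lives in the report object itself, the examiner can answer "what tools, what thresholds, what comparisons" by producing the record rather than reconstructing it from memory on the stand.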
What Your Image Evidence Now Needs—Minimum
- ⚡ Documented authentication method — Not "I reviewed the image." How did you review it? What tools? What thresholds? What comparisons?
- 📊 Explicit manipulation ruling — "Likely authentic," "likely altered," or "cannot determine"—stated separately from any identity claim. These are different conclusions.
- 🔍 Parody/satire consideration — Courts are now drawing hard lines between synthetic content that is protected expression and content that constitutes fraud or defamation. Your report needs to account for the difference.
- 🔮 Expert-level methodology on standby — If challenged, be ready to defend your process with the rigor of a digital forensics expert, not just an appeal to professional experience.
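The forensic trail described above starts at acquisition, and the cheapest way to make that step defensible is a tamper-evident intake record: hash the file the moment you receive it and log who acquired it, with what tool, and when. The sketch below is a minimal illustration using Python's standard library; the field names and the `export-cli` tool string are assumptions for the example, not any formal chain-of-custody standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_acquisition(image_path: str, tool: str, examiner: str) -> dict:
    """Record an intake entry: content hash plus who/what/when.
    Field names are illustrative, not a formal evidentiary standard."""
    data = Path(image_path).read_bytes()
    return {
        "file": image_path,
        # Re-hashing the file later and matching this value demonstrates
        # the bytes analyzed are the bytes acquired.
        "sha256": hashlib.sha256(data).hexdigest(),
        "tool": tool,                # acquisition tool and version, recorded verbatim
        "examiner": examiner,
        "acquired_utc": datetime.now(timezone.utc).isoformat(),
    }

# Demo with a throwaway file; in practice this runs at evidence intake.
Path("exhibit_a.jpg").write_bytes(b"\xff\xd8\xff\xe0 demo bytes")
entry = log_acquisition("exhibit_a.jpg", tool="export-cli v2.1", examiner="J. Doe")
print(json.dumps(entry, indent=2))
```

An intake log like this does not prove an image is authentic, but it closes off the easiest line of attack: the claim that the evidence could have been altered between acquisition and analysis.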
The Gap Courts Are Creating—And Who Bears the Risk
There's a legitimate counterargument to all of this that deserves airtime. Critics of the court rulings point out that by striking down broad deepfake statutes and leaving investigators to rely on existing defamation and fraud frameworks, courts are creating a practical vacuum—particularly around election integrity. As Reason.org noted in their coverage of the California ruling, defamation suits are notoriously expensive, slow, and hard to win. Relief often can't be granted until after an election cycle, by which point the damage is done.
Some legal scholars have argued directly that a federal statute would be a superior remedy precisely because defamation law is "cost and time prohibitive" and proof of personal damages is genuinely difficult in the political deepfake context. That's a real tension—and legislators aren't going to stop trying to pass these laws just because courts keep striking them down. The political incentive to act is too strong.
But here's the thing: investigators don't get to wait for Congress to solve the constitutional problem. You operate in the world as it is, not the regulatory world someone might build eventually. And right now, the world is one where any image-heavy case—identity fraud, electoral manipulation, non-consensual synthetic imagery—is going to face a two-front attack. First, opposing counsel will probe whether the alleged manipulation might be protected expression. Second, they'll test whether your authentication methodology can withstand heightened evidentiary scrutiny.
Key Takeaway for Investigators
If your image evidence doesn't come with a clear, written forensic trail—from acquisition through analysis to conclusion—assume it will be attacked on both constitutional and evidentiary grounds. Your process, not just your professional judgment, is now part of what has to stand up in court.
