CaraComp Podcast

15 Deepfake Bills Passed This Year — Photo Evidence Still Won't Protect Your Case


This episode is based on our article of the same name.

Full Episode Transcript


Fifteen new deepfake bills passed across the United States so far this year. And the total number of states with deepfake laws on the books? It didn't budge. Forty-seven states had deepfake legislation before those bills passed. Forty-seven states have it now. Laws are churning. The problem isn't slowing down.


If you've ever shared a video because it looked real — a politician saying something outrageous, a celebrity endorsing a product, a clip that made you angry — this story is about you. It's also about anyone who's ever had a photograph used as proof of something in a courtroom, an insurance claim, or an investigation. Because the ground beneath visual evidence is shifting, and most people haven't felt it yet.

According to Ballotpedia, states went on a legislative sprint between January and July of last year. The number of states targeting sexually explicit deepfakes jumped from thirty-two to forty-five in just six months. Political deepfake laws climbed from twenty-one states to twenty-eight in the same window. That's a lot of ink on a lot of paper. And yet — fakes keep spreading, wrongful identifications keep happening, and investigators still don't have a reliable way to tell real from synthetic just by looking. So the question running through everything today is simple. If the laws can't keep up, what actually protects a case?

Start with what happened in Assam, India, during this year's elections. According to reporting from Muslim Network T.V., a single campaign generated a hundred and fifty-eight A.I.-created social media posts. Thirty-one of those were deepfake videos. They showed a Congress party candidate appearing to act as a foreign agent for Pakistan. None of it was real. Those posts racked up nearly one and a half million views. And they weren't distributed by anonymous troll accounts. They went out through official party channels and verified government social media pages. One and a half million people saw fabricated video that looked credible, shared by sources that looked authoritative. That's not a hypothetical threat in a policy paper. That's a real election, with real voters, making decisions based on something that never happened.

Now bring that same dynamic into a criminal investigation. Eight people in the United States have been wrongly arrested after facial recognition systems misidentified them. Eight people. And the pattern behind those cases points to something deeper than a software glitch. People are so accustomed to trusting technological output that even a low-quality image run through an A.I. system triggers automatic confidence. A detective sees a match score, and the instinct is to trust it — the way you'd trust a fingerprint hit. But a similarity score from a facial recognition algorithm isn't an identity. It's a probability. And when that distinction disappears, someone ends up in handcuffs for a crime they didn't commit.
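
To make that distinction concrete, here is a minimal Python sketch. The embedding model, the 128-dimensional vectors, and the gallery names are all hypothetical stand-ins, not any real system's output. It ranks a gallery by similarity to a probe face, and the comments spell out why the top score is a candidate, not an identification.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings: higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random vectors stand in for outputs of a hypothetical face-embedding model.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)  # embedding of the face in the probe image
gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}

# Rank the gallery by similarity to the probe. The top score is only the
# closest candidate in THIS gallery, never a confirmed identity: the true
# match may not be enrolled in the gallery at all.
ranked = sorted(
    ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: similarity {score:+.3f}")
```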



That cognitive bias — trusting what looks technological — is the same one deepfakes exploit. According to F.T.I. Consulting's analysis of digital forensics practices, visual inspection used to be investigative gospel. You looked at a photo, you looked at a video, and your trained eye told you whether something was off. That era is ending. Visual inspection now routinely yields inconclusive findings, pushing investigators toward digital forensic methods to authenticate anything suspicious. For anyone who's ever served on a jury, that matters. The video evidence you're shown in a courtroom may soon require proof that it wasn't manufactured — before it's even admitted.
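
The article doesn't name the specific methods, but one common building block in digital forensics is a cryptographic hash recorded at the moment of capture: change even one bit of the file afterward and the digest no longer matches. A minimal sketch, assuming a hypothetical evidence file and capture log (the file name and bytes below are stand-ins):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large videos never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for a clip pulled straight off a camera; in practice the hash
# would be written to the evidence log at the moment of capture.
evidence = Path("evidence.mp4")
evidence.write_bytes(b"stand-in bytes for a real video file")
hash_at_capture = sha256_of(evidence)

# Later, before the clip is relied on, recompute and compare. Any edit,
# re-encode, or substitution changes the digest completely.
if sha256_of(evidence) == hash_at_capture:
    print("digest matches: file unchanged since capture")
else:
    print("digest mismatch: file altered after capture")
```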

Twenty major tech companies — including Meta, Google, and OpenAI — have pledged to watermark and label A.I.-generated content. That sounds reassuring. But the detection tools built to catch synthetic media have proven unreliable and biased in testing. And research shows humans perform poorly at distinguishing real footage from fakes on their own. So the watermarks aren't consistent, the detection software isn't dependable, and our own eyes can't be trusted. What's left?

The shift happening in forensic work right now moves away from asking "does this look real?" and toward a harder question — "can we prove where this came from?" Forensic-grade facial comparison, for instance, doesn't care whether a face looks realistic in a photo. It measures geometric and mathematical relationships between facial features — the distance between the eyes, the angle of the jaw, point-to-point ratios that a generative model can't easily fake at scale. That's a fundamentally different approach. It treats provenance — the chain of custody from capture to courtroom — as the standard, not appearance. For investigators, that rewrites how you build a case. For the rest of us, it means the next convincing video you see online has no guarantee behind it unless someone can trace it back to a real camera, at a real time, in a real place.
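
As a rough illustration of those point-to-point ratios (a toy sketch under stated assumptions, not CaraComp's actual algorithm), the snippet below normalizes a few landmark distances by the inter-eye distance, so the same face measured at different image scales yields nearly identical ratios:

```python
import math

# Hypothetical 2-D landmark coordinates in pixels for two face images.
# Real systems use dozens of detector-placed points; three are enough to
# show the idea. Face B is roughly face A photographed at twice the scale.
face_a = {"left_eye": (120, 150), "right_eye": (200, 152), "chin": (160, 300)}
face_b = {"left_eye": (240, 310), "right_eye": (402, 314), "chin": (322, 612)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ratio_signature(face):
    """Point-to-point distances normalized by the inter-eye distance,
    so image size and resolution drop out of the comparison."""
    inter_eye = dist(face["left_eye"], face["right_eye"])
    return (
        dist(face["left_eye"], face["chin"]) / inter_eye,
        dist(face["right_eye"], face["chin"]) / inter_eye,
    )

sig_a, sig_b = ratio_signature(face_a), ratio_signature(face_b)
print("face A ratios:", [round(r, 3) for r in sig_a])
print("face B ratios:", [round(r, 3) for r in sig_b])
# Agreement across many such ratios supports a match; it never proves one.
```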


The Bottom Line

The laws aren't failing because legislators aren't trying. They're failing because legislation targets the people who create and distribute fakes — and the technology has already moved past the point where creation is the bottleneck. Anyone with a laptop can generate a convincing fake in minutes. The real gap isn't in who's making them. It's in whether the people receiving them — detectives, jurors, voters, you — have any way to verify what they're seeing.

Forty-seven states have deepfake laws. Fifteen more bills passed this year alone. And none of that changes the fact that a photo or video, by itself, no longer proves what it used to. The shift is already underway — from trusting what looks real to demanding proof of where it came from. Whether you're building a case or just scrolling through your feed, that shift changes what "seeing is believing" actually means. The full story's in the description if you want the deep dive.
