Deepfakes Just Won. Here's the Only Move Left.


Full Episode Transcript


In March, a political action committee ran a deepfake ad against a Democratic Senate candidate in Texas. The A.I.-generated version of that candidate spoke convincingly for over a minute. Not a few seconds — a full minute of fabricated speech, distributed with a professional budget across a national network.


That ad didn't need to fool everyone

That ad didn't need to fool everyone. It just needed to plant a seed of doubt. And according to survey data from this election cycle, about half of voters say deepfakes had some influence on their decisions — even though most of those same voters claim they don't trust the technology. The damage isn't about belief. It's about emotion and hesitation. If you've ever watched a political clip online and thought, "Wait — is that real?" — this story is already about you. So what happens when the tools built to catch fakes can't keep up with the tools built to make them?

That Texas ad marks a turning point, and not because the technology is new. Deepfakes have existed for years. What changed is the length, the quality, and the distribution. Previous political deepfakes lasted a few seconds — just long enough to raise eyebrows. This one ran for over sixty seconds with a level of realism that made it hard to distinguish from a real campaign spot. And it wasn't made in a basement. It was funded, produced, and pushed out by a political organization with real money behind it.

Now, you'd expect detection technology to be the answer. Run the video through a forensic tool, flag it, pull it down. But detection is losing this race, badly. Modern A.I.-generated videos now slip past detection systems more than ninety percent of the time. That means for every ten deepfakes that hit the internet, detection tools might catch one. Maybe. The problem is structural: generation technology improves faster than detection technology, and every time a detection method gets better, the generators learn from it and adapt. For anyone who handles digital evidence, a lawyer, an investigator, a compliance officer, that statistic should change how you think about video as proof. And for the rest of us, it means the next clip you share on social media might show something that never happened.



If catching fakes after the fact doesn't work, what does?

So if catching fakes after the fact doesn't work, what does? The answer coming from researchers and the industry itself is a complete reversal of strategy. Instead of trying to prove something is fake after it spreads, you prove something is real before it leaves your hands. It's called content provenance — essentially a digital watermark baked into authentic material at the moment of creation. A campaign films a real press conference. That footage gets a cryptographic certificate — a tamper-evident seal — the instant it's recorded. If someone later produces a deepfake of that same candidate, you don't need to prove the fake is fake. You just point to the certified original and let people compare. That's the shift: from debunking to verifying.
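
To make that "tamper-evident seal" concrete, here's a minimal sketch in Python of how signing footage at capture could work. It assumes a generic sign-then-verify scheme built on the open-source cryptography package; real provenance systems add metadata, timestamps, and certificate chains, and every name and file here is illustrative, not a description of any actual product.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def certify(video_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    # The "certificate" here is just a signature over the video's SHA-256 hash.
    digest = hashlib.sha256(video_bytes).digest()
    return private_key.sign(digest)

def verify(video_bytes: bytes, signature: bytes,
           public_key: Ed25519PublicKey) -> bool:
    # Recompute the hash and check it against the seal. Any edit to the
    # bytes changes the hash, so a doctored clip can't reuse the original's seal.
    digest = hashlib.sha256(video_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Example: a campaign signs real footage; a tampered copy fails the check.
key = Ed25519PrivateKey.generate()
footage = b"...raw bytes of the press-conference recording..."  # hypothetical
seal = certify(footage, key)

assert verify(footage, seal, key.public_key())                # certified original
assert not verify(footage + b"edit", seal, key.public_key())  # altered clip fails

The design point to notice: the signature covers a hash of the exact bytes, so any edit, however small, breaks the seal. Verification doesn't tell you a clip is true; it tells you it is byte-for-byte what the key holder recorded.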

The regulatory picture adds another layer. Right now, only thirty-one U.S. states have any laws addressing deepfakes in elections. At the federal level — nothing. No law prohibits using deepfakes in political campaigns. The rules that do exist focus on disclosure — labeling something as A.I.-generated — not on preventing its creation or distribution. Europe is moving faster. The E.U. A.I. Act's transparency rules take effect in August, requiring mandatory labels on A.I.-generated political content and editorial sign-off by qualified personnel before publication. That's a fundamentally different approach — not just "tell people it's A.I." but "have a real human verify it before it goes out." Whether you vote in U.S. elections or E.U. ones, the question is the same: who's responsible when a fabricated video changes how you feel about a candidate?

And there's a catch even with the provenance approach. Certification only works if the certification system itself can't be compromised, and that assumes centralized, trusted infrastructure that doesn't yet exist at scale. If a bad actor can forge a certificate or hack a certification platform, the scheme hasn't solved the trust problem; it has just moved it somewhere else. So even the best proposed solution has a vulnerability at its foundation.


The Bottom Line

The real turning point isn't that someone made a convincing deepfake. It's that the entire strategy of catching fakes after they spread has collapsed. The only viable path forward is proving what's real before it ever leaves your camera.

So — to put this plainly. A.I. can now generate political videos so convincing that detection tools miss them nine times out of ten. Half of voters say deepfakes influenced their choices, even when they knew to be skeptical. The response isn't better fake-catching — it's certifying real content at the source, like a digital notary stamp on every authentic video and image. Whether you handle evidence for a living or you just watched a campaign ad on your phone this morning, the question is no longer "is this fake." It's "was this ever certified as real." The full story's in the description if you want the deep dive.
