The Courtroom Question You're Not Ready For: 'Prove This Video Isn't a Deepfake'


Full Episode Transcript


A Pennsylvania State Police corporal just pleaded guilty to creating three thousand A.I.-generated pornographic images. He didn't scrape photos from social media. He pulled them from law enforcement databases he had access to through his badge.



That case landed on April 8, 2026. The same week, attorneys general in New York, Michigan, Maryland, Connecticut, California, and Oklahoma issued coordinated warnings about a surge in investment scams built on deepfake video. Scammers are generating synthetic footage of real public figures — celebrities, financial experts — and running those videos as ads on Meta platforms to lure people into fraudulent investments. If you've ever scrolled past a video of a famous person pitching a product and thought, "Is that real?" — this story is about you. And if you've ever had to present digital evidence in a courtroom or a boardroom, it's about you too. The question running through all of it is one most people haven't considered: if someone hands you a video and says "prove this isn't fake," can you?

Start with the Pennsylvania case, because it shows something most people don't expect. The person creating deepfakes wasn't a hacker in a basement. He was a corporal — a sworn officer with institutional access to photo databases most civilians will never see. According to the Philadelphia Inquirer, he used that access to generate roughly three thousand explicit A.I. images. Three thousand. That's not experimentation. That's production at scale, powered by the trust a badge is supposed to carry.

Now widen the lens. During that same first week of April, New York Attorney General Letitia James issued an investor alert specifically naming deepfake technology as a tool in fraudulent ad campaigns running on Meta's platforms. Michigan's attorney general followed within a day. So did officials in at least four other states. That kind of coordinated response doesn't happen over a hypothetical threat. It happens when the caseload gets too heavy to ignore.



What makes these investment scams so effective? The victims aren't careless. They're watching what looks like a credible video endorsement from someone they recognize. The synthetic footage is polished enough to pass a casual glance — which is all most people give a social media ad. For fraud investigators, the challenge is proving that video was fabricated. For everyone else, the challenge is simpler and scarier: you can't trust your own eyes anymore.

And detection isn't the whole answer. A study published in the journal Communications Psychology in twenty twenty-six found that people remain influenced by deepfake video even after being told it's fake before they watch it. Let that sit for a second. Researchers told participants, up front, "this video is not real." The participants watched it anyway, and it still shifted their beliefs. Labeling a deepfake doesn't undo the damage. For courtrooms, that rewrites assumptions about how juries process video evidence. For the rest of us, it means a debunked video keeps doing its work long after the fact-check drops.

Meanwhile, platforms are trying to catch up. According to TechCrunch, YouTube expanded its A.I. deepfake detection tools in March of this year to cover politicians, government officials, and journalists — the people most frequently targeted by synthetic impersonation. That expansion matters, but notice what it is: detection, not prevention. The tool identifies deepfakes after they've been made and uploaded. It doesn't stop someone from creating them. And it only covers a narrow slice of the population. If you're not a public official or journalist, you're not in that system. Your face is on its own.


The Bottom Line

Most people assume the big risk with deepfakes is that a fake video will fool someone. The deeper risk runs the other direction. Once deepfakes are common enough, anyone caught on real video doing something wrong can claim the footage is synthetic — and that doubt alone may be enough to walk free.

So — a police corporal weaponized his own database access to produce thousands of A.I.-generated images. State attorneys general across the country are fighting a wave of deepfake-powered financial fraud. And research shows that even when people know a video is fake, it still changes what they believe. Whether you build cases for a living or just watch videos on your phone, the question isn't whether you'll encounter a deepfake. It's whether you'll know when you have. The full story's in the description if you want the deep dive.
