CaraComp
Podcast

First Federal Deepfake Conviction Puts Every Investigator's Methodology on Trial

This episode is based on our article: First Federal Deepfake Conviction Puts Every Investigator's Methodology on Trial

Full Episode Transcript


A man in Columbus, Ohio, just became the first person in the country convicted under the federal Take It Down Act. His name is James Strahler the Second. According to prosecutors, he used more than a hundred A.I. models to create fake intimate images of at least six women and multiple children.


That conviction isn't just a criminal case

That conviction isn't just a criminal case. It's a signal that the rules around digital evidence just changed for everyone. If you've ever taken a selfie, been on a video call, or posted a photo online, your face is now data that can be fabricated, stolen, or weaponized. And courts are only now catching up to that reality. The Strahler case forced prosecutors to do something they'd never had to do at the federal level before — prove in court that deepfakes were deepfakes. Not just that he harassed people, but that the images themselves were digitally forged. That standard — proving what's real and what isn't — now hangs over every investigation that touches a photo, a voice clip, or a video. So what happens when the tools we use to verify identity can't keep up with the tools used to fake it?

Start with the scale of what's already happened on the legislative side. According to data compiled by Programs dot com, lawmakers across the country have passed a hundred and sixty-nine laws addressing deepfakes since 2022. At least forty-five states now have their own rules on the books. And in 2025 alone, state legislators introduced nearly a hundred and fifty new bills related to deepfakes. That's not a slow roll. That's a flood. And it means the legal landscape looks completely different depending on where you are. A piece of evidence that holds up in one state might not meet the standard in the next one over.

For anyone who's ever had their photo used without permission — or worried about it — this patchwork matters. Because whether you get justice may depend on your zip code.



Zoom out from the courtroom to the banking system

Now zoom out from the courtroom to the banking system. According to Regula Forensics, a man in the Netherlands opened forty-six bank accounts at ABN AMRO — one of the country's largest banks — using other people's names. He did it by using deepfake video to fool the bank's facial recognition checks. Forty-six accounts. That's not a glitch. That's a systemic failure. And it tells courts something they're going to remember — a static facial comparison, matching one photo to another, isn't reliable enough on its own anymore. That method has been the foundation of identity verification for years. For investigators, it's been the bread and butter. For the rest of us, it's the thing that's supposed to keep someone from opening a credit card in your name.
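
For readers who want to see what a "static facial comparison" boils down to, here's a minimal sketch. The embedding model, the threshold value, and the variable names are illustrative assumptions, not a description of any particular bank's verification stack; the point is that a one-to-one embedding match carries no signal about whether the candidate image came from a live person or a generated video.

```python
# Minimal sketch of a one-to-one ("static") facial comparison of the kind
# described above. Everything here is illustrative: embed_face() stands in
# for whatever face-embedding model a verification system might use, and the
# 0.8 threshold is an arbitrary example value, not a vendor default.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(reference: np.ndarray, candidate: np.ndarray,
                threshold: float = 0.8) -> bool:
    """One photo versus one photo. Nothing in this check asks whether the
    candidate frame came from a live camera or from a generated/replayed
    video, which is exactly the gap a convincing deepfake exploits."""
    return cosine_similarity(reference, candidate) >= threshold

# Hypothetical usage, assuming an embed_face(image) -> np.ndarray helper exists:
#   id_embedding     = embed_face(load_image("id_photo.jpg"))
#   selfie_embedding = embed_face(load_image("selfie_frame.jpg"))
#   same_person(id_embedding, selfie_embedding)
```

A deepfake frame built from the victim's photos produces an embedding close to the reference, so this check alone passes it; that is why a photo-to-photo match on its own is no longer treated as sufficient.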

The same Regula Forensics report found that more than four out of every five reported A.I. fraud cases were driven by deepfake technology. That's not a fringe problem. That's the majority of A.I.-enabled fraud, running on fabricated faces and voices.

And the pressure isn't just coming from law enforcement. According to Rest of World, Brazil's Digital E.C.A. took effect in March of this year. It requires every operating system, app store, gaming platform, and digital service accessible to minors to implement age verification. The penalty for non-compliance can reach fifty million Brazilian reais. One of the leading methods gaining traction is matching a live selfie against an uploaded I.D. photo. But minors are already finding ways around it — using V.P.N.s, A.I.-generated selfies, and deepfake video to spoof the systems meant to protect them. The verification tools and the evasion tools are locked in an arms race, and right now, neither side is winning cleanly.


The Bottom Line

There's another wrinkle investigators need to know about. The Take It Down Act requires online platforms to remove reported non-consensual intimate images within forty-eight hours. By May nineteenth of this year, covered platforms must have a formal removal process in place. That sounds like a win for victims — and it is. But for anyone building a case, it means digital evidence could disappear mid-investigation. A lead that was live on a platform Monday morning might be gone by Wednesday. For everyday people, that takedown clock is a lifeline. For the people trying to build a prosecution, it's a countdown.
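
One practical implication, sketched below: if evidence can lawfully disappear within forty-eight hours, capturing it with a hash and a timestamp at the moment it's found is what keeps it usable later. This is a generic illustration using Python's standard library; the field names and sidecar-file layout are assumptions, not any agency's actual preservation procedure.

```python
# Illustrative only: record what a captured file looked like before a
# takedown removes the original post. Field names and the sidecar-file
# convention are assumptions, not an official evidence-handling standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve_capture(evidence_path: str, source_url: str, note: str = "") -> dict:
    data = Path(evidence_path).read_bytes()
    record = {
        "source_url": source_url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),  # fixes the file's contents in time
        "size_bytes": len(data),
        "note": note,
    }
    # Store the record next to the capture so the hash and timestamp survive
    # even after the original URL goes dark.
    Path(evidence_path + ".capture.json").write_text(json.dumps(record, indent=2))
    return record
```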

The assumption most people still carry — that if you can see a face in a photo, you can trust it's real — is now a liability. The Strahler conviction didn't just punish one man. It set the expectation that proving something is real requires the same rigor as proving it's fake.

So, the short version. The first federal deepfake conviction just raised the bar for what counts as proof. Dozens of states are writing their own rules, and the tools people use to fake identities are already beating the tools designed to stop them. Matching a face to a photo isn't enough anymore — not in court, not at a bank, and not in your inbox. Whether you're building a case or just trying to trust what you see online, the question is the same — how do you prove something is real when faking it has never been easier? The full story's in the description if you want the deep dive.
