Deepfakes Surged 2,137%. Courts Rewrote the Rules. Investigators Didn't.
This episode is based on our article: "Deepfakes Surged 2,137%. Courts Rewrote the Rules. Investigators Didn't."
Full Episode Transcript
A finance worker in Hong Kong sat on a video call with his company's chief financial officer and several colleagues. Every face on that screen was fake. Every voice was synthetic. He transferred thirty-nine million dollars before anyone realized none of those people had actually been on the call.
That case isn't science fiction. It already happened. And it sits inside a much larger shift that touches anyone who's ever been on a video call, taken a selfie, or unlocked a phone with their face. According to research from Signicat, fraud attempts using deepfakes have jumped more than two thousand percent in just three years. Between January and September of this year alone, A.I.-driven deepfakes caused over three billion dollars in losses across the United States. Courts are scrambling to rewrite the rules of evidence. Investigators are still relying on their eyes. And the gap between what's fake and what's provably real is widening every single day. So who's responsible for closing it?
Start with detection — the part most people assume is handled. It isn't. Studies show that humans correctly identify high-quality deepfake video only about a quarter of the time. Three out of four fakes sail right past us. That means an investigator reviewing surveillance footage, a juror watching a confession video, a compliance officer screening an identity document — all of them are essentially guessing. And it's not because they're careless. The technology has simply outpaced what the human eye can catch.
That gap matters in courtrooms. In November of last year, the Advisory Committee on Evidence Rules proposed a new federal rule — Rule 901(c) — specifically to address what they called "potentially fabricated or altered electronic evidence." The fact that a federal committee had to draft an entirely new rule tells you something. The existing framework wasn't built for a world where a convincing fake video costs less than a cup of coffee to produce. And right now, both state and federal courts in the U.S. still lack consistent standards for admitting video evidence at all. For anyone who's ever had a traffic camera photo used against them or a doorbell camera clip shown in a dispute — that uncertainty now applies to your evidence too.
Meanwhile, the financial sector is absorbing the heaviest blows. According to industry data, nearly half of all fraud attempts detected in financial services — about forty-two and a half percent — now involve A.I. in some form. Fraudsters are generating synthetic I.D. documents, fake photographs, even fabricated video to slip past the identity verification systems that banks and fintech companies use to open accounts. FinCEN — the Treasury Department's financial crimes unit — issued a specific warning to banks about deepfake fraud being used to bypass know-your-customer controls. If your bank verified your identity with a selfie the last time you opened an account, that same process is now a target.
And then there's the courtroom tactic that flips the whole problem on its head. It's called the deepfake defense. In a case called Huang versus Tesla, the defense argued that video evidence could be a deepfake — not because they had proof it was manipulated, but because the mere possibility of manipulation was enough to raise doubt. That's the slippery slope judges are now staring at. Genuine evidence gets dismissed because no one can definitively prove it wasn't altered. In Alameda County, California, a judge actually sanctioned a party for submitting falsified evidence. But sanctions after the fact don't undo the damage to a case — or to the person on the other side of it. A parent in a custody battle, a victim in a fraud case — their evidence can now be challenged not on its merits but on the theoretical possibility that A.I. made it up.
So what are the investigators who are getting this right actually doing differently? Three things. They're treating every photo and every voice recording as suspect by default — not out of paranoia, but as standard forensic protocol. They're replacing visual inspection with systematic facial comparison, which uses mathematical distance analysis to verify, side by side, whether two images show the same person. That's different from facial recognition, which searches one face against a crowd or an entire database; comparison is one-to-one verification. And critically, they're documenting every step of their methodology so it holds up under cross-examination — the kind of scrutiny courts now apply in what are called Daubert hearings, where judges decide whether an expert's methods are scientifically sound enough to present to a jury.
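To make that distinction concrete, here is a minimal sketch of what one-to-one comparison plus step-by-step documentation can look like in code. This is an illustration, not the workflow of any specific forensic tool: it assumes the open-source face_recognition library as a stand-in for a forensic-grade model, uses that library's conventional 0.6 distance threshold (not a forensic standard), and the file names are hypothetical.

```python
# Minimal sketch: one-to-one facial comparison via embedding distance,
# with each step hashed and logged so the methodology can be reproduced.
# Assumes the open-source face_recognition library as a stand-in for a
# forensic pipeline; 0.6 is that library's conventional default threshold.
import hashlib
import json
from datetime import datetime, timezone

import face_recognition


def sha256(path: str) -> str:
    """Hash the evidence file so the exact input can be re-verified later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def compare(questioned: str, reference: str, threshold: float = 0.6) -> dict:
    """Compare two single-face images and return a loggable record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "questioned": {"path": questioned, "sha256": sha256(questioned)},
        "reference": {"path": reference, "sha256": sha256(reference)},
        "threshold": threshold,
    }
    enc_q = face_recognition.face_encodings(
        face_recognition.load_image_file(questioned))
    enc_r = face_recognition.face_encodings(
        face_recognition.load_image_file(reference))
    if not enc_q or not enc_r:
        raise ValueError("no face detected in one of the images")
    # Euclidean distance between 128-dimensional face embeddings:
    # smaller distance means the two faces are more alike.
    distance = float(face_recognition.face_distance([enc_q[0]], enc_r[0])[0])
    record["distance"] = round(distance, 4)
    record["same_person"] = distance < threshold
    return record


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    print(json.dumps(compare("questioned.jpg", "reference.jpg"), indent=2))
```

The logged record is the point of the Daubert discussion above: the hashes pin down exactly which files were compared, and the recorded distance and threshold let the other side's expert reproduce the conclusion instead of taking it on faith.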
The Bottom Line
The deepfake problem isn't really about fakes. It's about the collapse of default trust in anything digital. Once a jury learns that no one systematically verified a photo or a video, it almost doesn't matter whether the evidence is genuine — the doubt is already planted.
So — deepfake fraud has exploded more than two thousand percent in three years. Humans catch fakes only about one time in four. Courts are writing new rules because the old ones can't handle synthetic evidence, and investigators who don't document their verification process are handing the other side a ready-made defense. Whether you're building a case or just trusting a video someone sent you, the question is the same. Can you prove what you're looking at is real? The full story's in the description if you want the deep dive.
More Episodes
First Federal Deepfake Conviction Puts Every Investigator's Methodology on Trial
A man in Columbus, Ohio just became the first person in the country convicted under the federal Take It Down Act. His name is James Strahler II. According to prosecutors, he used more than a…
Podcast: Investigators Can't Explain Their Own Facial Recognition Evidence. Courts Noticed.
A ninety-five percent confidence score sounds almost perfect. But apply that to a database of ten million faces, and you've just flagged five hundred thousand people as potential matches — every single one of them wrong.
Podcast: China Made Creating a Deepfake the Crime — Not Sharing It. U.S. Courts Are Already Following.
China's internet regulator just did something no Western government has tried. On 04-03-2026, Beijing published draft rules that make creating a digital copy of someone's…
