76% Hit, 40% Ready: The Deepfake Gap That Just Cost Arup $25 Million
This episode is based on our article: 76% Hit, 40% Ready: The Deepfake Gap That Just Cost Arup $25 Million
Full Episode Transcript
Three out of four organizations in the U.K. have already been hit by a deepfake attack. But only about four in ten say they're actually ready for the next one. That gap — between what's already happened and what companies can defend against — just cost one firm twenty-five million dollars.
In early 2024, an employee at Arup, a major British engineering company, joined what looked like a routine video call with colleagues. The faces on screen were familiar. The voices matched. So the employee transferred twenty-five million dollars — to criminals. Every person on that call was a deepfake. None of them were real. If you've ever been on a video call at work, or even a FaceTime with your family, that should sit with you for a second. Because the technology that fooled a trained professional at a global firm isn't locked away in some lab. It's available, it's improving, and it's already being used at scale. According to reporting from TechRadar Pro, deepfakes have jumped from a niche curiosity to a mainstream cybersecurity priority at what researchers describe as remarkable speed. The question running through this entire story is simple. If you can't trust what you see or hear, what counts as proof?
Start with how fast this moved. According to data tracked by Sumsub, the number of detected deepfakes quadrupled between 2023 and 2024. Pindrop, which monitors voice fraud across call centers, measured a surge of more than thirteen hundred percent in deepfake fraud attempts during the same period. That's not a gradual climb. That's a technology going from rare to everywhere in about eighteen months. And the most common weapon isn't fake video — it's fake audio. Nearly half of the organizations surveyed experienced deepfake audio attacks specifically. Your voice, or a version of it built from a few seconds of sample audio, can now be used to pass identity checks, authorize transactions, or impersonate you on a call. For fraud investigators, that rewrites how they evaluate recorded evidence. For the rest of us, it means the voicemail you just got from your boss might not actually be from your boss.
Now, you'd expect detection tools to keep pace. And on paper, some do. Sensity claims its deepfake detection software hits about ninety-eight percent accuracy. DuckDuckGoose says its tools reach roughly ninety-six percent and can analyze a file in under a second. Those numbers sound reassuring — until you look at the human side. According to a study by iProov, people trying to spot deepfakes on their own got it right just one-tenth of one percent of the time. Essentially zero. The tools exist. But almost nobody's using them where it matters — in the moment, on the front line, before a decision gets made. Adoption is the bottleneck, not capability.
And the consequences aren't just financial anymore — they're legal. In September 2025, a California judge in a case called Mendones versus Cushman and Wakefield issued what's known as a terminating sanction. That's when a court throws out your entire case as punishment. The reason — two deepfake videos had been submitted as evidence. Courts are now treating digital media with what amounts to presumptive skepticism. If you can't prove a video is authentic, a judge may not just ignore it — they may penalize you for presenting it. Louisiana has already passed Act 250, which requires attorneys to exercise reasonable diligence to determine whether evidence from their own clients was generated by A.I. And at the federal level, a proposed new rule — Federal Rule of Evidence 707 — was released for public comment in August 2025 and discussed in a hearing on January 29, 2026. It's designed to regulate when and how A.I.-generated evidence can be admitted in court at all. For anyone who's ever been involved in a legal dispute — a car accident, a workplace complaint, a custody case — this changes what kind of evidence your lawyer can actually use.
Meanwhile, governments around the world are pushing harder on biometric identity checks — facial recognition at borders, voice verification for banking, digital I.D. systems. But the same deepfake technology is defeating those checks at a growing rate. Out of a hundred and thirty-two reported A.I. fraud cases, more than four out of five were driven by deepfakes. So governments are mandating systems that criminals are already beating. That's the bind. The trust infrastructure and the attack infrastructure are scaling at the same time — but the attacks are scaling faster.
The Bottom Line
The detection software works. Human judgment doesn't — not against modern deepfakes. And the gap between those two facts is where the twenty-five million dollars disappeared, where the court case collapsed, and where the next loss is already forming.
So — what actually happened. Deepfake attacks went from a novelty to hitting three-quarters of U.K. organizations, but barely four in ten feel ready. Courts are already punishing people who submit unverified digital evidence, and new laws are forcing lawyers to check whether their clients' files are even real. Detection tools can catch almost all of it — but humans, on their own, catch almost none of it. Whether you're building a fraud case or just answering a video call, the era of trusting what you see and hear is over. The full story's in the description if you want the deep dive.
