Your CFO Just Called. It Wasn't Him. $25 Million Is Gone.
This episode is based on our article: Your CFO Just Called. It Wasn't Him. $25 Million Is Gone.
Full Episode Transcript
A finance worker in Hong Kong joined a video call with his chief financial officer and several colleagues. Everyone looked right. Everyone sounded right. He followed their instructions and wired twenty-five million dollars to a series of accounts. Every single person on that call was fake. Not one of them was real.
If you've ever been on a Zoom call, a Teams meeting, or a WhatsApp video chat, this story is about you. Because the software that made that heist possible doesn't require a Hollywood studio or a government lab. It runs on a gaming P.C. According to 404 Media, a Chinese software package called Haotian A.I. is being sold to scammers right now, and it turns a fraudster's face into someone else's — live, in real time, inside the video apps we all use every day. The tool has already earned roughly four million dollars in revenue. And when researchers tested it against the leading academic deepfake detector, it slipped right past. The deeper story isn't just that deepfakes are getting better. It's that the thing most of us treat as proof — seeing someone's face, hearing their voice on a live call — isn't proof anymore. So what actually counts as verification when everything you see and hear can be manufactured in real time?
To understand how fast this shifted, look at the numbers. According to Keepnet Labs, researchers tracked about half a million deepfake files circulating in 2023. By 2025, that number is projected to hit eight million. That's a sixteen-fold increase in two years. And fraud attempts involving deepfakes spiked by three thousand percent in 2023 alone.
Those older deepfakes were mostly pre-recorded. Someone would manipulate a video after the fact — swap a face, clone a voice — and then distribute the finished product. Dangerous, sure. But limited. You couldn't use a pre-recorded clip to answer a spontaneous question on a live call. Haotian A.I. removes that limitation entirely. 404 Media's reporters actually obtained a copy of the software and tested it themselves — on a live Microsoft Teams call. The face swap happened in real time, inside the video stream. Not after the call. During it.
That changes the threat in a way most organizations haven't caught up to yet. A lot of corporate fraud playbooks still say: if an email looks suspicious, hop on a video call to verify. That advice made sense twelve months ago. It doesn't anymore. The video call itself is now the attack surface. And it's not just corporations at risk. If your elderly parent gets a FaceTime call from someone who looks and sounds exactly like you, asking for emergency money — that's the same technology, the same vulnerability, just a smaller dollar amount.
So why not just build better detectors? Researchers tried. According to the Deepfake-Eval-2024 study, the best academic detection tools collapse to below fifty percent accuracy under real-world conditions. Fifty percent. That's a coin flip. In a lab, with controlled lighting and high-resolution images, detectors perform well. On a compressed video stream over a choppy internet connection — the kind of call most of us are actually on — they fall apart.
Meanwhile, generative A.I. has compressed the attacker's timeline dramatically. What used to take a skilled fraudster weeks of research — studying a target's voice, mannerisms, appearance — now takes about thirty seconds to clone a voice and apply a real-time video filter. The economics of fraud just flipped. The cost of launching an attack dropped to nearly zero, while the cost of defending against one keeps climbing.
Platform labels — those little tags that say "this content may be A.I.-generated" — do serve a purpose. They create friction. They flag bad actors after the fact. But by the time a label appears, the wire transfer already cleared. The identity claim already went through. Labels are a post-incident tool. What fraud teams and everyday people both need is a pre-incident defense.
That defense looks like multi-channel verification. Not just one signal, but several, stacked together. A video call confirmed by a phone callback to a known number. A Slack or Teams message from the requestor's verified account. An email from a verified domain. And for very large transactions — in-person or notarized authorization. No single channel can be trusted on its own anymore. For investigators and compliance teams, that rewrites standard operating procedure. For families, it might mean agreeing on a secret code word that no A.I. would know to say.
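The stacking logic described above can be sketched in code. This is a minimal, hypothetical illustration — the channel names, the two-channel threshold, and the one-million-dollar cutoff for requiring in-person authorization are all assumptions for the example, not a prescribed policy:

```python
# Hypothetical multi-channel verification sketch: no single channel is
# trusted on its own, and very large transfers also require an
# in-person or notarized authorization. Thresholds are illustrative.

INDEPENDENT_CHANNELS = {
    "video_call",      # live video (can be deepfaked by itself)
    "phone_callback",  # callback to a number already on file,
                       # never a number supplied in the request
    "verified_chat",   # Slack/Teams message from the verified account
    "verified_email",  # email from a verified domain
    "in_person",       # in-person or notarized authorization
}

def approve_transfer(confirmed_channels, amount_usd):
    """Approve only if enough independent channels confirm the request."""
    valid = set(confirmed_channels) & INDEPENDENT_CHANNELS
    # Assumed policy: transfers of $1M or more need in-person sign-off.
    if amount_usd >= 1_000_000 and "in_person" not in valid:
        return False
    # Never trust one signal: require at least two independent channels.
    return len(valid) >= 2

# A live video call by itself is no longer enough.
print(approve_transfer({"video_call"}, 50_000))                        # False
print(approve_transfer({"video_call", "phone_callback"}, 50_000))      # True
print(approve_transfer({"video_call", "phone_callback"}, 25_000_000))  # False
```

The key design point is that the channels must be genuinely independent: a callback to a phone number the caller just gave you collapses two channels into one.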
Liveness detection — the technology that checks whether a biometric sample comes from an actual human being sitting in front of the camera, not a replayed video or a mask — is still part of the toolkit. But most businesses treat it as one layer in a broader strategy, not a standalone guarantee. And according to P.W.C., while most enterprises now acknowledge deepfakes as a serious fraud risk, the majority still don't have formal protocols for handling A.I.-generated audio and video attacks. They know the threat is real. They just haven't built the response yet.
The Bottom Line
The real shift isn't that deepfakes exist. It's that "seeing is believing" — the instinct humans have relied on for all of recorded history — is now a vulnerability, not a strength. The more you trust your eyes and ears alone, the easier you are to fool.
So — a software tool running on a regular gaming computer can now impersonate anyone on a live video call, in real time, well enough to beat the best detectors about half the time. Detection alone can't keep up. The only reliable defense is verifying identity through multiple independent channels before any high-stakes decision gets made. Whether you manage fraud cases for a living or you're just the person your family calls when something feels off — the rule is the same now. Don't trust one signal. Verify through a second channel, every time. The full story's in the description if you want the deep dive.
