The Faces Were Fake. The $25 Million Was Real.


Full Episode Transcript


A finance worker in Hong Kong got a phishing email claiming to be from his company's chief financial officer in the U.K. He didn't buy it. Then he joined a video call, saw the C.F.O.'s face, saw other colleagues he recognized — and over the next several days, wired two hundred million Hong Kong dollars across fifteen separate transactions. That's about twenty-five million U.S. dollars. Every person on that call was a deepfake.



If you've ever been on a video call — for work, for a doctor's appointment, for a job interview — this story is about you. Because the employee who got fooled wasn't careless. He spotted the phishing email. He was suspicious. What flipped him from doubt to trust was the most basic human instinct we have — seeing a familiar face and believing it's real. According to Hong Kong police, this wasn't an isolated incident. Authorities tied it to at least twenty other cases where deepfakes successfully defeated facial recognition checks across the city. The employee, the company, the technology that fooled them — none of it was unusual. That's what makes this so hard to sit with. So how did a video call become the most dangerous authentication tool in modern business?

Start with how the attack actually worked. According to Trend Micro's technical analysis, current deepfake generation tools need at least thirty minutes of processing time to produce convincing video. That means the attackers almost certainly didn't generate faces in real time during the call. They pre-built video clips of each person — the C.F.O., the colleagues — and played those clips into the conference as if they were live participants. To anyone watching, it looked like a normal meeting. Familiar faces, familiar voices, a routine request for fund transfers. The deception wasn't about hacking a system. It was about hacking a person's trust.

Now, you might assume automated detection tools would catch something like this. They often don't. A twenty-twenty-four benchmark study called DeepFake-Eval tested how well current detection systems perform against real-world forgeries. Automated tools hit about eighty percent accuracy. Trained human forensic analysts reached closer to ninety percent. That ten-point gap matters enormously. It means one in five sophisticated fakes slips past the software entirely. For an investigator building a case, that's a piece of evidence you might trust that you shouldn't. For the rest of us, it means the next video you watch — of a politician, a celebrity, a family member asking for money — might show something that never happened.



The Volume Is Accelerating

And the volume is accelerating. Researchers estimate that deepfake video online is growing at roughly nine hundred percent per year. Nine times more fake video, every twelve months. Detection tools can't keep pace with that kind of growth, especially because the newest generation of fakes uses diffusion models — the same A.I. architecture behind image generators like Midjourney. According to a review published through the N.I.H., those diffusion-based forgeries produce artifacts that look nothing like those of the older fakes that detection systems were trained on. So the tools miss them. Not because the tools are bad, but because the fakes have already moved on.

Peer-reviewed research in Frontiers in Big Data describes what cutting-edge detection actually requires. Analysts have to examine identity-preserving facial traits — the specific geometry of a face that should stay consistent across frames — while simultaneously checking spatial and frequency-domain features for signs of manipulation. In plain terms, they're looking at whether the fine details of a face hold together the way a real face would, and whether the underlying pixel data shows patterns that only A.I. generation leaves behind. That kind of analysis takes expertise, time, and specialized tools. Most companies don't have any of those things when a C.F.O. calls and says move the money now.
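If you want a feel for what that analysis involves, here is a minimal sketch in Python of the two ideas just described: checking that identity embeddings stay consistent across frames, and measuring high-frequency energy in the Fourier spectrum. Everything in it is illustrative; the function names, stand-in data, and thresholds are assumptions made for the sketch, not the method from the cited paper or any real product.

```python
# Minimal sketch of the two checks described above, using only NumPy.
# Everything here is illustrative: real detectors are trained models,
# and the thresholds below are placeholders, not published values.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two identity embeddings; 1.0 means identical."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identity_consistency(embeddings: list) -> float:
    """Check 1: identity-preserving facial traits. A real face should
    keep a near-constant embedding across frames; many fakes drift."""
    ref = embeddings[0]
    return min(cosine_similarity(ref, e) for e in embeddings[1:])

def high_frequency_energy(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Check 2: frequency-domain features. Generators often leave an
    unusual distribution of energy in the high bands of the 2-D Fourier
    spectrum; this returns the fraction of energy above `cutoff`."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum center, in normalized frequency
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

def looks_suspicious(frames, embeddings,
                     min_consistency=0.85, max_hf_energy=0.40) -> bool:
    """Flag a clip if identity drifts or spectral energy is anomalous.
    Both thresholds are invented here purely for illustration."""
    if identity_consistency(embeddings) < min_consistency:
        return True
    return any(high_frequency_energy(f) > max_hf_energy for f in frames)

# Toy usage with random stand-ins for real frames and embeddings:
frames = [np.random.rand(128, 128) for _ in range(8)]
embeddings = [np.random.rand(256) for _ in frames]
print(looks_suspicious(frames, embeddings))
```

Real detectors feed signals like these into trained classifiers rather than fixed cutoffs, which is part of why they struggle when the underlying generators change.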

The Hong Kong employee's experience reveals something specific about how trust works. He rejected the email. That's textbook security awareness. But the video call created what fraud researchers describe as a confidence cascade. Once he saw faces he recognized, every remaining doubt collapsed. Fifteen transactions followed. Not one. Fifteen. Each one was a moment where someone could have paused. But the video call had already done its work.


The Bottom Line

The real problem isn't that deepfakes exist. It's that video still carries the weight of proof — in boardrooms, in courtrooms, in fraud investigations — even though it no longer earns that weight. Twenty-five million dollars didn't disappear because of bad technology. It disappeared because everyone involved still treated seeing as believing.

A finance worker saw his boss on a video call and trusted what he saw. Every face on that screen was generated by A.I. Twenty-five million dollars moved before anyone realized. Video used to be the strongest proof we had that something really happened. That era is over — not in some distant future, but in a case that's already been investigated and closed. Whether you evaluate evidence for a living or you just FaceTime your family on weekends, the question is the same now — how do you know the face on your screen is real? I linked the full article below — worth a read.
