
Deepfakes Will Drive Most ID Fraud by 2026 — Most Fraud Teams Aren't Ready

This episode is based on our article: Deepfakes Will Drive Most ID Fraud by 2026 — Most Fraud Teams Aren't Ready

Full Episode Transcript


A software developer with more than twenty years of experience sat down for a video call with colleagues he recognized. Two-factor authentication was turned on. Every face on that screen looked real. Every voice sounded right. None of them were human.


North Korean hackers used real-time A.I. deepfakes to impersonate real people on that call, and it worked. The developer installed malware that compromised Axios — a JavaScript library that gets downloaded about a hundred million times every single week. If you've ever used a website or an app, there's a decent chance code from that library touched your data at some point. This wasn't some crude face-swap you'd spot in two seconds. According to security researchers who examined the attack, the deepfakes sustained real-time conversation with natural intonation, breathing sounds, and matching lip movement. They passed the one test we all thought was enough — a live human being looking at another live human being and deciding, yeah, that's my coworker. And that raises a question that runs through everything we're about to cover. If seeing isn't believing anymore, what is?

To understand how fast this shifted, you need the numbers. In 2023, researchers estimated roughly half a million deepfakes existed online. According to Fortune, citing A.I. researchers, that number hit about eight million by 2025. That's roughly sixteen times more in two years. And the fraud losses followed the same curve. According to Keepnet Labs, deepfake fraud cost Americans about three hundred and sixty million dollars in 2024. One year later — over a billion. Tripled. For anyone tracking caseloads or managing financial risk, that's not a trend line. That's a wall coming at you. And for the rest of us, it means the next video call you join, the next voicemail you trust, the next clip you share — any of it could be manufactured.

What makes the Axios attack different from earlier supply-chain hacks is speed. Remember the xz Utils backdoor from 2024? That attacker spent years building trust inside an open-source project before slipping in malicious code. Years. The deepfake operation against Axios compressed that same playbook into hours. Send a phishing message, hop on a video call, wear someone else's face and voice, and you're in. The attackers specifically targeted the top fifty npm packages — the most widely used building blocks of the modern internet — because they understood exactly how software supply chains work. And the people maintaining those packages? They're often solo developers or tiny teams. No corporate security department. No deepfake detection tools. They're gatekeepers with no gate.



That same vulnerability shows up far beyond software. A British engineering firm called Arup lost twenty-five million dollars after an employee joined a video conference where every other participant — every single one — was a deepfake. According to the Institute for Financial Integrity, the attackers had downloaded publicly available videos of real Arup staff and used A.I. to generate fake faces and voices in real time. The employee on that call recognized his colleagues. He was sure the call was legitimate. Twenty-five million dollars. Gone.

So how good are humans at catching these fakes? Not good. Research shows human detection rates for high-quality video deepfakes sit around one in four. That means three out of four times, a convincing deepfake walks right past your eyes. And voice cloning has gotten even harder to spot. According to A.I. researchers cited by Fortune, voice cloning has crossed what they call the indistinguishable threshold. A few seconds of someone's recorded speech is now enough to generate a clone that captures their rhythm, emotion, and even their breathing patterns. A few seconds. That's a voicemail greeting. That's a conference talk posted on YouTube. That's your voice, probably already out there.

Meanwhile, according to the A.C.F.E. and S.A.S. Anti-Fraud Technology Benchmarking Report, only seven percent of anti-fraud professionals say their organizations are firmly ready to detect A.I.-fueled fraud. Seven percent. And about eighty percent of companies have no established protocols or response plans for handling a deepfake-based attack. That's not a gap. That's a canyon. For investigators, it means the cases landing on your desk are about to change faster than most workflows can absorb. For everyone else, it means the companies holding your money, your medical records, your identity — most of them don't have a plan for this yet.


The Bottom Line

Now, detection tools do exist. Cryptographic signing standards like those from the Coalition for Content Provenance and Authenticity can verify whether media has been altered. Multimodal forensic tools can analyze video and audio simultaneously for synthetic artifacts. But about a third of enterprises already say traditional identity verification methods aren't reliable against sophisticated deepfake attacks. Just looking harder at pixels won't cut it anymore. The baseline is shifting toward comparing faces and voices against known, verified source material — not just trusting what shows up on a screen.
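
As a rough illustration of the signing idea behind provenance standards like C2PA, the sketch below signs a hash of a media file and later verifies that signature, so any edit made to the file after signing fails the check. This is a minimal Python concept demo only, assuming the third-party "cryptography" package, bare Ed25519 keys, and a placeholder file name; an actual C2PA implementation embeds signed manifests inside the media itself and relies on certificate-based trust rather than raw keys.

    # Minimal sketch of content-provenance signing: the publisher signs a hash
    # of the media file, and a verifier checks that signature against the
    # publisher's public key. Illustrative only; not the C2PA manifest format.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )


    def file_digest(path: str) -> bytes:
        """Hash the file in chunks so large videos never load fully into memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.digest()


    def sign_media(path: str, private_key: Ed25519PrivateKey) -> bytes:
        """Publisher side: sign the file's digest at capture or publish time."""
        return private_key.sign(file_digest(path))


    def verify_media(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
        """Verifier side: any change to the file after signing breaks the check."""
        try:
            public_key.verify(signature, file_digest(path))
            return True
        except InvalidSignature:
            return False


    if __name__ == "__main__":
        key = Ed25519PrivateKey.generate()
        # "clip.mp4" is a hypothetical placeholder path for this demo.
        sig = sign_media("clip.mp4", key)
        print("authentic:", verify_media("clip.mp4", sig, key.public_key()))

In this toy version, a deepfake swapped in after signing produces a different hash, so the verification step returns false even though the video looks perfect to a human reviewer.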

Most people assume the danger with deepfakes is that they'll fool a camera or an algorithm. The Axios attack proved the real target isn't technology. It's the moment a human being decides to trust what they see.

So — a hundred million weekly downloads, compromised through a faked video call. Deepfake fraud losses in the U.S. tripled in a single year to over a billion dollars. And fewer than one in ten organizations say they're ready for what's coming. Whether you're building a fraud case or just answering a video call from your bank, the question is the same now. Can you prove the person you're looking at is real? The full story's in the description if you want the deep dive.
