
The Face Matched. The Voice Matched. The Person Never Existed.

In early 2024, a finance employee at Arup — a well-regarded U.K. engineering firm — transferred $25 million to fraudsters. He wasn't phished by a spoofed email or tricked by a badly written text message. He sat through a video call. He saw his CFO's face. He saw other senior colleagues. Every face on that screen looked exactly right. Every voice sounded exactly right. Every single one of them was a synthetic deepfake, and nobody caught it in real time.

TL;DR

Deepfake identity fraud is no longer a future threat — it's operational, scalable, and hitting agencies, enterprises, and investigators who still treat a "perfect match" as a final answer.

That case should have been a wake-up call. For some organizations, it was. For most, it became a headline they bookmarked and forgot. Here's the problem with that: the number of deepfake attacks hitting identity systems isn't leveling off. According to Entrust's 2025 Identity Fraud Report, deepfake attacks struck identity verification systems every five minutes in 2024 — while digital document forgeries surged 244% year-over-year. Every. Five. Minutes. That's not a trend line anymore. That's a drumbeat.


The Number Everyone Should Be Watching

Forget the AI hype cycle for a second. The number that actually matters — the one with teeth — comes from Gartner: by 2026, 30% of enterprises will no longer consider identity verification and authentication solutions reliable when used in isolation, specifically because of deepfakes. That's not a fringe prediction from a boutique research shop. That's Gartner telling the enterprise world — formally, publicly — that the single-point verification model is broken.

30%
of enterprises will consider identity verification solutions unreliable in isolation due to deepfakes by 2026
Source: Gartner, February 2024

Think about what that actually means. We're not talking about a niche vulnerability that only affects cryptocurrency exchanges or social media platforms. We're talking about the foundational trust layer that enterprise security, law enforcement workflows, tax agencies, and financial institutions all rely on — and it's cracking under pressure from fraud ecosystems that are now fully industrialized.

Because that's the other story here: this isn't some lone hacker running deepfake software in a basement. Fraudsters can now purchase complete persona kits — pre-packaged synthetic identities that include AI-generated faces, cloned voices, fabricated digital histories, and behavioral patterns trained specifically to pass common verification checks. It's plug-and-play fraud. The skill floor dropped to near zero. The attack volume, predictably, went through the roof.


When the IRS Becomes a Punchline

Here's where the availability heuristic kicks in — and it's working overtime right now. It's one thing to read a statistic about enterprise security. It's another to hear that the person claiming to be your IRS agent over video call might be a synthetic construct. That scenario is no longer hypothetical. Tax scammers are actively deploying deepfake AI programs to impersonate government officials, according to reporting from KGW. The psychological weight of that is significant: when the IRS — an institution people already have complicated feelings about — can be convincingly faked, it erodes a baseline trust that most security models assumed was stable.

And it's not just tax agencies. California Attorney General Rob Bonta issued a formal alert warning residents about sophisticated deepfake scams running on Meta's platforms, according to Westside Today. An Australian billionaire's law firm dragged Meta into litigation over deepfake scam advertisements, according to LawFuel. LeBron James filed a cease-and-desist over deepfake imagery circulating in his name. A Democratic politician was forced to cancel his re-election bid after an AI-generated deepfake of him went viral, per MSN reporting.

"The increasing scale and sophistication of deepfake attacks is forcing businesses to augment techniques that rely on facial biometrics with other processes — including device profiling — as fraudsters' perceived ability to evade detection prompts CISOs to reconsider single-factor verification entirely." InformationWeek, covering enterprise CISO response to deepfake fraud escalation

The pattern across all these incidents isn't random. It's a stress test playing out simultaneously across every institution that uses digital identity as a trust signal — and right now, a lot of those institutions are failing the test.



Detection Is Harder Than You Think — Much Harder

Here's something that should make every investigator uncomfortable: humans correctly identify high-quality deepfake videos roughly 24.5% of the time, according to research compiled by Bright Defense. That's not just worse than a coin flip; it's less than half as good. Worse, only 0.1% of participants across mixed modality tests could reliably spot fakes. Not 10%. Not 1%. Point one percent.

The natural response is to reach for automated detection tools — and those are improving. Some models have achieved 98% accuracy in controlled conditions. But (and this is a significant but) lab accuracy can drop by as much as 50% when those same tools encounter novel real-world deepfakes they weren't trained on. That 98% figure means 1 failure in every 50 attempts under ideal conditions. In a high-volume investigation environment, or in a fraud operation targeting a financial institution processing thousands of identity verifications daily, that failure rate produces real, expensive consequences.
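To put those percentages in operational terms, here's a back-of-the-envelope sketch in plain Python. The accuracy figures are the ones cited above; the daily volume and the share of synthetic attempts are hypothetical assumptions chosen purely for illustration:

    # Back-of-the-envelope: what "98% lab accuracy" means at volume.
    # Volume and attack-share figures below are hypothetical assumptions.
    daily_verifications = 5_000          # assumed institution volume
    deepfake_share = 0.01                # assume 1% of attempts are synthetic

    lab_accuracy = 0.98                  # detector accuracy in controlled tests
    field_accuracy = lab_accuracy * 0.5  # cited worst case: ~50% drop on novel fakes

    attacks_per_day = daily_verifications * deepfake_share
    missed_in_lab = attacks_per_day * (1 - lab_accuracy)
    missed_in_field = attacks_per_day * (1 - field_accuracy)

    print(f"Synthetic attempts per day: {attacks_per_day:.0f}")    # 50
    print(f"Missed under lab conditions: {missed_in_lab:.1f}")     # 1.0
    print(f"Missed against novel fakes:  {missed_in_field:.1f}")   # 25.5

Under those assumptions, a detector that misses 1 attack in 50 in the lab misses roughly half of them in the field: about 25 undetected synthetic identities a day from a single institution's queue.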

There's a deeper technical problem too. Keepnet Labs reports that injection attacks — where synthetic media is fed directly into verification systems, bypassing liveness detection entirely — grew 200% in 2023. Bad actors aren't trying to fool the camera anymore. They're feeding pre-rendered deepfakes directly into the data pipeline, upstream of where most detection runs. The liveness check you're relying on? It never saw the attack coming because the attack didn't go through the camera at all.
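One way to reason about that upstream gap is to make capture provenance an explicit precondition rather than an implicit assumption. The sketch below is illustrative only: the dataclass fields and the verify_device_attestation helper are hypothetical names standing in for whatever hardware-backed capture attestation a real pipeline would use. The point it demonstrates is structural: a liveness pass means nothing unless the system can first prove the frames came through a camera at all.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MediaSubmission:
        frames: bytes               # the submitted video/image payload
        attestation: Optional[str]  # device-signed capture proof, if any
        liveness_passed: bool       # verdict from the liveness engine

    def verify_device_attestation(token: str) -> bool:
        # Placeholder stand-in: a real system would validate a
        # hardware-backed signature over the capture session here.
        return token.startswith("signed:")

    def verify_identity_claim(sub: MediaSubmission) -> str:
        # Injection attacks feed pre-rendered media into the pipeline,
        # so check provenance BEFORE trusting any liveness result.
        if sub.attestation is None or not verify_device_attestation(sub.attestation):
            return "REJECT: no trusted capture provenance (possible injection)"
        if not sub.liveness_passed:
            return "REJECT: liveness check failed"
        return "PROCEED: route to document, metadata, and provenance checks"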

Why This Should Change How Investigators Work

  • A perfect facial match is a starting point, not a conclusion — facial comparison tells you two images look alike. It doesn't tell you whether either image is real.
  • Device metadata is now a verification layer, not a bonus — IP geolocation, device fingerprinting, call metadata, and behavioral signals all need to be cross-checked against the matched identity before the investigation advances.
  • Document forensics must run in parallel, not sequentially — if the face checks out but the supporting document shows signs of digital fabrication, that mismatch is your signal.
  • Source provenance matters now — where did this image or video come from? What platform? What chain of custody? Provenance is quickly becoming as important as the biometric comparison itself. (A sketch of how these layers combine follows this list.)
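
Here's what that layered posture can look like in code. This is a minimal sketch, not a real CaraComp interface: the field names, the 0.9 threshold, and the triage labels are all assumptions made for illustration. What it encodes is the rule from the list above: a strong biometric match opens the investigation, and every corroborating layer must independently agree before the match is treated as trustworthy.

    from dataclasses import dataclass

    @dataclass
    class VerificationLayers:
        face_match_score: float   # biometric similarity, 0.0 to 1.0
        document_authentic: bool  # result of parallel document forensics
        device_consistent: bool   # IP/device/metadata agree with claimed identity
        provenance_known: bool    # source platform and chain of custody established

    def triage(layers: VerificationLayers, match_threshold: float = 0.9) -> str:
        if layers.face_match_score < match_threshold:
            return "NO MATCH: stop here"
        # A strong match only opens the investigation; every failed
        # corroborating layer is a signal, not a formality.
        failures = [
            name
            for name, ok in [
                ("document forensics", layers.document_authentic),
                ("device/metadata consistency", layers.device_consistent),
                ("source provenance", layers.provenance_known),
            ]
            if not ok
        ]
        if failures:
            return "MATCH, BUT FLAGGED: investigate " + ", ".join(failures)
        return "MATCH CORROBORATED: every layer independently agrees"

    claim = VerificationLayers(
        face_match_score=0.97,     # a "perfect" biometric match...
        document_authentic=False,  # ...but the document shows fabrication
        device_consistent=True,
        provenance_known=False,
    )
    print(triage(claim))
    # MATCH, BUT FLAGGED: investigate document forensics, source provenance

The design choice that matters here is that the corroborating checks run unconditionally: a failed document or provenance layer downgrades even a perfect face match to a flag, and never silently passes.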

The Workflow Problem Nobody Wants to Talk About

Most organizations have detection somewhere in their stack. Very few have built workflows where facial comparison, document authentication, and behavioral context are cross-checked by default — not as an exception triggered by suspicion, but as the standard operating procedure for every identity claim. That gap is where deepfake fraud lives.

The ACFE has flagged government agency impersonation and synthetic identity fraud as among the most significant emerging fraud vectors — and both rely on that workflow gap. Fraudsters aren't outsmarting detection technology. They're finding the seams between disconnected verification steps and walking through them.

This is the actual opportunity, and it's substantial: not building a better deepfake detector in isolation, but rebuilding the verification chain so that no single check carries the full weight of trust. Facial comparison tools — including the kind CaraComp builds for investigators — are most effective when they're the first layer of a layered workflow, not the last word on identity. Fast, accurate biometric matching gives investigators a confident starting point. What comes after that match determines whether the investigation holds up.

Regula Forensics puts it plainly in their identity verification trend analysis: the fraud ecosystem has matured to the point where organizations need to plan for industrial-scale synthetic identity attacks as a baseline assumption, not an edge case. That means building for the threat, not reacting to individual incidents.

Key Takeaway

Any digital identity signal — face, voice, video, or official document — can now be synthetically fabricated at scale. Investigators who treat a "perfect match" as a final answer are working with a verification model that the fraud industry already knows how to beat. The second step after a match isn't optional anymore; it's the whole job.

There's also a secondary implication worth sitting with: as regulators and attorneys general start issuing specific deepfake scam alerts — not general AI warnings, but targeted consumer fraud advisories — the legal and evidentiary standards around digital identity are quietly shifting. Evidence that looked airtight in 2022 may face challenges it didn't face before. That's a problem for prosecutors, for civil litigators, and for anyone building a case on digital media that they can't definitively prove is authentic.

So here's the question that matters: When you get a "perfect" match on a suspect's photo or video today, what's your second step to rule out a deepfake — or are you still treating matching media as inherently trustworthy? Because in 2024, with attacks hitting every five minutes and injection fraud bypassing liveness checks entirely, "it looked right" is exactly what the Arup finance employee said too — right before he wired $25 million to people whose faces never existed.

Ready for forensic-grade facial comparison?

2 free comparisons with full forensic reports. Results in seconds.

Run My First Search