CaraComp
Podcast

Synthetic Identity Fraud Now Drives Most ID Scams — Why Facial Comparison Is the Only Check That Bites Back

This episode is based on our article: Synthetic Identity Fraud Now Drives Most ID Scams — Why Facial Comparison Is the Only Check That Bites Back

Read the full article →

Full Episode Transcript


A researcher with zero image editing skills sat down at a five-year-old computer and built a fake human being in seventy minutes. Not a throwaway bot account. A fully formed identity — with documents, employment history, and a face — polished enough to pass a job interview and clear standard identity verification. Every single checkbox said this person was real. But this person had never drawn a breath.


That should unsettle you. It unsettles me. If you've ever opened a bank account, applied for a loan, or onboarded a new hire at work, you've trusted the same systems that fake identity sailed right through. And if you're someone who already worries about your personal data floating around after breaches you didn't even know about — your Social Security number, your address, your name — this is exactly where that stolen data ends up. It gets stitched into a person who doesn't exist but looks, on paper, completely legitimate. According to multiple industry estimates, synthetic identity fraud now accounts for roughly eighty percent of all identity fraud in the United States. Today I want to walk you through how these fake identities actually get built, why our current systems are almost blind to them, and where facial comparison technology becomes the one check that can catch them. So what makes a synthetic identity so hard to spot?

A synthetic identity isn't a stolen identity. It's a Frankenstein. Fraudsters take a real Social Security number — maybe yours, leaked in a breach — and attach it to a made-up name, a fabricated work history, and a plausible address. Each piece on its own is either real or formatted correctly enough to pass. The frame is authentic. The canvas is authentic. But the painting inside is a forgery. And the systems we rely on — Know Your Customer checks, or K.Y.C. — were designed to confirm that an identity exists, not that it belongs to a living person. They check boxes. Does the S.S.N. exist? Does the address format match? Is the document valid? A synthetic identity passes every single box because it was built from real fragments specifically to do that.
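To make the "checkbox" problem concrete, here is a minimal sketch of how legacy field-level validation behaves. Everything in it is hypothetical — the function names, the naive format rules, and the example identity — but it shows the structural flaw: each check asks whether a field is plausible, and no check asks whether the fields belong to one living person.

```python
import re

def ssn_is_well_formed(ssn: str) -> bool:
    # Format check only: AAA-GG-SSSS with no all-zero groups.
    # A real, breached SSN passes this trivially.
    m = re.fullmatch(r"(\d{3})-(\d{2})-(\d{4})", ssn)
    return bool(m) and m.group(1) != "000" and m.group(2) != "00" and m.group(3) != "0000"

def address_is_well_formed(addr: str) -> bool:
    # Naive plausibility: a street number followed by a name.
    return bool(re.fullmatch(r"\d+ [A-Za-z .]+", addr))

def kyc_passes(identity: dict) -> bool:
    # Every check is independent; nothing cross-references the
    # fields against each other or against a live person.
    checks = [
        ssn_is_well_formed(identity["ssn"]),
        address_is_well_formed(identity["address"]),
        len(identity["name"].split()) >= 2,
    ]
    return all(checks)

# A synthetic identity: a real (breached) SSN stitched to a
# made-up name and a plausible address. Every box passes.
synthetic = {
    "ssn": "219-09-9999",
    "name": "Jordan Avery Cole",
    "address": "412 Birchwood Lane",
}
print(kyc_passes(synthetic))  # True — nothing ties the fields to a person
```

Real KYC stacks are far more elaborate than this, but the shape of the failure is the same: a composite of valid-looking fragments satisfies every independent check.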

And these aren't rushed operations. Fraudsters nurture these fake personas for months, sometimes longer, running them through credit-building cycles and layering in behavioral history. They apply for small lines of credit, make on-time payments, build a score. By the time the identity "activates" — meaning the fraudster maxes everything out and vanishes — the profile has a deeper digital footprint than some real people. For anyone reviewing that account or that job application, nothing looks wrong. That's the trap.



Why don't fraud detection systems catch this? Because there's no victim to sound the alarm. When someone steals your whole identity, you notice. You see charges you didn't make, and you call your bank. But a synthetic identity doesn't belong to anyone real. No one calls to report it. The fraud stays invisible until the account defaults or an internal audit catches the discrepancy. According to fraud model analyses, between eighty-five and ninety-four percent of synthetic identities are never flagged as high risk by existing detection systems. That means the vast majority slip through without a whisper. For investigators reviewing case files, that's a sobering number. For the rest of us, it means institutions we trust with our money likely have synthetic identities already enrolled in their systems right now.

The scale is staggering. According to the LexisNexis twenty-twenty-six cybercrime report, global fraud rates climbed eight percent, driven largely by synthetic identity fraud and increasingly sophisticated bots. That finding came from analyzing more than a hundred and sixteen billion online transactions. And the financial damage is accelerating. Industry projections put synthetic identity fraud losses at around twenty-three billion dollars in twenty-twenty-five, surging to fifty-eight point three billion by twenty-thirty. That's a hundred and fifty-three percent increase in five years. That's not a trend line. That's a cliff.

Now layer in deepfakes. Even when a system requires a live video check — where you hold up your face to a camera to prove you're the person on the document — attackers are injecting A.I.-generated video directly into the verification stream. According to iProov, injection attacks jumped seven hundred and eighty-three percent in twenty-twenty-four. Jumio reported an eighty-eight percent year-over-year rise in twenty-twenty-five. And this isn't just freelance criminals. According to ID.me's fraud operations team, they suspended more than a hundred and thirty digital wallets linked to North Korean threat actors. Wallet creation attempts from D.P.R.K.-linked actors tripled between March and November of twenty-twenty-five. One documented cell — just eight people — used A.I.-generated headshots and doctored documents to place operatives inside Western companies. They earned one point six four million dollars over three and a half years. A single pipeline they built created a hundred and thirty-five fake personas and targeted more than seventy-three thousand individuals. This isn't opportunistic fraud anymore. It's industrialized. And it means the person on the other side of a video interview might not be a person at all.


The Bottom Line

So where does facial comparison fit? It targets the one moment a synthetic identity is most vulnerable — the activation point. That's the instant someone presents themselves live, on camera, claiming to be the person on the document. Facial comparison technology does two things legacy K.Y.C. doesn't. First, it compares the static face on the identity document to the live face on the video. Second, it analyzes whether that live video is actually real — or whether it's been synthetically generated or digitally injected. Legacy identity verification tools simply don't analyze whether the media has been A.I.-generated or manipulated. That blind spot widens every time generative tools improve. Facial comparison is the light test that reveals the forgery traditional checks can't see.
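The document-to-live matching step can be sketched in a few lines. This assumes a face-embedding model has already mapped each face image to a numeric vector — the embeddings, the `same_face` helper, and the threshold below are all illustrative, and a real system would pair this with liveness and injection detection, which this sketch does not implement.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_face(doc_embedding, live_embedding, threshold=0.8):
    # Threshold is illustrative only; production systems calibrate
    # it against false-accept / false-reject rate targets.
    return cosine_similarity(doc_embedding, live_embedding) >= threshold

# Toy 4-dimensional embeddings (real models emit hundreds of dimensions).
doc = [0.21, 0.80, 0.55, 0.10]   # embedding of the ID-document photo
live = [0.25, 0.77, 0.58, 0.12]  # embedding from a live video frame
print(same_face(doc, live))      # True — the two faces match closely
```

The point of the sketch is the question it answers: not "is this document valid?" but "is the face presenting this document the same face printed on it?" — the one question a synthetic identity cannot answer with stolen fragments.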

Your K.Y.C. system might check fifteen boxes. Every box might pass. But not one of those boxes answers the only question that actually matters — is the face claiming this identity the same face you're about to trust?

So here's what to carry with you. Synthetic identities are built from real stolen data, aged for months, and designed to pass every traditional check. Current systems miss up to ninety-four percent of them because there's no real victim to trigger an alert. Facial comparison is the one tool that tests the thing fraudsters can't fake at the moment it matters — whether a real, unmanipulated human face matches the identity being claimed. Whether you're reviewing case files or just wondering if the systems protecting your bank account actually work, understanding this gap is how you stop feeling powerless about it. The written version goes deeper — link's below.

Ready for forensic-grade facial comparison?

2 free comparisons with full forensic reports. Results in seconds.

Run My First Search