Synthetic Identity Fraud Now Drives Most ID Scams — Why Facial Comparison Is the Only Check That Bites Back
A researcher with no image manipulation experience built a job-interview-ready synthetic identity in 70 minutes. Not on a high-end workstation. On a five-year-old computer. When the finished identity went through KYC verification, it passed.
If you were the HR manager reviewing that applicant's file — and every document checked out — you'd probably schedule the interview.
Synthetic identities are fabricated from real stolen fragments, built to pass every standard verification checkpoint — and facial comparison at the moment of live interaction is now the only reliable tool that can catch them before the damage is done.
This is the myth that's quietly becoming a liability for anyone doing ID verification, fraud investigation, or OSINT work: "If the documents pass and someone's standing in front of me, they must be real." It used to be a reasonable assumption. It is no longer a safe one.
What a Synthetic Identity Actually Is (Hint: It's Not Just a Fake ID)
Most people picture synthetic identity fraud as someone printing a bad driver's license at home. That mental model is about fifteen years out of date. Modern synthetic identities aren't crude forgeries — they're carefully engineered composites, stitched together from fragments that are individually real.
Here's how the construction actually works. A fraudster pulls a Social Security number from a data breach. They pair it with a name drawn from a public record database. They attach a real-format address, a plausible employment history, and — critically — a convincing face. Not a stolen face. A generated one. AI tools can now produce photorealistic headshots of people who have never existed, and those images pass the visual inspection that most onboarding workflows use to confirm "that looks like a real person."
But the really clever part isn't the document. It's the patience.
Fraudsters don't activate synthetic identities immediately. They nurture them. The fake persona applies for a secured credit card, makes small purchases, pays on time. Over six to twelve months, it builds a credit file. A payment history. A digital footprint. By the time the identity is "activated" — meaning the fraudster uses it to commit the actual fraud — it has exactly the kind of clean, established record that triggers no red flags whatsoever.
Security Boulevard's analysis of the LexisNexis 2026 Cybercrime Report — covering over 116 billion online transactions — found an 8% rise in global fraud rates, with synthetic identity fraud as a primary driver. The report's most striking benchmark was that 70-minute creation time. That doesn't require hacker-level skill. It's an afternoon project.
Why KYC Systems Are Designed to Miss This Entirely
Here's the thing nobody wants to say out loud: traditional KYC systems weren't designed to detect synthetic identities. They were designed to confirm that an identity exists. Those are completely different problems.
When a verification system runs a check, it's asking: Does this SSN appear in any database? Does this name match this address? Is this document format valid? Is there a credit file for this person? Synthetic identities — built from real fragments — answer yes to every single one of those questions. The SSN is real. The address format is legitimate. The document passes template verification. The credit file exists and is healthy.
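The gap described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model of an existence-only KYC check — the registries, field names, and threshold are all invented for illustration — showing why a composite of real fragments passes every box:

```python
# Hypothetical data stores standing in for breach-sourced and bureau records.
ssn_registry = {"123-45-6789"}                  # the SSN is real (stolen)
credit_files = {"123-45-6789": {"score": 710}}  # nurtured for months, looks healthy
valid_doc_templates = {"DL-2020"}               # the document format is legitimate

def existence_only_kyc(applicant):
    """Passes if every fragment *exists* -- never asks whether the fragments
    belong together, or whether the face belongs to a living person."""
    checks = [
        applicant["ssn"] in ssn_registry,                   # SSN appears in a database
        applicant["ssn"] in credit_files,                   # a credit file exists
        applicant["doc_template"] in valid_doc_templates,   # document format is valid
        credit_files.get(applicant["ssn"], {}).get("score", 0) > 650,  # file is healthy
    ]
    return all(checks)

synthetic = {
    "ssn": "123-45-6789",
    "doc_template": "DL-2020",
    "face": "ai_generated.png",   # never inspected by any of the checks above
}
print(existence_only_kyc(synthetic))  # True -- every box ticked
```

Note what the function never touches: the `face` field. Every signal it does consult is individually genuine, which is precisely the design the composite exploits.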
What makes this especially insidious is the absence of a victim. When someone's identity is stolen outright, the real person eventually notices fraudulent charges and files a report. That report triggers alerts. Investigations begin. With synthetic identities, there's no real person being victimized. No one calls the bank to complain. The fraud remains completely invisible until the account defaults or an internal audit catches a discrepancy that shouldn't be there — often months or years later.
"Synthetic identities also don't trigger alerts associated with stolen credentials — because no 'victim' reports suspicious activity. The fraud remains invisible until the account defaults or an internal audit exposes discrepancies." — Security Boulevard, on the structural detection gap in legacy KYC systems
Estimates suggest between 85% and 94% of synthetic identities are never flagged as high risk by existing fraud models. Some projections put synthetic fraud at nearly 80% of all identity fraud currently occurring. Read that number again. It's not a footnote problem.
That trajectory isn't steady growth; it's acceleration. The barrier to creating a synthetic identity has collapsed faster than detection technology has adapted, and the economics now favor the attacker by a wide margin.
The Forgery-in-a-Museum Problem
Think of a synthetic identity like a forgery hanging in a museum. The frame is authentic — that's the stolen SSN. The canvas is real — that's the legitimately formatted address and document structure. Even the paint is period-correct, because the credit history was built up over months with genuine transactions. A document examiner can authenticate the frame and the canvas. Both pass verification. The forgery sails right through.
The problem is that no one's holding the artwork up to the light. No one's checking whether the brushwork matches the claimed artist. In identity verification terms, that "light test" is facial comparison — specifically, the moment when you compare the static face on the document against the live face presenting itself in real time.
This is where things get more complicated, because fraudsters have anticipated that gap too. Deepfake injection attacks — where a manipulated video stream replaces the live camera feed during a video KYC session — increased 783% in 2024, according to liveness detection firm iProov, and Jumio reported a further 88% year-on-year rise in 2025, figures cited in ID.me's 2026 Identity Fraud Landscape Report. Fraud communities, including Russian-speaking groups, now offer deepfake-as-a-service products specifically optimized to bypass automated KYC liveness checks.
In other words: the live video you're reviewing might not be live at all.
When It Becomes State-Sponsored: The DPRK Benchmark
If you want to understand just how industrialized this has become, consider what North Korean state-backed groups have demonstrated. One documented operation involved an eight-person cell that used AI-generated headshots, doctored identity documents, and fabricated employment histories to place operatives inside Western technology companies — earning $1.64 million over 3.5 years before being discovered. A single synthetic identity pipeline in a related operation created 135 distinct personas and targeted over 73,000 individuals.
This is not opportunistic fraud. It's a repeatable, scalable manufacturing process for fake people.
The ID.me fraud operations team suspended more than 130 wallets linked to potential DPRK threat actors, with creation attempts from those actors increasing 200% between March and November 2025 alone. For investigators doing background checks or employment fraud cases: the threat profile has shifted. You're not just looking for a creative individual who faked a resume. You're potentially looking at state-sponsored operators who have refined this process across thousands of iterations.
What You Just Learned
- 🧠 Synthetic identities are composites, not forgeries — they're built from real fragments specifically to pass the checks that detect fake documents
- 🔬 KYC systems verify existence, not reality — they confirm that an identity appears in databases, not that the identity corresponds to a living person
- 🎭 No victim means no alert — synthetic fraud can sit invisible inside an institution for months before any discrepancy surfaces
- 💡 Live video is no longer proof of a live person — deepfake injection attacks replace the camera feed in real time during KYC sessions
Where Facial Comparison Actually Breaks the Illusion
The gap that facial comparison fills is specific and important to understand. Every synthetic identity has one moment of maximum vulnerability: activation. That's when the fabricated persona has to show up — in a video call, at an onboarding session, during a live verification check — and claim to be the face on the document.
A careful facial comparison workflow asks two questions simultaneously: Does this face match the face on the document? And is this face a real face presented by a real person, or a generated or injected image? Those are two separate technical problems, and both matter. At CaraComp, this dual-layer approach — matching identity claim to live assertion while flagging AI-generated imagery — is what separates a verification that checks boxes from one that actually catches fabricated people.
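The two-question structure can be sketched as a pair of independent gates that must both pass. This is a toy illustration, not CaraComp's actual implementation — the scoring functions are placeholders (a real system would compare face embeddings and run generated-image forensics), and the thresholds are invented:

```python
def face_match_score(doc_face: str, live_face: str) -> float:
    """Placeholder: a real system would compare embeddings of the two faces (0..1)."""
    return 0.97 if doc_face == live_face else 0.10

def synthetic_likelihood(live_face: str) -> float:
    """Placeholder: a real system would score GAN/diffusion artifacts and
    injection signals in the live stream (0..1)."""
    return 0.92 if "ai_generated" in live_face else 0.03

def verify(doc_face, live_face, match_threshold=0.85, synth_threshold=0.5):
    match_ok = face_match_score(doc_face, live_face) >= match_threshold
    genuine_ok = synthetic_likelihood(live_face) < synth_threshold
    # Passing the match alone is not enough: a deepfake injected into the
    # camera feed can match the document perfectly, because both were forged
    # from the same generated face.
    return match_ok and genuine_ok

print(verify("ai_generated.png", "ai_generated.png"))  # False: perfect match, flagged synthetic
print(verify("real_selfie.png", "real_selfie.png"))    # True: matches and genuine
```

The first call is the instructive one: the match score is excellent, and a single-layer system would approve it. Only the second, independent question — is this face real? — catches it.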
Legacy identity verification tools, by and large, do not analyze whether the face in a video has been AI-generated or synthetically manipulated. That's the blind spot that deepfake injection attacks exploit. It's also why the 70-minute synthetic identity creation benchmark is so alarming — the hard part isn't making the fake person. It's getting the fake face past a liveness check. And that gap is closing fast.
Your KYC system may have checked fifteen boxes and passed every one — but none of those boxes answer the question that actually matters: Is the face claiming this identity right now the same face that will show up tomorrow, next month, and after the fraud has already happened? Facial comparison at the activation point is the only check that catches synthetic identities where they're most exposed.
Here's the aha-moment that should reframe how you think about any identity file you review: a synthetic identity doesn't break a verification system. It exploits how the system was designed to trust signals. Clean credit history? Trust signal. Valid document format? Trust signal. Matching address and SSN? Trust signals. The entire architecture of traditional KYC is built around accumulating trust signals — and synthetic identities are specifically engineered, over months of patient construction, to generate exactly those signals.
The only signal they can't manufacture is a real face that consistently belongs to a real person across multiple live interactions over time. Which means the investigators who'll catch these first are the ones asking not just "does this identity check out?" — but "does this face match every time we look?"
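The "does this face match every time we look?" question can also be sketched. This hypothetical snippet compares the face presented in each later live session against the face enrolled at activation, using cosine similarity over toy three-dimensional embeddings (real face embeddings would have hundreds of dimensions, and the threshold is an assumption):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def consistent_over_time(enrolled, sessions, threshold=0.9):
    """Flag the identity if any later live face drifts from the enrolled one."""
    return all(cosine(enrolled, face) >= threshold for face in sessions)

enrolled = [0.9, 0.1, 0.4]            # embedding captured at onboarding
sessions = [
    [0.88, 0.12, 0.41],               # same person returning
    [0.10, 0.90, 0.30],               # a different face presenting the same identity
]
print(consistent_over_time(enrolled, sessions))  # False -- the second session drifts
```

A single-session check would have passed both interactions individually; it's the comparison across sessions that exposes the swap.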
Have you ever reviewed an ID, onboarding file, or applicant profile that "felt" off even though every individual document looked legitimate? What tipped you off that something didn't add up — even when the system said it did?
Ready for forensic-grade facial comparison?
2 free comparisons with full forensic reports. Results in seconds.
