
"Verified" Doesn't Mean Matched: Why 5–6% of Passed Identity Checks Still Hide the Wrong Face

"Verified" Doesn't Mean Matched: Why 5–6% of Passed Identity Checks Still Hide the Wrong Face

Here's a number that should stop you mid-scroll: according to industry data cited by Veriff, roughly 5–6% of all identity verification sessions involve fraudsters actively attempting to impersonate someone else. That means in any stack of 100 "verified" profiles sitting in your case file right now, five or six of them may have passed every automated check — KYC, digital wallet, platform age gate — while belonging to entirely the wrong person.

And the system gave them a green checkmark anyway.

TL;DR

A "verified" digital identity credential — including EU Digital Identity Wallet age checks — proves the document is authentic, not that the face presenting it matches the person it belongs to. That gap is where investigators routinely get it wrong.

What the EUDI Wallet Actually Proves

The European Commission recently published a use case manual explaining how age verification works within the EU Digital Identity Wallet framework. The technical design is genuinely impressive. A citizen can prove they are above a specific age threshold — say, 18 — by sharing a single cryptographically signed attribute from their wallet, without disclosing their exact birthdate, their address, or any other personal detail. The credential is tamper-proof, government-issued, and cryptographically sealed.

Read that again: cryptographically sealed. The document cannot be faked. The digital signature is real. The issuing authority is legitimate.

None of that tells you whose face is on the other side of the screen.
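To make that concrete, here is a minimal sketch of what a relying party in a selective-disclosure flow actually checks. Everything in it is a simplified stand-in rather than the real EUDI protocol: an HMAC plays the role of the wallet's public-key signature, and the field names are ours. Notice what the verifier receives: a signed boolean claim, and nothing that binds it to the face presenting it.

    import hashlib
    import hmac
    import json

    # Toy stand-in for the issuer's signing key. A real wallet uses
    # public-key signatures; an HMAC keeps this sketch self-contained.
    ISSUER_KEY = b"demo-issuer-key"

    def issue_age_attribute(over_18: bool) -> dict:
        """Issuer signs a single boolean claim. No birthdate is included."""
        claim = json.dumps({"age_over_18": over_18}, sort_keys=True)
        tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
        return {"claim": claim, "signature": tag}

    def verify_age_attribute(credential: dict) -> bool:
        """Relying party checks the issuer's signature over the claim.
        Note what is absent: no face, no liveness, no holder binding."""
        expected = hmac.new(ISSUER_KEY, credential["claim"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, credential["signature"]):
            return False
        return json.loads(credential["claim"])["age_over_18"]

    cred = issue_age_attribute(over_18=True)
    print(verify_age_attribute(cred))  # True: the credential is authentic.
    # Nothing above proves who is holding the device.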

This is the distinction that gets blurred constantly in investigative work, fraud analysis, and compliance reviews. The EUDI system — like virtually every KYC flow, platform age gate, or digital onboarding check — is designed to verify the credential. It answers the question: "Is this identity document authentic and properly issued?" It does not answer: "Is the person holding this device actually the person pictured in the document?" Those are two entirely different questions, and conflating them is the single most common mistake investigators make when processing verified profiles.
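The same gap shows up when you write a typical onboarding flow down as code. The sketch below is hypothetical: the function names and Document fields are ours, not any vendor's API. What matters is which of the two questions the returned "verified" actually answers.

    from dataclasses import dataclass

    @dataclass
    class Document:
        signature_valid: bool    # outcome of the cryptographic check
        photo_embedding: tuple   # face pictured on the credential

    def credential_is_authentic(doc: Document) -> bool:
        """Question 1: was this document legitimately issued?"""
        return doc.signature_valid

    def face_matches_holder(doc: Document, live_embedding: tuple) -> bool:
        """Question 2: is the presenter the person pictured? (stubbed)"""
        return doc.photo_embedding == live_embedding

    def onboard(doc: Document) -> str:
        # Many real flows end here. "verified" answers only Question 1;
        # face_matches_holder() is never called.
        return "verified" if credential_is_authentic(doc) else "rejected"

    stolen = Document(signature_valid=True, photo_embedding=(0.1, 0.9))
    print(onboard(stolen))  # "verified" -- and nobody compared faces.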


The Old Photo Problem Nobody Talks About

Even in systems that do include a facial comparison step — and not all of them do, or do it well — there's a structural problem baked into the process. According to technical analysis from Patronscan, identity verification systems typically work from a single reference image. One photo. Often the one from the original credential issuance — which could be five, seven, or ten years old.

A face changes. Significantly. Weight shifts. Hairlines move. Skin texture evolves. Lighting in the original ID photo may bear no resemblance to the selfie captured during onboarding. When you're working from a single reference image taken a decade ago, even a well-designed algorithm starts producing confidence scores that no longer mean what you think they mean. Accuracy drops — not catastrophically, but enough to matter when you're trying to determine whether the person on a verified account is actually your suspect.
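To see why the scores shift, consider a minimal sketch of an embedding-based comparison. The 512-dimensional vectors, the drift model, and the threshold below are all invented for illustration (this is not a real face model); what they show is how a threshold calibrated on fresh photo pairs quietly misreads a decade-old reference.

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    reference = rng.normal(size=512)        # embedding of the decade-old ID photo
    drift = rng.normal(size=512)            # stand-in for years of facial change
    same_person_today = reference + drift   # genuine holder, ten years later
    impostor = rng.normal(size=512)         # an unrelated face

    THRESHOLD = 0.80  # calibrated on fresh, same-day photo pairs

    for label, probe in [("genuine holder, aged", same_person_today),
                         ("impostor", impostor)]:
        score = cosine(reference, probe)
        verdict = "match" if score >= THRESHOLD else "no match"
        print(f"{label}: score {score:.2f} -> {verdict}")
    # The genuine pair scores near 0.71 here: below threshold, a silent miss.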

Now add the bias dimension. Academic analysis published on arXiv shows that darker-skinned individuals and women experience measurably higher false match rates in facial recognition systems. The dangerous part isn't just that errors happen — it's that the system delivers those errors with the same apparent confidence as a correct match. An investigator reading a high-confidence score has no way to know, from the score alone, whether they're looking at a reliable result or a biased false positive. The number looks the same either way.
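A short sketch of what that asymmetry means for reading a score. The false match rates below are invented, chosen only to mirror the shape of the finding: one displayed score, two very different real-world error rates.

    # HYPOTHETICAL false-match rates at a displayed score of 0.90.
    # The numbers are invented to illustrate the asymmetry, not measured.
    FMR_AT_SCORE_090 = {
        "demographic_a": 0.0001,  # 1 false match in 10,000 comparisons
        "demographic_b": 0.0020,  # 1 in 500 -- 20x worse, same score shown
    }

    def interpret(score: float, group: str) -> str:
        odds = round(1 / FMR_AT_SCORE_090[group])
        return f"displayed score {score:.2f}; false-match odds about 1 in {odds:,}"

    for group in FMR_AT_SCORE_090:
        print(group, "->", interpret(0.90, group))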

5–6%
of all identity verification sessions involve active impersonation attempts — even after automated checks pass
Source: Veriff

Age Estimation Is Not Age Verification (And the Gap Is Enormous)

Here's a distinction that trips up even experienced professionals. Many systems marketed as "age verification" are actually performing age estimation — and those are not the same thing. Not remotely.

Age verification means checking a credential: a government-issued document, a cryptographically signed attribute, a database record. Age estimation means looking at a face and guessing. According to iProov, NIST testing of age estimation tools found that to keep false positive rates acceptably low, systems often need to set their "challenge age" — the threshold they're testing against — somewhere between 29 and 33 years old when verifying an 18-year-old claim. In practice, that means a system "verifying" that someone is over 18 is really demanding that the estimated age clear roughly 30, a buffer as wide as 15 years above the legal threshold.

Fifteen years. On an age claim.
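Here is that decision rule as a sketch. The challenge-age range comes from the testing iProov describes; the sample estimated ages are illustrative.

    LEGAL_THRESHOLD = 18
    CHALLENGE_AGE = 31  # reportedly set between 29 and 33 for an 18+ claim

    def passes_age_gate(estimated_age: float) -> bool:
        """Estimation-based gate: the ESTIMATED age must clear the
        challenge age, not the legal threshold itself."""
        return estimated_age >= CHALLENGE_AGE

    # A 30-year-old whose face estimates young is rejected; a 17-year-old
    # whose face estimates at 32 is approved. Both outcomes carry the
    # same "age verified" label downstream.
    for est in (17, 25, 30, 32):
        verdict = "age verified" if passes_age_gate(est) else "rejected"
        print(f"estimated age {est}: {verdict} (legal threshold {LEGAL_THRESHOLD})")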

An investigator who receives a report stamped "age verified" and doesn't know whether that came from a cryptographic credential check or an estimation algorithm is working with information they can't properly evaluate. The label looks identical. The underlying reliability is completely different.

"Trust in AI-powered face recognition is one of the main reasons for wrongful detentions by law enforcement — each false result should be regarded not as a source of truth, but as an expert opinion that may still be wrong." — Regula Forensics

The Teller Who Trusted the Hologram

Think about how a bank fraud scenario usually unfolds. A customer walks in and presents a credit card — EMV chip intact, hologram gleaming, cryptographic signature perfect. The fraud scanner approves it. The teller reads "authentic credential" on the screen and processes the transaction. Thirty minutes later, the real cardholder calls to report a stolen card.

The scanner was right. The card was authentic. Nobody tampered with the chip. The fraud scanner answered its question correctly — and the teller asked the wrong question.

This is exactly what happens when an investigator treats a verified KYC profile as proof of identity. The credential is real. It passed. The question it answered is "was this document legitimately issued?" — not "is the person holding it the person it was issued to?" Spoofing attacks make this worse. According to the arXiv research, advanced methods including video replay attacks and 3D mask presentations can defeat liveness detection in age verification systems, producing a false positive at the automated check stage while a completely different face walks away with a clean "verified" result. The investigator downstream never sees the attack. They just see the green checkmark.

Understanding where face recognition software genuinely falls short is what separates an investigator who gets it right from one who gets blindsided by a fraudulent profile that passed every automated gate.

What You Just Learned

  • 🧠 Credential authenticity ≠ facial match — A verified digital credential (EUDI Wallet, KYC, platform check) proves the document is real, not that the right person is presenting it.
  • 🔬 Single reference images degrade accuracy — One photo, potentially 5–10 years old, is not enough for reliable facial comparison — and the confidence score won't tell you when it's failing.
  • ⚠️ Age estimation and age verification are different technologies — Systems using estimation can carry a 15+ year margin of error while displaying the same "verified" label as a cryptographic check.
  • 💡 5–6% of sessions pass automated checks while hiding active fraud — That fraud is invisible without independent facial comparison. The automation is doing its job; the investigator's job is what comes next.

The Investigator's Actual Job Starts After "Verified"

None of this means the EUDI Wallet is flawed technology. It isn't. The cryptographic architecture is sound, the selective disclosure design is elegant, and for what it was built to do — prove a credential attribute without oversharing personal data — it performs well. The mistake isn't in the system. It's in how professionals interpret the system's output.

When a verified profile lands in your case file, the automated check has completed one task: confirming the credential hasn't been tampered with and was properly issued. Your task — the one that determines whether you've actually identified the right person — is independent facial comparison. That means pulling the reference image from the credential and comparing it against your suspect photos with discipline: accounting for image age, lighting differences, known algorithmic bias on the demographic profile in question, and the possibility that what you're looking at passed automated liveness detection despite being a sophisticated spoof.
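One way to impose that discipline is to record those factors explicitly for every comparison you run. The structure below is a sketch of our own devising, not a standard; every field name is hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class ComparisonReview:
        """Per-comparison record of the factors discussed above."""
        reference_age_years: float     # how old is the credential photo?
        lighting_comparable: bool      # ID photo vs. suspect capture conditions
        demographic_fmr_checked: bool  # known bias reviewed for this profile?
        spoof_plausible: bool          # could liveness have been defeated?
        notes: list = field(default_factory=list)

        def caveats(self) -> list:
            out = []
            if self.reference_age_years > 5:
                out.append("reference photo older than 5 years: expect score drift")
            if not self.lighting_comparable:
                out.append("lighting mismatch: discount the raw score")
            if not self.demographic_fmr_checked:
                out.append("demographic false-match rates not yet reviewed")
            if self.spoof_plausible:
                out.append("automated liveness pass may hide a spoof: verify independently")
            return out

    review = ComparisonReview(reference_age_years=8.0, lighting_comparable=False,
                              demographic_fmr_checked=False, spoof_plausible=True)
    for caveat in review.caveats():
        print("CAVEAT:", caveat)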

At CaraComp, we see this confusion constantly when teams first start working with facial comparison data at scale. The word "verified" carries enormous psychological weight. It feels like a conclusion. It's actually just the starting line.

Key Takeaway

A "verified" digital identity badge — from a KYC flow, an EUDI Wallet age check, or any automated onboarding system — confirms that a credential is authentic. It does not confirm that the face presenting the credential matches the person the credential belongs to. Those are separate questions, and only one of them requires a human with trained eyes and a proper comparison workflow to answer correctly.

So here's the question worth sitting with: the next time you pull a "verified" profile from a platform, a crypto exchange, or a digital wallet system, what does your process actually look like for checking whether the face on that credential is the face of the person in your case? Not the algorithm's job. Yours.

Because somewhere in that stack of verified profiles, statistically speaking, five or six of them are lying. And right now, they look exactly like the ones telling the truth.

Ready to try AI-powered facial recognition?

Match faces in seconds with CaraComp. Free 7-day trial.

Start Free Trial