CaraComp
Podcast

Your "Biometric Age Check" Isn't Verifying Identity — And Defense Lawyers Know It


This episode is based on our article:

Your "Biometric Age Check" Isn't Verifying Identity — And Defense Lawyers Know It

Read the full article →

Full Episode Transcript


A ninety-five percent confidence score sounds bulletproof. But in a courtroom, that number doesn't mean what almost everyone assumes it means. And over half the people who encounter age verification online try to get around it anyway.



That last number comes from research into Australia's age verification rollout. According to reporting by All About Cookies, fifty-one percent of users who hit an age check attempted to bypass it. And the most common workaround wasn't fooling a face scan. It was changing an I.P. address, because many platforms rely on geo-blocking instead of biometric checks in the first place.

That gap between what we think these systems do and what they actually do — that's what today's episode is about. If you've ever had a website scan your face and tell you "age verified," you probably assumed the system knew who you were and how old you were. It almost certainly didn't. And if that unsettles you, good — because understanding the difference is how you stop feeling powerless about it.

There are actually three separate tests hiding behind the phrase "biometric age check." Most people think they're one thing. Regulators know they're not. Defense attorneys definitely know they're not. So what are these three tests, and why does confusing them put real cases at risk?

The first test is age estimation. An algorithm looks at a single face — your wrinkles, skin texture, facial landmarks — and produces a guess about how old you are. Not a fact. A guess. According to N.I.S.T.'s Face Analysis Technology Evaluation, current age estimation systems achieve a mean absolute error of about one point three years for teenagers between thirteen and seventeen. That sounds pretty good until you widen the age range. For people between six and seventy, that error jumps to two and a half years. So when a system says "age twenty-five," it could realistically mean anywhere from about twenty-two and a half to twenty-seven and a half. And because that figure is an average error, plenty of individual estimates miss by even more. The system isn't saying "this person is twenty-five." It's saying "this face looks like faces that are, on average, around twenty-five." For a platform trying to keep twelve-year-olds off an adult site, that range might be good enough. For a legal proceeding, it's a different story entirely.
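The arithmetic above can be sketched in a few lines. This is an illustrative sketch only — the MAE figures come from the N.I.S.T. evaluation discussed here, but the function name and interface are invented for this example, and a ±MAE band is an average-error band, not a guaranteed bound.

```python
def estimated_age_range(point_estimate: float, mae_years: float) -> tuple[float, float]:
    """Return the +/- MAE band around an age-estimation output.

    Note: MAE is an *average* absolute error, not a guarantee,
    so individual errors can be larger than this band suggests.
    """
    return (point_estimate - mae_years, point_estimate + mae_years)

# A reading of "25" with the wider-population MAE of ~2.5 years:
low, high = estimated_age_range(25.0, 2.5)
print(low, high)  # 22.5 27.5
```

For the narrow thirteen-to-seventeen group, plugging in the 1.3-year MAE instead gives a tighter band — which is exactly why the headline accuracy figure depends so heavily on which population you measure.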

The second test is liveness detection. This one answers a completely different question — not "how old are you?" but "are you a real, live human being right now?" It's designed to catch someone holding up a photograph or playing a deepfake video in front of the camera. Liveness detection confirms a real person is present. It says nothing about that person's age. And passive systems — the ones that don't ask you to blink or turn your head — can be fooled by a printed photo. That's not paranoid speculation. It's a documented limitation.
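The passive-versus-active distinction can be sketched as a challenge-response check. This is a hypothetical sketch, not any vendor's implementation: the challenge list and the idea of passing in a set of observed actions are invented here, and real systems use video-analysis models rather than simple set lookups.

```python
import random

# Hypothetical sketch of *active* liveness: the system issues a random
# challenge that a static photograph cannot satisfy.
CHALLENGES = ["blink", "turn_left", "smile"]

def run_active_liveness(observed_actions: set[str]) -> bool:
    """Pass only if the subject performs the randomly chosen challenge."""
    challenge = random.choice(CHALLENGES)
    return challenge in observed_actions

# A printed photo produces no actions at all, so it can never pass:
photo_attack_passes = run_active_liveness(observed_actions=set())
print(photo_attack_passes)  # False
```

A passive system skips the challenge entirely and judges a single frame, which is why a sufficiently good printed photo can slip through it — and note that even a passed check here says nothing whatsoever about age.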




The third test is facial recognition, or identity matching. This is the one that compares your face to a database and asks "who is this person?" It maps the geometry of your face — the distance between your eyes, the shape of your jawline — and compares those measurements against known images. A high match score means geometric similarity. It means "this face looks like that face." It contains zero information about age. None.
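The point that a match score carries no age information is easy to see in the shape of the computation itself. The sketch below is purely illustrative: real systems use learned embeddings with hundreds of dimensions, and these four-number vectors are invented for the example. One common similarity measure is cosine similarity.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two face feature vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

probe = [0.12, -0.40, 0.33, 0.08]      # hypothetical embedding of the scanned face
candidate = [0.10, -0.38, 0.35, 0.07]  # hypothetical embedding of a database photo

score = cosine_similarity(probe, candidate)
# The score only measures how alike the two geometries are.
# Nothing in this computation knows, uses, or outputs an age.
```

Every input is a geometric measurement and every operation compares geometry; there is no term anywhere for age, which is the defense attorney's whole point.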

So why do people mix these up? Because platforms bundle them together under one label. You see a message that says "age verification complete" after a face scan, and your brain fills in the rest. You assume the system checked your identity, confirmed you're live, and verified your age — all at once. The confidence scores make it worse. Seeing "ninety-five percent" next to "verified" feels definitive. But that ninety-five percent might only refer to one of those three tests. And the tests use completely different algorithms trained on completely different data. According to N.I.S.T.'s technical report, I.R. eighty-five twenty-five, age estimation algorithms are trained on photographs labeled with known ages. Facial recognition algorithms are trained on pairs of photographs labeled with identity. Different training data. Different mathematical problems. A geometric similarity score tells you nothing about whether someone is twenty-one or seventeen.

Now layer on the bias problem, and it gets harder to ignore. According to Yoti's own published research, age estimation systems show higher error rates for people with darker skin tones. That means the populations most likely to be misclassified — adults wrongly flagged as minors, or minors slipping through as adults — are the same populations where the system is least accurate. For someone relying on these results in a legal context, that's a credibility problem. For the rest of us, it means the face scan that felt routine might be making its worst mistakes on the people who can least afford them.


The Bottom Line

Regulators have already caught on. The U.K.'s Ofcom lists document verification, biometric matching against government I.D., open banking, and digital identity services as approved age verification methods. Self-declaration and facial age estimation alone don't make the cut. The European Commission initially declined to recommend facial age estimation for high-stakes contexts like gambling and adult content. The reason is straightforward — a probabilistic guess doesn't carry the certainty those situations demand. Some jurisdictions have moved toward requiring it anyway, because an imperfect gate beats no gate at all. But "better than nothing" and "legally sufficient" are two very different standards.

The real article's analogy puts it perfectly. Age estimation is a security guard eyeballing a crowd to guess who looks over twenty-one. Fast, scalable, useful as a rough filter. But when the law requires someone to prove their age — that's checking an I.D. at the door. Those are two completely different security gates. And a defense attorney doesn't need to disprove the technology. They just need to ask one question — "does a geometric similarity score contain any information about age?" The answer is no.

So here's what this comes down to. Age estimation guesses how old a face looks. Liveness detection checks that a real person is in front of the camera. Facial recognition asks who that person is. Three different questions, three different algorithms, three different answers — and none of them, alone, does what the phrase "biometric age verification" implies. Whether you're documenting evidence for a case or just wondering what happened when that website scanned your face, the distinction is the same. Knowing which question was actually answered is the difference between understanding and assumption. The written version goes deeper — link's below.
