Prove You're 18 Without Showing Who You Are: The Cryptography Big Tech Won't Use
Here's a question most people get wrong: what does an age verification system actually need to know?
Most of us assume the answer is "everything" — your name, your date of birth, a government document number, maybe a scan of your face that gets stored somewhere on a server you'll never see. That assumption is understandable. It's also wrong. A well-designed age verification system needs to answer exactly one question: is this person above a threshold? Full stop. The answer is a binary yes or no — and the cryptography to deliver that answer, without any supporting identity data, has existed for decades. We just haven't been using it.
Age verification and identity verification are two completely different questions — and zero-knowledge proofs can answer the first one without ever touching the second.
Two Questions That Got Bundled Together by Accident
At CaraComp, we spend a lot of time thinking about the difference between facial comparison and facial recognition. They sound similar. They're fundamentally different operations. Comparison asks: do these two images show the same person? Recognition asks: who is this person? Same technology, wildly different implications — and the distinction matters enormously for privacy.
Age verification has the exact same problem, just without anyone noticing. "Prove you're over 18" and "prove who you are" have been treated as the same question because most deployed systems answer both simultaneously. You upload a passport. The system reads your name, your birthdate, your document number, your photo. It confirms you're 23. But to get to that one bit of useful information — yes, above threshold — it collected about forty other bits it had no business collecting.
This happened because it was easy, not because it was necessary. Building a document verification pipeline is a solved engineering problem. Building a system that confirms age without touching identity requires something more elegant: zero-knowledge proofs.
What a Zero-Knowledge Proof Actually Does
The name sounds intimidating. The concept, once you see it, is almost beautiful in its simplicity.
A zero-knowledge proof lets one party — call them the prover — convince another party — the verifier — that a statement is true, without revealing anything about why it's true. The verifier walks away knowing the statement checks out. They learn nothing else. Not the underlying data. Not how the proof was constructed. Just: valid.
Here's how the mechanics work in an age verification context. The system takes two inputs: a witness (your secret input — say, your actual birthdate, held privately on your device or in a trusted credential) and a public statement ("this person is over 18 as of today's date"). These get fed into what cryptographers call an arithmetic circuit — essentially a mathematical function that outputs "true" if the conditions hold. The circuit checks whether your birthdate, when subtracted from today's date, produces a number above the threshold. If it does, the system generates a cryptographic proof of that fact. The proof is then sent to the verifier.
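To make the "arithmetic circuit" idea concrete, here is the relation itself written as ordinary Python. This is a sketch of the predicate only: in a real ZKP system this logic is compiled into circuit constraints rather than executed directly, and the function name is illustrative, not from any particular library.

```python
from datetime import date

def age_statement_holds(birthdate: date, today: date,
                        threshold_years: int = 18) -> bool:
    """The relation an age-verification circuit would encode:
    private witness (birthdate) + public inputs (today, threshold) -> one bit."""
    # Subtract a year if this year's birthday hasn't happened yet.
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    age = today.year - birthdate.year - (0 if had_birthday else 1)
    return age >= threshold_years
```

The proof system's entire job is to let a verifier confirm this function returned true without ever seeing the birthdate argument.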
The verifier can confirm the proof is valid. But they cannot reverse-engineer it. They cannot extract your birthdate. They cannot learn your name. They receive, in effect, a mathematically unforgeable certificate that says: "Someone ran the calculation. It passed." That's it. That's the whole thing.
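The prover/verifier structure described above can be seen in the classic Schnorr protocol, one of the simplest genuine zero-knowledge proofs: it convinces a verifier that the prover knows a secret exponent x behind a public value y = g^x, while revealing nothing about x. It is not an age range proof (production systems prove statements about circuits like the one above), but the commitment/challenge/response shape is the same. A minimal Python sketch with deliberately toy parameters, not for production use:

```python
import hashlib
import secrets

# Toy public parameters. Real deployments use standardized, much larger
# groups; these values are for illustration only.
P = 2**127 - 1          # a Mersenne prime
G = 3                   # toy generator

def keygen():
    """Prover's secret x and public value y = G^x mod P."""
    x = secrets.randbelow(P - 1)
    return x, pow(G, x, P)

def fiat_shamir_challenge(t, y):
    """Non-interactive challenge derived by hashing the transcript."""
    digest = hashlib.sha256(f"{t}:{y}".encode()).digest()
    return int.from_bytes(digest, "big") % (P - 1)

def prove(x, y):
    """Prove knowledge of x with y = G^x, revealing nothing about x."""
    k = secrets.randbelow(P - 1)        # one-time random nonce
    t = pow(G, k, P)                    # commitment
    c = fiat_shamir_challenge(t, y)     # challenge (Fiat-Shamir heuristic)
    r = (k - c * x) % (P - 1)           # response
    return t, r

def verify(y, proof):
    """Verifier checks the proof against the public value y alone."""
    t, r = proof
    c = fiat_shamir_challenge(t, y)
    # Valid iff G^r * y^c == G^(k - c*x) * G^(x*c) == G^k == t  (mod P).
    return pow(G, r, P) * pow(y, c, P) % P == t

x, y = keygen()
proof = prove(x, y)
assert verify(y, proof)               # the verifier learns only: "valid"
```

The verifier's check passes or fails; nothing in the proof lets them recover x. That one-bit outcome is exactly the shape an age verifier needs.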
Only 17% of people trust the institutions holding their identity information — and 75% say they're more worried about personal data security now than they were five years ago, according to Digital Information World. Sit with those numbers for a second. That's not paranoia. That's rational risk assessment from people who've lived through enough data breaches to know what a honeypot looks like.
The Honeypot Problem (And Why Your Passport Shouldn't Be In It)
Every time a platform asks you to upload a government-issued ID to verify your age, something subtle and problematic happens. Your data joins a pool. The platform now holds — somewhere on a server — a collection of passports, driver's licenses, and identity documents from potentially millions of users. That collection doesn't just confirm ages. It is the ages, plus names, plus addresses, plus document numbers.
Security professionals call this a honeypot: a concentrated store of high-value data that becomes an irresistible target. The bigger it grows, the more attractive it becomes to attackers. One breach doesn't expose one person's identity. It exposes everyone who ever verified their age on that platform.
Zero-knowledge proofs break this dynamic structurally. The verifier never receives the underlying data — so there's nothing to breach. You can't steal what was never stored. The EU Age Verification Blueprint describes this as distributing trust to the edge — keeping control with the user and eliminating the central repository that makes mass breaches possible.
"Strict age verification as commonly practiced today, by requiring hard identifiers, amounts to verification of one's identity — but it does not need to be that way, as age verification can be done without requiring a user to share any other information about themselves outside of their age." — Digital Information World
Think of it this way. Traditional age verification is like a club bouncer who makes a photocopy of your ID and files it in a cabinet by the door — every night, the cabinet gets fuller, and sooner or later someone breaks in. A zero-knowledge system is the bouncer who checks your ID, confirms you're old enough, hands it back, and remembers nothing. The entry still happened. The age was still verified. But there's no cabinet. Nothing to steal.
Why People Get This Wrong — And Why It's Not Their Fault
The misconception is almost perfectly engineered to persist. When a government announces "proven age" requirements for online platforms, that phrasing sounds identical to "prove your identity." The word "prove" does a lot of heavy lifting. Regulators talk about "strong age checks." Platforms respond with document upload flows. The entire public conversation assumes that more stringent age verification means more identity data collection — because every system anyone has actually used works exactly that way.
Nobody has experienced the alternative. So it feels hypothetical, or theoretical, or maybe a bit too clever to be real. That's a reasonable heuristic when the only examples in front of you point in the same direction. The problem is that the direction is wrong.
The Electronic Frontier Foundation raises a related point worth sitting with: even well-designed cryptographic systems carry socio-technical risks that go beyond the math. IP addresses, device fingerprints, and behavioral patterns can still create linkability between sessions — meaning the cryptography can be sound while the surrounding system still leaks context. This is a real limitation. It doesn't invalidate the approach; it argues for thinking carefully about the full implementation stack, not just the proof mechanism.
And the Brave research team makes an equally honest observation: many protocols described as "zero-knowledge" in technical literature don't actually meet rigorous formal definitions when deployed. Some leak privacy. Some can be forged. The gap between "this is theoretically elegant" and "this is correctly implemented at production scale" is where most real-world failures live. The math is trustworthy. The engineering discipline required to implement it faithfully is harder than it looks.
What You Just Learned
- 🧠 Two different questions — "Is this person over 18?" and "Who is this person?" are separate problems that current systems answer simultaneously, unnecessarily
- 🔬 How ZKPs work — A witness (private data) feeds into an arithmetic circuit with a public statement, generating a cryptographic proof that the verifier can confirm without extracting any underlying data
- 🏛️ The honeypot problem — Document-based age verification creates concentrated stores of identity data that become high-value breach targets; ZKPs eliminate the repository
- ⚠️ The implementation gap — The cryptography is sound; real-world deployment requires careful engineering to avoid privacy leakage through surrounding systems like device fingerprints and IP linkability
The Trust Problem That Cryptography Alone Can't Solve
Here's the part that doesn't get enough attention. Zero-knowledge proofs distribute trust away from the verifying platform — but they concentrate it somewhere else: the credential issuer. Someone has to issue the cryptographic credential that contains your age in the first place. That issuer sees your identity. That issuer becomes the new target.
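To see where the trust actually sits, consider a stripped-down credential flow. The sketch below uses an HMAC as a stand-in for a real digital signature (an assumption made for brevity; real systems use public-key signatures, so the verifier never holds a signing key). The point is what each party sees: the issuer checked identity documents before attesting, but the credential itself carries only the one-bit claim.

```python
import hashlib
import hmac
import json
import secrets

# Held by the issuer. In this HMAC simplification the verifier shares it;
# a real scheme would use a public-key signature instead.
ISSUER_KEY = secrets.token_bytes(32)

def issue_credential(claim: dict) -> dict:
    """Issuer attests to a claim. The issuer saw identity documents
    to justify signing — that is where the trust concentrates."""
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_credential(cred: dict) -> bool:
    """Verifier checks the attestation and learns only the claim itself."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["tag"])

cred = issue_credential({"over_18": True})   # no name, no birthdate, no ID number
assert verify_credential(cred)
```

Note what this does not fix: the issuer can still log every credential it issues, which is why constructions that reduce issuer linkability matter.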
The New America policy brief on privacy-preserving age verification explores this carefully — combining group signatures with ZKPs to further reduce what even the issuer can link back to individual users. It's a meaningful step. It's also a reminder that privacy architecture is a system design problem, not a cryptography problem. You can have mathematically perfect proofs running on a leaky infrastructure and still end up worse off than you started.
Regulatory momentum is accelerating faster than implementation maturity. Discord's recent shift to teen-by-default settings, legislative pressure across the UK, EU, and US, and parental advocacy groups pushing for stricter platform accountability — all of it is creating urgency for age verification at scale. The question is whether that urgency gets channeled toward systems that actually minimize data collection, or toward the path of least engineering resistance: more document uploads, more stored IDs, more honeypots.
Age verification and identity verification are not the same thing — they never were. A system that confirms "over 18" without storing your name, document number, or a reusable faceprint isn't a privacy compromise. It's a privacy success. The technology exists. What's missing is the will to deploy it properly.
The same distinction drives our thinking at CaraComp about facial comparison versus facial recognition — one answers a narrow question about a specific transaction, the other builds a persistent identity profile. The technology looks identical from the outside. The data architecture is completely different. Knowing which question you're actually asking turns out to be most of the work.
So here's the question worth sitting with: if a platform could confirm "21+" without storing your ID details — no document number, no faceprint on file, no data that persists beyond the moment of verification — would you accept that as a reasonable exchange? Or does the involvement of your face at any point feel like a line crossed, regardless of what gets stored?
That gut reaction is worth examining. Because the technology to make it genuinely safe already exists. The question is whether we'll demand it — or keep handing our passports to bouncers with filing cabinets.
