
Everyone's Scanning Faces. Almost No One Is Doing It Right.

Discord didn't know its identity vendor was running 269 separate checks on users. The TSA calls its airport face scans "optional" — but most travelers have no idea they can say no. And the DHS app that immigration agents are using in the field to "verify" people's identities? It can't actually verify anything. Welcome to facial recognition in 2026: deployed everywhere, understood almost nowhere.

TL;DR

Mass-scale facial scanning is accelerating across government, aviation, and social platforms at precisely the moment when accuracy standards, consent frameworks, and professional usability remain fundamentally unresolved — and this week produced three concrete examples of exactly how badly that's going.

This is the week that should have made every serious investigator, attorney, or compliance officer stop and ask a very simple question: do I actually know what this tool is doing? Because the answer, in almost every high-profile deployment making news right now, is no. And that's not a minor detail — it's the whole problem.

The Discord Situation Is Wilder Than It Sounds

Let's start with the story that got the least mainstream attention but arguably matters most. Discord, the platform used by hundreds of millions of people for everything from gaming to professional communities, was using Persona Identities for age verification. Fine, normal, lots of platforms do this. Except researchers found something unexpected: Persona's front-end code was sitting openly accessible on a U.S. government-authorized endpoint — nearly 2,500 files, available without any exploit required.

What those files revealed is the part that should make your jaw drop. According to Fortune's reporting, Persona wasn't just checking ages. It was running 269 distinct verification checks — including screening users against watchlists, screening for "adverse media" across 14 different categories including terrorism and espionage, and assigning risk and similarity scores to user data. All of this on a platform where users believed they were doing one thing: proving they were old enough to be there.

269 distinct verification checks run by Persona Identities — including watchlist screening and "adverse media" across 14 categories — when Discord users thought they were simply verifying their age.
Source: Fortune, February 2026

Persona, for what it's worth, is partially funded by Peter Thiel's Founders Fund, and continues to provide identity services for OpenAI, Lime, and Roblox. Discord has since distanced itself from the vendor. But the damage to trust — and the question about what 269 checks actually produces in terms of accurate output — isn't something a press statement fixes.

Here's the part that matters professionally: when a system is running that many overlapping checks, layering biometric data against adverse media flags against risk scores, the opacity doesn't just raise privacy concerns. It raises evidentiary concerns. If you can't explain what a system did, why it flagged someone, and how confident it was in each step, that output is worthless in any formal proceeding. It's not evidence. It's a black box with a verdict attached.
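
To make the evidentiary point concrete, here's a minimal sketch of what explainable output would need to look like. This is a hypothetical structure in Python, not Persona's or any vendor's actual pipeline; the check names and fields are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CheckRecord:
    """One verification check, recorded so it can be explained later."""
    name: str          # hypothetical check name, e.g. "document_face_match"
    result: str        # "pass", "fail", or "flag"
    confidence: float  # 0.0 to 1.0, as reported by the check itself
    rationale: str     # human-readable reason for the result

@dataclass
class AuditLog:
    """An ordered, timestamped record of everything the system did."""
    subject_ref: str
    started_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    checks: list[CheckRecord] = field(default_factory=list)

    def record(self, name: str, result: str, confidence: float, rationale: str):
        self.checks.append(CheckRecord(name, result, confidence, rationale))

    def explain(self) -> str:
        """The step-by-step account a formal proceeding would demand."""
        return "\n".join(
            f"{i}. {c.name}: {c.result} (confidence {c.confidence:.2f}), {c.rationale}"
            for i, c in enumerate(self.checks, start=1))
```

The specific fields don't matter. What matters is that a system whose decisions can't be serialized into something like that explain() output is, by construction, a black box with a verdict attached.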

The TSA's "Optional" Scans and What Optional Actually Means

Meanwhile, at airports across the country, a similar consent fiction is playing out at scale. The TSA has deployed what it calls credential authentication technology — CAT-2 scanners — that capture your face in real time and compare it against your government-issued ID. The agency says participation is optional. But as McKenly Redmon of Southern Methodist University Dedman School of Law argues in analysis covered by The Regulatory Review, optional only matters if people actually know they can say no.

"Signage at airports frequently uses vague terms" and "travelers are likely unaware that they can opt out" of the biometric screenings, with the ability to decline often existing "only in theory." — McKenly Redmon, Southern Methodist University Dedman School of Law, via The Regulatory Review

The TSA maintains that photos are deleted after use (except in limited cases) and that the technology improves security while reducing bottlenecks. Those things might even be true. But the consent architecture — vague signage, no clear verbal opt-out prompt, social pressure of a security line moving behind you — isn't informed consent. It's passive enrollment. And the TSA is planning to expand this program significantly. Las Vegas is already running a second facial recognition trial, adding to a list of airports that grows longer every quarter.

The accuracy question matters here too. These systems are comparing your live face against an ID photo — a photo that might be years old, taken under different lighting, at a different weight. The throughput pressure of an airport security line does not lend itself to careful threshold calibration. Speed is the point. Precision is the casualty.
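
To see the trade-off, consider a toy one-to-one comparison where the system accepts a match if a similarity score clears a threshold. The scores below are invented for illustration and have nothing to do with any real CAT-2 deployment:

```python
# Illustrative only: synthetic similarity scores, not real system data.
# "Genuine" scores come from true matches; "impostor" scores from non-matches.
genuine_scores = [0.91, 0.84, 0.77, 0.88, 0.72, 0.95, 0.81, 0.69]
impostor_scores = [0.41, 0.58, 0.33, 0.62, 0.47, 0.55, 0.29, 0.66]

def error_rates(threshold: float) -> tuple[float, float]:
    """False reject rate (genuine pairs refused) and false accept rate
    (impostor pairs admitted) at a given decision threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# A throughput-optimized line wants a low threshold (few false rejects,
# nobody held up). A forensic comparison wants a high one (few false accepts).
for t in (0.60, 0.70, 0.80):
    frr, far = error_rates(t)
    print(f"threshold {t:.2f}: false rejects {frr:.0%}, false accepts {far:.0%}")
```

Lower the threshold and the line moves faster, because almost nobody is falsely rejected, but the false accept rate climbs. Forensic work makes the opposite choice and lives with the slower, stricter threshold.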


The DHS App That Can't Do What It Says

And then there's Mobile Fortify. This is the face-recognition application that DHS launched in spring 2025 for use by ICE and CBP agents during field stops and detentions — marketed explicitly as a tool to "determine or verify" the identities of individuals encountered during immigration operations. The rollout was directly tied to an executive order signed on President Trump's first day in office calling for a "total and efficient" crackdown on undocumented immigrants.

The problem, as WIRED's investigation documented, is that Mobile Fortify doesn't actually verify identities. It can match a face to a document photo. What it cannot do is confirm that the document itself is genuine — which is, you know, the part that matters when you're making consequential decisions about a person's liberty.

"Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive" identification. — Source quoted in WIRED's investigation, WIRED

This is not a minor technical caveat buried in a user manual. This is the central limitation of facial recognition as a technology — and it's one that every responsible vendor, researcher, and policy document acknowledges plainly. Identification and verification are different operations. Identification asks: who is this person? Verification asks: is this person who they claim to be? Mobile Fortify does the first, incompletely. It was deployed — without the historical scrutiny typically applied to privacy-impacting technologies, per WIRED's review of records — as if it does the second.
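
The distinction is easy to state in code. Here's a sketch using a generic embedding-distance model of face comparison; none of this reflects Mobile Fortify's actual implementation:

```python
import math

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe: list[float], claimed: list[float],
           threshold: float = 0.6) -> bool:
    """1:1 verification: is this face the person it claims to be?
    A yes/no answer against one enrolled template."""
    return distance(probe, claimed) <= threshold

def identify(probe: list[float],
             gallery: dict[str, list[float]]) -> list[tuple[str, float]]:
    """1:N identification: who might this face be?
    Returns ranked candidates, not a confirmation. The top hit can be
    a lookalike, and nothing here can tell you whether the document a
    template came from is genuine."""
    scored = [(name, distance(probe, emb)) for name, emb in gallery.items()]
    return sorted(scored, key=lambda pair: pair[1])
```

Note that even verify() only answers "does this face match this template." Neither function can authenticate the underlying document, which is exactly the gap WIRED identified.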

Why This Week's News Matters

  • Consent is becoming theater — Whether it's Discord's identity vendor running 269 checks users never agreed to, or TSA's technically optional face scans, passive enrollment is now the dominant deployment model across both government and commercial contexts.
  • Identification ≠ verification — The DHS Mobile Fortify situation makes explicit what professionals already know: matching a face to a photo is not the same as confirming identity. Any tool that blurs this line is not court-ready, period.
  • Opacity kills defensibility — A 269-check black box that assigns risk scores cannot be cross-examined. Neither can a field app deployed without documented accuracy thresholds. Scale is not a substitute for methodology.
  • The legal risk is fragmenting by jurisdiction — Illinois, Texas, and Washington have active biometric privacy statutes; federal law is stalled. What's permissible — and what's presentable in court — varies enormously depending on where your case sits.

What Professionals Actually Need From Facial Comparison

Look, nobody's saying crowd-scale facial scanning has zero legitimate use. Panasonic and JR East are trialing face-based ticket gates on Japan's Shinkansen network. Alaska Airlines just added identity verification to automated bag drop units in Seattle and Portland. These are real convenience improvements for real operational problems. Fine.

But there's a meaningful gap between "useful for moving passengers faster" and "usable as evidence in an investigation." That gap is where professional standards live. If you're working a case — insurance fraud, missing persons, threat assessment, due diligence — and you need facial comparison that will hold up under scrutiny, the requirements are completely different from what any of these mass-deployment systems provide.

What you need is tightly scoped comparison on controlled image sets, documented methodology, transparent confidence scoring, and output that can be explained step by step to a judge, an adjuster, or opposing counsel. This is exactly what professional face comparison is designed to deliver — not a population-level throughput metric, but a defensible answer about a specific image pair. The two use cases aren't competing. They're just different.

The aggregate accuracy argument — that systems processing tens of millions of faces daily must be reliable because of scale — sounds compelling until you do the math. "99.5% accurate" leaves a 0.5% error rate, and 0.5% of a million faces is roughly 5,000 false results. For a solo investigator presenting a single comparison, population-level statistics are completely irrelevant. What matters is whether this comparison, this image pair, this result is defensible on its own terms.
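
Here's that arithmetic spelled out, with invented but plausible numbers; none of these figures come from any vendor or agency:

```python
# Illustrative base-rate arithmetic, not vendor-reported figures.
population = 10_000_000      # faces processed per day at mass scale
false_positive_rate = 0.005  # the flip side of "99.5% accurate"
true_match_rate = 0.995      # sensitivity, assumed equally high

# Suppose 1 in 10,000 scans is a person actually on a watchlist.
prevalence = 1 / 10_000
actual_hits = population * prevalence                            # 1,000 people
true_alarms = actual_hits * true_match_rate                      # ~995
false_alarms = (population - actual_hits) * false_positive_rate  # ~49,995

precision = true_alarms / (true_alarms + false_alarms)
print(f"false alarms per day: {false_alarms:,.0f}")
print(f"chance a given alarm is real: {precision:.1%}")          # about 2%
```

At that base rate, roughly 98% of alarms are false even though the system is "99.5% accurate." That is why population-scale throughput statistics lend no credibility to any individual comparison.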

Key Takeaway

Mass-scale facial scanning optimizes for speed and throughput. Professional investigation requires accuracy and defensibility. These are not the same thing, and this week's news — Discord's 269-check opacity, TSA's consent theater, DHS's verification-that-isn't — is a concrete demonstration of what happens when that distinction gets ignored at government scale.


The real tell in all of this is the word "verify." DHS used it to describe Mobile Fortify. Discord's users assumed it applied to Persona. TSA implies it every time a traveler shuffles through a CAT-2 scanner believing the machine has confirmed something meaningful. In each case, the technology was doing something narrower, less certain, and far more contingent than the word suggests. That gap between what "verify" promises and what any current facial recognition system can actually deliver — that's not a bug in these deployments. That's a design choice. And someone, eventually, is going to have to answer for it in court.
