
Facial Tech Is Expanding Fast. The Guardrails Aren't Keeping Up.

Nearly 2,500 files. Sitting on a U.S. government-authorized Google Cloud endpoint. Fully accessible. No exploit required. That's not a data breach — that's just negligence dressed up as a product.

TL;DR

Three stories this week — an exposed identity verification platform, coercive TSA face scans, and an immigration app that can't actually verify anyone — all point to the same structural failure: facial systems are scaling fast, but the ability to explain, defend, and audit them isn't.

This was not a quiet week for facial technology. Between an identity platform's scoring logic turning up on the open internet, a law review piece laying out exactly how TSA's "optional" face scans aren't optional in any meaningful sense, and a Wired investigation into an immigration app that DHS marketed as an identity verifier but that technically can't verify a thing, the industry took a hard look in a very unflattering mirror. If you work with facial comparison professionally, you should be paying attention. Not because the sky is falling, but because the standard of proof just quietly shifted, and a lot of practitioners haven't noticed yet.


Act One: The Exposed Logic Problem

Start with the Persona story, because it's the most viscerally alarming. Persona Identities — partially funded by Peter Thiel's Founders Fund, and used by Discord, OpenAI, Roblox, and Lime among others — had its front-end verification code sitting on a publicly accessible endpoint. Researchers found it without writing a single line of exploit code. According to Fortune's reporting, those nearly 2,500 files revealed that Persona conducts 269 distinct verification checks — including facial comparisons against watchlists, screening against lists of politically exposed persons, and adverse media checks across 14 categories including terrorism and espionage. It then assigns risk and similarity scores.

All of that. On the open internet.

269
distinct verification checks performed by Persona Identities — including facial comparisons against watchlists and politically exposed persons lists — found exposed in nearly 2,500 publicly accessible files on a U.S. government-authorized endpoint
Source: Fortune, February 2026

There are two separate harms here, and I want to be precise about both. The obvious one: bad actors now know exactly what triggers a flag and can engineer around it. That's a fraud enablement problem. But the less-discussed harm hits investigators and legal teams directly. When a platform's scoring methodology is publicly known and accessible, opposing counsel can argue — with a straight face — that the methodology was potentially gamed before your client ever ran a check. Chain-of-custody integrity doesn't just apply to physical evidence. It applies to the reproducibility and isolation of your analytical process. If the scoring logic was sitting on a Google Cloud endpoint anyone could read, the integrity of every output produced by that system is now a legitimate question in discovery.

"We didn't even have to write or perform a single exploit, the entire system was just sitting there for anyone to inspect." — Researchers, as quoted by Fortune

That quote should make every KYC compliance officer and licensed investigator uncomfortable. Because if researchers can say that, a defense attorney can say it too — and they will.
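
One practical answer to that discovery risk is to make every comparison self-documenting. Below is a minimal Python sketch of what a tamper-evident audit record could look like; the field names, file paths, method label, and threshold are all hypothetical, and a real workflow would follow whatever your lab's SOP and retention rules require.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """SHA-256 digest of a file, so anyone can later confirm the inputs."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_audit_record(probe: Path, candidate: Path, score: float,
                       method: str, threshold: float) -> dict:
    """Bundle inputs, method, and raw output into one reproducible record."""
    return {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "probe_sha256": sha256_of(probe),
        "candidate_sha256": sha256_of(candidate),
        "method": method,        # e.g. a versioned methodology identifier
        "threshold": threshold,  # the documented decision threshold
        "score": score,          # the raw number, not a verdict
    }

# Hypothetical file names and values, purely for illustration:
record = build_audit_record(Path("probe.jpg"), Path("candidate.jpg"),
                            score=0.72, method="landmark-euclidean-v1.2",
                            threshold=0.80)
print(json.dumps(record, indent=2))
```

The specific fields matter less than the habit: hashing the inputs and recording the method version at comparison time gives you an artifact you can produce in discovery without having to vouch for a vendor's infrastructure.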



Act Two: "Optional" Is Doing a Lot of Heavy Lifting at the Airport

The TSA story is less dramatic but arguably more consequential for civil liberties at scale. TSA's credential authentication technology — CAT-2 scanners that capture real-time images and compare them against government-issued IDs — is now operating at airports nationwide. The agency promotes it as efficient and secure. Travelers are told it's optional.

Here's the problem with that framing. McKenly Redmon of Southern Methodist University's Dedman School of Law argues in a recent article, covered by The Regulatory Review, that passengers' ability to decline these scans "often exists only in theory." Redmon found that travelers are frequently unaware they can opt out, and that signage at airports uses vague language rather than clear disclosure. When opting out might mean a missed connection, a secondary screening line, or just a TSA agent giving you a look — that's not informed consent. That's social pressure dressed up as a choice.

The Government Accountability Office has previously flagged TSA's facial program for incomplete accuracy documentation across demographic groups, particularly for travelers with darker skin tones. So we have a system that: (a) passengers don't know is optional, (b) has documented demographic accuracy gaps, and (c) is actively expanding. TSA has been transparent about its plans to scale the program significantly. That combination should make anyone who cares about evidence quality very nervous — because biased error rates in the enrollment phase become biased evidence downstream.

Why This Matters for Investigators

  • Consent architecture is being tested in court — If "optional" biometric collection collapses under scrutiny at TSA checkpoints, the same framework will be examined anywhere investigators collect or rely on facial data without explicit, documented consent.
  • Demographic error rates matter for evidence — A system with documented accuracy gaps across skin tones isn't just an equity problem. It's an admissibility problem when a match — or a non-match — becomes a case hinge.
  • Expansion without audit documentation sets a bad precedent — Every investigator who uses facial comparison without documented methodology is borrowing credibility from a system that regulators are about to demand accountability from anyway.

Act Three: Verification Isn't Identification — and Courts Are Starting to Know the Difference

The Mobile Fortify story is the one that directly implicates investigative practice. DHS launched the app in spring 2025 to, in their words, "determine or verify" the identities of individuals stopped by immigration agents. The rollout was explicitly tied to an executive order calling for expedited removals and expanded detention. High stakes by any measure.

But according to Wired's review of internal records, Mobile Fortify was not designed to reliably identify people in the field and was deployed without the scrutiny that has historically governed privacy-impacting technology rollouts. The kicker? DHS kept calling it an identity verifier. It isn't — and the distinction isn't semantic.

"Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive [identification]..." — Source cited in Wired

Verification and identification are technically distinct tasks with different error rate profiles — this is well documented in NIST's Face Recognition Vendor Test (FRVT) literature. Verification asks: "Is this person who they claim to be?" Identification asks: "Who is this person?" They're not the same question, they don't carry the same confidence levels, and conflating them in an operational context is — in the nicest possible terms — a methodological disaster waiting to happen in front of a judge.
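
To make the distinction concrete, here's a toy Python sketch assuming precomputed face embeddings and an illustrative distance threshold; none of this reflects any particular vendor's implementation. The operative detail is that identification must survive a comparison against every enrolled identity, so its false-match exposure grows with the gallery.

```python
import numpy as np

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float) -> bool:
    """1:1 verification: is the probe close enough to ONE claimed identity?"""
    return float(np.linalg.norm(probe - claimed)) <= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float) -> str | None:
    """1:N identification: which of N enrolled identities, if any, is closest?

    Every additional gallery entry is another chance for a coincidental
    match, which is why the two tasks have different error-rate profiles.
    """
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        dist = float(np.linalg.norm(probe - embedding))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```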

This is exactly why the shift toward documented, methodology-first facial comparison matters right now. When a practitioner uses landmark-based Euclidean distance analysis to produce a numerical, reproducible similarity measure — rather than a black-box score from a platform whose logic may or may not be sitting on a public endpoint — they have something they can actually explain on the stand. They can say: here are the landmarks I used, here is the distance calculation, here is what the number means, here are its limitations. That's forensic discipline. That's what separates evidence from assertion.
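
As a rough illustration of what "reproducible" means here, consider a sketch along these lines. It assumes the landmarks have already been detected and aligned, and that the first two points are the eye centers used for scale normalization; both are simplifying assumptions for illustration, not a statement of any specific forensic protocol.

```python
import numpy as np

def landmark_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean Euclidean distance between two aligned facial landmark sets.

    `a` and `b` are (N, 2) arrays of landmark coordinates from the same
    landmarking scheme. Centering and dividing by interocular distance
    (points 0 and 1 are assumed to be the eye centers) makes the measure
    scale-invariant, so the same inputs always yield the same number.
    """
    def normalize(pts: np.ndarray) -> np.ndarray:
        interocular = np.linalg.norm(pts[0] - pts[1])
        return (pts - pts.mean(axis=0)) / interocular

    per_landmark = np.linalg.norm(normalize(a) - normalize(b), axis=1)
    return float(per_landmark.mean())
```

Every step in that chain is inspectable, which is the point: the same inputs produce the same number, and each operation can be explained without appealing to a black box.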

Look, nobody's saying the only good facial comparison is a perfect one. The counterargument to all of this is real: facial technology, even imperfect, closes cases. It finds missing persons. It catches fraud that human review misses. Slowing adoption for the sake of methodological purity has genuine costs. But that's a false binary. The question was never "use it or don't." The question is whether you can defend exactly how you used it — the inputs, the process, the confidence level, the stated limitations — when someone in a courtroom asks.

Key Takeaway

The professional risk in facial comparison is no longer just about whether the technology works — it's about whether you can produce a documented, auditable, limitation-acknowledged methodology when a client, a regulator, or opposing counsel demands one. The practitioners building that discipline now are ahead of a reckoning that's already in motion.

Three stories. One pattern. The facial systems drawing the most scrutiny this week — an exposed KYC platform, a coercive airport scan program, a field app marketed beyond its technical capabilities — all share the same core flaw: they present outputs without auditable process. They ask you to trust the score without showing you the work.

The investigators who will come out of this moment intact are the ones who already know what their Euclidean distance threshold is, why they chose it, what it means when a score hits 0.72 versus 0.91, and how to put that in writing for someone who's never seen a facial comparison report before. That's not a niche technical skill anymore. That's table stakes — and this week, in three separate news cycles, the industry made it very clear why.
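
In its simplest possible form, that documentation can reduce to a pre-committed mapping from score to written interpretation. The bands below are hypothetical, as is the assumption that higher means more similar; the discipline lies in writing them down, with their rationale, before the case lands.

```python
# Hypothetical, illustrative bands for a similarity score in [0, 1], where
# higher means more similar. The actual boundaries, the direction of the
# scale, and the validation behind them all belong in written methodology
# documentation, decided before any case work, not after.
SIMILARITY_BANDS = [
    (0.90, "strong support for same-source hypothesis"),
    (0.75, "moderate support; corroboration required"),
    (0.00, "inconclusive or support for different sources"),
]

def interpret(score: float) -> str:
    """Map a raw score to the pre-committed, written interpretation band."""
    for floor, meaning in SIMILARITY_BANDS:
        if score >= floor:
            return meaning
    raise ValueError("score below 0 is out of range")

print(interpret(0.91))  # strong support for same-source hypothesis
print(interpret(0.72))  # inconclusive or support for different sources
```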

So here's the question worth sitting with: if a client called you tomorrow and asked to see your facial comparison methodology documentation — not the output, the methodology — how long would it take you to produce it, and how confident would you be handing it to their lawyer?
