
Face Scans at Scale: When Speed Becomes a Security Liability

The TSA would like you to know that standing in front of a camera at the airport checkpoint is completely optional. They'd also like you to know that if you decline, you'll be pulled aside for a more intensive manual screening. Both of these things are true simultaneously — and that tension is the story of where facial verification is right now: technically voluntary, structurally anything but, and deployed well ahead of any honest accounting of what these systems can and cannot do.

TL;DR

Governments and transit agencies are rolling out facial verification at massive scale — airports, immigration stops, bullet train gates — without solving the foundational problem: systems that can't reliably confirm identity aren't security upgrades, they're liabilities dressed as progress.

This isn't a fringe concern from privacy advocates writing op-eds in the dark. It's showing up in federal agency records, academic legal analysis, and investigative journalism from outlets that have done the document work. And the picture it paints should make anyone who relies on facial comparison for professional purposes stop and ask a very pointed question: if the government can't demonstrate that its facial tech actually verifies identity, what exactly are we normalizing?

The App That Can't Do What It Says It Does

Start here, because this is the one that should be keeping people up at night. WIRED reported that Mobile Fortify — the face-recognition app deployed by ICE and CBP to verify identities of people stopped during immigration operations across American towns and cities — is not, in fact, designed to reliably verify identity. That's not editorializing. That's what the records say.

The Department of Homeland Security launched Mobile Fortify in spring 2025, framing it explicitly as a tool to "determine or verify" the identities of individuals stopped or detained by DHS officers. That framing matters, because DHS linked the rollout directly to an executive order calling for a "total and efficient" crackdown on undocumented immigrants. Speed and scale were the mandate. Accuracy, apparently, was a secondary consideration.

"Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive identification." — From records reviewed by WIRED

Read that again. The manufacturers themselves say this. The police departments that use it say this. And yet DHS deployed Mobile Fortify with explicit language about identity verification — without, per WIRED's reporting, "the scrutiny that has historically governed the rollout of technologies that impact people's privacy." In a country where a facial mismatch can trigger detention or expedited removal, that's not a footnote. That's the whole story.

The Airport Theater of "Optional" Consent

Over at the TSA, the setup is different but the underlying dynamic is familiar. The agency has been expanding its credential authentication technology — CAT-2 scanners that capture real-time images and compare them against government-issued IDs — across airports nationwide, with further expansion planned. The official line is that participation is voluntary.

McKenly Redmon of Southern Methodist University's Dedman School of Law has a pointed response to that. Writing in a recent article covered by The Regulatory Review, Redmon argues that passengers' ability to decline these scans "often exists only in theory." Travelers are largely unaware they can opt out, and the signage at airports uses vague language that doesn't clearly communicate the choice being made. When opting out means a longer, more intrusive screening process, what remains isn't really a free choice at all. It's consent manufactured by inconvenience.

"Scholar contends that face scans at airports risk coercing consent and perpetuating bias." — Summary of McKenly Redmon's findings, The Regulatory Review

The TSA, for its part, says participation is voluntary, that photos are deleted except in limited cases, and that the technology represents "a significant security enhancement" that also improves passenger convenience. Those claims may all be technically accurate. They also don't address the core question: how accurate is the match, and how does performance vary across demographics? NIST's Face Recognition Vendor Test (FRVT) program has documented repeatedly that error rates shift meaningfully with image quality, lighting conditions, and subject demographics. A high-throughput airport checkpoint, with its bright lights, tired travelers, and inconsistent angles, is not a controlled environment. It's a stress test the system is running on real people in real time.
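To make the NIST point concrete: verification accuracy is not one number but a pair of error rates, false matches and false non-matches, that move together with the decision threshold and can differ sharply across capture conditions and demographic groups. Here is a minimal sketch in Python of how those rates are derived, using entirely hypothetical data structures; none of this is TSA or NIST code.

```python
# Minimal sketch (hypothetical data, not TSA/NIST code): deriving false-match
# rate (FMR) and false-non-match rate (FNMR) per group at a fixed threshold.
from dataclasses import dataclass

@dataclass
class Trial:
    score: float        # matcher similarity score (higher = more alike)
    same_person: bool   # ground truth: genuine pair (True) or impostor pair (False)
    group: str          # demographic or capture-condition bucket

def error_rates(trials: list[Trial], threshold: float) -> dict[str, tuple[float, float]]:
    """Return {group: (FMR, FNMR)} at the given threshold.

    FMR  = impostor pairs wrongly accepted / all impostor pairs
    FNMR = genuine pairs wrongly rejected  / all genuine pairs
    """
    rates = {}
    for group in {t.group for t in trials}:
        subset    = [t for t in trials if t.group == group]
        impostors = [t for t in subset if not t.same_person]
        genuines  = [t for t in subset if t.same_person]
        fmr  = sum(t.score >= threshold for t in impostors) / max(len(impostors), 1)
        fnmr = sum(t.score <  threshold for t in genuines)  / max(len(genuines), 1)
        rates[group] = (fmr, fnmr)
    return rates
```

The uncomfortable part is what a table like this makes visible: at a single operating threshold, one group's false-match rate can be a multiple of another's. That is exactly the disclosure these deployments are not publishing.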

269: the number of distinct verification checks performed by an identity verification platform, including screening against watchlists and lists of politically exposed persons, with results sitting openly accessible on a government-authorized endpoint. (Source: Fortune, reporting on researcher findings.)

Japan's Bullet Trains and the Transparency Gap

It's not just the U.S. government. In November 2025, Panasonic Connect announced a proof-of-concept trial for facial recognition ticket gates on the Joetsu Shinkansen line at Nagaoka Station, developed jointly with JR East and JR East Mechatronics. The stated goal is elegant: evolve beyond tapping IC cards at gates, create "walk-through" access, make the experience smooth and futuristic. The gates even feature visual and audio effects during passage, because apparently the future should have sound design.

None of that is inherently wrong. Transit systems optimizing for passenger flow is a legitimate engineering goal. But here's the thing — Panasonic Connect's announcement, like virtually every transit facial gate deployment, contains no published false-match rate disclosures. No independent accuracy audit. No demographic performance breakdown. Just the assurance that it's being trialed and the implication that smooth and exciting means reliable. Those are different things. (In forensic contexts, "exciting" is actually a red flag. Excitement is what happens before someone checks the methodology.)


Why This Pattern Should Concern Professionals

  • Deployment is outpacing accountability: neither U.S. federal agencies nor international transit operators are publishing the accuracy audits that would be mandatory in any evidentiary or forensic context. Speed to deployment has become its own justification.
  • The consent framework is structurally broken: when "optional" participation comes with a material penalty for opting out, legal scholars argue it doesn't meet the threshold of meaningful informed consent. That's an active policy debate, not a resolved one.
  • "Face capture" and "identity verification" are not the same operation: capturing someone's face proves they were present; verifying their identity requires a reliable, documented match against a trusted reference. Government agencies are conflating these two things in their public communications, and that conflation has real consequences for the people processed by these systems. (The sketch after this list makes the distinction concrete.)
  • Normalization risk is real: when government agencies deploy facial tech at scale with inadequate transparency, they set an implied standard. If the TSA doesn't need to prove demographic accuracy, why would anyone else feel pressure to?
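The distinction flagged in the third bullet is easy to state in code. A minimal sketch, with hypothetical names and a made-up threshold, of why a capture and a verification carry different evidentiary weight:

```python
# Sketch of the conceptual gap (all names and the threshold are hypothetical).
# A capture proves presence; verification requires a documented 1:1 match
# against a trusted reference, with the score and threshold preserved.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CaptureEvent:
    """Proves only that someone stood in front of a camera."""
    image_hash: str
    captured_at: datetime

@dataclass
class VerificationResult:
    """Asserts identity, but only relative to a reference and a threshold."""
    capture: CaptureEvent
    reference_document: str   # e.g. the passport record the probe was matched against
    similarity: float         # supplied by the face matcher
    threshold: float
    is_match: bool

def verify(capture: CaptureEvent, reference_document: str,
           similarity: float, threshold: float = 0.85) -> VerificationResult:
    # The decision is only as trustworthy as the reference and the threshold;
    # a CaptureEvent alone asserts nothing about who the person is.
    return VerificationResult(capture, reference_document, similarity,
                              threshold, similarity >= threshold)
```

A system that stores only the final boolean, without the reference, score, and threshold behind it, has captured a face. It has not verified an identity anyone can audit.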

The Standard These Deployments Are Failing to Meet

Here's where the professional implication lands — and it's worth being direct about it, because there's a version of this conversation that gets lost in abstract policy debate.

Anyone whose facial comparison work enters evidentiary proceedings knows that the credibility of a finding rests on three things: controlled image quality, documented methodology, and results that can be reproduced and examined. Those aren't bureaucratic niceties. They're the difference between a finding that holds under cross-examination and one that collapses the moment opposing counsel asks a competent expert witness what the error rate is.

The mass-deployment model, optimized for throughput, operating in uncontrolled environments, and offering limited public transparency about accuracy metrics, is structurally compromised on all three counts. That's not a criticism of the underlying technology's potential. It's a description of what happens when the implementation skips the discipline that makes the technology defensible. In CaraComp's approach to face comparison, methodology documentation isn't overhead; it's the product.
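What does "methodology as the product" look like mechanically? One plausible shape, sketched below with illustrative field names rather than any actual product schema, is a comparison record that pins the exact inputs, documents the method and version, and makes the result reproducible on demand:

```python
# Illustrative sketch of an evidentiary comparison record; all field names
# are assumptions for the example, not an actual product schema.
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class ComparisonRecord:
    probe_image_sha256: str       # controlled inputs: the exact images are pinned
    reference_image_sha256: str
    algorithm: str                # documented methodology: name and version
    algorithm_version: str
    threshold: float
    similarity_score: float
    decision: str                 # "match" / "non-match" / "inconclusive"

    def fingerprint(self) -> str:
        # Reproducibility: identical inputs and method must yield an
        # identical record, so an independent examiner can re-run and compare.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Nothing here is exotic; it's the same discipline a chain-of-custody log imposes on physical evidence. The point is that the mass-deployment systems described above produce, as far as the public record shows, nothing like it.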

There's a counterargument worth acknowledging honestly: imperfect technology deployed at scale still catches real fraud and real threats that manual processes miss entirely. Human visual identification is also error-prone, and nobody's published a false-match rate for a tired TSA agent at hour seven of a shift. Fair point. But that argument only holds if agencies are actively measuring their system performance, auditing outcomes, and publishing the results. Current evidence suggests most of these deployments are not doing that transparently. Incremental improvement requires knowing your baseline. Right now, many of these programs don't seem to have one they're willing to share.

Key Takeaway

Speed without documented accuracy isn't a security upgrade — it's a liability wearing the badge of progress. The institutions most people assume are getting this right are, in documented cases, cutting the corners that professional practice requires you to keep. That's not a reason to avoid facial verification. It's a reason to treat methodology as the product, not the footnote.

The Discord situation — where Fortune reported that an identity verification vendor's software was found running 269 distinct verification checks, including facial recognition against watchlists, with files sitting openly accessible on a government-authorized cloud endpoint — is almost too on-the-nose. Nearly 2,500 accessible files, no exploit required to find them. Just... there. That's what happens when the urgency to deploy outruns the discipline to secure. The facial verification was running. The data governance was not.

So when you watch the TSA expand its camera footprint, ICE deploy an app that manufacturers acknowledge cannot provide positive identification, and Panasonic wire up bullet train gates that are visually exciting but transparency-light, ask yourself this: if a government agency can't demonstrate that its facial system actually verifies who people are, and it's still making consequential decisions based on those outputs, what does that tell you about how seriously it's taking the consequences for the people on the wrong end of a false match?

Your methodology has to answer that question. Theirs, apparently, doesn't have to.
