

Facial Recognition Is Everywhere This Week — And Nobody's Being Honest About What It Can't Do

The TSA just kicked off a second facial recognition trial at Las Vegas's Harry Reid International Airport. Japan's Shinkansen bullet train network started testing face-based ticket gates at Nagaoka Station. Immigration enforcement agents across the U.S. are running a mobile app called Mobile Fortify that matches faces in the field. And verification code linked to a venture-backed identity platform quietly turned up on a U.S. government website — prompting Discord to publicly distance itself from the whole thing. That's four separate deployments, across four different sectors, in a single week.

Nobody's coordinating this. That's exactly what makes it worth paying attention to.

TL;DR

Facial recognition is being deployed at speed across airports, railways, and immigration enforcement — but the systems' own documentation admits that they compare faces rather than confirm identities, and that difference has serious legal implications for anyone using these tools professionally.

Everyone's Deploying It. Almost Nobody's Explaining the Limits.

Here's the pattern you'll notice if you read all four stories back to back: each deployment is framed as an identity verification tool. Each one is announced with the implicit authority of a government agency, a transport giant, or a tech company backed by serious money. And buried in the technical documentation — or in the reporting that digs past the press release — is a quiet admission that these systems don't actually do what the headline implies.

Take Mobile Fortify, the face-matching app now being used by ICE and CBP agents conducting field stops across the United States. WIRED obtained records showing the Department of Homeland Security launched the app in spring 2025, explicitly linking the rollout to an executive order signed by President Trump on his first day in office — one that called for a "total and efficient" crackdown on undocumented immigrants through expedited removals, expanded detention, and more. The framing from DHS was consistent: Mobile Fortify helps agents "determine or verify" the identities of individuals stopped during federal operations.

Except it doesn't. Not in any technically defensible sense of the word "verify."

"Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive identification." — As reported by WIRED, citing records reviewed from DHS documentation and technical analysts

That's not a civil liberties talking point. That's the industry's own consensus position, stated plainly. The app flags potential matches. A human agent decides what happens next. The distinction sounds procedural — it isn't. It's the difference between evidence and a verdict, and right now, that line is getting blurry in the field.


Comparison vs. Verification: The Technical Fault Line Nobody Mentions

Let's get specific about what these systems actually do, because the public conversation keeps skating past it.

Facial comparison — the technology underneath all of these deployments — measures geometric similarity between two images. At the mathematical level, enterprise-grade systems are calculating something close to Euclidean distance between facial feature vectors: the spatial relationships between your eyes, nose, mouth, jawline. When the TSA system at Las Vegas captures your face and checks it against your passport photo, it's asking a very specific question: how similar are these two images? It returns a probability score, not a name.
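
To make the math concrete, here's a minimal sketch of that comparison operation, assuming embeddings already extracted by some face model. The 128-dimension size, the seeded random vectors standing in for real embeddings, and the distance-to-score mapping are illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np

# Hypothetical 128-dimensional embeddings standing in for the feature
# vectors a real model would extract from a live capture and a passport photo.
rng = np.random.default_rng(0)
live = rng.random(128)
passport = rng.random(128)

# Euclidean distance between the two vectors: smaller means more
# geometrically similar. This is the core comparison operation.
distance = float(np.linalg.norm(live - passport))

# Collapse the distance into a 0-1 score. The mapping, and any threshold
# applied to it, is a vendor and policy choice, not ground truth.
score = 1.0 / (1.0 + distance)
print(f"similarity score: {score:.3f}")  # a probability-like score, not a name
```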

That's a controlled, two-image scenario. Your face, your document, one comparison. Relatively high-quality inputs on both ends. This is about as favorable as conditions get for this technology — and even here, The Regulatory Review has reported that traveler rights advocates are raising substantive concerns about error rates and the consent framework around TSA's expansion.

Now take that same technology and put it in the hands of a field agent stopping someone on a street corner. Lighting is wrong. Angle is off. The subject may be moving. There's no enrollment photo in the system — no baseline comparison image that's been validated as belonging to this specific person. NIST's ongoing facial recognition vendor testing consistently shows that algorithm performance degrades sharply when any of these conditions shift. Sharply isn't a figure of speech here — accuracy can drop from 99% under controlled lab conditions to somewhere far less confidence-inspiring in real-world field use, depending on image quality and demographic factors.
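
One way a defensible pipeline responds to that degradation is to refuse to score at all when capture quality falls below a floor. The quality metric and threshold below are hypothetical placeholders, not NIST's methodology or any agency's policy.

```python
MIN_QUALITY = 0.6  # hypothetical floor; real systems would derive this from testing

def compare_if_defensible(probe_quality: float, gallery_quality: float,
                          similarity: float) -> str:
    """Refuse to emit a score when either input falls below the quality
    floor: a low-quality comparison is noise dressed up as evidence."""
    if probe_quality < MIN_QUALITY or gallery_quality < MIN_QUALITY:
        return "NO_COMPARISON: input quality too low for a meaningful score"
    return f"comparison score {similarity:.2f} (a lead, pending human review)"

# A controlled capture vs. a field stop with bad lighting and a moving subject.
print(compare_if_defensible(0.95, 0.92, similarity=0.88))
print(compare_if_defensible(0.35, 0.92, similarity=0.88))
```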

The honest version of what Mobile Fortify does in a field stop: it looks for faces in a database that are geometrically similar to the face in front of the agent. That's useful. It is not verification. The gap between those two things is where wrongful detentions happen.
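
In code, that honest version is a ranked shortlist, not a lookup. A minimal sketch, with a synthetic gallery standing in for a real database:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gallery: 10,000 enrolled embeddings keyed by record ID.
gallery_ids = [f"record-{i}" for i in range(10_000)]
gallery = rng.random((10_000, 128))
probe = rng.random(128)

# One-to-many search: rank the whole gallery by geometric distance to the probe.
distances = np.linalg.norm(gallery - probe, axis=1)
top_k = np.argsort(distances)[:5]

# The output is a shortlist of leads for human review, not an identification.
for idx in top_k:
    print(gallery_ids[idx], f"distance={distances[idx]:.3f}")
```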

Why This Week's Stories Matter

  • 🚨 The deployment pace is outrunning the literacy gap — Decision-makers are approving systems they describe as verification tools while the underlying technology can only produce comparison probabilities. That's a workflow design failure, not a technology failure.
  • 📊 Controlled environments are categorically different from field conditions — Japan's Shinkansen trial and the TSA airport system operate on enrollment-based matching (your face vs. your registered profile), which is fundamentally more defensible than open-field identification. Conflating the two cases is how bad policy gets made.
  • 🔮 The government software supply chain is murkier than it looks — The discovery of venture-backed verification code on a U.S. government website — reported by Fortune — and Discord's subsequent public distancing raises questions about how identity verification vendors are getting embedded in public infrastructure in the first place.
  • ⚖️ Human oversight is the load-bearing wall, not the backup plan — Every professional framework for using facial comparison responsibly treats the technology as evidence-support, not evidence-replacement. When deployment outpaces that principle, the legal exposure follows fast.


Japan's Shinkansen Is Actually Doing This Right — And It Still Won't Tell You What You Want to Hear

Here's where it gets interesting. Of all this week's stories, the one that got the least breathless coverage is arguably the most technically sound deployment. Panasonic Connect, working with JR East and JR East Mechatronics, launched a proof-of-concept trial for facial recognition ticket gates at Nagaoka Station on the Joetsu Shinkansen line on November 6. The goal is elegant: walk through the gate, your face replaces your Suica IC card tap.

The key detail that separates this from the immigration enforcement use case? It's enrollment-based. You register your face in advance. The system compares your live capture against your profile — a known baseline, collected under controlled conditions, linked to a verified account. This is the architecture that actually makes comparison meaningful. The trial is explicitly framed as a proof-of-concept, not a full rollout — because, to their credit, the companies involved seem to understand you don't just bolt face gates onto a national rail system and call it done.
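
The actual gate architecture hasn't been published, but the enrollment-first pattern looks roughly like the sketch below. The class name, threshold, and fallback behavior are assumptions for illustration, not Panasonic's design.

```python
import numpy as np

THRESHOLD = 0.92  # hypothetical policy threshold, tuned per deployment

class EnrollmentGate:
    """Enrollment-first matching: a live capture is only ever compared
    against baselines registered in advance under controlled conditions."""

    def __init__(self) -> None:
        self._baselines: dict[str, np.ndarray] = {}

    def enroll(self, account_id: str, baseline: np.ndarray) -> None:
        # Baseline collected at registration, linked to a verified account.
        self._baselines[account_id] = baseline

    def admit(self, live: np.ndarray) -> str | None:
        # Search only the enrolled population; every candidate has a
        # validated baseline, which is what makes the comparison meaningful.
        best_id, best_score = None, 0.0
        for account_id, baseline in self._baselines.items():
            score = 1.0 / (1.0 + float(np.linalg.norm(live - baseline)))
            if score > best_score:
                best_id, best_score = account_id, score
        # Below threshold: human fallback (staffed gate, IC card tap),
        # never a forced match.
        return best_id if best_score >= THRESHOLD else None
```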

Panasonic's announcement describes the gates as featuring "visual and audio effects during passage, delivering a smooth and exciting experience" — which, fine, that's marketing — but the underlying design philosophy (enrollment first, comparison second, human fallback built in) is exactly the workflow structure that professionals in investigative and forensic contexts already know is non-negotiable. Understanding the distinction between these responsible deployments and field-identification use cases is exactly why resources like CaraComp's breakdown of how face comparison actually works matter more than ever right now.

The single variable that separates defensible facial comparison from legal liability: a verified enrollment baseline to compare against.
— The principle behind every professional facial comparison framework

The Workflow Problem Nobody Wants to Own

Look, nobody's saying facial comparison is useless. The counterargument is real: even a tool with documented error rates reduces hours of manual photo review. A system that narrows 10,000 possible matches to 40 candidates isn't replacing investigative judgment — it's making it faster. That's a genuine operational benefit, and dismissing it entirely is its own kind of intellectual laziness.

But the benefit only holds if the workflow is built correctly. Your images. Your case. Documented methodology. A comparison score that feeds into analysis — not a comparison score that is the analysis. The moment an agent, investigator, or officer treats a facial match as a confirmed identity rather than a lead worth following up with additional evidence, the technology has been misused — regardless of how sophisticated the underlying algorithm is.
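
That constraint can be enforced in the data model itself. In the hedged sketch below (all names hypothetical), a lead record is structurally incapable of asserting a confirmed identity: it only becomes actionable after human review plus independent corroboration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ComparisonLead:
    """A match score modeled as evidence-support, never a verdict."""
    case_id: str
    candidate_record: str
    similarity: float        # the comparison output, nothing more
    image_provenance: str    # where both images came from: documented methodology
    reviewed_by: str | None = None
    corroborating_evidence: list[str] = field(default_factory=list)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def actionable(self) -> bool:
        # Hypothetical policy: a lead is actionable only after human review
        # and at least one independent piece of corroborating evidence.
        return self.reviewed_by is not None and bool(self.corroborating_evidence)
```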

The Discord story is a useful sidebar here. When verification code linked to a venture-backed identity platform turns up embedded in a U.S. government website — and the platform has to publicly distance itself — it's a reminder that the software supply chain feeding these government deployments is not always as scrutinized as the deployment announcements suggest. Someone approved that integration. Someone missed it, or didn't ask the right questions about what the code was doing. That's a workflow failure before it's a technology failure.

💡 Key Takeaway

Facial comparison produces a probability, not a verdict. Every deployment that obscures this distinction — in an airport, on a train platform, or in a field stop — is a workflow waiting to generate a lawsuit. The technology isn't the problem. Pretending it does something it doesn't is.

The question buried in all of this isn't really about technology. It's about institutional honesty. TSA's Las Vegas trial, Mobile Fortify, the Shinkansen gates, the mystery code on a .gov page — these aren't four separate stories. They're four chapters of the same story: facial technology being deployed at speed, described with terminology that implies more certainty than the systems can deliver, in contexts where the consequences of a false positive range from missing a train to being wrongfully detained by federal immigration agents.

The systems' own manufacturers, their own documentation, their own technical analysts all say the same thing: this technology cannot provide a positive identification. The question isn't whether you believe the critics. The question is whether you've read the fine print from the people selling the product — and what you're going to do about it the next time someone hands you a match score and calls it a confirmed ID.

Ready to try AI-powered facial recognition?

Match faces in seconds with CaraComp. Free 7-day trial.
