Airports Scan Faces. Can Investigators Keep Up?
The TSA is scanning faces at over 80 airports across the United States. Millions of travelers a day. And most of them have absolutely no idea they can say no.
The federal government is normalizing facial comparison at massive scale with documented consent failures and accuracy gaps — which means professional investigators who use the technology now face a higher evidentiary bar, not permission to get careless.
Here's the uncomfortable truth at the center of this story: facial recognition isn't becoming normalized because it's been proven airtight. It's becoming normalized because it's convenient, it's fast, and the institutions deploying it are betting you won't ask hard questions at the checkpoint. That bet is mostly paying off. But the investigators, attorneys, and forensic professionals reading this? You don't get that luxury. When your facial comparison ends up in a deposition, a courtroom, or an insurance file, "the TSA does it this way" is not a defense.
The "Optional" Scan That Isn't Really Optional
Let's start with what the TSA is actually doing, because the gap between the official description and the practical reality is significant.
The agency deploys what it calls Credential Authentication Technology–2 scanners — CAT-2 units — at checkpoints. These devices capture a real-time image of your face and compare it against your government-issued ID. The TSA maintains the scans are optional and that photos are deleted after verification (with some exceptions). That sounds reasonable on paper. The problem is what happens when you actually try to opt out at a busy checkpoint at O'Hare on a Tuesday morning.
"Travelers are likely unaware that they can opt out, and signage at airports frequently uses vague terms." — McKenly Redmon, SMU Dedman School of Law, via The Regulatory Review
Redmon's analysis cuts right to it: the opt-out exists in theory. In practice, a traveler who doesn't already know their rights, who's running late, who doesn't see clear signage, or who doesn't want to create a scene at a federal security checkpoint — that traveler is going through the scan. Full stop. Consent that depends on the subject knowing to ask for an alternative isn't really consent. It's acquiescence under ambient authority pressure. For a broader overview, explore our comprehensive photo comparison methods resource.
That's not a fringe civil liberties argument. That's a mainstream due process concern, and it's going to matter a lot as facial comparison evidence becomes more common in legal proceedings.
The Government Pilots Are Multiplying — and the Accuracy Questions Are Real
The TSA's Las Vegas trial at McCarran International Airport — the agency's second proof-of-concept after an earlier pilot at LAX — gives us a useful window into how these deployments actually work. According to FEDagent, the program collects live facial images, ID document photographs, issuance and expiration dates, travel date, document type, issuing organization, and the traveler's birth year. That's a meaningful data package tied to a biometric capture event — and it's happening at scale, at checkpoints, in seconds.
Meanwhile, across the border enforcement space, Wired has reported that ICE and CBP's mobile face-recognition tools have documented reliability failures — false matches, enrollment errors, identity verification gaps. The technology at the border isn't malfunctioning in some dramatic Hollywood sense. The problem is more mundane and more dangerous: when the methodology isn't rigorous, the errors are quiet. They don't announce themselves. Someone gets flagged, detained, or cleared, and nobody in the chain of custody stops to ask whether the underlying comparison was actually valid.
Over in Japan, Panasonic Connect is trialing facial recognition ticket gates on the Joetsu Shinkansen at Nagaoka Station — walk-through gates that replace IC card taps entirely, billed as a "smooth and exciting experience." JR East is framing this as the next evolution of their Suica platform. Frictionless. Invisible. Your face as your transit pass. Panasonic Connect describes the gates as delivering "visual and audio effects during passage." Immersive biometrics, basically. The normalization isn't just American — it's a global infrastructure shift happening simultaneously across transportation systems.
And then there's the Discord-Persona situation, which is a different flavor of the same problem. Fortune reported that nearly 2,500 files from Persona Identities — an identity verification provider partially backed by Peter Thiel's Founders Fund — were found sitting on a U.S. government-authorized endpoint, openly accessible with no exploit required. Those files revealed that Persona conducts facial recognition checks against watchlists, screens against politically exposed persons lists, and runs 269 distinct verification checks — including screening for "adverse media" across 14 categories such as terrorism and espionage. The data was just... there. The gap between what these systems claim to do and how carefully that data is actually protected is a story in itself. For a full breakdown of the technology in use at airports, explore our facial recognition technology guide.
Why This Raises the Bar for Everyone Else
Why This Matters for Investigators
- ⚡ Judges are starting to ask "how" — Facial comparison evidence without documented methodology is increasingly vulnerable to Daubert-style challenges, mirroring the trajectory that dismantled bite mark analysis
- 📊 Government failures set expectations — When CBP's own tools produce false matches, opposing counsel will use that to attack any facial comparison that lacks a rigorous, documented process
- 🔮 Normalization isn't the same as validation — The TSA running mass face scans doesn't make "it looked similar" an acceptable professional standard; it makes the contrast between sloppy and defensible more visible
Here's where it gets interesting. The instinct in a lot of professional circles — investigation, insurance fraud, legal research — is to look at what the federal government is doing with facial technology and conclude that permission has been granted. The TSA does it. ICE does it. Airlines do it. So surely a licensed investigator running a comparison on a claimant who may have faked an injury can do it too, right?
Directionally, yes. Practically, it's not that simple.
NIST research on facial comparison accuracy makes clear that results vary dramatically based on image quality, lighting, algorithm type, and whether a trained examiner reviews the algorithmic output. Euclidean distance analysis — the kind used in enterprise forensic tools — measurably outperforms visual human comparison. But only when it's applied with documented methodology and controlled inputs. The "documented" part is doing enormous heavy lifting in that sentence. Without it, you have an opinion, not evidence.
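To make the distance metric itself concrete, here is a minimal sketch of a Euclidean-distance comparison between two face embedding vectors. It assumes the embeddings have already been extracted by some face model (not shown), and the 0.6 threshold is a placeholder for illustration, not a vendor-validated accuracy parameter.

```python
import math
from typing import Sequence, Tuple

def euclidean_distance(emb_a: Sequence[float], emb_b: Sequence[float]) -> float:
    """L2 distance between two face embedding vectors of equal length."""
    if len(emb_a) != len(emb_b):
        raise ValueError("embeddings must have the same dimensionality")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))

def compare(emb_a: Sequence[float],
            emb_b: Sequence[float],
            threshold: float = 0.6) -> Tuple[float, bool]:
    """Return (distance, decision) against a disclosed threshold.

    NOTE: 0.6 is a hypothetical default. In practice the threshold must come
    from the tool's published accuracy parameters, and the numeric output
    still needs review by a trained examiner before it becomes a conclusion.
    """
    d = euclidean_distance(emb_a, emb_b)
    return d, d <= threshold
```

The point of keeping the threshold as an explicit, logged parameter is that it is exactly the number opposing counsel will ask about: report the distance and the threshold together, never a bare "match."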
The professional investigator using facial comparison methods in an active case is not a TSA agent waving someone through a checkpoint. The stakes are structurally different. A traveler who gets a false match at a security scanner gets additional screening. A claimant who gets a false match in a fraud investigation could lose benefits, face legal action, or have their credibility destroyed in litigation. The asymmetry in consequence demands an asymmetry in standard.
What "Defensible" Actually Looks Like
The counterargument — and it's worth taking seriously — is that private investigators adopting facial comparison tools expands surveillance-adjacent power into an unregulated civilian space. That's a real tension. But the answer isn't abstention. The answer is distinction: comparison is not recognition. Running a structured comparison between two images tied to your specific case, with documented methodology, logged inputs, disclosed accuracy parameters, and a qualified examiner reviewing output is categorically different from running an unknown face against a database to see what comes back.
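In practice, "documented methodology, logged inputs, disclosed accuracy parameters" can start as something as simple as an audit record emitted for every comparison. A minimal sketch in Python follows; every field name here is illustrative, not a standard schema, and the hashing step simply ties the record to the exact image files examined.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ComparisonRecord:
    """One logged facial comparison. Field names are hypothetical."""
    case_id: str
    image_a_sha256: str      # hash of the known-reference image file
    image_b_sha256: str      # hash of the questioned image file
    method: str              # e.g. "Euclidean distance over model embeddings"
    distance: float          # raw numeric output
    threshold: float         # the disclosed decision threshold
    decision: str            # "support" / "no support", never a bare "match"
    examiner: str            # the qualified reviewer who signed off
    timestamp: str           # ISO 8601, UTC

def file_sha256(path: str) -> str:
    """Fingerprint an input image so the record is tied to those exact bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def serialize(record: ComparisonRecord) -> str:
    """Render the record as JSON so it can be produced verbatim in discovery."""
    return json.dumps(asdict(record), indent=2, sort_keys=True)
```

A record like this is not a methodology by itself, but it is the artifact that lets you answer "how" under oath: which images, which method, which threshold, which human reviewed it, and when.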
That distinction matters in court. It matters in depositions. And as facial comparison becomes more common in professional practice, it's going to be the line that separates admissible evidence from a challenged assertion.
The federal government normalizing face-as-ID at airports doesn't lower the bar for professional investigators — it raises it. When facial comparison becomes common knowledge, every court, every opposing counsel, and every jury will want to know not just that a match was made, but how. Methodology is the only answer that holds up.
Look, nobody's saying this is simple. The technology is real, the applications are legitimate, and the professional demand is only going to grow. But the TSA running roughly 2 million faces a day through a system where opt-out consent is, by a legal scholar's own description, often theoretical — that's not a model. That's a warning about what happens when speed and convenience outpace standards.
The Persona files sitting open on a government endpoint. The CBP app that can't reliably verify identity. The Las Vegas pilot collecting seven distinct data fields per traveler on a voluntary basis that most travelers didn't know was voluntary. These aren't indictments of the technology. They're indictments of deployment without discipline.
Professionals don't get the institutional cover that lets the TSA shrug and say the program is still in proof-of-concept. Your reports carry your name. Your methodology gets deposed. Your results get cross-examined.
So here's the question worth sitting with: with TSA facial scans now "optional" in theory but confusing enough in practice that legal scholars are writing papers about coerced consent — where do you personally draw the ethical and evidentiary line on facial comparison in your own investigations? And more importantly: could you explain that line, in writing, to a judge who's never heard of a CAT-2 scanner?
Ready to try AI-powered facial recognition?
Match faces in seconds with CaraComp. Free 7-day trial.
Start Free Trial
