Facial Recognition's 81% Error Rate Is About to Blow Up in Court — Are Your Notes Ready?


This episode is based on our article: "Facial Recognition's 81% Error Rate Is About to Blow Up in Court — Are Your Notes Ready?"

Full Episode Transcript


In U.K. police trials of live facial recognition, about four out of every five matches the system flagged turned out to be wrong. An eighty-one percent error rate. And yet, those same forces are now running more than twenty-five thousand retrospective facial recognition searches every single month.
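As a back-of-the-envelope check on that statistic, here is a minimal sketch; the function name is illustrative, and the 4-in-5 figure is the episode's rounding of the reported rate, not an official dataset:

```python
# Illustrative arithmetic only: "error rate" here means the share of
# live facial recognition alerts that turned out to be false matches.
def alert_error_rate(false_alerts: int, total_alerts: int) -> float:
    """Fraction of flagged matches that were wrong."""
    if total_alerts <= 0:
        raise ValueError("total_alerts must be positive")
    return false_alerts / total_alerts

# "About four out of every five" flagged matches wrong:
print(f"{alert_error_rate(4, 5):.0%}")  # prints 80%, close to the reported 81%
```

Note that this is the error rate among alerts the system raised, not its accuracy across everyone scanned, which is a much larger number.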



If you've ever walked past a security camera in a city center, your face may already be part of a system like this — one that hasn't proven it can tell you apart from someone else. That's not speculation. That's what the data shows. And it's not just a U.K. story. According to a Congressional Research Service report, the U.S. federal government is deploying facial recognition faster than any regulatory framework can keep up with. Congress still hasn't passed a single federal law governing how this technology gets used. Meanwhile, U.K. police deployments jumped eighty-seven percent year over year, with nearly one point seven million faces scanned — and statutory oversight is still at least three years away. So the question running through all of this is simple. If the technology is moving this fast and the rules don't exist yet, who's making sure any of this holds up when it actually matters — in a courtroom?

Start with one case, one courtroom, one moment where a facial recognition match gets challenged. The judge asks the investigator: walk me through your process. Which photos did you use? What confidence threshold did you use? What did you do after the algorithm returned a match to verify it was actually the right person? According to legal scholarship published through the National Center for Biotechnology Information, facial comparison techniques are generally accepted among practitioners — but they haven't been rigorously tested, and their error rates aren't known. Those are two of the factors courts weigh under the Daubert standard, which means this evidence likely wouldn't meet it. And yet courts in both the United States and England and Wales admit it anyway. Used everywhere, understood nowhere. For anyone who's ever been identified by a camera they didn't know was there, that gap between "we use it" and "we can prove it works" is the gap your rights fall through.

Now widen the lens. The American Bar Association published an analysis noting that the U.S. has a patchwork of local and state laws on facial comparison — some cities ban it outright, others have no restrictions at all. There's no federal standard. So a search that's illegal in San Francisco might be routine in Miami. Whether a match can be used against you depends entirely on your zip code. And courts are starting to push back. Recent rulings have emphasized that defendants have a right to transparency and discovery when facial comparison plays a role in their case. That means prosecutors may need to hand over not just the match result, but the methodology behind it — the feature lists, the verification steps, the training the examiner received.

The Federation of American Scientists flagged something critical in their research on bias. When facial recognition systems fail, the root cause often isn't the algorithm itself. Regulatory enforcement actions have shown that failures stem from missing risk assessments, inadequate testing, insufficient training, and a lack of ongoing monitoring. In plain terms — the technology might work in a lab, but the people and processes around it aren't keeping pace. That's the difference between a tool and a system. A tool gives you a match. A system tells you whether that match means anything. For investigators, that means documenting every step — which images were compared, what methodology was followed, whether the examiner used FISWG feature lists and the ACE-V methodology. FISWG stands for the Facial Identification Scientific Working Group — they publish standardized feature lists for comparing faces. ACE-V is a four-step process: analysis, comparison, evaluation, and verification. It's the closest thing this field has to a gold standard. For the rest of us, it means asking a harder question the next time someone says "we have a facial recognition match." A match and an identification are not the same thing. One is a probability. The other requires human judgment, documentation, and corroboration.
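The documentation discipline described above can be sketched as a simple checklist structure. This is a hypothetical illustration, assuming nothing beyond the four ACE-V step names from the episode; it is not a FISWG-mandated schema:

```python
from dataclasses import dataclass, field

# The four ACE-V steps named in the episode.
ACE_V_STEPS = ("analysis", "comparison", "evaluation", "verification")

@dataclass
class ComparisonRecord:
    """Hypothetical examiner's record for one face comparison."""
    probe_image: str
    candidate_image: str
    steps: list = field(default_factory=list)  # (step, notes) pairs

    def log_step(self, step: str, notes: str) -> None:
        if step not in ACE_V_STEPS:
            raise ValueError(f"not an ACE-V step: {step}")
        self.steps.append((step, notes))

    def is_fully_documented(self) -> bool:
        # A match without all four documented steps is a probability,
        # not an identification.
        return {s for s, _ in self.steps} == set(ACE_V_STEPS)
```

The point of the sketch: `is_fully_documented()` stays `False` until every step has been logged with notes, mirroring the distinction the episode draws between an algorithm's match and a defensible identification.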


The Bottom Line

Privacy International's reporting on the regulatory void makes the timeline even starker. The E.U. has classified real-time facial recognition in public spaces as high-risk under its A.I. Act. The U.K. has no equivalent statute and won't for years. The U.S. hasn't even started the legislative process at the federal level. So the technology is deployed, the searches are running, the matches are being used in prosecutions — and the legal framework that's supposed to govern all of it is still being drafted. That's not a gap. That's a canyon.

Some people argue this oversight lag is a feature, not a bug — that law enforcement wants operational speed and would rather act now and answer questions later. But the institutions that will come out ahead aren't the ones running the most searches. They're the ones who can show a court exactly how they ran each search, why they trusted the result, and what they did to make sure the match was right. Speed without documentation isn't efficiency. It's liability.

So — a technology with an eighty-one percent error rate in live trials is being used tens of thousands of times a month. Courts are starting to demand proof that the process behind a match is sound. And the laws that should govern all of this are years behind the deployments already happening. Whether you're building a case or just walking past a camera on your way to work, the question is the same. Can anyone prove the system that flagged a face actually got it right? Right now, in most places, the honest answer is — nobody's required to. The full story's in the description if you want the deep dive.
