When 99% Accurate Still Means Thousands of Wrong Arrests


Full Episode Transcript


What if a system got it right ninety-nine times out of a hundred — and still got ten thousand people wrong? That's not a hypothetical. That's the math law enforcement is dealing with right now.



If you've ever been in a crowd at a stadium, an airport, or even a busy intersection with cameras, your face has probably been compared against a database. And the system doing that comparison might be incredibly accurate. But "incredibly accurate" and "good enough to build a case on" are two very different things. So here's the driving question — when a facial recognition match flags you as a suspect, how much should that match actually count?

Let's unpack this in three parts. First, the math problem. A system that's ninety-nine percent accurate sounds nearly perfect. But run it against a million faces, and that tiny one percent error rate produces about ten thousand false positives. Think of it like a smoke detector that's right almost every time — but still sends the fire department to thousands of homes with no fire. For investigators, that means the bigger the database you search, the more wrong answers you get mixed in with the right ones. And here's the thing — those accuracy numbers come from lab conditions. Clean lighting. Straight-on photos. Not the blurry, angled, badly lit images investigators actually work with in the field.
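To make the scale concrete, here is the arithmetic from this part of the episode as a quick Python sketch. The ninety-nine percent accuracy figure and the million-face database come from the episode itself; the assumption that the database contains exactly one true match is ours, added only to show how rare a correct hit becomes.

```python
# Back-of-the-envelope version of the false-positive math above.
# Figures from the episode: 99% accuracy, one million faces searched.
# Assumption (ours, for illustration): exactly one true match exists.

database_size = 1_000_000      # faces compared against per search
accuracy = 0.99                # lab-condition accuracy rate
true_matches = 1               # hypothetical: the suspect appears once

false_positives = (database_size - true_matches) * (1 - accuracy)
print(f"Expected false positives: {false_positives:,.0f}")    # ~10,000

# Even counting the one real hit, the chance that any single
# flagged face is actually the suspect is vanishingly small:
precision = true_matches / (true_matches + false_positives)
print(f"Chance a given match is correct: {precision:.4%}")    # ~0.01%
```

Notice that the error count scales with the size of the database, not with the quality of the system, which is exactly why bigger searches mean more wrong answers mixed in with the right ones.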

So what happens when someone trusts that match too much? That's the second point — the methodology failure. Multiple wrongful detentions in the U.S. and abroad follow the same pattern. A facial match flags a suspect. Investigators treat it as confirmation instead of a lead. And nobody gathers independent evidence. Think of it like a doctor diagnosing you based on one test and skipping every follow-up. The technology didn't fail in these cases. The process around it did.



Now, you might be wondering — is anyone fixing this? That's the third piece. NIST and forensic science bodies now agree — one match does not equal probable cause. Several U.S. jurisdictions are writing that into policy. And systems that return confidence scores — not just a yes or no — give investigators something they can actually weigh against other evidence. Think of it like the difference between a thermometer giving you an exact temperature versus just saying "hot."
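To see why a score beats a verdict, here is a small illustrative sketch. Every name, threshold, and score in it is hypothetical, not taken from any real system; the point is that a binary answer throws away exactly the information an investigator needs to weigh a match against other evidence.

```python
# Hypothetical sketch: binary verdict vs. confidence score.
# Names, thresholds, and scores are illustrative, not from a real system.

def binary_verdict(similarity: float, threshold: float = 0.90) -> str:
    """The thermometer that only says 'hot': the score is discarded."""
    return "MATCH" if similarity >= threshold else "NO MATCH"

def scored_lead(similarity: float) -> str:
    """The thermometer with a readout: the score travels with the lead."""
    return f"candidate at similarity {similarity:.2f}; corroborate independently"

for s in (0.91, 0.89):
    print(f"{s:.2f} -> {binary_verdict(s):8s} | {scored_lead(s)}")

# 0.91 and 0.89 are nearly identical evidence, yet the binary verdict
# flips from MATCH to NO MATCH across an arbitrary threshold.
```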

But here's what most people miss. Facial recognition actually gets it right far more often than eyewitnesses do. Eyewitness identification has an error rate above twenty-five percent. But "better than eyewitnesses" still doesn't mean "ready to stand alone in court."

The Bottom Line

So here's the bottom line. A facial recognition system can be extremely accurate and still produce thousands of wrong matches at scale. The real danger isn't the technology — it's treating a match like proof instead of a starting point. The sharpest investigators don't just cite a result; they can explain what it means and what it doesn't. So the next time you hear "ninety-nine percent accurate," ask: ninety-nine percent of how many?
