
Investigators Can't Explain Their Own Facial Recognition Evidence. Courts Noticed.


This episode is based on our article: "Investigators Can't Explain Their Own Facial Recognition Evidence. Courts Noticed." Read the full article →

Full Episode Transcript


A ninety-five percent confidence score sounds almost perfect. But it leaves a five percent error rate, and applied to a database of ten million faces, that's five hundred thousand people flagged as potential matches, every single one of them wrong. That's the math no one walks you through when they sell facial recognition as reliable.
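To make that arithmetic concrete, here's a minimal sketch in Python. The numbers are the illustrative ones from above, and treating the confidence score as a per-comparison error rate is the simplifying assumption the whole example rests on.

```python
# Back-of-the-envelope false-positive math for a one-to-many search.
# Assumption (from the example above): a "95% confident" system is treated
# as having a 5% chance of wrongly flagging any given non-matching face.
confidence = 0.95
false_match_rate = 1 - confidence   # 0.05
database_size = 10_000_000          # ten million enrolled faces

expected_false_positives = false_match_rate * database_size
print(f"Expected false matches: {expected_false_positives:,.0f}")
# Expected false matches: 500,000
```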



And this matters whether you've ever touched a biometric system or not. If you've unlocked your phone with your face, posed for a driver's license photo, or walked past a security camera at the airport, your face is already data inside a system like this. If that makes you uneasy, that's a reasonable response. But the thing that should concern you isn't the technology itself. It's that the people using it in criminal cases often can't explain how it actually works. Courts have started to notice. And that's reshaping what investigators are allowed to do — and what they're now expected to do. So how does facial recognition actually arrive at a result, and why does the answer matter so much in a courtroom?

Most people assume a facial recognition system gives a simple yes or no. Either the face matches or it doesn't. That belief makes sense because that's how we experience it on our phones — you look at the screen, it unlocks or it doesn't. But criminal systems don't work that way at all. The system takes a photograph of a face and converts it into what's called a vector embedding — basically a long string of numbers that represents the unique geometry of that face. Then it measures the mathematical distance between that string and every other string in the database. It uses metrics like Euclidean distance or angular measures to calculate how close two face templates are in high-dimensional space. If the distance is small, the system says these two photos probably show the same person.
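As a toy sketch of that ranking step in Python: the four-dimensional embeddings and candidate names below are invented for illustration (real templates run to hundreds of dimensions), and the distance functions are the standard Euclidean and cosine formulas, not any vendor's API.

```python
import numpy as np

# Each enrolled face is stored as a fixed-length embedding vector.
# Four dimensions here for readability; production systems use hundreds.
database = {
    "candidate_a": np.array([0.12, 0.88, 0.45, 0.33]),
    "candidate_b": np.array([0.90, 0.10, 0.05, 0.70]),
    "candidate_c": np.array([0.11, 0.85, 0.47, 0.30]),
}
probe = np.array([0.10, 0.86, 0.46, 0.31])  # the target photo's embedding

def euclidean(a, b):
    """Straight-line distance between two face templates."""
    return float(np.linalg.norm(a - b))

def cosine_distance(a, b):
    """The 'angular measure': 1 minus the cosine of the angle between vectors."""
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# The system never returns yes/no. It returns candidates ranked by distance,
# nearest first; a human still has to decide what, if anything, is a match.
ranked = sorted(database.items(), key=lambda item: euclidean(probe, item[1]))
for name, embedding in ranked:
    print(name, round(euclidean(probe, embedding), 4))
```

Run it and you get a sorted candidate list, not a verdict, which is exactly the distinction the next section is about.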

The postal address analogy works well here. You hand the system a target photo, like giving someone an address to find. It searches the database and ranks results by closeness — nearest first, farthest last. But it never says "this is definitely your house." It says "these are ranked by how similar they look." A human investigator has to make the final call. And if that investigator picks the forty-seventh result on the list because they want a quick close on a case, they've just swapped algorithmic precision for personal bias.



What About Accuracy?

So what about accuracy? According to research tracking these systems over time, face recognition error rates have dropped by roughly half every year since twenty-seventeen. That sounds incredible, and it is — for well-lit, cooperative, frontal photographs. The kind you'd take at a D.M.V. or a passport office. But real investigations don't produce those kinds of images. Accuracy can approach a hundred percent when the face is looking straight at the camera. Tilt the head to forty-five degrees, and accuracy drops to about seventy percent. Turn to a full profile — ninety degrees — and accuracy falls to zero. That's not a minor limitation. That's the difference between evidence and guesswork, and most investigators never document which angle they were working with.
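Here's a hedged sketch of what documenting that limitation could look like in practice. The pose-accuracy figures are just the rough ones quoted above, not calibration data from any benchmark, and every function and field name here is hypothetical rather than part of any real forensic tool.

```python
# Hypothetical sketch: attach the context a court will ask about to a raw
# similarity score, instead of reporting the score on its own.
APPROX_ACCURACY_BY_POSE = {0: 1.00, 45: 0.70, 90: 0.00}  # head yaw in degrees

def approx_accuracy(pose_degrees: float) -> float:
    """Look up the rough accuracy at the nearest documented pose angle."""
    nearest = min(APPROX_ACCURACY_BY_POSE, key=lambda a: abs(a - pose_degrees))
    return APPROX_ACCURACY_BY_POSE[nearest]

def document_comparison(candidate: str, similarity: float,
                        pose_degrees: float) -> dict:
    """Bundle the score with the conditions it was produced under."""
    return {
        "candidate": candidate,
        "similarity": similarity,
        "pose_degrees": pose_degrees,
        "approx_accuracy_at_pose": approx_accuracy(pose_degrees),
    }

print(document_comparison("candidate_c", similarity=0.97, pose_degrees=45))
# A 0.97 score recorded at a 45-degree pose carries a very different weight
# than the same score from a frontal image, and now the record says so.
```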

Which brings us to the legal shift that's changing everything. A landmark U.K. Supreme Court case known as D.S.D. established something that caught a lot of agencies off guard. The court ruled that police can be held liable for failing to conduct effective investigations — specifically when the tools to prevent harm existed and weren't used. That flips the old assumption on its head. It used to be that agencies could say "we weren't required to use facial recognition." Now courts are asking "why didn't you?" And in the U.S., Illinois considered sweeping restrictions on government biometric use because the state had strong commercial privacy protections but almost no comparable limits on police. The pressure is coming from both directions — use the tools responsibly, or explain why you didn't use them at all.

For anyone who's ever worried about being wrongly identified by one of these systems, this legal trend is actually protective. It means agencies that use biometrics without transparent, documented workflows now face evidence suppression in court and civil liability. Sloppy methodology isn't just bad practice anymore. It's a legal vulnerability.


The Bottom Line

The real shift isn't that the technology got better. It's that a similarity score was never an identification — it was always a ranking. And now courts are demanding that the humans behind the ranking prove they understood the difference.

So here's what to carry with you. Facial recognition doesn't say "that's the person." It says "these faces are mathematically similar — ranked from most to least." A human makes the final decision, and that decision is only as good as the documentation behind it. Whether you carry a badge or just carry a phone, the rules around what counts as a match just got a lot stricter — and that protects everyone. The full story's in the description if you want the deep dive.
