UK Cops Scanned 1.7M Faces. The Algorithm Won't Hold Up in Court.
This episode is based on our article: UK Cops Scanned 1.7M Faces. The Algorithm Won't Hold Up in Court.
Full Episode Transcript
Since the start of twenty-twenty-six, London's Metropolitan Police have scanned more than one-point-seven million faces using cameras mounted in public spaces. That's an eighty-seven percent jump over the same period last year. And the algorithm behind those scans almost certainly won't hold up as evidence in a courtroom.
If you've walked through a busy street in England recently, your face may have been checked against a police watchlist without you ever knowing. If that thought makes your stomach tighten, I want you to sit with that feeling for a second, because it's reasonable. But fear without understanding keeps us stuck. What actually matters isn't whether police are using facial recognition. They are. Thirteen of the forty-three forces in England and Wales have adopted it as of March twenty-twenty-six. What matters is that most people — and honestly, a lot of professionals — don't realize there are two completely different kinds of facial recognition at work in policing. Confusing them can wreck a court case. And it can wreck public trust. So what's the actual difference, and why does it change everything?
Picture a border checkpoint. Every traveler walks past a guard who's holding a stack of wanted posters. The guard glances at each face, checks it against the posters, and waves people through or pulls them aside. That's live facial recognition. Cameras mounted in public spaces capture faces in real time and compare each one against a fixed watchlist — a database of people wanted by police or courts. It's one face checked against many entries, happening in fractions of a second, over and over, thousands of times a day.
Now picture something entirely different. A detective sitting at a desk with two photographs under a magnifying lamp, measuring the distance between eye sockets, comparing chin proportions, taking notes. That's forensic facial comparison. One image measured against one other image, with time, with expertise, with human judgment guiding every step. These two processes share a name — facial recognition — but they answer completely different questions. One asks, "Is this person on our list?" The other asks, "Are these two photos the same person?"
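To make that distinction concrete, here's a minimal sketch in Python of the two modes. Everything in it is illustrative: the made-up 128-dimensional "embeddings" (the vectors a face system derives from an image), the function names, and the zero-point-eight threshold. It is not any vendor's real API, just the shape of the two operations.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy watchlist of 1,000 random "faces" (illustrative placeholders)
rng = np.random.default_rng(0)
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(1_000)}

def live_recognition(face: np.ndarray, threshold: float = 0.8):
    """One-to-many: is this face on our list? Runs once per passer-by,
    thousands of times a day."""
    best = max(watchlist, key=lambda name: similarity(face, watchlist[name]))
    score = similarity(face, watchlist[best])
    return (best, score) if score >= threshold else None

def forensic_comparison(face_a: np.ndarray, face_b: np.ndarray) -> float:
    """One-to-one: how similar are these two images? The score goes to a
    human examiner; it is evidence to weigh, not a verdict."""
    return similarity(face_a, face_b)
```

The structural difference is the loop: live recognition ranks one face against a thousand entries in milliseconds, while forensic comparison produces a single score for a human to interpret slowly and deliberately.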
So why does that distinction matter so much? Because the accuracy numbers you hear in the news almost always describe the live system. And those numbers sound incredible. According to U.K. police data, out of two-thousand-and-seventy-seven potential alerts generated by live scanning, two-thousand-and-sixty-seven were true matches. The false positive rate came in at just zero-point-zero-zero-zero-three percent, a figure measured against every face the cameras scanned, not just against the alerts. That sounds almost perfect. And it's easy to see why people assume that number means the technology is reliable for investigations, too.
But that assumption breaks down once you look at scale. U.K. police forces scanned nearly four-point-seven million faces with live cameras in twenty-twenty-four alone. That's more than double the number from twenty-twenty-three. When you run a tiny error rate across millions of faces, those errors stop being theoretical. They become real people stopped on real streets, pulled aside by real officers, because an algorithm flagged them. Even a fraction of a fraction of a percent turns into dozens of false alerts that each require an officer to investigate.
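Here's the back-of-the-envelope arithmetic behind that claim, as a quick Python sketch. One assumption to flag: the quoted false positive rate and the twenty-twenty-four scan total both come from the figures cited in this episode, but they may cover different reporting periods, so treat this as an illustration of the mechanics rather than an official count.

```python
# Back-of-the-envelope arithmetic with the figures cited in this episode.
# Combining the 2024 scan total with the quoted false positive rate is our
# assumption; the two numbers may describe different reporting periods.

faces_scanned = 4_700_000            # live scans across UK forces in 2024
false_positive_rate = 0.0003 / 100   # "zero-point-zero-zero-zero-three percent"

expected_false_alerts = faces_scanned * false_positive_rate
print(f"Expected false alerts at scale: {expected_false_alerts:.0f}")  # ~14 people

# The alert-level view from the same data: 2,077 potential alerts,
# 2,067 confirmed as true matches.
false_alerts = 2_077 - 2_067
print(f"Wrong alerts among those reported: {false_alerts}")  # 10 people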
And who bears the cost of those errors? Not equally. Testing on one facial recognition system used by U.K. police found that Black women accounted for the highest share of false positive identifications — nine-point-nine percent at a zero-point-eight threshold. Separate testing in twenty-twenty-five on a commonly used retrospective algorithm showed higher false positive rates for Black and Asian faces overall. The reason traces back to training data. If the images used to teach an A.I. system lack diversity, the system internalizes that gap. It doesn't decide to be biased. It simply never learned to see certain faces as well as others. For anyone who's ever been misidentified or overlooked, that pattern isn't abstract. It's personal.
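A sketch of how that plays out statistically: an aggregate false positive rate can look tiny while one group's rate is several times another's. The counts below are entirely made up for illustration; only the structure of the calculation is the point.

```python
# Illustrative only: how one aggregate false positive rate can hide large
# differences between groups. All counts are placeholders, not figures
# from any police test.

groups = {
    # group: (false_positives, faces_scanned_from_group)
    "Group A": (2, 1_000_000),
    "Group B": (8, 400_000),
}

total_fp = sum(fp for fp, _ in groups.values())
total_n = sum(n for _, n in groups.values())
print(f"Aggregate FPR: {total_fp / total_n:.6%}")   # looks tiny overall

for name, (fp, n) in groups.items():
    print(f"{name} FPR: {fp / n:.6%}")
# Group B's rate is ten times Group A's, yet the aggregate hides it.
```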
Now, on the forensic side — the detective-with-two-photos side — there's a different problem. According to peer-reviewed research published in Nature Scientific Reports, the average person makes between twenty and thirty percent errors when comparing two unfamiliar faces. And many trained professionals, including passport officers, perform at similar rates. That's a striking number. But forensic facial examiners — specialists with deep training — performed significantly better. They were slower, more deliberate, and strategically avoided misidentification. On a challenging face identification test, they outperformed not just students but even fingerprint examiners. Their advantage wasn't just better pattern recognition. It was controlled decision-making under uncertainty. That kind of expertise is something no algorithm currently replicates.
Meanwhile, the volume of retrospective searches — the kind investigators actually use after an event — is surging. U.K. police forces nearly doubled their retrospective facial recognition searches, going from about a hundred and thirty-nine thousand in twenty-twenty-three to over two hundred and fifty-two thousand in twenty-twenty-four. That averages out to more than twenty thousand searches a month on the Police National Database. So the tool investigators rely on most is being used at vastly higher rates than live deployment. And yet, according to research from Georgetown Law's Center on Privacy and Technology, the algorithm step and the human step in a facial recognition search can each compound the other's mistakes. There's even a documented case where an officer copied facial features from a high-resolution image and pasted them onto a low-quality suspect photo before running a database search. The algorithm then returned results based on a manipulated input. Garbage in, confident garbage out.
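One way to see why "a human reviews every match" doesn't automatically fix this: the human check only subtracts error if the reviewer's mistakes are independent of the algorithm's. The probabilities in this toy model are hypothetical, chosen only to show how that independence assumption changes the outcome.

```python
# Toy two-stage error model. Every rate here is an assumption for
# illustration, not a measured figure from any police force.

p_algo_wrong = 0.05   # chance the algorithm's top candidate is the wrong person

# Optimistic case: the reviewer judges the pair independently, erring at a
# rate loosely in the range of the unfamiliar-face studies cited above.
p_confirm_wrong_independent = 0.25

# Compounding case: a confident-looking candidate list anchors the
# reviewer, who now confirms the algorithm's pick far more often.
p_confirm_wrong_anchored = 0.60

for label, p_confirm in [
    ("independent review", p_confirm_wrong_independent),
    ("anchored review", p_confirm_wrong_anchored),
]:
    # A wrongful ID requires the algorithm to err AND the human to confirm it.
    print(f"P(wrongful ID | {label}): {p_algo_wrong * p_confirm:.2%}")
```

Under these made-up rates, anchoring more than doubles the chance of a wrongful identification per search, which is the compounding the Georgetown research warns about.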
The Bottom Line
The real risk isn't that these systems are inaccurate. It's that a match score of ninety-nine percent feels like proof — when it's actually a probability generated under controlled conditions that may have nothing in common with the image quality, the lighting, or the human choices that shaped the evidence in front of you.
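To see why, run the numbers. Here's a short Bayes' rule sketch with hypothetical figures, none of them drawn from the police data above: even a system that is right ninety-nine percent of the time produces mostly false matches when the person it's looking for is rare in the crowd.

```python
# Bayes' rule with hypothetical numbers: a "99 percent" system hunting
# for someone rare. All three rates below are assumptions.

p_match_if_target = 0.99    # system flags the watchlisted person
p_match_if_not = 0.01       # system flags an innocent passer-by
p_target = 1 / 10_000       # base rate: 1 in 10,000 faces is the person sought

# Total probability of the camera reporting a match
p_match = p_match_if_target * p_target + p_match_if_not * (1 - p_target)

# P(this really is the target, given the system says "match")
p_target_given_match = p_match_if_target * p_target / p_match
print(f"P(actually the target | match): {p_target_given_match:.1%}")  # ~1.0%
```

Under these assumptions, roughly ninety-nine out of a hundred matches would be wrong, not because the system compares faces badly, but because the person it's hunting for is rare. The score measures the comparison; it knows nothing about the base rate, the lighting, or what happened to the photo before the search.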
So here's what this comes down to. Live facial recognition and forensic facial comparison share a name, but they're fundamentally different tools that answer different questions. A high accuracy score from one doesn't guarantee reliable results from the other. And the human decisions made before, during, and after the algorithm runs matter just as much as the math. Whether you're evaluating evidence for a case or just trying to understand why your face might get flagged on a city street, that distinction is the one that counts. Understanding it doesn't just make you smarter about the technology. It gives you the language to ask the right questions when it's pointed at you. The full story's in the description if you want the deep dive.