UK Scanned 1.7M Faces. Seven Regulators Can't Agree on the Rules.
This episode is based on our article: UK Scanned 1.7M Faces. Seven Regulators Can't Agree on the Rules.
Full Episode Transcript
London's Metropolitan Police has scanned one-point-seven million faces so far in twenty-twenty-six. That's nearly double what it scanned in the same window last year. And according to at least seven separate regulatory bodies in the U.K., nobody can agree on the rules governing any of it.
If you've ever walked down a busy street in a major city, your face may have already passed through a system like this — whether you knew it or not. That's not speculation. It's how live facial recognition works in public spaces. A camera scans every face in a crowd and checks it against a watchlist in real time. In the London borough of Croydon, police used this technology to make more than a hundred arrests. The Met's national lead on facial recognition, Lindsey Chiswick, says the force has taken more than seventeen hundred dangerous offenders off London's streets since twenty-twenty-four. And polling shows roughly two out of three people in the U.K. support police using the technology. So what's the problem? The problem is that the legal framework holding all of this together isn't really a framework at all. It's a patchwork — and multiple regulators are now saying so publicly. The question running through this story is a simple one. If the rules aren't consistent, how can the results be trusted?
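The core loop described above, scan every face, compare it to a watchlist, alert on a match, can be sketched in a few lines. This is a minimal illustration only; the function names, embedding vectors, and the zero-point-six-four default threshold are assumptions drawn from the article, not any force's actual system.

```python
# Minimal sketch of a live facial recognition pipeline as described above.
# All names and numbers here are illustrative assumptions, not a real system.

def cosine_similarity(a, b):
    """Similarity between two face embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def scan_crowd(face_embeddings, watchlist, threshold=0.64):
    """Compare every face the camera sees against every watchlist entry;
    return (face_id, watchlist_id, score) for each score at or above threshold."""
    alerts = []
    for face_id, face_vec in face_embeddings.items():
        for wl_id, wl_vec in watchlist.items():
            score = cosine_similarity(face_vec, wl_vec)
            if score >= threshold:
                alerts.append((face_id, wl_id, round(score, 3)))
    return alerts
```

The point of the sketch is the shape of the system, not its sophistication: every passer-by is compared against the list, and a single number decides who gets flagged.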
Start with what a regular person in Croydon would need to do just to understand why a camera was scanning their face on a Saturday afternoon. According to reporting from Biometric Update, that person would need to read four separate pieces of legislation. Then layer on police guidance documents, local force policy, and individual impact assessments. That's the legal basis for one camera on one high street. No single law in the U.K. governs how police use facial recognition. Instead, forces rely on a combination of common law powers, equalities law, human rights law, and data protection rules — none of which were written with this technology in mind.
Now zoom out from that one person in Croydon to the system itself. Oversight for police facial recognition in the U.K. is split across at least seven different bodies. The Forensic Science Regulator. Two separate Biometrics Commissioners. The Information Commissioner's Office. Police and Crime Commissioners. The Investigatory Powers Commissioner's Office. And the College of Policing. Seven regulators, each with a different piece of the puzzle, none holding the full picture. For anyone trying to challenge a facial recognition match in court — whether you're a defense solicitor or just someone who got stopped — figuring out which authority to appeal to is its own maze.
The accuracy standards are just as fractured. Some police forces set their match threshold at zero-point-six. Others follow the National Physical Laboratory's recommendation of zero-point-six-four. That gap might sound small, but in practice it means the same face, compared against the same watchlist, could trigger an alert in one jurisdiction and not in another. And police forces can lower their thresholds without any judge signing off. No judicial check. No external review. Just a local decision that changes who gets flagged and who doesn't. For investigators building cases on facial comparison evidence, that inconsistency is a landmine at trial. For everyone else, it means the confidence behind a match depends on your postcode.
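To make the postcode problem concrete, here is the divergence in code. The zero-point-six and zero-point-six-four thresholds come from the article; the zero-point-six-two score is a hypothetical value sitting in the gap between them.

```python
# Illustrative only: the same comparison score under the two thresholds
# reported in the article (0.6 for some forces, 0.64 per the NPL).

def triggers_alert(score, threshold):
    """An alert fires when the similarity score meets the force's threshold."""
    return score >= threshold

score = 0.62  # hypothetical similarity between a passer-by and a watchlist entry

force_a = triggers_alert(score, 0.60)  # flagged: the system raises an alert
force_b = triggers_alert(score, 0.64)  # not flagged: the same person walks past
```

Same face, same watchlist, same score: one borough stops you, the other never notices you.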
One commissioner told The Guardian that legislation was struggling to catch up with the real world. And the real world is moving fast. The Met's scanning volume jumped eighty-seven percent in a single year. One-point-seven million faces. International legal standards say that when governments seriously interfere with rights, the law authorizing that interference needs to be specific and clear, not a loose patchwork that leaves room for interpretation. Vague legal norms that merely permit a practice don't meet that bar.
And yet the results keep coming. Arrests are up. Public support is strong. Police say the technology works, and by their numbers, it does. So the tension isn't between a tool that fails and a public that objects. It's between a tool that delivers and a legal system that hasn't decided what the rules should be.
The Bottom Line
The instinct is to frame this as a debate between privacy and security. But that misses the deeper problem. Even people who fully support facial recognition should want consistent rules — because without them, the evidence it produces can't hold up equally in every courtroom, and the rights it affects aren't protected equally in every borough.
U.K. police are scanning faces at a pace that nearly doubled in one year. Seven regulators oversee the process, but none of them owns it. And the accuracy threshold that decides whether your face triggers an alert can change depending on which police force is running the camera. Whether you're building a case on biometric evidence or you're just the person walking past the camera on your way to the shops, the question is the same. If the rules aren't settled, what exactly are we trusting? The full story's in the description if you want the deep dive.