
UK Scanned 1.7M Faces. Seven Regulators Can't Agree on the Rules.


The Metropolitan Police scanned 1.7 million faces in the first part of 2026. That's an 87% increase on the same period in 2025. And throughout all of that — every van, every camera, every match, every arrest — the legal framework governing exactly what officers could do with a facial comparison result remained a patchwork of competing laws, inconsistent thresholds, and at least seven different regulatory bodies with overlapping and often contradictory remits. That's not a minor procedural footnote. That's a structural failure sitting right underneath one of the most powerful identification tools law enforcement has ever had.

TL;DR

UK regulators are sounding the alarm not because facial recognition doesn't work, but because the rules governing how it works — and when a match becomes actionable — differ wildly between forces, leaving a system where accuracy standards, oversight, and court admissibility are all up for interpretation.

The headlines, understandably, focus on the deployments. Croydon. A hundred-plus arrests. Officers on the street, camera on a van, watchlist on a server. Results. It's a compelling narrative, and the numbers are hard to argue with. But the real story — the one that will matter far more in five years than any individual pilot — is the regulatory incoherence sitting behind every one of those deployments.

Seven Regulators, Zero Unified Standard

Here's a number worth sitting with: seven. That's how many separate regulatory bodies currently have some form of oversight responsibility over law enforcement facial recognition in the UK. The Forensic Science Regulator. Two Biometrics Commissioners. The Information Commissioner's Office. Police and Crime Commissioners. The Investigatory Powers Commissioner's Office. And the College of Policing. Seven agencies, none of them fully in charge, all of them with slightly different mandates and interpretations.

And it gets worse when you zoom in on the rules themselves. According to Biometric Update, a member of the public in Croydon who wanted to understand the legal basis for a live scan of their face would need to read four separate pieces of primary legislation, alongside police guidance, local force policy documents, and impact assessments. Four pieces of legislation. For a single camera on a single street. That's not transparency — that's a maze.

1.7M
Faces scanned by the Metropolitan Police in the first part of 2026 — an 87% increase year-on-year — as regulatory frameworks remained incomplete
Source: Biometric Update / ResultSense reporting

Then there's the accuracy threshold problem, which is arguably the most technically consequential issue in this whole debate. Some forces use a match confidence threshold of 0.6. The National Physical Laboratory has recommended 0.64 as a more appropriate benchmark. The gap sounds small. It isn't. And here's the part that should give everyone pause: police can lower that threshold without any judicial oversight whatsoever. A force could, theoretically, decide that a lower confidence match is "good enough" to act on — and there's no external check stopping them.
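
To make that concrete, here's a minimal sketch in Python, using entirely hypothetical similarity scores rather than output from any real matcher, of how many candidate matches become "actionable" when the threshold moves between the two figures in dispute:

```python
# Minimal sketch: how a small threshold change alters which candidates
# become "actionable". The scores below are hypothetical, not NPL data.

HYPOTHETICAL_SCORES = [0.58, 0.61, 0.62, 0.63, 0.65, 0.71, 0.84]

def actionable(scores, threshold):
    """Return the candidate scores that meet or exceed the threshold."""
    return [s for s in scores if s >= threshold]

for threshold in (0.60, 0.64):  # force-level setting vs NPL-recommended benchmark
    hits = actionable(HYPOTHETICAL_SCORES, threshold)
    print(f"threshold {threshold:.2f}: {len(hits)} actionable -> {hits}")

# threshold 0.60: 6 actionable -> [0.61, 0.62, 0.63, 0.65, 0.71, 0.84]
# threshold 0.64: 3 actionable -> [0.65, 0.71, 0.84]
```

Every score in that 0.60 to 0.64 band represents a person whose treatment depends entirely on a local policy choice, which is precisely why the absence of judicial oversight on threshold changes matters.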

The Gap Between "It Works" and "It Holds Up"

Look, nobody's saying the technology doesn't deliver results. The Met's own figures are striking. More than 1,700 dangerous offenders taken off London's streets since 2024 — that's the number Lindsey Chiswick, the Met's national lead for facial recognition, has pointed to publicly. The public, for its part, seems largely on board: two in three people support police use of the technology.

"More than 1,700 dangerous offenders taken off London's streets since 2024." — Lindsey Chiswick, Metropolitan Police National Lead for Facial Recognition, as reported by Biometric Update

But here's where the logic starts to buckle. Efficacy and public support are not — and have never been — substitutes for legal clarity. A tool can work brilliantly at identifying people and still produce evidence that falls apart in a courtroom. A system can be popular and still be operating in ways that no court has formally sanctioned. And when accuracy standards vary between forces with no unified minimum, the same match score that triggers an arrest in one jurisdiction might be quietly set aside in another. That's not a minor inconsistency. That's a problem that compounds every time a defendant's legal team digs into how their client was identified.

The UK Parliament's own research service, the Parliamentary Office of Science and Technology (POST), has detailed this governance gap comprehensively. International legal standards are unambiguous on this: serious interferences with fundamental rights must be grounded in legislation with sufficient certainty and clarity. Vague norms that merely permit comparison work aren't enough. The UK's current arrangement — stitched together from equalities law, human rights frameworks, data protection rules, and common law powers — falls well short of that standard.



Why This Matters Beyond the UK

It would be easy to treat this as a specifically British problem — a product of a legal system that loves precedent and muddles through. But the dynamic playing out in the UK is actually an early warning for every jurisdiction where facial recognition deployments are outrunning the legislation meant to govern them. The technology moves fast. Regulation, everywhere, moves slowly. One commissioner reportedly acknowledged to The Guardian that the "slow pace of legislation was trying to catch up with the real world." Which is honest — and damning.

For those working in professional investigation, forensic analysis, or legal proceedings that involve image-based identification, this fragmentation creates three compounding risks that don't resolve themselves just because the public is broadly supportive of the technology.

The Three-Layer Fragmentation Problem

  • Inconsistent accuracy standards — The same match confidence score can be acted on in one force and ignored in another, with no judicial check on threshold-lowering decisions. This is not a corner case; it's baked into current practice (see the sketch after this list).
  • Uneven court admissibility — Without a unified evidence standard for facial comparison results, the same type of identification can face wildly different challenges in court depending on which force ran the scan and what threshold they used. Defence lawyers are already paying attention.
  • Bias accumulation across systems — When individual forces fill governance gaps with their own local policies, variations in methodology introduce bias at the edges. Those biases don't stay contained — they compound across systems and across datasets, particularly when images sourced from different contexts are compared.
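
To see how the first two risks interact, here is a deliberately simplified sketch. The force names and thresholds are hypothetical, invented only to show the same probe score diverging under different local policies:

```python
# Illustrative only: hypothetical per-force threshold policies showing how
# one comparison result yields different outcomes under local settings.

FORCE_POLICIES = {      # hypothetical values
    "Force A": 0.60,    # lower operational threshold
    "Force B": 0.64,    # NPL-recommended benchmark
    "Force C": 0.62,    # locally chosen midpoint
}

def outcome(score: float, threshold: float) -> str:
    return "actionable match" if score >= threshold else "set aside"

probe_score = 0.63      # the same comparison result everywhere
for force, threshold in FORCE_POLICIES.items():
    print(f"{force} (threshold {threshold:.2f}): {outcome(probe_score, threshold)}")

# Force A (threshold 0.60): actionable match
# Force B (threshold 0.64): set aside
# Force C (threshold 0.62): actionable match
```

The same evidence, three different evidentiary fates. A defence team only needs to establish which policy was in force to start pulling at that thread.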

The distinction between live crowd scanning and targeted facial comparison — the kind used to match a crime scene image against a known database — matters enormously here, and the current framework blurs it. These are fundamentally different tools with different accuracy profiles, different legal justifications, and different implications for the people identified. Treating them as if they sit under the same vague legal umbrella isn't just sloppy — it's a liability waiting to be triggered the moment a high-profile conviction gets scrutinised on appeal.
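
One way to make the difference precise is a back-of-envelope calculation. If you assume, purely for illustration, a fixed false match rate (FMR) per comparison and statistically independent comparisons, the chance that a live 1:N scan produces at least one false identification grows rapidly with watchlist size, while a targeted 1:1 comparison stays at the per-comparison rate:

```python
# Back-of-envelope sketch: why live 1:N scanning and targeted 1:1 comparison
# have different accuracy profiles. Assumes a hypothetical per-comparison
# false match rate (FMR) and independent comparisons; real systems vary.

FMR = 1e-4  # hypothetical false match rate per single comparison

def false_id_probability(fmr: float, watchlist_size: int) -> float:
    """Chance that at least one watchlist entry falsely matches a probe face."""
    return 1 - (1 - fmr) ** watchlist_size

print(f"1:1 targeted comparison:    {false_id_probability(FMR, 1):.4%}")
print(f"1:N live scan (N = 10,000): {false_id_probability(FMR, 10_000):.1%}")

# 1:1 targeted comparison:    0.0100%
# 1:N live scan (N = 10,000): 63.2%
```

Real matchers don't satisfy the independence assumption exactly, and operational FMRs vary widely, but the direction of the effect is well established: scanning crowds against a large watchlist multiplies exposure to false matches in a way targeted comparison does not. That alone justifies separate legal treatment.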

At CaraComp, the difference between a usable identification and an actionable one is something we think about constantly — because a match that can't withstand scrutiny isn't a match worth making. That's not a product pitch; it's just the practical reality of working with technology that ends up in front of investigators, lawyers, and eventually courts.

Privacy International has called this a regulatory void — and it's hard to argue with that framing. The UK Government's own consultation on a new legal framework for police facial recognition use is at least an acknowledgment that the current situation is inadequate. But consultations are not legislation. And deployments aren't waiting.


What a Minimum Standard Actually Looks Like

If you accept that the patchwork is a problem — and at this point, you'd have to work quite hard to argue it isn't — then the logical next question is what the floor should look like. Not an ideal. A floor. The minimum below which no facial comparison result should be treated as actionable evidence.

It's not a complicated list. A single nationally mandated accuracy threshold, set by an independent technical body, with no force-level override without judicial sign-off. A defined chain of custody for facial comparison evidence that aligns with the standards already applied to forensic DNA and fingerprints. And a consolidated oversight structure — not seven bodies with overlapping mandates, but one clear authority with actual enforcement power.
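
As a thought experiment, the chain-of-custody piece could be as simple as a mandatory record attached to every comparison result. The structure below is hypothetical; the field names are not drawn from any existing UK standard or from CaraComp's product, they just indicate what a court would need in order to reconstruct how a match was made:

```python
# Hypothetical sketch of a chain-of-custody record for a facial comparison
# result. Field names are illustrative, not an existing standard.

import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ComparisonRecord:
    probe_image_sha256: str   # fixes exactly which image was compared
    watchlist_version: str    # which gallery the probe was run against
    algorithm_version: str    # matcher model and version used
    threshold_applied: float  # threshold in force at the time
    threshold_authority: str  # who authorised that threshold
    match_score: float
    examiner_id: str
    timestamp_utc: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

probe_bytes = b"...probe image bytes..."  # placeholder content
record = ComparisonRecord(
    probe_image_sha256=hashlib.sha256(probe_bytes).hexdigest(),
    watchlist_version="wl-2026-06-01",
    algorithm_version="matcher-3.2",
    threshold_applied=0.64,
    threshold_authority="independent national technical body",
    match_score=0.71,
    examiner_id="examiner-017",
)
print(record)
```

Nothing here is exotic: it's the same provenance discipline courts already expect for DNA and fingerprint evidence, expressed as data.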

None of this requires stopping deployments. None of it requires scrapping what's working. It just requires treating facial recognition evidence with the same rigour that courts already demand of every other forensic identification method.

Key Takeaway

Patchwork policy doesn't just create inconsistency — it creates a system where the evidentiary value of a facial match depends less on the quality of the technology and more on which police force ran the scan and what threshold they happened to be using that day. That's not a foundation anyone should want to build criminal justice outcomes on.

The Croydon results are real. The 1,700 arrests are real. And the regulatory incoherence sitting underneath all of it is equally real. The question worth asking — and not just in the UK — is how many of those identifications would survive a properly rigorous legal challenge if the evidentiary standards were ever seriously tested. Right now, nobody actually knows. And that uncertainty, spread across 1.7 million scanned faces, is the most consequential data point in this entire story.
