AI Didn't Jail Angela Lipps for 5 Months. Sloppy Workflow Did.
Angela Lipps had never been to North Dakota. She had bank records to prove it. She had a life in Tennessee — family, routine, an alibi baked into the mundane paper trail of everyday existence. None of it mattered, because somewhere up the chain, an algorithm returned her face as a candidate match for a fraud case in Fargo, and someone — a human, not a machine — decided that was enough to arrest her. She spent more than five months in jail before the case fell apart.
Wrongful arrests tied to facial recognition aren't technology failures — they're workflow failures where investigators treat a probabilistic search result as courtroom-ready evidence, skipping the human comparison step entirely.
Read the headlines and you'd think the villain is the software. "AI facial recognition wrongly jails woman." "Facial recognition tech leads to false arrest." The technology gets the byline, and the story gets filed under "another reason to ban this stuff." That framing is lazy, and worse, it's letting the actual problem off the hook.
The real issue is process. Specifically, the catastrophic collapse of the line between a search lead and verified evidence. That distinction sounds bureaucratic until you realize it's the difference between five months of wrongful imprisonment and none.
How the Workflow Actually Breaks Down
Facial recognition systems don't identify people. Let's be precise about that. They produce ranked lists of candidates sorted by similarity scores — essentially telling investigators: "Here are faces in our database that geometrically resemble the face in your surveillance image, in descending order of similarity." That's it. That's the output. A probability ranking, not a positive ID.
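To make the "ranked list, not an identification" point concrete, here's a minimal sketch of how an embedding-based search behaves in principle. The 128-dimensional embeddings, the toy gallery, and the function names are illustrative assumptions, not the internals of any specific vendor's product (including ours):

```python
# Illustrative sketch only: a generic embedding-based search, not any vendor's actual system.
import numpy as np

def search_gallery(probe_embedding: np.ndarray,
                   gallery: dict[str, np.ndarray],
                   top_k: int = 10) -> list[tuple[str, float]]:
    """Return the top_k gallery entries ranked by cosine similarity to the probe.

    The output is a ranked list of (candidate_id, similarity_score) pairs.
    It is an investigative lead list, not an identification of anyone.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(cid, cosine(probe_embedding, emb)) for cid, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy example: the top result is only "most similar face in this gallery",
# which says nothing about whether that person was anywhere near the crime.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
for candidate_id, score in search_gallery(probe, gallery, top_k=3):
    print(candidate_id, round(score, 3))
```

Even the top-ranked entry only means "most similar face available in this particular gallery", which is exactly why everything that follows in the workflow matters.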
What's supposed to happen next is investigation. You take that candidate list and you do the work: independent corroboration, a documented human comparison by someone trained in facial analysis, additional evidence linking the candidate to the crime scene. What's happening instead — in case after documented case — is that investigators are treating the top result like a fingerprint match and moving straight to arrest.
According to The Washington Post's investigation into eight known wrongful arrests linked to facial recognition, every single one shared the same structural failure: police arrested someone without independently connecting that person to the crime. Not once. Not in any of the cases. The algorithm pointed, and the cuffs came out.
In the Lipps case specifically, CNN reported that Fargo police had bypassed the proper channel entirely — they were supposed to submit surveillance photos to the North Dakota State and Local Intelligence Center, a unit that is certified and trained in facial recognition work. Instead, they used a neighboring agency's unapproved system. So the match was not only unverified by human comparison, it came from a workflow that violated their own protocols from the start.
The Orlando Case Makes It Even More Damning
If the Lipps case shows how broken intake can be, the Orlando case shows how badly confirmation bias distorts the exit. WESH reported that an Orlando man was arrested despite visible physical evidence at the scene — specifically, tattoos on the suspect in the video footage — that didn't match the man who got arrested. An attorney in that case noted it fits an established pattern, not a one-off.
Think about what that means operationally. Someone looked at a surveillance image of a suspect with distinct tattoos. They ran a facial recognition search. They got a candidate. And at no point in the process — not during the match review, not during arrest preparation, not during affidavit drafting — did anyone apparently hold both images side by side and ask: "Does this person actually look like the person in the video?"
That's not an AI problem. That's a detective skipping the most elementary step in visual identification. The technology gave them a starting point; they turned it into a finish line.
"Before facial recognition technology was available, police needed investigative leads to pin down suspects from physical evidence or eyewitness statements. But with access to security cameras and facial recognition technology, police can quickly conjure up several possible suspects. Without further investigation and traditional police work to connect the match chosen by the technology to a crime scene, the match is useless." — Analysis via Clutch Justice
That's the exact dynamic playing out across jurisdictions. The technology didn't replace investigative work — it gave investigators an excuse to skip it.
The Bias Layer Makes a Bad Problem Worse
There's an uncomfortable dimension to all of this that deserves direct acknowledgment. A 2019 NIST study examining 189 face recognition algorithms found that false positive rates were highest for West and East African and East Asian individuals, up to 100 times higher in some cross-country comparisons. The Innocence Project has documented multiple wrongful arrests tied to facial recognition, noting that a significant proportion of those misidentified have been Black individuals.
This isn't incidental. When you combine a probabilistic tool with known demographic accuracy gaps, and then remove the human verification step that's supposed to catch errors, you're not just creating risk — you're systematically concentrating that risk on specific communities. The NBC News report on LaDonna Crutchfield's case in Detroit — where she was arrested for attempted murder despite a mismatched name, height, and age in the system — illustrates exactly this: a chain of failures that an elementary human comparison would have broken in under five minutes.
Why This Matters Beyond the Headlines
- ⚡ Probable cause is being manufactured algorithmically — a similarity score isn't a legal basis for arrest, but it's being treated as one in department after department
- 📊 Demographic accuracy gaps aren't fixed by banning the tool — they're addressed by mandatory human review that catches what the algorithm misses
- 🔮 Settlements are now writing the rulebook — Detroit's post-Williams consent decree is the most comprehensive police facial recognition policy in the country, and it was born from exactly this kind of failure
Detroit Shows What Fixing This Actually Looks Like
Here's the thing about the "ban it all" argument: Detroit didn't ban it, and Detroit is now the standard-bearer for getting this right. After the Robert Williams wrongful arrest case — where Williams was held for 30 hours after a facial recognition match despite no corroborating evidence — the city negotiated what the ACLU describes as the nation's strongest police policies constraining facial recognition use.
The resulting rules are strict and specific. Detroit PD is now prohibited from arresting anyone based solely on a facial recognition result. They cannot conduct a lineup based on a facial recognition lead without independent, reliable evidence linking the suspect to the crime. Training is mandatory. Documentation is mandatory. The algorithm is a starting point, full stop.
Detroit's own police chief, at a 2020 Board of Police Commissioners meeting, described the Williams arrest plainly: "This was clearly sloppy, sloppy investigative work." Not sloppy technology. Sloppy investigative work. That's the admission that should be quoted in every story about wrongful arrests and facial recognition — because it locates the failure exactly where it belongs.
According to Stateline, at least 15 states had facial recognition policing legislation in play by early 2025, most of it moving toward the Detroit model: mandatory corroboration, trained reviewers, documented chain of analysis. The policy direction is clear. The implementation is lagging badly behind the need.
Search Is Not Comparison. This Is the Whole Argument.
There's a two-step framework that investigators using any face technology — including professional-grade platforms built for court-ready work, like what we develop at CaraComp — need to internalize as non-negotiable. Step one is the search: algorithmic, probabilistic, fast, designed to surface candidates from a database. Step two is the comparison: methodical, human-reviewed, documented, built to survive scrutiny in a courtroom or an affidavit.
Collapsing those two steps into one isn't efficiency. It's the entire mechanism of wrongful arrest. The Michigan Law Quadrangle analysis of the Williams settlement makes this explicit: independent corroboration requirements aren't bureaucratic friction — they're the structural guarantee that an algorithm's guess doesn't become a person's imprisonment.
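As a purely illustrative sketch, here is what that separation looks like if you write it down as a gate rather than a habit. The data structure and field names are hypothetical, not anyone's actual case-management schema; the point is that a similarity score alone can never flip a lead into an arrest-ready identification without a documented human comparison and independent corroborating evidence:

```python
from dataclasses import dataclass, field

@dataclass
class Lead:
    """A facial recognition search result: a lead, never evidence by itself."""
    candidate_id: str
    similarity: float                                # algorithmic score, step one only
    human_comparison_documented: bool = False        # trained examiner, side by side, written up
    corroborating_evidence: list[str] = field(default_factory=list)  # independent links to the crime

def arrest_ready(lead: Lead) -> bool:
    """Step two: no documented human comparison or no independent evidence means no arrest."""
    return lead.human_comparison_documented and len(lead.corroborating_evidence) > 0

# A 93% similarity score with nothing behind it stays a lead:
lead = Lead(candidate_id="candidate_001", similarity=0.93)
print(arrest_ready(lead))   # False: search output alone is not probable cause

# Only a documented comparison plus independent evidence changes the answer:
lead.human_comparison_documented = True
lead.corroborating_evidence.append("cell records placing candidate near the scene")
print(arrest_ready(lead))   # True: now the algorithm was only the starting point
```

Detroit's post-Williams rules are, in effect, this gate made mandatory.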
Consider the Robert Dillon case out of Jacksonville, documented by PetaPixel: a 51-year-old man was arrested based on a 93% confidence match, despite being 300 miles away at the time of the incident. He was never charged, but he was jailed. A 93% confidence score sounds definitive. It isn't. It means the algorithm found a strong geometric resemblance — nothing more.
Facial recognition can be a fast way to find possible suspects, but it is never proof on its own. The only reliable safeguard against wrongful arrests like Angela Lipps', the Orlando misidentification, or the Detroit and Jacksonville cases is a disciplined, documented separation between algorithmic search and human comparison — with independent evidence required before anyone reaches for the handcuffs.