CaraComp

A 95% Confidence Score Falls Apart If the Media Was Faked Before You Ran the Match | Podcast



This episode is based on our article.


Full Episode Transcript


NATO ran a military exercise earlier this year in which the cybersecurity firm Reality Defender slipped deepfake media into a simulated battlefield scenario. Experienced military officials — people trained to assess threats under pressure — struggled to spot the fakes. If they couldn't tell, neither can you.




That matters for anyone who runs facial comparison in an investigation, a background check, or an identity verification workflow. Because most of us assume the hard part is matching the face. It's not. The hard part is proving the image you're matching against is real in the first place. Today we're walking through why a ninety-five percent confidence score can be completely worthless — and what has to happen before you ever run a facial match. So what does a confident score actually miss?
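The point can be made with a little probability. What follows is a toy sketch, not CaraComp's scoring model, and the numbers are purely illustrative: a match score only bounds the chance of a correct identification if the underlying media is genuine, so the score has to be discounted by however likely it is that the media was faked.

```python
# Toy illustration (NOT CaraComp's actual scoring model).
# A facial match score answers "do these two faces match?" only under the
# assumption that the probe media is authentic. If authenticity is itself
# uncertain, the usable confidence is the product of the two probabilities.

def effective_confidence(match_score: float, p_media_authentic: float) -> float:
    """Probability that the match is correct AND was run on genuine media."""
    return match_score * p_media_authentic

# A headline "95% confidence" match, run on media that is only 80% likely
# to be authentic, is really a ~76% result:
print(round(effective_confidence(0.95, 0.80), 2))
```

The multiplication is the whole argument in miniature: no amount of matching accuracy recovers the confidence lost to an unverified source.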

Start with the tools themselves. According to digital forensics expert Hany Farid, some deepfake detection systems top out at roughly eighty percent effectiveness. That means one in five fakes slips through. And those systems often can't explain how they reached their verdict. A score without reasoning is just a number — in court, it's unusable evidence.

Now, you might assume humans can fill that gap. According to peer-reviewed research published in PMC, people's actual accuracy at spotting deepfakes lands at about fifty-seven point six percent. That's barely better than flipping a coin. Yet many of those same people felt confident in their judgment. That mismatch — high confidence, near-random accuracy — is where investigations go wrong.

The article's own analogy nails it. Facial comparison is like a fingerprint match. The tool is excellent at saying these two prints belong to the same person. But no forensic lab runs that match before establishing chain of custody. Where did the sample come from? Was the surface contaminated? Was it stored properly? A perfect fingerprint match on compromised evidence is worse than no match at all. It's false confidence.


The Bottom Line

And the threat keeps expanding. Attackers now use injection attacks — they substitute fraudulent video directly into the capture pipeline before it ever reaches the detection system. Your deepfake detector could flag a hundred percent of fake faces it actually sees. But if the attacker bypassed the camera sensor entirely, detection never gets a chance to work. According to Gartner, by twenty twenty-six, thirty percent of enterprises won't consider face-based verification reliable on its own because of threats exactly like this. One U.S. firm tracked a three thousand percent surge in deepfake incidents between twenty twenty-two and twenty twenty-three alone.
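The logic of an injection attack can be sketched the same way, with made-up numbers rather than measured rates: the expected number of fakes caught is the number of attempts, times the chance a fake ever reaches the detector, times the detector's recall. Injection attacks drive the first factor down, so even a hypothetically perfect detector fails.

```python
# Toy numbers, not measured attack rates: injection attacks defeat
# detection by keeping fakes out of the detector's input entirely.

def expected_fakes_caught(n_fakes: int,
                          p_reaches_detector: float,
                          detector_recall: float) -> float:
    """Expected number of fakes flagged out of n_fakes attempts."""
    return n_fakes * p_reaches_detector * detector_recall

# Suppose the detector catches 100% of the fakes it actually sees, but
# injection bypasses the capture pipeline 70% of the time. Out of 100
# fake submissions, only ~30 are ever even examined, let alone flagged:
print(expected_fakes_caught(100, 0.30, 1.00))
```

That is why the fix is not a better detector: the weak factor is upstream of detection, in the capture and transport of the media itself.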

The real shift isn't better detection. It's asking a different question entirely. Not "does this look real" — but "how do I know this media came from an authentic source, wasn't swapped in transit, and genuinely shows what it claims to show?"

So — three things to remember. First, deepfake detectors miss about one in five fakes and often can't explain their reasoning. Second, humans do barely better than a coin toss at spotting fakes, even when they feel sure. Third, a confidence score only means something if you've already proven the media is genuine before you ran the match. Next time you see a ninety-five percent confidence score, don't ask whether the face matches. Ask whether anyone verified the video was real before the algorithm ever touched it. The full story's in the description if you want the deep dive.

Ready to try AI-powered facial recognition?

Match faces in seconds with CaraComp. Free 7-day trial.

Start Free Trial