A 0.78 Match Score on a Fake Face: How Facial Geometry Stops Deepfake Wire Scams
Full Episode Transcript


A deepfake video call can reduce a human face to a string of a hundred and twenty-eight numbers in under two hundred milliseconds. And according to a report by Resemble.ai, deepfake fraud damage hit three hundred and fifty million dollars in just a single quarter — Q2 of 2025. Most of that money moved because someone on a video call looked and sounded exactly like the person they claimed to be.



If you work in investigations, fraud prevention, or financial compliance, this matters to you right now. Criminals aren't just sending phishing emails anymore. They're running live video calls with real-time face-swapping software, and they're doing it at scale — dozens or even hundreds of calls a day. Today I'm going to walk you through exactly how facial geometry math can catch a synthetic face before money leaves the account. And I'll show you why your own eyes and ears are the weakest link in that chain. So how does a thirty-second comparison beat a convincing performance?

First, you need to understand what's actually happening on the other end of a deepfake video call. The current operational standard isn't a fully computer-generated face. According to reporting by Malwarebytes, scam operations hire real people to sit on camera, then run live face-swapping A.I. over their actual features. The software adjusts the real person's appearance to match whoever the victim expects to see — a boss, a romantic interest, a banking executive. That means a real human is blinking, breathing, reacting in real time. The A.I. just changes what their face looks like.

And the voice? According to Interpol research covered by The Register, criminals can now build a convincing voice clone from just ten seconds of reference audio — pulled from a social media post or a voicemail greeting. So you've got a real person's body language, a cloned voice, and a swapped face, all running simultaneously on a live call. No wonder people get fooled.




That brings us to why traditional verification fails. Our brains are wired to trust faces and voices together. When you see a mouth moving and hear words that match, when you see familiar eyes and hear a familiar tone, your brain integrates all of that sensory input and stamps it "real." Even if the synthesis is only ninety percent convincing, our pattern-matching instincts fill in the gap. The scammer is counting on exactly that shortcut. And the old advice — ask the caller to turn their head sideways to reveal glitches — is becoming less reliable as the A.I. improves.

So what can actually catch the fake? Facial comparison tools use something called Euclidean distance in an embedding space. In plain terms, the software converts a face image into a compact set of a hundred and twenty-eight numbers — a mathematical fingerprint of that face's geometry. It measures things like the distance between eye corners, the angle of the jaw, the prominence of the cheekbones, the position of the nose bridge. Then it compares those numbers against a reference photo of the person the caller claims to be.
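To make the kind of geometry being described concrete, here's a toy sketch. Real systems learn these features implicitly inside a neural network rather than computing named measurements, and every landmark name and coordinate below is invented purely for illustration:

```python
import math

# Hypothetical 2D landmark coordinates (x, y), normalized to image width.
# These values are illustrative, not from any real detector.
landmarks = {
    "left_eye_outer":  (0.30, 0.40),
    "right_eye_outer": (0.70, 0.40),
    "chin":            (0.50, 0.90),
    "jaw_left":        (0.25, 0.75),
    "jaw_right":       (0.75, 0.75),
}

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def angle_at(vertex, p1, p2):
    """Angle (degrees) formed at `vertex` by rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# A few hand-picked features of the kind the episode mentions.
inter_eye = dist(landmarks["left_eye_outer"], landmarks["right_eye_outer"])
jaw_width = dist(landmarks["jaw_left"], landmarks["jaw_right"])
jaw_angle = angle_at(landmarks["chin"], landmarks["jaw_left"], landmarks["jaw_right"])
```

In a production embedding model, a hundred and twenty-eight learned features play the role these three hand-written ones play here.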

If both sets of numbers are close together — small Euclidean distance — it's likely the same person. If the numbers are far apart, it's likely a different person or a synthetic variant. A common threshold sits around zero point six. Below that, probable match. Above it, probable mismatch. According to academic research published through P.M.C. and N.C.B.I., distances between embeddings of the same person are consistently much smaller than distances to any other person. That's not a subjective judgment call. It's geometry.
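The distance-and-threshold logic above can be sketched in a few lines. The hundred and twenty-eight dimensions and the zero point six cutoff come from the episode; the embedding vectors themselves are synthetic stand-ins, since a real system would produce them from face images:

```python
import numpy as np

THRESHOLD = 0.6  # common cutoff cited in the episode

def euclidean_distance(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def same_person(live, reference, threshold=THRESHOLD):
    """Small distance -> probable match; large -> probable mismatch."""
    return euclidean_distance(live, reference) < threshold

rng = np.random.default_rng(0)
reference = rng.normal(size=128)
reference /= np.linalg.norm(reference)  # unit-length, as many models emit

# Same person: a small perturbation of the reference embedding.
genuine = reference + rng.normal(scale=0.02, size=128)

# Different or synthetic face: a large perturbation.
impostor = reference + rng.normal(scale=0.2, size=128)
```

With these stand-in vectors, `same_person(genuine, reference)` lands well under the threshold and `same_person(impostor, reference)` lands well over it, which is the whole verification decision in miniature.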



Why can't deepfakes beat this? Because face-swapping A.I. synthesizes features rather than capturing them from the actual skull underneath. A real human face follows the laws of bone structure and lighting physics across every angle. A synthetic overlay struggles to maintain anatomical precision when the head rotates through multiple positions simultaneously. Those subtle inconsistencies in landmark positioning push the Euclidean distance just high enough to cross the threshold — and flag the face as a mismatch.
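One way to picture that multi-angle inconsistency is a frame-consistency check: embeddings sampled across several head poses should stay tightly clustered for a real face, while a synthetic overlay drifts. This is a simplified heuristic sketch, not any vendor's actual detection method, and the frame embeddings are simulated:

```python
import numpy as np

def max_pairwise_distance(embeddings):
    """Largest Euclidean distance between any two frame embeddings."""
    emb = np.asarray(embeddings)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return float(dists.max())

def looks_consistent(embeddings, tolerance=0.6):
    """True if every pair of frames stays within the match threshold."""
    return max_pairwise_distance(embeddings) < tolerance

rng = np.random.default_rng(1)
identity = rng.normal(size=128)
identity /= np.linalg.norm(identity)

# Real face across head poses: embeddings jitter slightly around one identity.
real_frames = [identity + rng.normal(scale=0.02, size=128) for _ in range(5)]

# Simulated synthetic overlay: geometry drifts as the head rotates.
fake_frames = [identity + rng.normal(scale=0.3, size=128) for _ in range(5)]
```

The real face's frames all sit close to one another; the simulated overlay's frames scatter far enough apart that at least one pair crosses the threshold and gets flagged.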

And the speed advantage is enormous. A forensic fingerprint comparison might take an hour. A facial comparison against a reference photo takes about thirty seconds. When a client's finger is hovering over the wire transfer button, that difference in speed is the difference between catching the scam and funding it.

According to a survey of roughly twelve thousand consumers by Hiya, one in four Americans received a deepfake voice call in the past twelve months. This isn't an edge case anymore. It's a normalized threat touching twenty-five percent of consumers.


The Bottom Line

The real shift is this — a victim under emotional pressure can doubt their own eyes. They can second-guess their instincts. But they can't argue with a number that says the face on screen doesn't match the face on file by a Euclidean distance of zero point seven eight, well above the zero point six threshold.

So — three things to remember. Deepfake video calls use real humans with A.I. face-swaps running in real time, and your brain is built to trust what it sees and hears together. Facial comparison tools convert a face into a hundred and twenty-eight numbers and measure the mathematical distance between two faces — no emotion, no guesswork. If that distance crosses the threshold, the face is flagged as a mismatch, and that flag can stop a wire transfer before money moves. Next time you're on a video call and everything looks right, remember — looking right is exactly what the software was designed to do. The question isn't whether the face looks real. The question is whether the geometry checks out. The full story's in the description if you want the deep dive.
