A Deepfake Fooled a Notary on a Live Call. The Ears Gave It Away.
This episode is based on our full article of the same name.
Full Episode Transcript
Sixty percent of people say they're confident they can spot a deepfake. According to a 2024 study cited by Florida Realtors, only zero point one percent actually can. That means the people who feel most certain they'd catch a fake face are almost always the ones who get fooled.
That gap between confidence and competence is exactly where fraud lives. And it's not just an abstract problem. In Maryland, someone used an A.I.-generated face to impersonate a property owner during a live video notarization call. The goal was to steal roughly a hundred thousand dollars in a vacant land deal. The notary on that call didn't catch it. Detection software did. If you've ever signed documents over a video call, or sold anything remotely, this isn't someone else's problem. It's the new shape of identity theft, and it's growing. According to Entrust's 2026 Identity Fraud Report, deepfake-related scams are climbing forty percent every single year. So how did software catch what a trained human couldn't?
Your face has geometry. Not just the way it looks, but the way it measures. Modern facial comparison systems map four hundred and sixty-eight landmarks across a single face. Those come from a framework called MediaPipe. Each landmark is a precise point — the inner corner of your eye, the tip of your earlobe, a specific spot along your jawline. The system then calculates the distances between those points using two types of math: Euclidean distance, which is a straight line between two points, and geodesic distance, which follows the curved surface of your face. It also calculates the ratios between those distances. The ratio of your eye-to-ear distance to your nose-to-chin distance, or the angle your jawline makes relative to your cheekbone — those are biometric constants. They don't change when you smile. They don't change when you tilt your head. And a deepfake can't replicate all four hundred and sixty-eight of those relationships at once without breaking something.
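To make the idea concrete, here's a minimal sketch of the ratio trick using plain Python. The landmark names and coordinates are hypothetical stand-ins, not real MediaPipe output, and the comparison shown is only the Euclidean-distance half of the story:

```python
import math

# Hypothetical 2-D coordinates for a few of the 468 landmarks on two faces.
# Face B is the same face translated one pixel right and down.
face_a = {"eye_inner": (120.0, 95.0), "ear_lobe": (40.0, 110.0),
          "nose_tip": (115.0, 130.0), "chin": (112.0, 190.0)}
face_b = {"eye_inner": (121.0, 96.0), "ear_lobe": (41.0, 111.0),
          "nose_tip": (116.0, 131.0), "chin": (113.0, 191.0)}

def euclidean(p, q):
    """Straight-line (Euclidean) distance between two landmark points."""
    return math.dist(p, q)

def ratio_signature(face):
    """Ratio of two pairwise distances. Ratios are what make this robust:
    they stay constant when the face moves, tilts, or changes scale."""
    eye_ear = euclidean(face["eye_inner"], face["ear_lobe"])
    nose_chin = euclidean(face["nose_tip"], face["chin"])
    return eye_ear / nose_chin

sig_a = ratio_signature(face_a)
sig_b = ratio_signature(face_b)
# For a genuine face the signature is invariant; a synthetic face that
# can't hold all its ratios at once produces a measurable deviation.
deviation = abs(sig_a - sig_b) / sig_a
print(f"deviation: {deviation:.6f}")
```

A production system would compute hundreds of such ratios (plus geodesic distances over the face surface) and flag a face when enough of them fall outside tolerance.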
The article from HousingWire offers a useful way to picture this. Imagine two copies of an architectural blueprint — one real, one forged. The forger can make the building look right. They can add shadows, realistic details, convincing textures. But if you overlay both copies and measure fifty precise distances between specific corners, the forgery will have six that don't match. Deepfakes work the same way. They look convincing at a glance, but the underlying geometry doesn't obey physics.
Where Do Deepfakes Actually Fail?
So where do deepfakes actually fail? The most common artifacts show up in places you wouldn't think to look. When a deepfake moves the mouth, the horizontal position of the lips relative to the nose-to-chin axis should stay constant. In a real face, it does. In a synthetic one, that ratio drifts across frames, creating measurable deviation in the distance calculations. Hands are another giveaway. When a hand crosses the face in a deepfake video, the fingers often warp because the A.I. struggles to render two overlapping objects at once. Facial hair renders inconsistently. And ears — ears are a persistent weak spot because deepfake generators focus their training data on the central face, not the periphery. That Maryland fraud attempt? The ears gave it away. For anyone who's ever been on a video call and thought, "something looked off but I couldn't say what" — this is what off looks like when you can measure it.
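The lip-drift check described above can be sketched in a few lines. The per-frame values and the tolerance threshold here are illustrative assumptions, not figures from the article:

```python
import statistics

# Hypothetical per-frame horizontal lip-center offsets, normalized against
# the nose-to-chin axis. In a real face the ratio barely moves; in a
# deepfake it wanders from frame to frame.
real_face = [0.012, 0.011, 0.012, 0.013, 0.012, 0.011]
deepfake  = [0.012, 0.031, 0.004, 0.027, -0.008, 0.019]

DRIFT_THRESHOLD = 0.005  # assumed tolerance for ordinary measurement jitter

def drifts(ratio_per_frame):
    """Flag a clip when the lip-position ratio varies across frames by
    more than normal jitter allows."""
    return statistics.stdev(ratio_per_frame) > DRIFT_THRESHOLD

print(drifts(real_face))  # stable ratio: not flagged
print(drifts(deepfake))   # ratio drifts frame to frame: flagged
```

The same pattern generalizes: pick any relationship that should be a constant on a real face, measure it on every frame, and treat unexplained variance as evidence of synthesis.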
Now, a lot of people still believe they'd spot a deepfake just by watching carefully. That belief makes sense. Early deepfakes from 2017 to 2019 were genuinely bad. They had robotic blinking, stiff expressions, obvious glitches. News coverage of those early failures trained everyone to think deepfakes equal obvious artifacts. But the technology didn't stay there. Modern generative adversarial networks, trained on thousands of hours of video, now produce eye blinks that follow normal human timing distributions. Facial hair rendering has improved dramatically. The fakes that matter in 2026 don't look fake. That's the whole point. They're built to pass human inspection, and they do.
One more detail that stopped me cold. According to reporting from The Voice of San Francisco, a scammer needs just three seconds of someone's voice to generate an eighty-five percent voice match. Three seconds. That's a voicemail greeting. For a higher quality fake, they want about thirty seconds, but the barrier to entry is shockingly low. Pair a cloned voice with a synthetic face on a video call, and you've got a convincing impersonation of a property owner who might live three states away. According to F.B.I. data reported by HousingWire, real estate fraud accounted for over twelve thousand complaints and two hundred seventy-five million dollars in losses in 2025 alone. Remote closings removed the friction of meeting someone in person, and the dollar amounts involved make weeks of deepfake production worth the investment for criminals.
The Bottom Line
The real danger isn't that deepfakes exist. It's that almost everyone believes they'd catch one — and almost no one actually can. That confidence is the vulnerability. Measurement is the fix.
So, three things to carry with you. First — modern deepfakes pass human inspection. Your eyes are not enough. Second — every face has four hundred and sixty-eight measurable landmarks, and the math between those points is something A.I. still can't fake consistently. Third — the people most likely to be fooled are the ones most sure they won't be. Whether you verify identities for a living or you just signed a lease over Zoom last month, the era of trusting a face on a screen is over. Understanding the geometry underneath is how you take that power back. The full story's in the description if you want the deep dive.
