Deepfakes Fool Your Eyes. These 3 Frame-Level Artifacts Still Expose Them. | Podcast


This episode is based on our written article.

Full Episode Transcript


In 2019, the C.E.O. of a British energy company wired two hundred and twenty thousand euros to a scammer. The scammer had used deepfake technology to mimic the voice and face of the C.E.O.'s boss. Nobody ran a single frame-level check before approving the transfer.


That case isn't ancient history

That case isn't ancient history. According to Sangfor cybersecurity researchers, public awareness of deepfake technology jumped from thirteen percent in 2019 to twenty-nine percent by 2022. But awareness of how to actually detect a deepfake? That number barely moved. Most investigators still rely on gut instinct — watching a clip and deciding whether a face "looks real." Today you're going to learn three specific artifacts that deepfakes leave behind in every single video they generate — and why finding them requires looking at frames, not faces. So what are investigators actually missing?

A deepfake doesn't fail the way you'd expect. It doesn't glitch or stutter in an obvious way. At the level of a single frame, a well-made deepfake can look flawless. That's exactly why investigators get fooled — they watch a five-second clip, see a natural-looking face, and move on. The human visual system evolved to recognize faces at normal distances, under normal light. A deepfake optimized to exploit that system will pass your eye test every time.

But deepfakes break down across multiple frames. A real person blinks fifteen to twenty times per minute. According to researchers at I.E.E.E. C.V.P.R., deepfake training datasets are built from internet photos — and almost none of those photos show people with their eyes closed. So the algorithm never learns to generate realistic blinks. Some deepfake videos show zero blinks across an entire two-minute sequence. No human does that. But you'd never catch it by watching casually, because your brain doesn't count blinks.
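To make the blink cue concrete, here is a minimal sketch of counting blinks across frames. It assumes a per-frame eye-aspect-ratio (EAR) series has already been extracted by some facial-landmark detector — the episode doesn't name one, so the threshold, trace, and function names below are all illustrative assumptions, not the method the researchers used.

```python
def count_blinks(ear_series, threshold=0.21):
    """Count blinks as downward crossings of an eye-aspect-ratio threshold.

    ear_series: per-frame EAR values (assumed precomputed by a landmark
    detector); a blink is one contiguous run of frames below the threshold.
    """
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1          # eye just closed: start of one blink
            closed = True
        elif ear >= threshold:
            closed = False       # eye reopened
    return blinks

def blinks_per_minute(ear_series, fps):
    """Normalize the blink count to a per-minute rate for the clip."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes

# Hypothetical EAR trace: eyes open (~0.30) with two brief dips = two blinks.
trace = [0.30] * 50 + [0.15] * 3 + [0.30] * 50 + [0.12] * 3 + [0.30] * 50
```

A rate far below the human baseline of fifteen to twenty blinks per minute — or zero blinks over a long clip — is the flag the episode describes.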



Beyond blinking, there are two telltale artifact types

Now, beyond blinking, every deepfake contains at least one of two telltale artifact types. Researchers publishing on arXiv identified them as Face Inconsistency Artifacts and Up-Sampling Artifacts. Face Inconsistency Artifacts happen because the generator can't perfectly reproduce facial proportions — the distance from earlobe to jawline shifts slightly between frames, or the lips don't quite align with the original face's geometry. Up-Sampling Artifacts come from the decoder stage of the algorithm, where the image gets scaled back up to full resolution. That scaling leaves pixel-level texture patterns that are invisible to your eye but detectable with edge-detection software.
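One way to see what "pixel-level texture patterns" means: naive up-sampling (pixel duplication) leaves periodic discontinuities that a discrete Laplacian picks up, even when the image looks smooth to the eye. This is a plain-Python toy, not the edge-detection software the episode alludes to, and the 8×8 patches are synthetic stand-ins.

```python
def high_freq_energy(img):
    """Mean absolute discrete-Laplacian response over interior pixels —
    a crude proxy for the texture residue an up-sampling decoder leaves."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            total += abs(lap)
            n += 1
    return total / n

# A smooth horizontal ramp vs. the same ramp up-sampled by pixel duplication.
smooth = [[float(x) for x in range(8)] for _ in range(8)]
blocky = [[float(x // 2) for x in range(8)] for _ in range(8)]
```

The ramp scores zero (the Laplacian of a linear gradient vanishes), while the duplicated version scores high at every block boundary — the kind of systematic, repeating signal that survives even when each frame looks fine.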

So why don't investigators just use that software? Most don't have frame-by-frame analysis protocols in their workflow. They treat video authentication as subjective expert judgment rather than measurable methodology. And when an A.I. matching tool returns a ninety-five percent confidence score, that feels definitive. But a ninety-five percent threshold still admits a five percent false-match rate, and five percent of a ten-million-face database is five hundred thousand candidates. The number feels precise. It isn't.
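The database arithmetic behind that claim is just expected-value math:

```python
def expected_candidates(db_size, false_match_rate):
    """Expected number of false matches when screening an entire database."""
    return db_size * false_match_rate

# Ten million faces at a 5% false-match rate (a 95% confidence threshold):
print(expected_candidates(10_000_000, 0.05))  # → 500000.0
```

The per-comparison error rate looks tiny; multiplied across a large gallery it stops being tiny at all.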

Meanwhile, according to peer-reviewed research published through N.I.H., detection methods that analyze how computer vision features change between consecutive frames hit ninety-seven point three nine percent accuracy on one major dataset and ninety-five point six five percent on another. Frame-level analysis dramatically outperforms the human eye — but almost nobody uses it in practice. Color and lighting inconsistencies rank as the second most important spatial cue. Manipulated frames often show synthetic shadows or illumination gradients that don't match the rest of the scene.
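The lighting-consistency cue can be sketched the same way: flag frames whose mean brightness jumps far outside the clip's typical frame-to-frame variation. The per-frame brightness values are assumed inputs here, and the z-score rule is an illustrative stand-in for the spatial analysis in the cited research, not its actual method.

```python
def illumination_outliers(frame_means, z=2.5):
    """Return 1-based indices of frames whose mean brightness changes
    anomalously versus the clip's own frame-to-frame statistics."""
    diffs = [abs(b - a) for a, b in zip(frame_means, frame_means[1:])]
    mu = sum(diffs) / len(diffs)
    sd = (sum((d - mu) ** 2 for d in diffs) / len(diffs)) ** 0.5
    return [i + 1 for i, d in enumerate(diffs) if sd > 0 and d > mu + z * sd]

# Steady ~0.50 brightness with one synthetic lighting spike at frame 10:
clip = [0.50] * 10 + [0.80] + [0.50] * 10
print(illumination_outliers(clip))  # → [10, 11]
```

A single consistent scene produces no flags; a pasted-in face with its own illumination gradient produces them repeatedly.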


The Bottom Line

A natural face is hard to match because countless subtle characteristics — pose, lighting, expression — shift across frames. A deepfake is actually easier to expose, because the same algorithm generates the same systematic errors, repeating in the same pattern, frame after frame.

So remember three things. Deepfakes don't fail in a single frame — they fail across many frames, in blink rates, jaw alignment, and pixel texture. Every deepfake contains at least one type of artifact that software can catch but your eyes can't. And a confidence score isn't proof — it's a starting point. Next time you see a video presented as evidence, ask yourself whether anyone checked the frames, or just watched the face. The written version goes deeper into the limitations of face recognition software.

Ready to try AI-powered facial recognition?

Match faces in seconds with CaraComp. Free 7-day trial.

Start Free Trial