
Why You're Looking at the Wrong Part of Every Face

Here's something that should bother you: the part of a face you instinctively look at first is probably the least useful for identifying who someone actually is. Not somewhat less useful. Consistently, measurably, demonstrably — the wrong region. And the more confident you feel about your ability to read a face, the more likely this applies to you.

TL;DR

Super-recognizers — people with exceptional face memory — don't scan more of a face than the rest of us. They instinctively fixate on a small cluster of high-information regions that most trained investigators consciously ignore, and new AI research has finally mapped exactly where that is.

This isn't a minor footnote in the face science literature. It's the central finding from a study published in Proceedings of the Royal Society B by researchers at the University of New South Wales, and it has serious implications for anyone whose job involves comparing two images of a face and deciding whether they belong to the same person. Which, depending on your field, could mean everything from border control to insurance fraud to criminal investigation.

The Fixation Paradox Nobody Talks About

The term "super-recognizer" gets thrown around casually — people who can spot a familiar face in a crowded airport years after a single brief meeting, who never forget a person they've seen. The London Metropolitan Police famously employs them as a specialized unit. But for a long time, the explanation for their ability was embarrassingly vague. "They're just better at faces" isn't a mechanism. It's a shrug in a lab coat.

What the UNSW researchers did differently was use AI to actually reconstruct what each glance sent to the retina during face-viewing tasks — essentially modeling the visual input itself, not just where subjects looked. Then they ran those reconstructed samples through nine separate AI models to measure the identity information value of each fixation. The result was surprising enough that it's worth sitting with for a moment.

“Super-recognizers don’t just see more; they sample face regions that carry more identity information. Their viewing advantage holds even when the total amount of seen information is the same.” — Research summary of Dunn et al., University of New South Wales, via StudyFinds

Super-recognizers actually spend less total time scanning a face than average observers. But their gaze concentrates disproportionately on the nose bridge, the inner eye corners, and the philtrum, the vertical groove between the nose and upper lip. These aren't the most visually striking parts of a face. They're just the parts with the highest between-person variability. The highest identity signal-to-noise ratio. More scanning, it turns out, correlates with lower accuracy in unfamiliar face matching tasks. Busier eyes, worse results. For a broader overview, explore our comprehensive facial recognition technology resource.
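That "identity signal-to-noise ratio" can be made concrete: a region is informative when its measurements vary a lot between people but only a little across photos of the same person. Here's a minimal sketch with synthetic numbers; the region names and spread values are illustrative assumptions, not figures from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one scalar feature per face region, measured for
# 50 people with 5 photos each. Regions differ in how much variance is
# between-person (identity signal) vs. within-person (lighting, pose,
# expression noise). Spread values below are illustrative only.
n_people, n_photos = 50, 5
regions = {
    # (between-person spread, within-person spread)
    "midface_triangle": (1.0, 0.2),
    "jaw_contour":      (1.0, 0.9),
    "hairline":         (1.0, 1.2),
}

def identity_snr(between_sd, within_sd):
    """Simulate photos and estimate the between/within variance ratio."""
    person_means = rng.normal(0.0, between_sd, size=n_people)
    photos = person_means[:, None] + rng.normal(
        0.0, within_sd, size=(n_people, n_photos))
    between_var = photos.mean(axis=1).var(ddof=1)   # identity signal
    within_var = photos.var(axis=1, ddof=1).mean()  # photo-to-photo noise
    return between_var / within_var

for name, (b, w) in regions.items():
    print(f"{name:16s} identity SNR ~ {identity_snr(b, w):.2f}")
```

A region like the simulated hairline, where within-person noise rivals between-person spread, scores near 1; the simulated midface triangle scores an order of magnitude higher, which is the statistical shape of the advantage the study describes.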

30%+
Rate at which lay observers fail to correctly match two images of the same person — not due to poor vision, but because their gaze prioritizes the wrong facial features
Source: Cognition, 2018 face matching study

Meet the Midface Triangle — the Region That Actually Identifies People

Forensic facial comparison science has a name for the region super-recognizers naturally gravitate toward. It's called the midface triangle — the roughly triangular zone defined by the inner corners of both eyes (the medial canthi) down to the base of the nose. Small area. Enormous informational value.

Why this specific region? Three reasons, and they're worth understanding properly.

First, geometric stability. The distances and angular relationships within the midface triangle remain remarkably consistent across changes in lighting, aging, weight fluctuation, and facial expression. Your jaw can soften with age. Your hairline can shift dramatically. Your expression changes your whole lower face. But the spatial relationship between your inner eye corners and nose base? That barely moves across a decade of photographs. It's one of the most geometrically stable structures in human anatomy.

Second, disguise vulnerability. Here's the uncomfortable mirror image of that point: the outer face — hairline, jaw contour, ear shape — is exactly where most untrained investigators anchor their initial comparison. And it's the region most easily altered by a hat, a beard, weight change, or even a different camera angle. Someone running from identification knows, consciously or not, that a hoodie and a few weeks of facial hair changes how people read their face. They're exploiting precisely the regions most observers over-rely on.

Third, computational weight. This isn't just a forensic examiner's heuristic — it's also how well-designed face comparison systems prioritize their analysis. Euclidean distance calculations between landmark points in the midface region consistently outperform whole-face processing when images vary in pose, illumination, or resolution. The math agrees with the super-recognizers. Continue reading: Why Gut Feel Face Matching Fails.
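A toy sketch of what such a landmark-distance comparison could look like, assuming the landmarks have already been detected upstream. The coordinates below are made up for illustration, and this is a generic geometric sketch, not CaraComp's actual pipeline:

```python
import numpy as np

def midface_signature(landmarks):
    """Scale-invariant midface descriptor: the side lengths of the
    triangle formed by the two inner canthi and the nose base,
    normalized by the inter-canthal distance so image scale cancels."""
    lc, rc, nb = (np.asarray(landmarks[k], dtype=float)
                  for k in ("left_canthus", "right_canthus", "nose_base"))
    inter_canthal = np.linalg.norm(rc - lc)
    return np.array([
        np.linalg.norm(nb - lc) / inter_canthal,
        np.linalg.norm(nb - rc) / inter_canthal,
    ])

# Hypothetical pixel coordinates for one face.
face_a = {"left_canthus": (120, 140),
          "right_canthus": (180, 141),
          "nose_base": (150, 190)}

# Same person, photographed at half the scale and shifted in frame:
# the normalized signature should barely change.
face_b = {k: (v[0] * 0.5 + 30, v[1] * 0.5 + 10) for k, v in face_a.items()}

dist = np.linalg.norm(midface_signature(face_a) - midface_signature(face_b))
print(f"signature distance: {dist:.6f}")  # near zero under scale/translation
```

The normalization step is the point: dividing by the inter-canthal distance makes the descriptor indifferent to how far the camera was from the subject, which is exactly the kind of stability the midface triangle offers and the outer face doesn't.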



Why "I Have a Good Eye for Faces" Is Often the Most Dangerous Thing Someone Can Believe

A 2018 study published in Cognition put untrained observers in front of pairs of images — two photos of the same person, or two different people — and asked them to make matching judgments. The failure rate exceeded 30%. Not because participants couldn't see clearly. Because their gaze instinctively prioritized salient decorative features: hair, beards, distinctive marks, the overall "vibe" of someone's face. They were looking at the face. Just not at the right parts of it.

Professional forensic examiners trained in systematic region-by-region analysis outperformed untrained observers by a statistically significant margin. The training wasn't about seeing more — it was about learning where to look and in what order. A discipline of gaze, not a volume of exposure.

Now here's the Dunning-Kruger part, and it genuinely stings a little. Research consistently shows that experienced but untrained observers — people who've spent years in security, investigations, or law enforcement looking at faces — become more confident in their face-matching judgments without becoming more accurate. The confidence grows. The accuracy doesn't. It's not that these people are bad at their jobs across the board. It's that they've accumulated experience looking at the wrong things, and that experience feels exactly like expertise from the inside.

What Gaze Research Tells Us About Face Comparison Errors

  • 👁️ More scanning ≠ more accuracy — Super-recognizers use fewer fixations, not more; high fixation counts correlate with lower performance on unfamiliar face matching tasks
  • 📊 The outer face misleads — Hairline, jaw contour, and ear shape are the most visually salient regions and also the most vulnerable to disguise, aging, and camera angle variance
  • 🔬 The midface triangle anchors identity — Inner canthi to nose base is geometrically stable across lighting changes, decades of aging, and expression shifts, making it the highest-reliability zone for comparison
  • 🧠 Confidence without training is a liability — Experienced but untrained observers show classic Dunning-Kruger patterns in face matching: rising confidence, flat accuracy

The Sommelier Problem — and Why It Maps Perfectly Here

Think about how a trained sommelier approaches a glass of wine. They don't inhale everything at once and form a general impression. They isolate specific aromatic compounds in sequence — first volatile esters, then tannin structure, then the mid-palate development. Precision comes from knowing which signals to isolate and in what order, not from processing more input faster.

An investigator who "scans the whole face and trusts their gut" is doing the equivalent of walking into a wine cellar, breathing deeply, and calling it a tasting. There's information arriving — lots of it — but it's not being filtered through the framework that separates signal from noise.

What makes the super-recognizer research so useful is that it gives us, for the first time, an AI-verified map of where the signal actually lives. Nine separate computational models confirmed the same finding: fixations on the nose bridge and inner canthi region carry disproportionate identity value. That's not one model's quirk or one researcher's theory. That's convergent evidence from different architectures pointing at the same facial geography.

Key Takeaway

The quality of a face comparison doesn't depend on how much of the face you look at — it depends on whether you're looking at the regions that carry the highest identity information. Super-recognizers do this instinctively. Everyone else needs a system that enforces it.

The practical implication isn't subtle. Any face comparison workflow — human, assisted, or automated — that doesn't deliberately weight the midface triangle and inner canthi region is competing against the wrong model of how identity is encoded in facial structure. It's scanning the cellar instead of isolating the compound.
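As a sketch of what "deliberately weighting the midface triangle" could mean computationally, suppose a pipeline already produces one feature vector per face region. The region names and weight values below are illustrative assumptions, not a real system's configuration:

```python
import numpy as np

# Illustrative region weights: the midface triangle dominates the score,
# mirroring where the identity information actually lives.
WEIGHTS = {"midface_triangle": 0.6, "eye_region": 0.2, "lower_face": 0.2}

def weighted_similarity(regions_a, regions_b):
    """Cosine similarity per region, combined with midface-heavy weights.
    Expects dicts mapping region name -> feature vector."""
    total = 0.0
    for name, w in WEIGHTS.items():
        a = np.asarray(regions_a[name], dtype=float)
        b = np.asarray(regions_b[name], dtype=float)
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        total += w * cos
    return total

# Synthetic demo: identical inputs score a perfect 1.0 because the
# weights sum to 1 and each region's cosine similarity is 1.
rng = np.random.default_rng(1)
probe = {name: rng.normal(size=8) for name in WEIGHTS}
print(round(weighted_similarity(probe, probe), 3))
```

The design choice worth noting is that the weighting is explicit and fixed, not left to whatever region happens to be most visually salient, which is precisely the failure mode the gaze research documents in untrained observers.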

So here's the question worth sitting with today: when you look at two faces side by side, what's the first region your eyes move to? Most people, if they're honest, will say the eyes broadly — or maybe the general shape of the face, the overall impression. Almost nobody says "the nose bridge and the inner corners of both eyes." But that's where the answer actually lives.

The face region you've been ignoring your whole life turns out to be the one that knows who someone is.

Ready to try AI-powered facial recognition?

Match faces in seconds with CaraComp. Free 7-day trial.

Start Free Trial