Platforms Rush to Face Scans to Fight Deepfakes. They're Solving the Wrong Problem.

Full Episode Transcript


According to security researchers, a convincing deepfake now costs about a dollar thirty-three to produce. One photo and a sixty-second voice clip. That's all an attacker needs to fake someone's face on a verification screen.



Platforms are rushing to fight back with face scans

So platforms are rushing to fight back with face scans and I.D. uploads. Sounds reasonable, until you realize they may be building the wrong defense. New account fraud hit six point two billion dollars in the U.S. last year alone. Attackers aren't just fooling people anymore: they're injecting synthetic video directly into live verification flows.

The U.K. has already fined Reddit fourteen and a half million pounds for inadequate child protection. Regulators across Europe and Australia are demanding platforms prove they block underage access or face fines of up to ten percent of global revenue. But the mandates say "verify age effectively." They don't say "do it in a way that protects privacy." So who's actually getting this right, and who's just collecting faces?

Discord makes a useful case study. According to Discord's own C.T.O., more than ninety percent of users never need to verify at all. The platform uses account-level signals — things like how old the account is and what payment methods are on file — to estimate age without ever reading messages or scanning conversations. When a facial age estimate is needed, that scan runs on the user's device. It never leaves the phone. That's not the industry default. Most other U.K. platforms require government I.D. uploads, credit card checks, or third-party services that store your biometric data on someone else's server.
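To make that architecture concrete, here's a minimal sketch of signal-based gating in Python. It's a hypothetical illustration of the pattern described above, not Discord's actual code; every name here (AccountSignals, needs_face_scan, the one-year threshold) is an assumption for the example.

from dataclasses import dataclass
from datetime import date

@dataclass
class AccountSignals:
    created_on: date             # how old the account is
    has_verified_payment: bool   # a payment method on file
    prior_age_attestation: bool  # a successful check already on record

def needs_face_scan(signals: AccountSignals, today: date) -> bool:
    """Clear the user from account-level signals alone where possible;
    fall back to an on-device facial age estimate only when we can't."""
    if signals.prior_age_attestation:
        return False
    account_age_days = (today - signals.created_on).days
    if signals.has_verified_payment and account_age_days > 365:
        return False
    return True  # the scan, when needed, runs on the device and never leaves it

# A five-year-old account with a card on file is never asked to scan.
print(needs_face_scan(AccountSignals(date(2020, 3, 1), True, False),
                      date(2025, 6, 1)))  # False

The point of the sketch is the ordering: cheap, privacy-neutral signals run first, and biometrics are the last resort rather than the default.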

Why does that distinction matter? Because centralized face databases are exactly what deepfake attackers need. They thrive on volume and scale. Every platform that warehouses millions of face scans is building a target, not a shield. Meanwhile, some users in the U.K. and Australia have already spoofed facial verification using video game photo modes — literally holding up a rendered character's face to pass the check.


The Bottom Line

The defense that actually works combines multiple signals. Multimodal biometric systems, ones that cross-check facial data against voice patterns and document features simultaneously, can catch inconsistencies that would fool any single method. Pair that with on-device processing and audit trails that record what was verified without storing raw identity data, and you get proof without permanent exposure.

At the same time, a U.K. petition with over four hundred twenty thousand signatures is calling for repeal of these age verification rules entirely. Groups like E.P.I.C. argue platforms should build safer defaults instead of invasive checks. That tension isn't going away: the E.U. A.I. Act now classifies biometric verification systems as high-risk, demanding transparency and data minimization.
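To make "proof without permanent exposure" concrete, here's a minimal sketch of such an audit record in Python. It stores what was verified and when, plus a tamper-evident tag, never the face or document itself. The field names and the HMAC keying scheme are assumptions for illustration, not any platform's real design.

import hashlib
import hmac
import json
import time

AUDIT_KEY = b"server-side secret"  # hypothetical key held only by the platform

def make_audit_record(user_id: str, method: str, passed: bool) -> dict:
    """Log the outcome of a verification without retaining raw identity data."""
    record = {
        "user_id": user_id,
        "method": method,       # e.g. "on_device_age_estimate"
        "passed": passed,
        "checked_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # The tag proves the record hasn't been altered; no biometric is stored.
    record["tag"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    return record

print(make_audit_record("u_123", "on_device_age_estimate", True))

A regulator auditing this trail can confirm a check happened and what method was used, but a breach of the database leaks no faces, voices, or documents.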

The platforms scanning the most faces aren't the most secure. They're the most exposed. Collecting less data, more reliably, is the harder path — and it's the only one that survives both regulators and attackers.

Mass face scanning feels like safety. It's actually a liability — a centralized honeypot for fraud and a compliance headache waiting to happen. The smarter play is verifying age without ever storing identity. By twenty twenty-seven, platforms still running centralized biometric databases will be explaining breaches to regulators who've already moved on. The ones processing everything on-device and deleting it will own the market. The full story's in the description if you want the deep dive.

Ready to try AI-powered facial recognition?

Match faces in seconds with CaraComp. Free 7-day trial.

Start Free Trial