Platforms Rush to Face Scans to Fight Deepfakes. They're Solving the Wrong Problem.

Creating a convincing deepfake now costs $1.33. Think about that for a second. The price of a cup of bad airport coffee is all it takes to fabricate a face, clone a voice, and bypass the kind of verification that regulators are currently demanding platforms install everywhere. And what's the industry's answer to this $1.33 problem? Collect more faces. Store more government IDs. Build bigger centralized identity databases. It's the digital equivalent of buying a bigger lock for a door the thief is already walking through.

TL;DR

Platforms are rushing toward mass ID and face-scan collection to satisfy regulators and fight deepfakes — but the winning play over the next 3–5 years belongs to whoever figures out how to prove authenticity with less data, not more.

Discord's global age verification rollout has kicked off another round of the same tired debate: surveillance state versus online safety, privacy versus protection. Both sides are arguing past the real issue. Discord's official CTO blog post is actually worth reading before you join either camp — because the details tell a different story than the headlines suggest. More than 90% of Discord users will never be asked to verify anything. The platform's age determination system reads account-level signals: account age, payment method history, behavioral patterns. When a facial age estimate is needed, that scan never leaves your device. No central database. No vendor holding your face indefinitely.
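The layered approach Discord describes — cheap account-level signals first, a facial age estimate only as a fallback — can be illustrated with a toy gating function. Everything below (field names, thresholds, the 90%-resolve-at-the-signal-layer outcome) is a hypothetical sketch for illustration, not Discord's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int             # how long the account has existed
    has_adult_payment_history: bool   # e.g. years of card payments on file
    behavioral_adult_score: float     # 0..1 from behavioral patterns (hypothetical)

def needs_facial_check(sig: AccountSignals) -> bool:
    """Return True only when account-level signals can't settle the question.

    Hypothetical thresholds: a long-lived account with adult payment
    history, or a very strong behavioral score, is accepted without
    ever being asked for a scan.
    """
    if sig.account_age_days > 5 * 365 and sig.has_adult_payment_history:
        return False  # signals are conclusive; no face scan needed
    if sig.behavioral_adult_score >= 0.95:
        return False
    return True  # ambiguous: fall back to an on-device age estimate

# Most users resolve at the signal layer; only ambiguous accounts escalate.
print(needs_facial_check(AccountSignals(2200, True, 0.4)))  # established account
print(needs_facial_check(AccountSignals(30, False, 0.2)))   # brand-new account
```

The design point is the ordering: the expensive, privacy-sensitive check is the last resort, not the front door.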

That's not the industry default. That's the exception. And the gap between what Discord built and what most platforms are sprinting toward is where the next major regulatory and commercial crisis is going to come from.


The False Choice Being Sold to Regulators

Here's how the current conversation gets framed: platforms either implement "highly effective" age assurance — which in practice means ID uploads, AI facial scans, credit card verification, or third-party age-check services — or they get fined into irrelevance. The UK already fined Reddit £14.5 million for inadequate child protection under the Online Safety Act. Ofcom is actively pursuing adult websites for failing to meet its age assurance standards. The EU is baking age verification into Digital Services Act obligations, with interoperable standards expected to land around the end of 2026. Non-compliant platforms face fines of up to £18 million or 10% of worldwide revenue, per IDScan's 2026 regulatory roadmap analysis. This article is part of a series; start with "Deepfakes Hit 8 Million, Courts Still Can't Prove a Single One."

So platforms panic. They bolt on ID verification from the nearest vendor. They scan faces. They upload documents to third-party processors. They check the compliance box. Job done, right?

Wrong. Because none of that actually stops a determined bad actor with $1.33 and a photo. What it does create is an enormous, attractive target: a centralized store of real government IDs and biometric data belonging to millions of people who just wanted to use a social platform.

$6.2B
New account fraud losses in the US alone in 2024 — with AI-generated synthetic identities increasingly cited as a primary attack vector
Source: Brilliance Security Magazine

New account fraud hit $6.2 billion in the US last year, according to Brilliance Security Magazine's deepfake threat analysis. Attackers are systematically targeting verification flows specifically — introducing synthetic media into live facial checks, exploiting gaps between what the verification vendor can detect and what the underlying model was trained on. Collecting more identity data doesn't close that gap. It widens the attack surface.



When 420,000 People Sign a Petition, You Have a Product Problem

Over 420,000 people in the UK have signed a petition calling for the repeal of online age verification requirements. Some Members of Parliament have publicly criticized the rules. Meanwhile, early implementations in the UK and Australian markets have already produced reports of users spoofing facial age checks using video game photo modes — which is both darkly funny and entirely predictable when you deploy single-modality verification at scale with no behavioral layer underneath it.

The public backlash isn't anti-safety. People aren't saying "let children see anything online." They're saying "we don't trust you with our ID and our face, and we have very good reasons not to." That's a legitimate position. Identity verification providers who dismiss it as technophobia are going to keep walking into the same wall.

"Key concerns with implementation include age verification providers collecting excessive personally identifiable information and processing it for other purposes in violation of GDPR." — Industry analysis cited in IDScan's Age Verification in 2026 Roadmap

The UK's Companies House incident — flagged by the IDV industry itself to regulators at Biometric Update — highlights exactly this tension. The IDV industry is simultaneously the loudest voice for verification mandates and the most vocal critic of how those mandates are being written. They know better than anyone that sloppy implementation creates liability, not safety. That's actually a productive conversation, if anyone outside the industry is paying attention.

Why This Matters Right Now

  • Regulatory timelines are compressing — UK enforcement is live, EU Digital Services Act age assurance standards arrive in 2026, and US state-level requirements are multiplying. Platforms that haven't built privacy-respecting verification infrastructure are already behind.
  • 📊 The deepfake attack surface is growing faster than single-modality defenses — peer-reviewed research from Springer Nature's Discover Applied Sciences confirms that GAN-based identity-swap techniques and facial synthesis methods are outpacing dataset-specific detection models. One modality isn't enough.
  • 🔮 The market is splitting — Regula's user base surged 62% to 240 million as IDV becomes core digital infrastructure (per Biometric Update), but that growth masks a split between platforms doing document-plus-facial comparison properly and those doing it as theater.

What "Verify Less, Prove More" Actually Looks Like

The CMS expansion of digital identity options for Medicare and Medicaid beneficiaries — reported by SC Media — is worth watching closely, and not because government healthcare is glamorous. It's worth watching because CMS is being forced to verify identity for a population that is disproportionately privacy-sensitive, technically diverse, and legally protected. The solutions that work in that context are the ones that will generalize everywhere else.

What works is not: upload your passport, let us scan your face into our vendor's cloud, trust us. What works is: document-to-facial comparison in a tightly scoped workflow, processed on-device wherever possible, with an auditable trail proving what was verified rather than storing the underlying biometric indefinitely. That satisfies a regulator asking "did you check?" It defeats deepfake attacks that depend on volume and scale to probe verification systems. And it doesn't hand users an evidence trail they can never take back.
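The "prove what was verified, don't keep the biometric" idea can be sketched as an attestation record: the platform retains a timestamped, signed statement that a document-to-face check passed, while the image, embedding, and raw score are discarded on-device. The field names, key handling, and schema here are illustrative assumptions, not any vendor's actual format:

```python
import hashlib
import hmac
import json
import time

SERVER_KEY = b"platform-audit-key"  # hypothetical; in practice held in an HSM

def attest_verification(user_id: str, doc_match_passed: bool) -> dict:
    """Produce an auditable record of a document-to-face check.

    The face image and similarity template never leave the device;
    only the pass/fail outcome, a timestamp, and a signature over
    them are retained for the audit trail.
    """
    record = {
        "user_id": user_id,
        "check": "document_to_face",
        "passed": doc_match_passed,
        "timestamp": int(time.time()),
        # Deliberately absent: the image, the embedding, the raw score.
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return record

rec = attest_verification("user-42", doc_match_passed=True)
print(rec["passed"], "signature" in rec)
```

A regulator auditing this trail can confirm the check happened and was not tampered with; a breach of the store leaks booleans and timestamps, not faces.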

The EU AI Act already categorizes biometric identification systems as high-risk, mandating transparency and data minimization. The NIH's comprehensive review of deepfake detection and multimodal biometric systems makes clear that combining facial comparison with behavioral analytics catches spoofing attempts that fool single-modality checks — but the architecture matters enormously. Multimodal doesn't mean "store more data in more places." It means corroborate signals without centralizing them.
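"Corroborate signals without centralizing them" can be reduced to a simple decision rule: each modality runs where its data lives and reports only an outcome, and the platform fuses those outcomes. The threshold and the two-of-three rule below are illustrative assumptions, not a prescribed architecture:

```python
def corroborate(facial_pass: bool, behavior_score: float, liveness_pass: bool) -> bool:
    """Fuse independently computed verification signals.

    Each input is the *outcome* of a check run where the data lives:
    on-device facial comparison, server-side behavioral analytics,
    on-device liveness detection. No raw biometric crosses the
    boundary between them. Thresholds are illustrative.
    """
    votes = [facial_pass, behavior_score >= 0.8, liveness_pass]
    return sum(votes) >= 2  # require at least two modalities to agree

print(corroborate(True, 0.9, False))  # two of three agree: accepted
print(corroborate(True, 0.3, False))  # one modality alone: rejected
```

This is why a spoofed facial check (the video-game photo-mode trick) fails here: it wins one vote, and a single modality never decides the outcome on its own.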

South Korea just delayed its facial recognition SIM registration trial to mid-2026, according to Telecompaper — not because facial recognition doesn't work, but because the implementation design wasn't ready for scrutiny. That's the right call. Deploying a blunt instrument and calling it safety isn't courage. It's just risk transfer from the platform to the user.

Key Takeaway

The platforms and IDV providers that invest now in scoped, auditable, on-device facial comparison workflows will own the verification market by 2028. Those that keep defaulting to mass ID uploads and centralized face databases will spend the next regulatory cycle explaining data breaches, consent failures, and ineffective deepfake defenses to regulators who are no longer impressed by box-ticking compliance.
