Australia Just Made Face-Matching Obsolete. Here's the New Bar Every ID System Must Clear.
Australia's national digital ID system, myID, can already match your face. That's no longer the impressive part. What the Australian Taxation Office just signaled — through a quiet but consequential Request for Information — is that matching faces was always the easy half of the problem. The harder half? Proving there's actually a living human being on the other end of the camera.
Australia's planned liveness detection refresh for myID — requiring ISO/IEC 30107-3:2023 compliance, third-party attestation, and throughput of 10,000 verifications per hour — sets a new baseline that any system making high-stakes identity decisions will need to match or explain why it doesn't.
The RFI doesn't make front-page news. It's a government procurement document, and those rarely do. But read between the lines and you're looking at something that will reshape expectations across financial services, law enforcement, and identity verification at scale. When a government that onboards millions of citizens into a digital ID platform decides its current liveness detection is no longer good enough, that's worth paying attention to.
What Australia Is Actually Asking For
The ATO's RFI for myID — covered in detail by Biometric Update — isn't a minor tweak. The procurement calls for a SaaS-delivered liveness detection capability that can handle 10,000 verifications per hour at sub-second response times. That's the kind of throughput that reflects a system expecting serious usage pressure, not a pilot program.
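To put that target in engineering terms, here's a back-of-envelope sketch using Little's law. The per-check latency and peak factor below are assumptions for illustration; the RFI specifies only "sub-second" and says nothing about burst planning.

```python
# Rough capacity estimate for the RFI's stated target of
# 10,000 verifications/hour at sub-second response times.
# LATENCY_SECONDS and PEAK_FACTOR are assumptions, not RFI figures.

TARGET_PER_HOUR = 10_000
LATENCY_SECONDS = 0.8   # assumed; the RFI says only "sub-second"
PEAK_FACTOR = 5         # generic burst-planning multiplier, assumed

arrival_rate = TARGET_PER_HOUR / 3600  # ~2.8 checks/second on average

# Little's law: in-flight checks = arrival rate x per-check latency
steady_state_concurrency = arrival_rate * LATENCY_SECONDS
peak_concurrency = steady_state_concurrency * PEAK_FACTOR

print(f"average arrival rate:     {arrival_rate:.1f} checks/sec")
print(f"steady-state concurrency: {steady_state_concurrency:.1f} in flight")
print(f"assumed peak concurrency: {peak_concurrency:.0f} in flight")
```

Averaged out, the volume is modest (under three checks a second); what makes the spec demanding is holding sub-second inference latency through bursts, across whatever capture devices a national user base brings.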
More significant than the speed requirement is the compliance mandate. The system must meet Evaluation Assurance Level 2 under ISO/IEC 30107-3:2023 — the international standard for biometric presentation attack detection — and that compliance must be attested by qualified third parties. Not self-certified. Not vendor-claimed. Actually verified by someone with no commercial stake in the outcome.
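What does attestation look like as an artifact rather than a claim? A minimal, hypothetical sketch of the record a third-party lab might stand behind follows; every field name here is an assumption, since the RFI publishes requirements, not schemas.

```python
# Hypothetical shape of a third-party PAD attestation record.
# Field names are illustrative assumptions; no published schema implied.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PadAttestation:
    standard: str           # e.g., "ISO/IEC 30107-3:2023"
    assurance_level: str    # e.g., "EAL 2"
    test_lab: str           # accredited third party, not the vendor
    lab_accreditation: str  # e.g., a NATA/ILAC accreditation reference
    tested_build: str       # the exact software version that was tested
    issued: date
    expires: date           # attestations age as attack methods evolve

def attestation_current(a: PadAttestation, today: date) -> bool:
    """A certificate covering a 2021-era build says little about
    2026-era attacks; currency matters as much as existence."""
    return today <= a.expires
```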
According to ID Tech Wire, the current liveness capability dates to 2021. That's not ancient history in most industries, but in adversarial biometrics — where attackers are actively probing every system, iterating constantly, and now armed with generative AI — four years is a long time to stand still.
The Threat Model Changed While the Standards Didn't
Here's what makes this refresh genuinely interesting rather than just a routine procurement cycle. The attacks that liveness detection needs to catch in 2026 are categorically different from those it was designed to catch in 2021.
Replay attacks using a photo held up to a camera? Old news. OLOID's breakdown of biometric spoofing vectors outlines the current threat picture: 3D-printed masks, injected video streams that bypass the camera entirely, and deepfake overlays generated in near real time. The injection attacks are particularly nasty — instead of fooling the camera, the attacker replaces the data stream downstream of it. A liveness system that checks for micro-movements and blink patterns can't catch an attack that never touches the camera hardware in the first place.
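One defensive response is to stop trusting the incoming stream and start authenticating the capture itself. Below is a minimal sketch of nonce-bound, device-signed capture; HMAC stands in for the asymmetric device attestation a production system would use, and every function and field name is illustrative rather than drawn from any cited product.

```python
# Sketch of one injection-attack countermeasure: bind each capture to a
# fresh server-issued nonce and a device-held key, so a stream injected
# downstream of the camera cannot produce a valid signature. HMAC is a
# stand-in for asymmetric device attestation; all names are illustrative.

import hashlib
import hmac
import os
import time

def issue_challenge() -> bytes:
    """Server mints a fresh nonce before each capture session."""
    return os.urandom(32)

def sign_capture(device_key: bytes, nonce: bytes, frame_digest: bytes) -> bytes:
    """Device-side (ideally inside a secure enclave): sign nonce + frame
    hash. A stream fabricated outside the device has no key to sign with."""
    return hmac.new(device_key, nonce + frame_digest, hashlib.sha256).digest()

def verify_capture(device_key: bytes, nonce: bytes, frame_digest: bytes,
                   signature: bytes, issued_at: float,
                   max_age_s: float = 30.0) -> bool:
    """Server-side: check the signature and reject stale nonces, so a
    recorded session cannot simply be replayed later."""
    if time.time() - issued_at > max_age_s:
        return False
    expected = hmac.new(device_key, nonce + frame_digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

The point isn't this particular scheme; it's that liveness logic which only inspects pixels has no answer to an attacker who never produces pixels through the camera at all.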
"Deepfakes represent a potential staple tool for organised crime groups in the future, which could be used to create false evidence, manipulate public opinion, produce non-consensual pornography, or commit identity fraud." — Europol, as cited in biometric security research
That framing — deepfakes as organised crime infrastructure — is important context for why governments aren't treating this as a software update. They're treating it as a trust architecture problem. And trust architecture problems require standards bodies, third-party auditors, and formal compliance frameworks. Not just a better algorithm pushed in a patch.
The sophistication gap between what systems were built to handle and what attackers are now deploying is precisely why ISO/IEC 30107-3:2023 matters here. Regula Forensics explains the active/passive detection distinction clearly: active liveness asks users to perform challenges (blink, turn your head), while passive liveness analyzes the biometric input itself for artifacts and anomalies without user interaction. Each approach has attack surfaces. The standard exists because no single technique covers every vector, and because ad hoc vendor testing doesn't produce defensible results when someone's identity — or their access to government services — is on the line.
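In code terms, that distinction looks roughly like this. The scoring logic below is a placeholder (real PAD systems use trained models, and measuring their error rates is exactly what ISO/IEC 30107-3 testing formalizes); the sketch just shows the shape of each approach and why layering them covers more vectors.

```python
# Placeholder sketch of active vs. passive liveness. Real systems use
# trained models; these functions only illustrate the two shapes.

from dataclasses import dataclass

@dataclass
class Frame:
    pixels: bytes  # stand-in for image data

def passive_liveness_score(frame: Frame) -> float:
    """Passive: inspect one capture for presentation-attack artifacts
    (moire, screen glare, print texture, missing depth cues) with no
    user interaction. Returns a dummy score in [0, 1]."""
    return 0.5  # hypothetical model output

def active_liveness_check(frames: list[Frame], challenge: str) -> bool:
    """Active: prompt the user ("blink", "turn your head") and verify
    the response appears across frames. Placeholder logic only."""
    return challenge in ("blink", "turn your head") and len(frames) > 1

def liveness_decision(frames: list[Frame], challenge: str,
                      passive_threshold: float = 0.8) -> bool:
    """Layered check: each approach has attack surfaces the other
    partially covers, so require both signals."""
    return (passive_liveness_score(frames[0]) >= passive_threshold
            and active_liveness_check(frames, challenge))
```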
Liveness Is Necessary. It's Not Sufficient.
There's a harder truth buried in Australia's decision to refresh. Even a liveness-certified system can be defeated if everything else around it is weak. Keyless makes the case that liveness detection alone doesn't clear the bar for high-assurance identity — systems that layer liveness with device verification and behavioral signals are materially more resistant than those treating liveness as the final checkpoint.
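What "layered" means in practice is that liveness becomes one input to a risk decision rather than a terminal gate. A hedged sketch of that pattern follows; the signal names and thresholds are illustrative assumptions, not Keyless's implementation.

```python
# Illustrative layered identity decision: liveness is necessary but not
# sufficient. Signals and thresholds are assumptions for the sketch.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    liveness_score: float   # PAD output in [0, 1]
    device_trusted: bool    # e.g., attested device, no emulator flags
    behavior_score: float   # e.g., interaction-pattern risk, [0, 1]

def identity_decision(s: VerificationSignals) -> str:
    # Failing liveness is decisive on its own...
    if s.liveness_score < 0.6:
        return "reject"
    # ...but passing it alone isn't: a convincing face on an untrusted
    # device or with anomalous behavior still gets escalated.
    if not s.device_trusted or s.behavior_score < 0.4:
        return "step-up"  # route to additional verification
    if s.liveness_score >= 0.9 and s.behavior_score >= 0.7:
        return "accept"
    return "step-up"
```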
Think about what that means for identity systems beyond the government context. A bank onboarding a new customer remotely. An insurance company verifying a claimant. An investigator comparing images across open-source archives and agency databases. Every one of these scenarios involves a facial comparison that could have legal or financial consequences, and every one of them carries the same spoofing exposure that pushed Australia to refresh its standards. (The question of whether private-sector operators will move at the same pace as a government under procurement pressure is, let's say, a reasonable one to ask.)
Why This Refresh Matters Beyond Australia
- ⚡ The compliance floor just moved — When a major government mandates ISO/IEC 30107-3:2023 with third-party attestation, courts and regulators in other jurisdictions take note. That standard becomes the implicit benchmark for "reasonable" identity assurance.
- 📊 Vendor consolidation is coming — Third-party attestation at EAL 2 isn't cheap or fast. Smaller vendors without the resources to certify will be squeezed out of high-stakes identity contracts, concentrating the market around certified players.
- 🔮 The attack surface is still growing — Four years separated 2021's "good enough" from 2025's "needs refreshing." The next inflection point won't take four years. Generative AI capabilities are advancing faster than procurement cycles.
- 🏛️ Legal credibility now has a technical spec — Any facial comparison used to support an arrest, a fraud determination, or a legal identity claim will increasingly be evaluated against liveness compliance. Systems that can't demonstrate certified anti-spoofing are going to have a very uncomfortable time in discovery.
The Two-Class System That's Forming
What Australia's refresh is quietly creating — and this is the part that should keep compliance teams up at night — is a two-tier biometric world. On one side: systems that can demonstrate certified liveness detection, documented against international standards, verified by qualified third parties. On the other side: systems that match faces well and have always matched faces well, but whose anti-spoofing capability lives inside a vendor's white paper rather than an auditor's report.
For everyday consumer applications, that gap may not matter much for a while. For any application where the output of an identity check influences something consequential — access to services, financial transactions, legal proceedings, investigative leads — the gap is going to matter very soon. Courts don't need to formally adopt ISO/IEC 30107-3:2023 as a legal standard for it to start appearing in expert witness testimony. Once it appears in testimony a few times as the benchmark against which a system's adequacy is measured, the precedent sets itself.
This is how technical standards become legal norms without anyone passing a law. A government procures to a standard. A vendor achieves certification. A court case references the standard as the basis for evaluating whether an identity system was fit for purpose. The next system that appears in court without certification has a harder time. The pattern repeats. Platforms like CaraComp that already align their anti-spoofing architecture to internationally recognized standards aren't over-engineering — they're positioning for exactly this dynamic.
Australia's liveness refresh isn't a technical upgrade — it's a trust declaration. Matching a face was always the easy part. Proving that face belongs to a live human, in the moment, against adversarial attack methods that didn't exist four years ago, is the actual hard problem. Any identity system that can't answer that question should not be making high-stakes decisions.
The Question That Needs an Answer
Australia's decision covers a system serving roughly 14 million users — a scale where getting liveness wrong isn't an edge case, it's a class action. But the logic holds at any scale where identity verification carries weight. The attack techniques that prompted this refresh aren't targeting the Australian government specifically. They're deployed opportunistically, wherever the payoff justifies the effort, which now includes bank onboarding flows, insurance verification, and any remote identity check where a camera and an algorithm stand between an attacker and something valuable.
The honest question here isn't whether liveness should be a baseline. It almost certainly should be, for any check that carries financial, legal, or access consequences. The real question is whether the industry will get there voluntarily before a high-profile failure forces the issue — and which systems will be left standing when the first major spoofing incident in a national ID context works its way through a courtroom.
So: if a digital ID system can match a face but can't reliably prove live presence, would you trust it for a high-stakes identity check? Or has Australia just drawn the line that everyone else now has to step over?
