
Every Image Is Guilty Until Proven Authentic

A retired Saskatchewan woman lost $3,000 to a video of Mark Carney—except it wasn't Mark Carney. It was a deepfake, stitched together with enough polish to pass a casual scroll-and-click. The CBC logo appeared in the frame. The Prime Minister's voice sounded right. She invested. The money disappeared. CBC News reported the scam as one of hundreds now flowing through Canadian financial systems—but the story isn't really about one retiree or one bad video. It's about what that video tells us about the world every investigator now lives in.

TL;DR

Deepfakes have moved from curiosity to operational threat — and if your investigative workflow still treats images as trustworthy by default, you're working with broken assumptions.

This week served up a concentrated dose of what the next five years are going to look like. Deepfakes impersonating political leaders to run crypto schemes. AI-generated sexual content targeting real women spreading unchecked across platforms. A Pennsylvania State Police corporal who pleaded guilty to manufacturing thousands of deepfake pornographic images. Political candidates deploying AI-altered video of opponents days after fraud accusations. And, running in parallel, governments scrambling to build biometric identity checks into everything from social media logins to immigration services. The story threading all of it together? Trust in image and video is gone. Not declining — gone. And the investigators and institutions that haven't updated their operating assumptions are already behind.

The $40 Billion Problem Nobody's Ready For

The financial exposure is staggering, and it's moving fast. According to Fourthline's industry analysis, deepfake-related fraud losses exceeded $410 million in the first half of 2025 alone — with projections suggesting AI-enabled financial fraud could hit approximately $40 billion annually by 2027. Canada reported more than $388 million in cryptocurrency scam losses between January 2024 and September 2025, and that figure almost certainly undercounts reality: only an estimated 5–10% of victims ever report the fraud. So the prosecutable cases, the ones where evidence actually needs to hold up, represent a fraction of the damage.
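The underreporting claim can be made concrete with a rough back-of-envelope extrapolation. This sketch uses only the figures in the article; the scaled-up totals are illustrative, not reported statistics.

```python
# If only 5-10% of victims ever report, reported losses understate the true
# total by a factor of 10x-20x. Reported figure from the article (Canadian
# crypto scam losses, Jan 2024 - Sep 2025); the extrapolation is illustrative.
reported_losses = 388_000_000  # CAD

def implied_total(reported: float, reporting_rate: float) -> float:
    """Scale reported losses up by the assumed fraction of victims who report."""
    return reported / reporting_rate

low = implied_total(reported_losses, 0.10)   # if 10% of victims report
high = implied_total(reported_losses, 0.05)  # if 5% of victims report
print(f"Implied true losses: roughly ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
```

Even the conservative end of that range dwarfs the prosecutable caseload, which is the article's point: the cases where evidence must hold up are a sliver of the damage.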

$410M+
in deepfake-related fraud losses in the first half of 2025 alone, with projections of ~$40B annually by 2027
Source: Fourthline, Deepfakes in Financial Services

What makes the Mark Carney case a useful anchor isn't the dollar amount a single victim lost. It's the production quality. We're past the uncanny-valley era of deepfakes. The CBC branding, the voice cadence, the confidence of the delivery — this content is designed to pass a quick visual check by someone who isn't looking for fraud. And why would they be? Most people still approach video with an assumption of basic authenticity. That assumption is now a liability.

The New York Attorney General issued a formal investor alert warning New Yorkers about deepfake investment scams circulating on Meta platforms — describing sophisticated impersonation techniques that standard authentication processes simply weren't built to catch. The Illinois AG put out similar warnings. So did Oklahoma's. The pattern repeats. What's notable isn't that AGs are alarmed; it's that they're all alarmed simultaneously, which tells you the fraud infrastructure has reached operational scale.

From Investment Fraud to Intimate Harm — Same Weapon, Different Targets

Pull back from the financial fraud angle for a second. The Pennsylvania State Police corporal case is a different category of crime but the same underlying technology shift. A law enforcement officer — someone whose professional world presumably includes evidence collection, chain of custody, media authentication — used AI tools to manufacture thousands of deepfake pornographic images. He also pleaded guilty to viewing child pornography. The case is being prosecuted, which is good. But here's the uncomfortable question it raises: if someone inside law enforcement with access to investigative tools and training couldn't resist weaponizing deepfake generation, how much of this content exists that was made by people with no institutional accountability at all?

German celebrity Collien Fernandes went public this week with the allegation that her husband had been creating and spreading deepfake sexual images of her for years. Her case is high-profile enough to generate coverage; most aren't. The UN has been tracking this at the transnational level — UN News has flagged deepfakes and voice cloning as tools now embedded in organized criminal networks, generating billions in illicit flows and creating genuine challenges for cross-border investigation. The volume is the problem. A Seoul-based startup launched specifically to provide preemptive deepfake protection for graduation photos — because that's now a category of harm that requires a product response. Think about that for a moment.

Why This Matters for Investigators Right Now

  • Evidence is compromised at intake: any image or video submitted as evidence needs authenticity verification before it enters your case file, not after
  • Biometric ID systems are themselves being attacked: deepfakes now cause 1-in-20 identity verification failures at onboarding, meaning the verification layer can't be trusted without artifact analysis
  • The detection arms race is real: AI-generated voices have crossed an "indistinguishable threshold" according to Axis Intelligence, and major retailers are fielding 1,000+ AI-generated scam calls daily with no reliable perceptual tells
  • Courts will start demanding authentication chains: as deepfake evidence challenges become standard defense strategy, proving an image was not manipulated becomes as important as what the image shows

The Biometric Push Creates Its Own Paradox

Here's where it gets genuinely interesting. Governments are responding to rising synthetic identity fraud by doubling down on biometric verification — Greece announced a social media age verification system and is calling for EU-wide tools; Massachusetts passed a social media age verification digital ID bill; USCIS is actively exploring remote biometric identity checks for immigration services. The logic is sound: if digital identity is being spoofed, verify biology instead.

But HyperVerge's 2026 analysis identifies a brutal irony buried in this strategy. Deepfake attacks are now sophisticated enough to beat biometric verification at onboarding — synthetic identities are being injected directly into KYC and AML processes, with deepfakes accounting for roughly one in twenty identity verification failures. So the systems being built to catch deepfake fraud are themselves being targeted by deepfakes. The answer to that specific problem is liveness detection combined with artifact analysis — not just "does this face match?" but "is this face demonstrably real, in this moment, with physiological coherence?" That's a meaningful tactical shift, but it requires building forensic intent into systems that were designed primarily for speed and scale.
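The tactical shift can be sketched as a decision rule: a face match alone no longer passes, because a deepfake can clear the match check while failing liveness and artifact checks. All field names and thresholds below are hypothetical; this is an illustration of the logic, not any vendor's implementation.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    face_match_score: float  # similarity to the enrolled face, 0-1
    liveness_score: float    # physiological coherence in this session, 0-1
    artifact_score: float    # likelihood of synthetic artifacts, 0-1 (higher = worse)

def verify(signals: VerificationSignals,
           match_threshold: float = 0.90,
           liveness_threshold: float = 0.85,
           artifact_ceiling: float = 0.20) -> bool:
    """Pass only when the face matches AND is demonstrably live AND artifact-clean.

    Thresholds are illustrative placeholders, not calibrated values."""
    return (signals.face_match_score >= match_threshold
            and signals.liveness_score >= liveness_threshold
            and signals.artifact_score <= artifact_ceiling)

# A high-quality deepfake injected at onboarding: the face matches,
# but the session fails the liveness and artifact layers.
injected = VerificationSignals(face_match_score=0.97,
                               liveness_score=0.40,
                               artifact_score=0.65)
print(verify(injected))  # False: a match alone is no longer sufficient
```

The design point is that the three checks are conjunctive: weakening any one of them reopens exactly the injection path the onboarding attacks exploit.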

"Deepfakes and voice cloning are no longer fringe threats — they are tools of organized crime, generating billions in illicit flows and challenging investigators across borders." — UN News, March 2026

The volume data from Axis Intelligence is worth sitting with. Deepfake files jumped from roughly 500,000 in 2023 to 8 million in 2025. That's not gradual adoption — that's a technology reaching mass deployment. And the FBI's $16.6 billion internet scam data for the same period reflects what happens downstream when synthetic media becomes cheap, accessible, and effective.

The Three-Layer Workflow You Actually Need

So what does a workflow that accounts for this actually look like? At CaraComp, the framing we keep coming back to is that facial comparison and image analysis now need to happen in three layers, not one.

First is intake triage: every image and video that enters a case file gets a default flag of "authenticity unconfirmed." That's not paranoia — it's the same logic that makes you wear gloves at a crime scene. The evidence might be clean, but you don't assume it until you've checked. Second is forensic comparison: technical analysis of compression artifacts, eye reflection consistency, temporal coherence across video frames, and liveness signals. This is where the question shifts from "who is this?" to "was this face ever digitally altered?" — and that second question is increasingly the one that determines whether your evidence holds.
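The first two layers can be sketched in a few lines: every item enters with a default "authenticity unconfirmed" status, and only graduates once every forensic check passes. The check names mirror the ones discussed above; the stub functions and class names are hypothetical, standing in for real pixel- and frame-level analysis.

```python
from dataclasses import dataclass, field
from enum import Enum

class Authenticity(Enum):
    UNCONFIRMED = "authenticity unconfirmed"  # the default at intake (layer one)
    VERIFIED = "verified"
    SUSPECT = "suspect"

@dataclass
class EvidenceItem:
    path: str
    status: Authenticity = Authenticity.UNCONFIRMED
    findings: list = field(default_factory=list)

def run_forensic_checks(item: EvidenceItem, checks: dict) -> EvidenceItem:
    """Layer two: run each named check and record its result.

    The item leaves UNCONFIRMED only after every check runs: all passes
    make it VERIFIED, any failure makes it SUSPECT."""
    for name, check in checks.items():
        item.findings.append((name, check(item.path)))
    item.status = (Authenticity.VERIFIED
                   if all(passed for _, passed in item.findings)
                   else Authenticity.SUSPECT)
    return item

# Hypothetical stub checks; real ones would analyze the file's pixels and frames.
checks = {
    "compression_artifacts": lambda path: True,
    "eye_reflection_consistency": lambda path: True,
    "temporal_coherence": lambda path: False,  # simulated failure on one check
}

item = run_forensic_checks(EvidenceItem("case_042/frame_001.png"), checks)
print(item.status)  # Authenticity.SUSPECT
```

Note that the default status does the triage work: nothing in the pipeline can treat an item as clean by omission, which is the gloves-at-a-crime-scene logic in code form.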

Third — and this is the piece most teams skip — is evidence presentation. Courts are going to start seeing deepfake challenges as standard defense strategy within the next two to three years. "That video was manipulated" is already being floated in cases where it has no basis, simply because the technology exists and juries know it. The investigators who win those challenges will be the ones who can produce a documented, reproducible authentication chain — not just "we checked" but "here's how we checked, here's what we found, and here's why that finding is technically sound."
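A minimal version of that documented chain is a record that ties each finding to the exact bytes examined, the tool that examined them, and when. The sketch below uses a content hash for that binding; the tool name and check labels are hypothetical placeholders.

```python
import hashlib
import json
from datetime import datetime, timezone

def authentication_record(data: bytes, checks: dict) -> dict:
    """Build a reproducible authentication record for one piece of media.

    The SHA-256 digest binds the findings to the exact bytes analyzed, so
    the same file can be re-hashed later to prove the record applies to it."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "tool": {"name": "example-analyzer", "version": "0.1"},  # hypothetical tool id
        "checks": checks,  # e.g. {"compression_artifacts": "pass", ...}
    }

record = authentication_record(b"example image bytes",
                               {"compression_artifacts": "pass",
                                "eye_reflection_consistency": "pass"})
print(json.dumps(record, indent=2))
```

The hash is what makes the chain answer "here's how we checked" rather than "we checked": anyone holding the original file can recompute it and confirm the findings refer to that file and no other.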

Key Takeaway

The investigators who win the next five years won't just be better at identifying faces — they'll be the ones who built artifact analysis and authenticity documentation into routine casework before the courts started requiring it.

Look, nobody's saying this is simple. The detection arms race is genuine — generation technology moves faster than authentication technology, and that gap isn't closing quickly. But the alternative — continuing to treat images as credible until something feels off — is how a Saskatchewan retiree loses her retirement savings to a video of a man who was never in the room.


The Saskatchewan woman told CBC she felt the video "seemed real." She was right that it seemed real. That's the entire point — and the entire problem. The question every investigator now has to answer isn't whether a piece of media seems authentic. It's whether you can prove it is. And if you can't answer that question about the last ten images that entered your case file, you already know where to start.

If every photo and video you receive is now guilty until proven authentic, what's the first change you'd make to your current investigative workflow? Drop your answer in the comments — and tag someone who needs to think about this today.
