Deepfakes Felony Law in South Dakota Raises the Bar for Photo Evidence

On March 17, 2026, South Dakota's governor signed a bill making the creation and distribution of deepfake pornography a felony. The same day, Brazil's mandatory age verification law went live — and VPN signups surged 250% overnight. Meanwhile, a House panel advanced a federal bill to criminalize deepfake images of minors, and a new lawsuit landed targeting the AI porn industry directly. That's not a news cycle. That's a stress fracture running through the entire way we think about identity, evidence, and proof.

TL;DR

Governments worldwide are tightening legal standards around identity and synthetic media faster than investigators have upgraded their methodology — and the gap between "looks like them" and "court-ready proof" just got a lot more expensive to close.

Let's be honest about what this week actually revealed. It wasn't that lawmakers suddenly care about deepfakes. It's that the regulatory stack — criminal penalties, age-gate mandates, biometric verification requirements — is now being built at speed, without waiting for investigators, prosecutors, or platform operators to catch up. The people who prove "who is who" for a living are about to get cross-examined on methodology they've never had to defend before.


The Week That Changed the Evidentiary Standard

Start with South Dakota Searchlight's reporting on the felony signing. On its face, it's a state-level criminal statute targeting non-consensual synthetic sexual content. Straightforward enough. But zoom out for a second: South Dakota is not exactly Silicon Valley. When a relatively conservative, lower-population state moves this fast on AI-generated content law, it's because the cases showing up in prosecutors' offices made the issue impossible to ignore any longer.

Washington's governor signed a similar identity-rights bill the same week. The House panel moved on minors. Ted Cruz has been publicly tying deepfake porn legislation to his broader child protection agenda. This isn't a patchwork anymore — it's a coordinated federal-and-state tightening that will eventually make "I thought the image was real" a legally inadequate response in any proceeding where photo or video evidence is contested.

Then there's the age verification front. Australia's rules requiring adult platforms to verify user age went live on March 9, with sites like Pornhub simply blocking Australian IPs rather than complying — a move that Kotaku documented in detail, including concerns about the law's downstream effects on gaming platforms. Brazil followed eight days later. The VPN spike isn't just about privacy-conscious users dodging content filters. It's a signal that mass biometric collection as a condition of internet access — which is effectively what these laws require — hits serious public resistance fast.

30% of enterprises will no longer trust identity verification solutions relying solely on face biometrics by 2026, due to AI deepfakes.
Source: Gartner, as reported by Deep Media

The Detection Problem Nobody Wants to Say Out Loud

Here's where things get genuinely uncomfortable for anyone in the identity verification or investigative space. The legal standard for what counts as proof is rising. At the same time, the tools meant to meet that standard are quietly struggling.

Peer-reviewed research published in NIH/PMC puts average deepfake detection accuracy at roughly 80% — and notes that most detection systems can't coherently explain how they reached their verdict. That last part is the killer detail. An 80% accurate black box doesn't survive a competent cross-examination. A defense attorney doesn't need to prove the system is wrong. She just needs to establish that you can't prove it's right.

The trade-off the research describes is genuinely tricky: detection models calibrated for high sensitivity flag legitimate content as manipulated. Models tuned to reduce false positives let subtle fakes through. Neither version is court-ready by default. And yet investigators are being asked — by clients, by employers, by prosecutors — to make definitive identity calls on video and audio evidence in an environment where Forbes has explicitly called deepfake audio an evidence crisis, not just a cybersecurity problem.

"AI-based detection models struggle to balance accuracy with false positives — overly sensitive models may flag legitimate content as manipulated, while less sensitive ones miss subtle manipulations, a trade-off particularly critical in legal and security systems." — NIH/PMC Peer-Reviewed Research, 2026
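The trade-off the research describes is just threshold tuning on a detector's output score, and a toy sweep makes it concrete. The scores and labels below are synthetic, invented purely for illustration — real detectors emit a per-item "manipulated" probability that has to be cut somewhere:

```python
# Toy illustration of the detection threshold trade-off.
# Scores and labels are synthetic; a real detector outputs a
# per-item probability that the media is manipulated.

def confusion(scores, labels, threshold):
    """Count outcomes when flagging every score >= threshold as fake.
    Returns (true pos, false pos, false neg, true neg)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp, fp, fn, tn

# 1 = actually a deepfake, 0 = genuine
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.95, 0.80, 0.55, 0.45, 0.50, 0.30, 0.20, 0.10]

# Sensitive setting: catches every fake, but flags a genuine image too.
print(confusion(scores, labels, 0.40))  # -> (4, 1, 0, 3)

# Conservative setting: no false alarms, but two subtle fakes slip through.
print(confusion(scores, labels, 0.60))  # -> (2, 0, 2, 4)
```

There is no threshold in this toy set that yields both zero false positives and zero misses — which is exactly the bind the quoted research describes for legal and security systems.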

The biometric ID side has its own set of problems piling up this week. Discord's age verification system reportedly runs 269 separate facial checks against external databases per user — a number that will make any privacy attorney's eyes light up. Spain fined identity verification app Yoti for privacy violations tied to its biometric data practices. Essex Police paused its facial recognition cameras after a study found racial bias in results. Illinois lawmakers introduced a sweeping ban on law enforcement biometric surveillance. The Coalition for Content Provenance and Authenticity (C2PA) standard — which uses cryptographic signatures to verify media origin — is gaining traction as a potential solution, per UncovAI's 2026 detection methods analysis, but adoption across platforms is still far from universal.
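The core C2PA idea — cryptographically binding a signature to the exact media bytes so that any later edit is detectable — can be sketched in miniature. To be clear about assumptions: real C2PA manifests use X.509 certificate chains and asymmetric (COSE) signatures embedded in the file; the keyed hash below is a deliberately simplified stand-in, and every name in it is illustrative:

```python
import hashlib
import hmac

# Simplified stand-in for provenance signing. Real C2PA embeds a signed
# manifest (X.509 / COSE) in the media file; here a keyed hash over the
# raw bytes is enough to show the verification idea.

SIGNING_KEY = b"demo-key-not-a-real-credential"  # illustrative only

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag bound to the exact media bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """True only if the bytes are unchanged since signing."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"\x89PNG...camera sensor output..."
tag = sign_media(original)

print(verify_media(original, tag))         # unchanged bytes -> True
print(verify_media(original + b"x", tag))  # any edit at all -> False
```

The design point is that verification answers "has this file changed since capture?" rather than "does this look fake?" — which is why provenance standards sidestep the detection accuracy problem entirely, at the cost of requiring adoption at the point of capture.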

Taken together, this is not random noise. It's a system under pressure from both sides simultaneously.



What This Means for Investigators — Specifically

The Investigator's New Reality

  • ⚠️ Visual identification alone is legally exposed — "it looks like them" won't survive a deepfake challenge when seniors are losing thousands to AI voice scams and courts are already questioning video authenticity
  • 📊 Methodology documentation is now the work product — a comparison that can't show its reasoning in writing is a liability, not an asset, in any contested proceeding
  • 🔮 A clear regulatory lane is opening — controlled, auditable, case-bound facial comparison is gaining legal legitimacy exactly as mass biometric surveillance faces backlash; professional investigators who document purpose and scope are well-positioned
  • 🛡️ Deepfake defense is now standard cross-examination territory — any video or image used as evidence in a case where identity is disputed will face the question: "How do you know this wasn't generated or manipulated?"

The investigator who wins in this environment isn't necessarily the one with the most sophisticated AI tools. It's the one who can hand opposing counsel a document explaining — in plain language — the methodology behind a facial comparison, the confidence thresholds applied, the chain of custody for the image or video in question, and why that analysis holds up against deepfake manipulation as a hypothesis. That's a different skill set than visual pattern matching. And frankly, most investigators haven't built it yet. (The Gartner projection that 30% of enterprises will stop trusting face biometrics alone by 2026 isn't a hypothetical — it's already happening in procurement decisions right now.)
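One concrete piece of that documentation is a tamper-evident intake record for each item of evidence: a cryptographic hash of the exact file bytes, plus who received it and when. This is a minimal sketch of the idea — the field names and workflow here are illustrative, not any legal or forensic standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def intake_record(path: str, case_id: str, examiner: str) -> dict:
    """Build a chain-of-custody entry for one evidence file.
    The SHA-256 covers the exact bytes, so any later modification
    to the file changes the hash and silent substitution is detectable."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "case_id": case_id,
        "file": path,
        "sha256": digest,
        "received_by": examiner,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }

# Usage: write the record alongside the evidence at intake and never
# edit it afterward, e.g.:
#   record = intake_record("exhibit_17.mp4", "2026-CR-0142", "J. Doe")
#   print(json.dumps(record, indent=2))
```

A record like this doesn't prove the image is authentic — it proves the file you analyzed is the file that was received, which is the part of the chain a deepfake challenge probes first.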

At CaraComp, the conversations we're having with investigators and legal teams are increasingly about documentation workflows and court-ready reporting — not just "did the faces match." Because the technical comparison is the easy part. The hard part is being able to explain it to a judge who just read about deepfakes in the news, or to a defense attorney who is specifically trained to make your methodology sound unreliable.

Look, nobody's saying this is simple. A €1.7M pre-seed round just went to Neuramancer specifically to scale deepfake detection tools. VeryAI raised $10M for its "Proof of Reality" identity verification platform. The market is telling you something: the demand for verifiable, explainable identity proof is accelerating faster than any single regulator is moving.


The Regulatory Ratchet Only Turns One Direction

There's a counterargument worth taking seriously: governments are legislating a problem they don't fully understand. The VPN spikes in Brazil and Australia show that mandatory biometric collection at internet access points generates real public backlash. Initial resistance often tapers off — but if verification systems prove clunky, generate data breaches, or fail disproportionately on certain populations (see: Essex Police, the facial recognition bias findings), entrenched workarounds become permanent behaviors.

The deeper issue is that regulatory activity and investigative capability are moving on completely different timelines. Laws get signed in an afternoon. Court-ready analytical methodology takes months or years to develop, document, and test under adversarial conditions. The South Dakota felony bill is already law. The investigative standards that will be required to prosecute cases under it are still being written.

Key Takeaway

Governments are drawing hard legal lines around deepfakes and biometric identity verification simultaneously — which means every investigator who uses photo or video evidence in a contested case now needs to answer one question before a defense attorney asks it first: can you prove, on paper, that your facial comparison methodology withstands a deepfake challenge?

The investigators who got ahead of this didn't wait for the South Dakota bill. They started building documented, repeatable facial comparison workflows when deepfake scams first started hitting the news — when seniors were losing thousands and C-suites were being spoofed on video calls. They understood that the legal bar was moving and moved with it.

Everyone else is now playing catch-up against a standard that was just written into criminal statute.

So here's the specific question worth sitting with this week: if a defense attorney challenged your next photo or video identification as "just another deepfake," what hard documentation — not confidence, not experience, not gut instinct, but actual written methodology — could you put on the table today to prove your analysis holds up? If the honest answer takes longer than five seconds to arrive, you already know what you need to do next.

Ready to try AI-powered facial recognition?

Match faces in seconds with CaraComp. Free 7-day trial.

Start Free Trial