
47 States, 4 Legal Regimes, One Deepfake: The Jurisdiction Trap Investigators Never Saw Coming

In January 2024, an employee at engineering firm Arup joined a routine video call. The CFO was there. So were several colleagues. All of them turned out to be AI-generated. Before anyone figured that out, 15 wire transfers had gone through — totaling $25 million. The deepfake worked. But here's the thing that keeps lawyers up at night: if investigators had caught it faster, would the evidence they collected have held up in every jurisdiction the money touched?

TL;DR

Deepfake laws now exist across 47 states and the EU — but they define synthetic media differently, criminalize different behaviors, and impose different evidentiary standards, meaning a single cross-border case can hit four conflicting legal frameworks simultaneously.

Everyone in this industry has been watching the fraud numbers. Synthetic identity fraud hitting record highs. Voice cloning scams stealing millions. AI impersonation incidents that range from embarrassing to catastrophic. Those numbers matter — but they're not the most important data point right now. The number that should be keeping investigators and legal teams awake is this: 47 states, plus a patchwork of federal law, plus EU mandates that kick in August 2026, all govern synthetic media — and they do not agree on what a deepfake is, who's liable, or what evidence is admissible.

The Fragmentation Nobody Warned You About

Most people assume legal standards for new technology start thin and gradually fill in. What's happening with deepfake regulation is the opposite. Laws are multiplying faster than anyone can track them — and they're pulling in different directions.

146
Deepfake-specific bills introduced to state legislatures in 2025 alone
Source: Ballotpedia

According to Ballotpedia's state deepfake legislation tracker, 82% of all state deepfake laws were enacted in just the last two years — 2024 and 2025. That's not gradual policy development. That's a legislative pile-on, with every state essentially writing its own playbook. Some states use the term "synthetic media." Others say "materially deceptive media." A handful actually use the word "deepfake." These aren't just semantic differences. They define the scope of what's prosecutable and what evidence you need to prove it.

Then the federal TAKE IT DOWN Act landed in May 2025, creating a national framework — primarily for intimate image abuse — that sits alongside, not above, state laws. And across the Atlantic, the EU AI Act's Article 50 transparency requirements take effect in August 2026, imposing disclosure obligations on AI-generated content without creating any general ownership right in someone's image or voice. So you've got three distinct legal regimes — state, federal, European — that can apply to a single deepfake incident simultaneously, and they define the problem from completely different starting points.

"Each synthetic output can trigger criminal liability, consumer-protection claims, platform-removal obligations, or identity-rights lawsuits — depending on where your business operates and which country's law applies first." (Harris Sliwoski LLP)

Read that again. The same piece of synthetic media can trigger multiple legal theories across multiple countries at once. Which legal theory gets applied first — and where — will determine what investigators need to prove and how they need to prove it.

The Evidence Problem Nobody Is Talking About

Here's where investigators who focus only on detection are missing something important. Detecting a deepfake is a technical problem, and it's one the industry is actively solving. Proving a deepfake in ways that survive cross-border legal scrutiny — that's a different challenge entirely, and it's a lot harder.

Consider what Jones Walker LLP's AI Law Blog describes when analyzing the current state patchwork: investigators working a case that crosses three states are already working under four different legal definitions of what makes a deepfake provable. Add an international component — a wire transfer to Hong Kong, a video call with a London-based colleague — and you're threading evidence through frameworks that weren't designed to talk to each other.

Chain of custody has always mattered in digital forensics. But the standards for what constitutes an adequate chain of custody for AI-generated content vary by jurisdiction. What satisfies evidentiary requirements in one state may face admissibility challenges in an EU court operating under the AI Act's transparency framework. The International Bar Association has flagged exactly this problem: national experimentation across EU member states — Italy and Denmark being early examples — is creating sub-jurisdictional variation even within the bloc, meaning EU-wide compliance doesn't guarantee admissibility across all member states.
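The chain-of-custody idea can be made concrete with a small sketch: a hash-linked evidence log in which each entry commits to the previous one, so altering any earlier record invalidates everything after it. This is a minimal illustration of the documentation pattern, not any jurisdiction's required format; all field names are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_hex(data: bytes) -> str:
    """Content hash used to detect any later alteration of the evidence."""
    return hashlib.sha256(data).hexdigest()

def append_custody_event(log: list, evidence: bytes, action: str, actor: str) -> dict:
    """Append a custody event whose hash chains over the previous entry,
    so tampering with any earlier entry breaks the chain from that point on."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,          # e.g. "acquired", "copied", "analyzed"
        "actor": actor,
        "evidence_sha256": sha256_hex(evidence),
        "prev_entry_hash": prev_hash,
    }
    # Hash the serialized entry itself so the next event can chain over it.
    entry["entry_hash"] = sha256_hex(json.dumps(entry, sort_keys=True).encode())
    log.append(entry)
    return entry

# Usage: record acquisition and analysis of a suspect video file.
log = []
video = b"...suspect video bytes..."
append_custody_event(log, video, "acquired", "investigator_a")
append_custody_event(log, video, "analyzed", "forensics_lab_b")
```

The point of the chaining is portability: a self-verifying log can be handed to any court, under any framework, without depending on one jurisdiction's filing conventions.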

Why This Matters for Investigators

  • Definitional chaos is real — "Synthetic media," "materially deceptive media," and "deepfake" are not interchangeable across state statutes, and the difference determines what you must prove
  • Evidence collected under one standard can fail in another — U.S. state-level evidentiary handling may not satisfy EU AI Act transparency requirements, and vice versa
  • Provenance documentation is now non-negotiable — C2PA cryptographic provenance tracking and tools like Google's SynthID (already embedded in over 10 billion pieces of content) are moving toward ISO standardization for exactly this reason
  • Vendor and insurance risk is quietly growing — organizations without jurisdiction-mapped compliance matrices are carrying incident-response risk they haven't priced

Provenance Is Now an Evidentiary Weapon

The investigators who are going to win cross-border deepfake cases aren't necessarily the ones with the best detection technology. They're the ones who document provenance, consent, and methodology before anyone challenges their process in court.

This is where the technical and legal worlds are starting to collide in important ways. The Coalition for Content Provenance and Authenticity — C2PA — backed by Adobe, Microsoft, Google, and OpenAI, provides cryptographic provenance tracking that embeds metadata about content origin and modifications. Google's SynthID watermarks AI-generated content at the pixel level, designed to survive compression and editing. Both are advancing toward ISO international standardization. They won't solve the jurisdictional fragmentation problem, but they give investigators something critical: a consistent documentation layer that can be presented to multiple legal regimes simultaneously.
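The core mechanism behind a provenance manifest is simple to sketch: the manifest asserts a hash of the content as it existed at signing time, and verification recomputes that hash. This is a toy stand-in for the hash-binding idea only; real C2PA manifests are cryptographically signed and embedded in the asset, and the field names below are hypothetical.

```python
import hashlib

def verify_provenance_claim(content: bytes, manifest: dict) -> bool:
    """Simplified C2PA-style check: recompute the content hash and compare
    it to the hash the manifest asserted at creation time. Any edit to the
    content after signing makes the check fail."""
    return hashlib.sha256(content).hexdigest() == manifest.get("content_sha256")

# Usage: a manifest created when the asset was generated...
asset = b"rendered video frame data"
manifest = {
    "generator": "some-ai-tool",  # hypothetical tool identifier
    "content_sha256": hashlib.sha256(asset).hexdigest(),
}

untouched = verify_provenance_claim(asset, manifest)            # True
edited = verify_provenance_claim(asset + b" edited", manifest)  # False
```

The signature layer (which this sketch omits) is what makes the claim trustworthy; the hash binding is what makes it portable across legal regimes.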

For platforms built around facial comparison and identity verification — including tools used in active investigations — this is where methodology documentation becomes as important as the match result itself. If you're comparing faces across a case that touches multiple states or crosses a border, the comparison is only half the story. How you documented the comparison, under what consent framework you operated, and whether your provenance trail satisfies the evidentiary requirements of every jurisdiction in play — that's the other half. (And frankly, it's the half more likely to blow up on you in court.)
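One way to make "the comparison is only half the story" operational is to never store a match result on its own: bundle it with the method, consent basis, jurisdictions, and provenance trail in a single record. A minimal sketch, with all field names and values hypothetical rather than any platform's actual schema:

```python
from datetime import datetime, timezone

def comparison_record(match_score: float, jurisdictions: list,
                      consent_basis: str, method: str,
                      provenance_log: list) -> dict:
    """Bundle a facial-comparison result with the documentation that has to
    survive legal scrutiny alongside it. Illustrative, not a legal standard."""
    return {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "match_score": match_score,
        "method": method,                  # algorithm name and version used
        "consent_basis": consent_basis,    # the framework you operated under
        "jurisdictions": jurisdictions,    # every regime the case touches
        "provenance_log": provenance_log,  # chain-of-custody entries
    }

# Usage: a match result is incomplete until these fields travel with it.
rec = comparison_record(0.97, ["state_a", "eu"],
                        consent_basis="law_enforcement_exception",
                        method="facial_embedding_v2",
                        provenance_log=[])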

According to Ondato's global deepfake regulation analysis, deepfake incidents surged 257% in 2024. The scale of the problem is not in question. But scale without legal clarity creates a specific kind of mess: investigators closing cases under standards that are already shifting, defendants challenging evidence collected before jurisdictions hardened their requirements, and courts making it up as they go.

The Convergence Argument — and Why Betting on It Is Risky

Some legal observers argue that the global regulatory picture will converge. G7 discussions, UNESCO AI ethics principles, regional compacts — the argument is that shared priorities around transparency, consent, and rapid takedown will eventually produce something like a unified standard. The Columbia Journal of European Law notes real convergence pressures between the EU AI Act and the Digital Services Act, with transparency obligations pulling toward alignment.

Fair enough. But convergence is a future state. Cases are happening now. Evidence is being collected now. And every investigator or legal team that operates as if convergence is already here is building blind spots into their process. The organizations that will handle cross-border deepfake cases well are the ones treating current fragmentation as a permanent operating condition — not a temporary inconvenience on the way to a cleaner future.

That means jurisdiction-mapped compliance matrices. It means documentation standards that satisfy the most demanding framework in play, not the most convenient one. And it means building provenance tracking into investigations from the start, not retrofitting it when a legal challenge lands.
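A jurisdiction-mapped compliance matrix can start as something very plain: a table of what each regime calls the offense and what it demands, plus a function that takes the union of obligations, i.e. documents to the most demanding framework in play. The entries below are hypothetical placeholders for illustration; a real matrix would be maintained by counsel against actual statutes.

```python
# Hypothetical matrix; "state_a"/"state_b" are placeholders, and only the
# EU AI Act Article 50 transparency obligation is drawn from the article.
COMPLIANCE_MATRIX = {
    "state_a": {"term": "synthetic media",
                "consent_doc": False, "provenance_required": False},
    "state_b": {"term": "materially deceptive media",
                "consent_doc": True, "provenance_required": False},
    "eu":      {"term": "AI-generated content (AI Act Art. 50)",
                "consent_doc": True, "provenance_required": True},
}

def strictest_requirements(jurisdictions: list) -> dict:
    """Union of obligations across every jurisdiction a case touches:
    if any regime requires it, the case documentation includes it."""
    reqs = [COMPLIANCE_MATRIX[j] for j in jurisdictions]
    return {
        "consent_doc": any(r["consent_doc"] for r in reqs),
        "provenance_required": any(r["provenance_required"] for r in reqs),
        "terms_in_play": sorted({r["term"] for r in reqs}),
    }

# Usage: a case touching one state plus the EU inherits the EU's stricter bar.
case = strictest_requirements(["state_a", "eu"])
```

The design choice is deliberate: `any()` means requirements only ever accumulate, which is exactly the "most demanding framework, not the most convenient one" posture described above.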

Key Takeaway

The deepfake fraud problem has become a deepfake jurisdiction problem. Investigators who document provenance, consent, and evidentiary methodology to the standard of the most demanding applicable legal regime — before any court challenge — will close cases. Those who don't will spend their time defending their process instead of their findings.


The Arup case became famous because a $25 million loss is a staggering number. But the harder question — the one nobody's written about — is what would have happened if investigators tried to prosecute it across the multiple jurisdictions the scheme touched, under the 47 different state definitions of what a synthetic media crime actually is. Some of that money crossed borders. Some of those AI-generated "colleagues" were rendered by tools hosted in countries with their own rules. The deepfake worked once. The legal theory for prosecuting it might have to work in four places at once.

That's not a hypothetical problem. That's Tuesday morning for the next investigator who opens a cross-border AI impersonation case — and discovers that the real deepfake they have to defeat isn't the video. It's the assumption that their process was legally sound everywhere it needed to be.

Ready for forensic-grade facial comparison?

2 free comparisons with full forensic reports. Results in seconds.

Run My First Search