
47 States, 4 Legal Regimes, One Deepfake: The Jurisdiction Trap Investigators Never Saw Coming


This episode is based on our article: 47 States, 4 Legal Regimes, One Deepfake: The Jurisdiction Trap Investigators Never Saw Coming.

Read the full article →

Full Episode Transcript


An employee at the engineering firm Arup joined a video call with his C.F.O. and several colleagues. They talked through a series of wire transfers. He authorized fifteen of them. Every single person on that call was a deepfake. The company lost twenty-five million dollars before anyone realized what happened.



That case landed in one country

That case landed in one country. But if it had crossed a border — if the employee sat in London, the servers ran through Singapore, and the bank accounts lived in the U.S. — the legal mess would've been almost as damaging as the fraud itself. Forty-seven U.S. states have now passed laws targeting deepfakes. The federal TAKE IT DOWN Act became law in May of twenty twenty-five. The E.U. A.I. Act's transparency rules kick in by August of twenty twenty-six. And none of them agree on what a deepfake is, how to prove one, or what to do about it. If you've ever been on a video call, or had your photo posted online, or sent a voice message — your face and your voice now exist in a world where faking them is easy and prosecuting the fakers is a jurisdictional maze. According to a new analysis from the law firm Harris Sliwoski, a single piece of synthetic media can trigger criminal charges, consumer protection claims, platform removal orders, and identity rights lawsuits — all at once, all in different places, all under different rules. So what happens when the evidence that proves a deepfake in Texas gets thrown out in Brussels?

Start with the sheer speed of what's happened. According to Ballotpedia's legislative tracker, more than four out of every five state deepfake laws on the books right now were approved in just twenty twenty-four and twenty twenty-five. In twenty twenty-five alone, state legislators introduced a hundred and forty-six bills with language specifically targeting A.I.-generated deepfakes. That's not a slow build. That's a stampede. And when legislatures move that fast, they don't coordinate. Seven states use the term "synthetic media." Six others say "materially deceptive media." Three just say "deepfakes." Those aren't just different labels — they carry different legal definitions, different burdens of proof, and different penalties. For anyone who's ever shared a video online, that means the same clip could be legal in one state and criminal in the next.

Now zoom out past U.S. borders. The E.U. is building its approach around disclosure. Article fifty of the E.U. A.I. Act requires anyone who creates or manipulates content with A.I. to label it transparently. But it doesn't give individuals a general ownership right over their own image or voice. Across Asia, the emphasis lands differently — on consent and rapid takedown requirements. So you've got three broad regulatory philosophies operating at the same time. Transparency in Europe. A patchwork of identity rights, election rules, and intimate-content bans across U.S. states. And consent-plus-takedown frameworks in Asia. For an investigator working a case that touches even two of those zones, the question isn't whether the deepfake is detectable. It's whether the proof survives the trip from one legal system to another.

And that's the part that keeps practitioners up at night. An investigator who collects evidence under a U.S. state standard may find that exact same evidence inadmissible in an E.U. court. The chain of custody requirements differ. The metadata standards differ. Even the definition of what counts as "manipulated" differs. For everyday people, this means something unsettling — someone could use your face in a fake video, and whether you have any legal recourse depends almost entirely on where you happen to live and where the person who made it happens to sit.
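The chain-of-custody problem has a common technical core: proving that evidence bytes have not changed between the moment of collection and the moment they reach a court. A minimal sketch of that idea, using Python's standard `hashlib` — the function names and the log-entry format here are illustrative, not any jurisdiction's actual standard:

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw evidence bytes."""
    return hashlib.sha256(data).hexdigest()

def custody_entry(data: bytes, handler: str, action: str) -> dict:
    """Record one chain-of-custody event: who handled the evidence,
    what they did, when, and the file's hash at that moment."""
    return {
        "sha256": fingerprint(data),
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def verify(data: bytes, entry: dict) -> bool:
    """Re-hash the bytes later and confirm nothing changed."""
    return fingerprint(data) == entry["sha256"]
```

The hash itself travels fine across borders; what differs between legal systems is which metadata must accompany it and who is allowed to attest to each entry.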


The Bottom Line

Some of the infrastructure to fix this is already being built. The C.2.P.A. — the Coalition for Content Provenance and Authenticity — is backed by Adobe, Microsoft, Google, and OpenAI. It uses cryptographic tracking to prove where a piece of content came from and whether it's been altered. That standard is moving toward international I.S.O. certification. Meanwhile, Google's SynthID system has already watermarked more than ten billion pieces of content with pixel-level signals designed to survive compression and editing. Ten billion. Those tools matter because they create a provenance trail — a way to prove what's real before you ever walk into a courtroom. For investigators, that trail is becoming the single most important piece of any deepfake case. For the rest of us, it's the invisible layer that might eventually tell you whether the video you're watching actually happened.
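The provenance trail that a standard like C.2.P.A. builds can be pictured as a signed manifest attached to content: a hash of the bytes plus a signature binding that hash to the tool that produced it. The sketch below is a toy analogue in Python, not the real C.2.P.A. format — actual manifests are embedded in the file and signed with X.509 certificate chains, and `SIGNING_KEY` here is a stand-in for a proper signing credential:

```python
import hashlib
import hmac

# Illustrative shared secret; real provenance manifests are signed
# with certificate-based keys, not an HMAC secret.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(content: bytes, generator: str) -> dict:
    """Attach a provenance record: content hash, the tool that made
    it, and a signature binding the two together."""
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{digest}|{generator}".encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"sha256": digest, "generator": generator, "signature": signature}

def check_manifest(content: bytes, manifest: dict) -> bool:
    """True only if the bytes match the recorded hash and the
    signature verifies — editing either the content or the claimed
    generator breaks the check."""
    digest = hashlib.sha256(content).hexdigest()
    payload = f"{digest}|{manifest['generator']}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (digest == manifest["sha256"]
            and hmac.compare_digest(expected, manifest["signature"]))
```

The point of the design is that tampering is detectable without ever seeing the original: any edit to the content changes the hash, and any edit to the manifest invalidates the signature.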

The optimistic view says all of this is converging. The G.7 and UNESCO are already discussing shared A.I. ethics principles, including content labeling. But convergence is a destination, not a current address. And anyone who builds their process around where the law is headed instead of where it actually stands today is creating gaps they can't see until a case blows up.

So — forty-seven states, one federal law, the E.U., and multiple Asian frameworks have all written rules about deepfakes. None of them define the problem the same way. And the evidence that proves a fake in one place might mean nothing in another. Whether you're building a case or you're just someone whose face is already out there on the internet, the rules that are supposed to protect you depend on lines drawn on a map — and deepfakes don't stop at borders. The written version goes deeper — link's below.
