Baltimore Sues xAI Over Deepfake Porn — and Exposes a Forensic Gap Courts Can't Close


This episode is based on our article: "Baltimore Sues xAI Over Deepfake Porn — and Exposes a Forensic Gap Courts Can't Close."

Full Episode Transcript


According to C.N.B.C., a single A.I. tool generated roughly three million sexualized images in just eleven days. About twenty thousand of those depicted children. And the city that decided to do something about it? Baltimore.



Baltimore just became one of the first major U.S. cities to sue over A.I.-generated deepfake pornography. The city filed suit against xAI, the company behind the Grok chatbot, using its own consumer protection authority to go after the platform. Individual victims have filed cases before. But when a city government has to break new legal ground just to shield its own residents, that tells you something about the legal vacuum underneath all of this. And the question threading through this whole story is simple: we now have laws that say deepfake porn is illegal — but can anyone actually prove a deepfake is a deepfake in a courtroom?

Start with what Congress did. President Trump signed the TAKE IT DOWN Act into law on May 19, 2025. That law requires platforms to set up a notice-and-removal process. Once a victim reports non-consensual intimate imagery, the platform has forty-eight hours to take it down. Forty-eight hours sounds fast. But removal isn't prosecution. And prosecution requires evidence that holds up under scrutiny.

That's where the whole system starts to buckle. Traditional evidence rules were built for a world where someone might doctor a photo — crop it, alter the lighting, splice two images together. A deepfake doesn't distort reality. It fabricates reality from scratch. It can mimic a real person's face, voice, and body with near-perfect accuracy. So the old methods courts use to check whether evidence is authentic? They weren't designed for content that never existed in the first place.




Legal scholars have proposed a new federal evidence rule — Rule 901(c) — specifically for this problem. According to the University of Illinois Chicago Law Library, it would cover what they call "potentially fabricated or altered electronic evidence." Under that proposal, the rule kicks in when the opposing side shows that a reasonable jury could find the evidence was generated or altered by A.I. That rule doesn't exist yet. It's a proposal. Meanwhile, cases are already landing on judges' desks.

So what tools do investigators actually have right now? According to analysis from Kennedys Law, A.I. detection is still an emerging field. Many of the tools are proprietary. Their methods aren't standardized. And their results carry built-in uncertainties. If you want to prove in court that an image is synthetic, you likely need to hire a digital forensics expert who can perform a detailed technical analysis. That drives up litigation costs dramatically — and it puts the burden on the victim's legal team to fund that expertise.

Who does this actually affect most? According to multiple sources tracking the deepfake landscape, roughly nine out of every ten deepfake videos online are non-consensual pornography. The vast majority target women. That's why Congress and state legislatures moved with unusual speed and bipartisan support. But speed in writing a law doesn't translate to readiness in a courtroom. You can prove that a piece of content exists. Proving it's a deepfake — in a way that survives a Daubert challenge, the legal standard for expert testimony — requires forensic sophistication that doesn't yet have clear judicial standards behind it.


And the First Amendment complicates things further. Some legal scholars argue that sexualized mockery, however repugnant, can overlap with political speech. First Amendment groups have flagged concerns about the TAKE IT DOWN Act's language. The bill targets non-consensual intimate imagery, but it doesn't specifically exempt other legal content from its takedown enforcement. Critics worry enforcement could reach non-public content stored on servers — and might even require providers to break end-to-end encryption. Courts are still trying to figure out who's actually liable. Is it the person who typed the prompt? The company that built the tool? The platform that hosted the result?

The Bottom Line

The gap most people miss isn't between legal and illegal. We've mostly sorted that out. The gap is between "this looks fake" and "this forensically qualifies as synthetic under rules a court will accept." And that gap is getting wider every day the technology improves.

So the short version. Baltimore sued because no one else with that kind of authority had. Congress passed a law that forces platforms to remove deepfakes within two days — but removing an image and proving it's fake in court are two completely different problems. The forensic tools to bridge that gap exist, but they're uneven, expensive, and lack the standardized judicial backing investigators need. What to watch for next — whether proposed Rule 901(c) gains traction, and whether courts start setting real benchmarks for what counts as forensic proof of synthetic media. That's the domino that changes everything downstream. I linked the full article below — worth a read.
