The Cop Who Made 3,000 Deepfakes Exposed a Bigger Problem Than Deepfakes


This episode is based on our article.
Read the full article →

Full Episode Transcript


A Pennsylvania state trooper just pleaded guilty to creating three thousand deepfake images. He built them using driver's license photos and law enforcement databases — the same systems police use every day to identify suspects. And the week that plea came down, Connecticut moved forward on a bill to ban deepfakes near elections.


Two stories, same week, pulling in opposite directions

Two stories, same week, pulling in opposite directions. One state punishing a cop who abused facial image systems. Another state rushing to outlaw manipulated media — without ever asking how those systems should work in the first place. If you've ever had a driver's license photo taken, your face is sitting in one of these databases right now. You didn't opt in. There's no rule governing who touches that image or what they do with it.

The trooper's name is Stephen Kamnik. According to the Philadelphia Inquirer, he was a corporal with the Pennsylvania State Police who combined A.I. tools with his access to PennDOT records and law enforcement files to generate thousands of fake intimate images. Meanwhile, Connecticut's House Bill 5342 would restrict manipulated images, audio, or video within ninety days of an election. It defines synthetic media as anything a "reasonable person" would find deceptive. So the question running through both of these stories is this — if lawmakers are willing to criminalize deepfake abuse, why won't they standardize how police are supposed to use facial image technology legitimately?

Start with Kamnik, because his case cracks open the real problem. This wasn't some outsider hacking into a government server. He had authorized access. He sat inside the system. And he used that access to pull real people's photos from state databases and feed them into A.I. generators. Three thousand images. That's not a one-time lapse. That's an operation. And the infrastructure that made it possible — the databases, the access permissions, the lack of audit trails — that infrastructure is still running. Every investigator who uses facial comparison tools for a case is working inside the same architecture that Kamnik exploited. There's no published standard separating what he did from what a detective does when they run a legitimate facial comparison. No codified method. No peer-reviewed protocol. That's the gap.



Shift to Connecticut

Now shift to Connecticut. H.B. 5342 targets election-period manipulation and non-consensual intimate imagery. On its face, that sounds like progress. But look at how the bill defines the problem. It relies on whether content is "intended to influence" an election and whether a "reasonable person" would find it deceptive. Those are subjective calls. And subjective legal standards are exactly what make facial comparison evidence shaky in court. If you're a defense attorney, you love vague language — it gives you room to challenge everything. If you're an investigator trying to present facial analysis to a jury, vague law is your enemy. For anyone who's ever served on a jury or watched a trial, this matters too — because the rules that govern what counts as evidence shape whether the right person goes to prison or the wrong one does.

And Connecticut isn't alone. According to Ballotpedia's annual deepfake legislation tracker, states introduced nearly a hundred and fifty bills containing A.I. deepfake language in 2025. A hundred and fifty. Almost all of them focus on criminalizing harmful output — revenge imagery, election fraud, that kind of thing. Almost none of them address the tools and databases that make deepfakes possible in the first place.

Meanwhile, the harm is accelerating. The National Center for Missing and Exploited Children documented a more than thirteen-fold increase in reports of A.I.-generated child sexual abuse material between 2023 and 2024. That's roughly sixty-seven thousand reports in a single year. Sixty-seven thousand. Nobody can argue we should wait around while that number climbs. But criminalizing the output and ignoring the infrastructure is like arresting drunk drivers without ever inspecting the bar that served them.


The Bottom Line

There's a political layer here too. According to the C.T. Mirror, Connecticut's governor threatened to veto a separate bill that would have required A.I. companies to disclose how their systems work. The concern was that transparency rules would hurt the state's tech sector. So Connecticut will criminalize what bad actors do with A.I. — but it won't force the companies building A.I. to open the hood. That's a choice. And it leaves investigators without a regulatory safe harbor. No clear rules saying: this is how you run a facial comparison, this is how you document it, this is what makes it defensible in court. Louisiana, to its credit, added provisions requiring courts to authenticate digital and synthetic evidence. That's a start. But it's defensive — it asks judges to catch problems after the fact, rather than preventing them up front.

The assumption most people make is that deepfake laws protect us from A.I. abuse. They do — partially. But they also create a vacuum. By criminalizing the bad uses without defining the legitimate ones, legislators leave every lawful application of facial image technology legally exposed. A detective running a facial comparison and a corrupt trooper building deepfakes are using overlapping tools with zero regulatory daylight between them.

So — a state trooper used police databases to manufacture three thousand fake images. States are racing to ban deepfakes but refusing to set standards for how law enforcement should handle the same facial data responsibly. And the people caught in the middle are investigators who can't defend their methods in court — and everyone whose photo is already in a system they never signed up for. What separates abuse from legitimate investigation isn't technology. It's standards. And right now, those standards don't exist in law. The written version goes deeper — link's below.
