A Cop Made 3,000 Deepfake Porn Images. A Bandwidth Spike Caught Him — No Investigator Did.
This episode is based on our article of the same name.
Full Episode Transcript
A Pennsylvania state police corporal just pleaded guilty to creating three thousand deepfake pornography images. Three thousand. And the thing that caught him wasn't an investigation. It was a bandwidth spike on a network.
That detail should sit with you for a second. A law enforcement officer — someone with access to people's personal information through his job — was generating thousands of fake explicit images. And no detective, no internal affairs unit, no digital forensics team flagged it. An I.T. anomaly did. If you've ever had your photo on social media, on a work badge, in a school yearbook — your image could be raw material for this kind of abuse. According to the Philadelphia Inquirer, Corporal Stephen Kamnik used his position to access victims' photos and then used A.I. tools to generate explicit images of them — images they never consented to, depicting things that never happened. This wasn't some anonymous troll in a basement. This was a trusted authority figure operating inside the system that's supposed to protect people. So why didn't the system catch him?
Start with the scale of the problem. According to data compiled by researchers at Views4You, roughly ninety-eight percent of all deepfake videos online are non-consensual pornography. Nearly all of them target women. That's not a niche corner of the internet. That is the primary use of the technology right now. And yet, when the federal TAKE IT DOWN Act passed, only about twenty states had laws that specifically addressed deepfake image abuse. The other thirty were working with statutes that never imagined A.I.-generated exploitation. Even among those twenty states, the penalties varied enormously — some treated it as a misdemeanor, others as a felony, and the definitions of what counted as a deepfake crime didn't match from one jurisdiction to the next.
Now layer on the enforcement side. According to Our Rescue, a nonprofit tracking child exploitation, cyber tips for child sexual abuse material — what investigators call C.S.A.M. — have tripled nationwide since 2020. Tripled. That means digital evidence is piling up faster than police departments can process it. And deepfake abuse? It often sits in a completely separate queue. Not treated as sexual exploitation demanding the same forensic rigor. Treated more like a tech novelty. A "cyber crimes miscellaneous" problem. For investigators, that means cases stall before they even begin. For everyone else, it means someone could be generating fake explicit images of you or your kid, and the system designed to stop it is sorting that into the wrong pile.
The U.S. Department of Justice published a report on digital forensics in child exploitation cases, and one line stands out: there are few standard protocols for forensic examination across these cases. Evidence collected by one agency may not connect to evidence held by another. Without adequate computer forensic expertise — and many departments simply don't have it — investigators miss the patterns that would let them get ahead of offenders instead of chasing them after the damage is done.
Pennsylvania saw this play out in 2024. According to the Philadelphia Inquirer, nearly fifty girls at a Lancaster private school were targeted with A.I.-generated explicit images. Fifty kids at one school. That scandal pushed Pennsylvania lawmakers to draft legislation requiring schools and mandated reporters to flag A.I.-generated explicit images of minors as child abuse. Which sounds like progress — until you realize it's asking teachers and counselors to report a category of crime that law enforcement often lacks the forensic capacity to investigate once the report lands. A U.N. Women analysis put it bluntly: barriers to justice are made worse by weak legal frameworks, limited law enforcement capacity, and gender bias in how legal protections are applied. You can write all the laws you want. If the people enforcing them don't have the training, the tools, or the mandate to treat this as a priority, legislation becomes a press release.
And the speed of the threat keeps accelerating. According to Views4You's tracking data, reports of A.I.-generated C.S.A.M. surged by more than sixty times in the first half of 2025 compared to prior periods. Sixty times. Detection tools exist — this isn't a technology gap. Algorithms can flag synthetic images. Watermarking standards are in development. The gap is in how the justice system classifies and resources the problem.
The Bottom Line
Some people look at the response — a federal law in just a couple of years, legislation in forty-eight states, international criminal statutes — and say the system is working. But legislation without enforcement capacity isn't accountability. It's performance. Deepfake abuse isn't a technology problem. It's a classification problem. And until it's resourced the way other forms of sexual exploitation are resourced — same forensic rigor, same victim advocacy, same prosecution priority — perpetrators will keep calculating the odds as favorable.
So — a police corporal made three thousand fake explicit images using A.I. A network usage alert caught him. No investigator did. The tools to detect this abuse exist, but the system still treats it as a side issue instead of what it is — sexual exploitation that demands the same resources as any other form. Whether you investigate these cases for a living or you just have a photo of yourself anywhere online, that gap between the law on paper and enforcement on the ground is the gap where harm lives. The full story's in the description if you want the deep dive.