The Cop Who Made 3,000 Deepfakes Exposed a Bigger Problem Than Deepfakes

A Pennsylvania State Police corporal named Stephen Kamnik just pleaded guilty to creating roughly 3,000 deepfake pornographic images — using, among other sources, PennDOT driver's license photos and law enforcement databases he had access to through his job. The same week, Connecticut was busy debating a bill to police synthetic media in elections. Both stories are true. Both are happening simultaneously. And together, they reveal something that almost nobody in the legislative debate is saying out loud.

TL;DR

Legislators are racing to criminalize deepfake abuse while simultaneously expanding biometric data collection with almost no standards for legitimate facial comparison work — and that gap is going to cost investigators dearly in court.

The regulatory conversation right now is almost entirely reactive. Lawmakers see deepfake revenge porn — and they're right to be horrified. They see AI-manipulated political ads. They see German celebrity Collien Fernandes publicly disclosing that her husband spread sexual deepfakes of her for years. They react. They draft bills. They hold press conferences. What they're not doing is asking the uncomfortable structural question underneath all of this: if we're going to regulate synthetic facial imagery, what exactly are we saying is acceptable? Because the answer to that question determines whether legitimate investigative facial analysis survives as a legal tool — or gets quietly strangled in the same net.

The Bill That's Trying Too Hard (In the Wrong Direction)

CT Examiner laid out the core problem with Connecticut's HB 5342 pretty plainly: the bill restricts manipulated media within a 90-day election window and leans on a "reasonable person" standard to define what counts as deceptively synthetic. That's a lot of interpretive weight to put on a legal phrase that courts have been arguing about for decades in completely different contexts. Subjective standards in AI legislation aren't just philosophically messy — they're practically dangerous for anyone whose work depends on facial image analysis holding up under cross-examination.

Here's what that looks like in practice. An investigator builds a facial comparison analysis. The methodology is sound, the documentation is thorough, the results are defensible. But opposing counsel stands up and asks: under what regulatory framework was this conducted? What published standard governs this process? Who certified the methodology? And the answer — right now, in most jurisdictions — is effectively "none, none, and nobody." That's not a hypothetical. That's the current state of play, and Connecticut's deepfake bill does exactly nothing to fix it.

"The bill uses broad, subjective standards — it targets content 'intended to influence' elections and defines synthetic media as anything a 'reasonable person' would believe is deceptive." — Analysis of Connecticut HB 5342, CT Examiner

Meanwhile, Connecticut's Governor Lamont was busy signaling he'd veto business AI regulation bills over concerns about harming the state's tech sector, according to CT Mirror's reporting. So: criminalize the worst abuses, protect the industry from accountability. Got it. That's a policy position, I suppose — just not a coherent one.


The Kamnik Case Is the Whole Argument in One Horrible Story

Stephen Kamnik wasn't some random bad actor with a laptop. He was a law enforcement professional with institutional access to the exact facial image databases that legitimate investigators depend on. The Philadelphia Inquirer reported that among his sources were PennDOT driver's license photos — government-held biometric data that law enforcement accesses routinely for legitimate investigative purposes.

This is the part that should make everyone uncomfortable. Not because it proves legitimate facial comparison is inherently corrupt — it doesn't — but because it demonstrates, in the starkest possible terms, that the same infrastructure supports both legitimate investigative work and spectacular abuse. And yet we have detailed legislative proposals for punishing the abusers, and almost nothing in the way of codified standards for what the legitimate practitioners are supposed to do.

1,325%
increase in AI-generated child sexual abuse material reports between 2023 and 2024 — totaling 67,000 reports to NCMEC
Source: Enough Abuse

That number — 1,325% — is genuinely staggering, and it explains why lawmakers feel urgency. Nobody reasonable is saying the abuse isn't real, escalating, and worth legislative action. The National Center for Missing and Exploited Children documented 67,000 reports of AI-generated CSAM in a single year, up from roughly 4,700 the year before (which is where the 1,325% comes from). That's a crisis. But the response to a crisis doesn't have to be this blunt. You can criminalize the abuse and define the legitimate use. Those aren't competing priorities — unless you're drafting legislation in a hurry, which is exactly what 146 state-level bills in 2025 looks like, according to Ballotpedia's 2025 State of Deepfake Legislation Annual Report.

One hundred and forty-six bills. Almost all of them focused on what you can't do. Almost none focused on what defensible practice actually looks like.



While Everyone's Watching the Left Hand...

Here's what's happening quietly on the other side of the regulatory conversation. Europe is rolling out a biometric entry/exit system — France joining Austria, Germany, Italy, Spain, and Switzerland in implementation — that will process facial data on millions of travelers. USCIS is actively exploring remote identity verification using biometrics for immigration services. India's Supreme Court is fielding a public-interest litigation (PIL) petition requesting biometric facial recognition for voters at polling stations. Airports globally are racing toward biometric boarding at a pace that's leaving some countries scrambling just to keep up.

All of this is expanding the biometric infrastructure — the exact same technological family as the tools being debated in deepfake legislation — with comparatively minimal scrutiny. The Illinois legislature is one of the few moving in a different direction, with a bill that would restrict police use of facial recognition altogether. But restriction isn't standardization. Banning something and defining how it should be done properly are very different legislative acts, and right now, we're getting a lot of the former and almost none of the latter.

Why This Double Standard Matters

  • ⚖️ Courts have no framework — Without published standards for legitimate facial comparison methodology, every analysis becomes individually contestable, regardless of quality
  • 📊 The Kamnik precedent cuts both ways — A guilty plea involving law enforcement database access to create deepfakes will be cited in defense arguments against legitimate investigative facial analysis for years
  • 🔮 Biometric expansion without standards is a ticking clock — As governments build out massive biometric ID infrastructure, the absence of clear investigative standards creates a growing liability gap that eventually surfaces in court
  • 🚨 146 reactive bills, zero proactive frameworks — The legislative energy is entirely on punishment, not on defining what court-defensible practice actually requires

The authority bias here is almost textbook. Lawmakers respond to organized, visible constituencies — revenge porn survivors testifying before committees, election officials worried about synthetic political ads, child protection advocates with devastating statistics. Those are real people with real pain, and they deserve legislation that protects them. But investigators who need clear evidentiary standards for facial comparison work? They're not showing up to hearings. They're not a voting bloc. So their needs get processed through a filter of "what's the most egregious thing happening right now" — which is never "a court just rejected a valid facial analysis because no published standard existed."

That's how you end up with 146 bills defining what you can't do with facial imagery and approximately zero bills defining what careful, transparent, auditable facial comparison analysis is supposed to look like. The platform-level forensic tools that investigators rely on for court-ready facial comparison documentation exist in a regulatory vacuum, operating on internal methodology and professional judgment with no legislative backing. That's fine — until opposing counsel finds the Kamnik case and starts drawing comparisons.


What an Actual Standard Would Look Like

This is the part where most commentary pieces throw their hands up and say "it's complicated." It is complicated. But it's not that complicated. Louisiana, notably, has already added provisions requiring courts to authenticate digital and synthetic evidence — an acknowledgment that generative AI tools compromise evidentiary integrity. That's a defensive move, but it points toward what a proactive standard could require: documented methodology, reproducible analysis, clear disclosure of tools and training data, chain-of-custody for facial image sources, and peer-reviewed protocols that practitioners can cite under oath.
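To make that less abstract, here is a minimal sketch, in Python, of what "documented, reproducible, chain-of-custody" could mean at the software level: every analysis emits a tamper-evident record that hashes the source images as received, names the exact tool and version used, and cites the protocol followed. Everything in it (the record fields, the standard reference, the case details, the tool name) is a hypothetical illustration of the idea, not an existing framework or product.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class ComparisonRecord:
    """Hypothetical audit record for one facial comparison analysis."""
    case_id: str
    analyst: str
    tool_name: str          # disclosed tool, per the standard's disclosure rule
    tool_version: str       # exact version, so the analysis is reproducible
    methodology_ref: str    # citation to the published protocol followed
    probe_sha256: str       # hash of the probe image as received
    candidate_sha256: str   # hash of the candidate image as received
    source_custodian: str   # who supplied the images (chain of custody)
    timestamp_utc: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def sha256_of(path: str) -> str:
    """Fingerprint an image file so any later alteration is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


# Example: build and serialize a record before the analysis is run.
# The file paths, case number, and product name are placeholders.
record = ComparisonRecord(
    case_id="2025-CR-0142",
    analyst="J. Doe, Cert. #1234",
    tool_name="ExampleFaceComp",            # placeholder, not a real product
    tool_version="4.2.1",
    methodology_ref="Hypothetical Std. FC-2025 §3",
    probe_sha256=sha256_of("probe.jpg"),
    candidate_sha256=sha256_of("candidate.jpg"),
    source_custodian="PennDOT records unit, subpoena #5678",
)
print(json.dumps(asdict(record), indent=2))
```

The specific fields don't matter; what matters is that a codified standard would make some set of them mandatory, so that "under what framework was this conducted?" has an answer that doesn't depend on the individual analyst.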

None of that is science fiction. Forensic document analysis has standards. DNA analysis has standards. Fingerprint comparison has standards — contested ones, sure, but standards. Facial comparison for investigative purposes is the only major forensic discipline that remains almost entirely unstandardized at the legislative level, even as the technology matures and the case volume grows.

Key Takeaway

Deepfake criminalization and investigative facial comparison standards aren't competing priorities — they're the same problem viewed from opposite ends. Legislators who refuse to address both simultaneously are handing defense attorneys a gift every time a facial analysis goes to trial.

The Kamnik case will be cited in courtrooms for a decade. A state police officer with database access generating 3,000 synthetic images is exactly the nightmare scenario that makes every facial comparison analysis look suspect by association — unless there's a clear, codified, public standard that separates what he did from what a rigorous investigative methodology looks like. Connecticut can pass all the election-period deepfake bills it wants. Until some legislature — any legislature — answers the question "what does defensible facial comparison actually require," investigators are on their own every time they walk into a courtroom.

And the irony is that the Kamnik case is itself the strongest possible argument for those standards. He had access. He had tools. He had zero accountability framework preventing the abuse. The solution to that isn't just making the abuse criminal — it's making the legitimate use so clearly defined, so transparently documented, that "he was doing what investigators do" stops being a viable defense strategy for the next person who tries it.

So here's the question worth asking every lawmaker who votes yes on the next deepfake bill: if you can define what abusive synthetic facial imagery looks like in 90-day election windows, why can't you define what court-ready investigative facial comparison looks like at all? One of those definitions protects voters. The other one actually solves the problem.
