MP's Nude Deepfake Stunt Just Rewrote the Rules for Every Lawmaker on Earth
This episode is based on our article:
Read the full article → MP's Nude Deepfake Stunt Just Rewrote the Rules for Every Lawmaker on Earth
Full Episode Transcript
A member of New Zealand's Parliament stood up in the chamber, held up a nude photograph of herself, and told her colleagues it was fake. She'd made it in under five minutes. A quick search online, a free app, and she had a fully fabricated naked image of her own body — generated by A.I. — ready to show the world.
She did it to prove a point. New Zealand has no law that specifically addresses deepfakes. Not one. And if you've ever posted a photo of yourself online, a selfie, a LinkedIn headshot, a vacation picture, someone could do the same thing to you, right now, with the same free tools she used. That's not a hypothetical. According to researchers tracking synthetic media, roughly nineteen out of every twenty deepfake videos online are non-consensual pornography. The subject didn't agree. The subject often doesn't even know. Act Party M.P. Laura McClure put her own image on the line because her deepfake bill had been sitting in Parliament's members' ballot alongside about forty other bills, where it could have stayed untouched for years. So the question running through this entire story is: why does it take a lawmaker humiliating herself in public to get a law written?
That question matters far beyond New Zealand. The gap between what synthetic media can do and what the law actually covers is enormous, and it looks different in every country. The U.S. passed the Take It Down Act last year, which criminalizes non-consensual intimate deepfakes and requires platforms to remove them once they're notified. That law's platform requirements kick in by mid-twenty-twenty-six. The U.K. moved even faster on one piece: in mid-January, it brought a section of its Data Act into force early, making the creation of sexual deepfakes a criminal offense, not just the sharing. Creation alone is now enough.
But passing a law and solving the problem are two very different things. In Europe, regulators are still debating whether they can outright ban so-called nudification apps under the Digital Services Act. These are apps built for one purpose — to strip clothing from photos of real people using A.I. And even under existing frameworks, enforcement agencies aren't sure they have the authority to pull them from app stores. That's a gap you could drive a truck through. For anyone who's ever worried about a teenager downloading the wrong app, that uncertainty isn't abstract.
Meanwhile, the U.S. ran into a constitutional wall. A federal judge blocked California's law prohibiting certain political deepfakes, ruling it raised First Amendment concerns. Political speech — even when it's fabricated — sits in a legally protected zone that's extremely difficult to regulate. That tension between preventing harm and protecting speech is slowing legislation everywhere. For investigators building cases around manipulated media, it means the rules you're working under today could be rewritten — or thrown out — by a court tomorrow. For the rest of us, it means a fake video of a politician saying something they never said might be perfectly legal to share, depending on where you live.
The next wave of legislation, expected in twenty-twenty-six, is shifting the target. Instead of only going after the person who creates a deepfake, lawmakers are looking at the entire supply chain. Platforms that host the content. Payment processors that let people buy access to nudification tools. Hosting services that keep the infrastructure running. That's a fundamental shift — from punishing individuals to holding the pipes accountable. If you run a website, process transactions, or store data, that change could land on your desk. If you're a parent, it means the companies profiting from these tools may finally face consequences, not just the anonymous user who uploaded the image.
What makes the New Zealand moment different from every statistic and every policy paper is something harder to quantify. Researchers and advocates had been citing the data for years. Nearly all deepfake video online is non-consensual pornography. That number didn't move the needle. An elected official standing in Parliament holding a fake nude of herself — that moved it. Abstract harm doesn't create urgency. Personal, visible, undeniable evidence does.
The Bottom Line
And that's the pattern no one wants to admit. Laws don't arrive when the technology is ready. They arrive after a crisis goes public. California's deepfake law didn't come from a white paper. It came from outrage — and then a judge struck part of it down anyway. The U.K. didn't criminalize creation because a committee recommended it. It did it because the pressure became impossible to ignore.
So the smartest approach — and this is where the sharpest legislation is heading — doesn't try to regulate A.I. itself. It targets specific harms. Non-consensual intimate imagery. Fraud. Election interference. The technology is a tool. The damage is what you can write a law around. One M.P. in New Zealand proved that in five minutes with a free app and her own face. Anyone with a photo online is already in the same position she put herself in — they just don't know it yet. The full story's in the description if you want the deep dive.
More Episodes
UK Just Spent £2M Spying on Benefit Claimants — With Zero Rules Governing How
The U.K. government just spent two million pounds on covert surveillance gear, including cameras mounted inside vehicles, to watch people who claim benefits. No new law authorized it. No legal standard…
Age Verification Is a Lie: 3 Hidden Flaws That Make "Passed" Meaningless
A system built to answer one question about you (are you over eighteen?) doesn't just check your age and move on. It keeps your government I.D., your selfie, and your biometric data sitting in a database you'll never see…
Facial Recognition's 81% Error Rate Is About to Blow Up in Court — Are Your Notes Ready?
In U.K. police trials of live facial recognition, the system got it wrong about four out of every five times. An eighty-one percent error rate. And yet…
