1 in 25 Kids Are Now Deepfake Victims — and Your Investigators Aren't Ready


This episode is based on our article: "1 in 25 Kids Are Now Deepfake Victims — and Your Investigators Aren't Ready." Read the full article →

Full Episode Transcript


In the past year alone, according to a joint study by UNICEF, ECPAT, and INTERPOL, roughly one point two million children across eleven countries told researchers someone had taken their photos and used A.I. to turn them into sexually explicit images. One in twenty-five kids. Not adults. Not public figures. Children.


That number isn't abstract

That number isn't abstract. In Montgomery Township, New Jersey, prosecutors charged a seventeen-year-old with child sexual abuse material offenses after the teen allegedly used A.I. to generate nude images of classmates. The case started with a cyber tip to the National Center for Missing and Exploited Children. And if you're thinking this is one school, one bad actor — a joint investigation by WIRED and a data firm called Indicator found close to ninety schools and six hundred students worldwide caught up in the same kind of incident. This isn't a handful of headlines anymore. It's a pattern. Anyone who's ever handed a teenager a phone with a camera has a stake in what happens next. So the question running through every part of this story is simple. When the images look real but aren't, how does anyone — a parent, a principal, a detective — figure out what actually happened?

Start with who's being targeted. According to reporting from NPR, women and girls make up an estimated ninety percent of the victims of nonconsensual deepfake crimes. And in most of these cases, both the victims and the people creating the images are between fourteen and sixteen years old. Kids doing this to other kids. That demographic reality changes everything about how these cases move through schools and through courts. A principal who sees an explicit image of a student might assume it's real and launch a discipline process based on something that never happened. A parent might see the same image and pull their child from school. Without a way to verify whether the image is authentic before anyone acts on it, the deepfake does its damage whether or not anyone ever proves it's fake.

New Jersey actually isn't new to this. Three years ago, students at Westfield High School created and shared fake explicit images of classmates. Since then, the state passed laws making it a crime to create or distribute nonconsensual deepfake pornography. But passing a law and having the tools to enforce it are two very different things. Investigators working these youth cases told reporters that just two years ago, they had virtually zero deepfake cases on their desks. Now they're juggling five or six at the same time. The A.I. went from obscure to mainstream faster than the legal system could build a playbook.



The federal government is playing catch-up too

And the federal government is playing catch-up too. Congress passed the TAKE IT DOWN Act, which requires covered platforms to set up removal processes by May 19, 2026. Once someone files a valid request, platforms have forty-eight hours to take down nonconsensual intimate images. That's a real deadline with real teeth. But forty-eight hours is a long time on the internet. And removal doesn't undo the harm that's already spread.

Consider what happened in Baltimore. Someone created a deepfake audio clip of a school principal — made it sound like the principal said things they never said. That clip circulated twenty-seven thousand times before A.I. experts were brought in to prove it was artificially generated. Twenty-seven thousand shares. The principal's reputation was shredded while the verification process was still getting started. Eventually, experts confirmed the audio was fake, and the principal was exonerated. But "eventually" came after the damage was done. For anyone who shared that clip or saw it on a feed, the correction never travels as far as the lie.

That Baltimore case exposes a deeper problem for investigators. When facial comparison and authenticity checks aren't standard procedure in school investigations, administrators may act on deepfake evidence without ever questioning whether it's real. That opens districts up to defamation lawsuits and employment disputes. It also means a student could be suspended, expelled, or publicly humiliated based on an image a classmate generated in minutes on a laptop. Investigators now need systematic protocols — routine verification steps built into every case — not one-off heroic forensics after the fact. Checking whether digital evidence is authentic is moving from a forensic specialty to a baseline requirement.


The Bottom Line

And there's a tension underneath all of this that doesn't get enough attention. Detection tools are locked in an arms race with the generation technology itself. By the time forensic software can reliably spot fakes from one generation of A.I. models, the next generation has already made those detection methods obsolete. That means technical detection alone won't protect students. Educator training, institutional accountability, and platform responsibility have to carry weight that no single forensic tool can.

The instinct is to treat this as a technology problem — build better detectors, catch the fakes faster. But the real shift is institutional. The schools, the courts, and the investigators who handle these cases need to stop assuming that any image or audio file is authentic just because it looks or sounds convincing.

So — across eleven countries, more than a million children had their photos turned into explicit deepfakes in a single year. Most of the victims are girls between fourteen and sixteen. And the systems meant to protect them — schools, platforms, law enforcement — are still building the procedures to tell real evidence from fabricated evidence before anyone gets hurt. Whether you investigate these cases for a living or you're a parent whose kid just got their first phone, the same question applies. Can you trust what you're seeing? And if you can't — who checks? The written version goes deeper — link's below.
