CaraComp Podcast
$12 Telegram Kits Are Gutting Your Bank's Biometric Defenses

Full Episode Transcript


A kit that costs about twelve dollars on Telegram can now fool the identity checks most banks use to verify you're a real person. Twelve dollars. That's less than a movie ticket. And according to the World Economic Forum, deepfake injection attacks — where someone feeds a fake face directly into a verification system — jumped nearly eight hundred percent in the past year.


If you've ever opened a bank account on your phone

If you've ever opened a bank account on your phone, you've probably done the drill. You hold up your I.D., you take a selfie, maybe you blink on command. That process is called K.Y.C. — know your customer — and it's supposed to prove you are who you say you are. Criminals are now buying cheap tools that hijack that entire process before the bank ever sees the camera feed. According to Biometric Update, virtual camera attacks worldwide were more than twenty-five times as common in twenty twenty-four as they were the year before. That's not a gradual increase. That's an avalanche. The digital bank Revolut reported deepfake-assisted fraud attempts surged through twenty twenty-four, with attackers using A.I.-generated documents and facial spoofing to pass onboarding checks that the company previously considered strong. So the question running through all of this — can we still trust the camera on the other end of a verification call?

Start with how these kits actually work. A virtual camera is software that sits between your real camera and whatever app is trying to use it. Normally, your phone's camera sends a live image straight to the banking app. A virtual camera intercepts that stream and replaces it with whatever the attacker wants — a stolen photo, a deepfake video, a face swap running in real time. The banking app thinks it's seeing a live person. It's not. For anyone who's ever used a fun background filter on a video call, the underlying technology is eerily similar. Except instead of putting a beach behind you, someone is putting a stolen face in front of you.
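To make that interception concrete, here's a minimal Python sketch. Everything in it is illustrative: the class and field names are ours, and real kits operate at the OS camera-driver level, not in application code. The point it demonstrates is that the app calls the same interface either way, so the pixels are all it ever sees.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One video frame as the app receives it. The true_origin label
    exists only so this demo can show what the app *cannot* see."""
    pixels: bytes
    true_origin: str  # hidden ground truth, invisible to the app

class HardwareCamera:
    """Stands in for the phone's real camera sensor."""
    def read(self) -> Frame:
        return Frame(pixels=b"live-sensor-data", true_origin="hardware")

class VirtualCamera:
    """Sits between the real camera and the app, swapping the stream."""
    def __init__(self, injected: bytes):
        self.injected = injected  # e.g. a pre-rendered deepfake frame
    def read(self) -> Frame:
        return Frame(pixels=self.injected, true_origin="injected")

def banking_app_sees(camera) -> bytes:
    # The app just calls read(). Both sources satisfy the same
    # interface, so the pixel data is all it has to go on.
    return camera.read().pixels
```

Calling `banking_app_sees(VirtualCamera(b"deepfake"))` delivers the attacker's pixels through exactly the same code path as the real sensor, which is why the app alone can't tell the difference.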

One tool getting attention from researchers is called JINKUSU CAM. According to Biometric Update's technical analysis, JINKUSU doesn't just generate a static fake image. It manipulates the live stream itself — in real time — to satisfy the specific checks a verification system asks for. Blink detection? It blinks. Turn your head? The fake turns its head. The system was designed to catch someone holding up a printed photo. It wasn't designed for an attacker who's rewriting the video feed before the app even receives it.
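Why does prompt-based liveness fail against a stream rewriter? A toy sketch makes the logic plain. This is not JINKUSU's code; real kits drive a face model to render the requested motion, while here the attacker simply echoes the action, but the structural flaw is the same: the attacker sees every challenge the user sees.

```python
import random

CHALLENGES = ["blink", "turn_left", "turn_right", "smile"]

class LivenessCheck:
    """Naive prompt-based liveness: issue a random challenge and
    accept whatever the incoming stream appears to perform."""
    def issue(self) -> str:
        return random.choice(CHALLENGES)

    def verify(self, challenge: str, observed_action: str) -> bool:
        return observed_action == challenge

class StreamRewriter:
    """Models a real-time rewriting kit: it observes the same prompt
    the user does and re-renders the fake face performing it."""
    def respond(self, challenge: str) -> str:
        # A real kit animates a deepfake; echoing suffices for the demo.
        return challenge

check = LivenessCheck()
attacker = StreamRewriter()
prompt = check.issue()
passed = check.verify(prompt, attacker.respond(prompt))  # True, every time
```

Because the verifier trusts the stream to be an honest witness of the user's behavior, any attacker who controls the stream passes by construction, no matter which challenge is drawn.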



The attackers aren't stopping at the camera

And the attackers aren't stopping at the camera. According to M.I.T. Technology Review, hackers are now compromising both the phone itself and the code of the financial institution's app before feeding the virtual camera its fake input. Talsec, a mobile security firm, told M.I.T. Technology Review that they tracked about thirty virtual-camera-based incidents in a single year, up from fewer than ten the year before. That means liveness detection — the "blink now, turn left" prompts — can't close the gap on its own. The attack surface now includes the device, the app, and the camera stream all at once.
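If the device, the app, and the camera stream are all attack surfaces, no single signal can be the gatekeeper. One common defensive pattern, sketched here with hypothetical signal names (the real signals would come from platform attestation services and stream-provenance checks), is to gate acceptance on every layer at once and report which layer failed:

```python
def accept_verification(signals: dict) -> tuple[bool, list[str]]:
    """Defense-in-depth gate: accept a KYC attempt only if every
    integrity signal holds. Signal names are illustrative."""
    required = [
        "liveness_ok",          # the face performed the challenge
        "device_attested",      # OS/hardware integrity verdict
        "app_untampered",       # app code matches the signed build
        "stream_from_hardware", # camera feed traced to a real sensor
    ]
    failures = [name for name in required if not signals.get(name, False)]
    return (not failures, failures)
```

Under this pattern, a JINKUSU-style attack that sails through liveness still gets rejected the moment the stream-provenance or app-integrity signal fails, which is exactly the gap the camera-only defenses leave open.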

Apple devices used to be somewhat insulated from this. Not anymore. K.Y.C. bypass attempts targeting Apple's ecosystem are now a growing category, which matters because a lot of financial apps leaned on Apple's hardware security as an extra layer of trust. That assumption is cracking.

For investigators and compliance teams, this creates a problem that mirrors what banks face. If you're comparing faces in a case — from surveillance footage, social media, a witness photo — how do you know the image you're analyzing hasn't been synthetically altered or injected? For everyone else, it means the selfie you took to open a savings account last year was verified by a system that twelve-dollar software can now defeat.


The Bottom Line

The instinct is to say facial biometrics are broken. They're not — but they're no longer enough on their own. The real shift is this: identity systems have to stop asking "is there a real face here" and start asking "has this input been tampered with before we ever saw it." That's a fundamentally different question, and most systems weren't built to answer it.
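What does "has this input been tampered with" look like in code? One answer is cryptographic provenance: sign each frame in the trusted capture path, as close to the sensor as possible, and verify the signature server-side. The sketch below uses a shared-key HMAC purely for illustration; production designs rely on hardware-backed keys and attestation rather than a secret the app could leak.

```python
import hashlib
import hmac

DEVICE_KEY = b"per-device secret"  # illustrative; real keys live in secure hardware

def sign_frame(pixels: bytes) -> bytes:
    """Runs in the trusted capture path, right at the sensor."""
    return hmac.new(DEVICE_KEY, pixels, hashlib.sha256).digest()

def server_accepts(pixels: bytes, tag: bytes) -> bool:
    """Server-side check: an injected frame never passed through the
    signing path, so its tag won't verify."""
    expected = hmac.new(DEVICE_KEY, pixels, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

genuine = b"live-sensor-frame"
tag = sign_frame(genuine)
server_accepts(genuine, tag)       # a real capture verifies
server_accepts(b"deepfake", tag)   # injected pixels fail the check
```

The design choice matters: instead of inspecting the face for signs of fakery, the server refuses any frame that can't prove where it came from, which is a question a virtual camera cannot answer.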

So — the short version. Cheap tools on Telegram let criminals hijack your phone's camera feed and fool the identity checks banks use to verify real people. These attacks exploded over the past year, and the defenses most institutions rely on — selfie matching, blink detection — were built for a world where the camera feed was trustworthy. That world is gone. Whether you're building fraud cases or just opening an account from your couch, the question isn't whether the face matches anymore. It's whether the face was ever real to begin with. I linked the full article below — worth a read.
