KYC, or “know your customer,” is a process intended to help banks, fintech startups and other financial institutions verify the identity of their customers. Not uncommonly, KYC authentication involves “ID images,” or cross-checked selfies used to confirm a person is who they say they are. Wise, Revolut and cryptocurrency platforms Gemini and LiteBit are among those relying on ID images for security onboarding.
But generative AI could sow doubt into these checks.
Viral posts on X (formerly Twitter) and Reddit show how, leveraging open source and off-the-shelf software, an attacker could download a selfie of a person, edit it with generative AI tools and use the manipulated ID image to pass a KYC test. There’s no evidence that GenAI tools have been used to fool a real KYC system — yet. But the ease with which relatively convincing deepfaked ID images can be created is cause for alarm.
Fooling KYC
In a typical KYC ID image authentication, a customer uploads a picture of themselves holding an ID document — a passport or driver’s license, for example — that only they could possess. A person — or an algorithm — cross-references the image with documents and selfies on file to (hopefully) foil impersonation attempts.
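Under the hood, the automated version of this cross-referencing step often reduces to comparing face embeddings. Here is a minimal sketch of that idea, with toy numpy vectors standing in for the output of a real face-recognition model; the function names and threshold are illustrative, not any vendor’s API:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def kyc_match(selfie_emb: np.ndarray, id_emb: np.ndarray,
              threshold: float = 0.8) -> bool:
    """Accept the check only if the two embeddings are close enough."""
    return cosine_similarity(selfie_emb, id_emb) >= threshold

# Toy vectors standing in for face-recognition model outputs.
on_file = np.array([0.9, 0.1, 0.4])
genuine = np.array([0.88, 0.12, 0.41])   # near-identical face
impostor = np.array([0.1, 0.9, -0.3])    # different face

print(kyc_match(genuine, on_file))   # similar embeddings pass
print(kyc_match(impostor, on_file))  # dissimilar embeddings fail
```

The weak point a deepfake exploits is obvious here: the check only measures whether the two images depict the same face, not whether the selfie was actually captured live.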
ID image authentication has never been foolproof. Fraudsters have been selling forged IDs and selfies for years. But GenAI opens up a range of new possibilities.
Tutorials online show how Stable Diffusion, a free, open source image generator, can be used to create synthetic renderings of a person against any desired backdrop (e.g., a living room). With a little trial and error, an attacker can tweak the renderings to show the target appearing to hold an ID document. At that point, the attacker can use any image editor to insert a real or fake document into the deepfaked person’s hands.
AI will rapidly accelerate broad use of private key cryptography and decentralized ID.
Check out this Reddit "verification post" and ID made with Stable Diffusion. When we can no longer trust our eyes to ascertain whether content is genuine we'll rely on applied cryptography. pic.twitter.com/6IjybWRhRa
— Justin Leroux (@0xMidnight) January 5, 2024
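The compositing step at the end of that workflow is ordinary image editing. A minimal sketch using numpy arrays, with flat-color arrays standing in for the diffusion render and the document scan; the coordinates and sizes are arbitrary:

```python
import numpy as np

# Stand-ins: a generated "selfie" and a scanned "ID card" as flat-color
# arrays (an attacker would use a diffusion render and a document scan).
selfie = np.full((512, 512, 3), (180, 150, 120), dtype=np.uint8)
id_card = np.full((100, 160, 3), (240, 240, 240), dtype=np.uint8)

# Paste the card into the region where the rendered hands appear.
composite = selfie.copy()
y, x = 350, 176  # top-left corner chosen for this toy layout
composite[y:y + id_card.shape[0], x:x + id_card.shape[1]] = id_card

print(composite.shape)      # (512, 512, 3)
print(composite[360, 200])  # a pixel inside the pasted card
```

In practice the hard part is matching lighting and perspective, not the paste itself — which is exactly the part generative tools now handle.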
Now, yielding the best results with Stable Diffusion requires installing additional tools and extensions and procuring around a dozen images of the target. A Reddit user going by the username _harsh_, who’s published a workflow for creating deepfake ID selfies, told TechCrunch that it takes around one to two days to make a convincing image.
But the barrier to entry is certainly lower than it used to be. Creating ID images with realistic lighting, shadows and environments used to require somewhat advanced knowledge of photo editing software. That’s not necessarily the case now.
Feeding deepfaked KYC images to an app is even easier than creating them. Android apps running on a desktop emulator like BlueStacks can be tricked into accepting deepfaked images instead of a live camera feed, while apps on the web can be foiled by software that lets users turn any image or video source into a virtual webcam.
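The spoofing trick works because the app only sees whatever its frame source returns. A toy sketch of that substitution, with hypothetical class names — a real attack swaps the source at the OS level via a virtual-webcam driver:

```python
import numpy as np

class CameraFeed:
    """A live camera: every frame differs slightly (noise, motion)."""
    def __init__(self, rng: np.random.Generator):
        self.rng = rng

    def read(self) -> np.ndarray:
        return self.rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

class StaticImageFeed:
    """A spoofed 'camera' that replays one prepared image forever."""
    def __init__(self, image: np.ndarray):
        self.image = image

    def read(self) -> np.ndarray:
        return self.image.copy()

# An app that trusts whatever read() returns cannot tell the sources apart.
real = CameraFeed(np.random.default_rng(0))
fake = StaticImageFeed(np.zeros((4, 4, 3), dtype=np.uint8))

print(np.array_equal(fake.read(), fake.read()))  # True: frames never change
print(np.array_equal(real.read(), real.read()))  # False: live frames vary
```

Perfectly identical frames are themselves a spoofing tell, which is one reason providers layer liveness checks on top.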
Growing threat
Some apps and platforms implement “liveness” checks as additional security to verify identity. Typically, they involve having a user take a short video of themselves turning their head, blinking their eyes or demonstrating in some other way that they’re indeed a real person.
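A crude way to picture such a check is to require measurable motion between consecutive frames. Real providers use far more sophisticated signals; this naive heuristic, with an illustrative threshold and toy frames, just shows why a replayed still image fails while any video with motion — deepfaked or not — passes:

```python
import numpy as np

def passes_liveness(frames: list[np.ndarray], min_motion: float = 1.0) -> bool:
    """Naive liveness heuristic: mean absolute pixel change between
    every pair of consecutive frames must exceed a motion threshold."""
    diffs = [
        np.abs(a.astype(float) - b.astype(float)).mean()
        for a, b in zip(frames, frames[1:])
    ]
    return bool(min(diffs) >= min_motion)

base = np.full((8, 8), 100.0)
static_replay = [base.copy() for _ in range(5)]  # spoofed still image
moving = [base + 5.0 * i for i in range(5)]      # simulated head motion

print(passes_liveness(static_replay))  # False: no motion at all
print(passes_liveness(moving))         # True: every frame shifts
```

The gap is visible in the second case: the check cannot distinguish genuine head motion from a convincingly rendered fake video, which is exactly what real-time deepfake tools produce.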
But liveness checks can be bypassed using GenAI, too.
NEWS: Our latest research is out!
We found that 10 of the most popular biometric KYC providers are severely vulnerable to realtime deepfake attacks. And so may be your bank, insurance or health providers
Full report at https://t.co/vryGJ7na0i https://t.co/VVaSZrCZRn
— Sensity (@sensityai) May 18, 2022
Early last year, Jimmy Su, the chief security officer for cryptocurrency exchange Binance, told Cointelegraph that deepfake tools today are sufficient to pass liveness checks, even those that require users to perform actions like head turns in real time.
The takeaway is that KYC, which was already hit or miss, could soon become effectively useless as a security measure. Su, for one, doesn’t believe deepfaked images and video have reached the point where they can fool human reviewers. But it might only be a matter of time before that changes.