Deepfakes are rapidly on the rise

I have written about the deepfake problem for many years, including this piece that was posted almost two years ago. The practice has reached epidemic proportions. A new report from VentureBeat cites deepfake abuse in candidate hiring. While the data originates from one of the many deepfake prevention vendors, it is still an indication of how widespread the threat has become: the vendor claims to have blocked 75M attempts in 2024, a 50x increase.

Remote hiring has never been more common, or more difficult. Last year, CrowdStrike identified a North Korean state-sponsored actor that had infiltrated more than 100 companies with fake-identity hires. Gartner predicts that within a few years, a quarter of all candidates will be fakers. It used to be that North Korean hackers were the main source, trying to get their spies hired at crypto and other tech companies. If you follow this link to a post that I wrote three years ago and scroll to my added comments, you'll see that things have gotten much worse. Many companies now receive deepfake prospects daily, and security vendors such as KnowBe4 and Hypr have unwittingly hired them.

One contributing factor has been the development of AI tools that enable deepfakery at scale. Another is that remote work has become the norm, making the simplest test, having a candidate physically show up at your office, no longer possible. This also renders the traditional background check workflow useless, because that workflow assumed the candidate was a real person with a legitimate identity.

And AI isn't just being deployed to run the deepfake supply pipeline; it is also being used to generate resumes and flood the zone with them. Soon we will have nothing but AI on both ends: will my AI screening tool be able to spot your AI-generated submission? It is probably already happening.

One issue with the deepfake candidate supply is that it crosses three layers that used to be separate security domains: the initial candidate credential submission, the network and digital footprint of the devices used to make that submission, and broader population-level characteristics. This is where the better protective products can help flag a deepfake: for example, when the device used to submit a resume and headshot is running a free VPN with mismatched time zone and geolocation parameters, and is tied to a newly minted social media account. General threat intel products are good at flagging malware that uses recently created domains or social accounts, but these tools have been slow to adapt to the candidate deepfake arena.
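To make that concrete, here is a minimal sketch (in Python, and not any particular vendor's actual logic) of how a screening tool might combine those device-level signals into a simple risk score. The field names, weights, and threshold are all hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class ApplicantSession:
    # Signals a screening tool might collect; field names are hypothetical.
    uses_free_vpn: bool           # connection came through a known free VPN exit
    browser_timezone: str         # timezone reported by the applicant's browser
    ip_timezone: str              # timezone inferred from IP geolocation
    social_account_age_days: int  # age of the linked social media profile

def deepfake_risk_score(s: ApplicantSession) -> int:
    """Toy additive risk score. Real products weight and combine far more
    signals, but the idea is the same: stack up cheap inconsistencies
    until the session deserves human review."""
    score = 0
    if s.uses_free_vpn:
        score += 2  # free VPNs are a common way to mask true location
    if s.browser_timezone != s.ip_timezone:
        score += 3  # device clock and network location disagree
    if s.social_account_age_days < 30:
        score += 2  # freshly created profile, thin digital footprint
    return score

# Example: a resume submitted from a masked, mismatched, brand-new identity.
session = ApplicantSession(
    uses_free_vpn=True,
    browser_timezone="America/New_York",
    ip_timezone="Asia/Shanghai",
    social_account_age_days=5,
)
if deepfake_risk_score(session) >= 5:
    print("Flag for manual identity verification")
```

The point of the additive design is that no single signal is damning on its own; it is the stack of cheap inconsistencies that makes a submission worth a manual identity check.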

But it is only a matter of time before AI learns to smooth over these inconsistencies. That will leave hiring managers even more dependent on AI-based security tools.
