Deepfakes are rapidly on the rise

I have written about the deepfake problem for many years, including this piece posted almost two years ago. The practice has reached epidemic proportions. A new report from VentureBeat cites its abuse in candidate hiring. While the data originates from one of the numerous deepfake prevention vendors, it is still an indication of how widespread the threat has become. The vendor claims to have blocked 75M attempts in 2024, a 50x increase.

Hiring someone remotely has never been more common, and never more difficult. Last year, Crowdstrike identified a North Korean state-sponsored actor that had placed fake hires with false identities at more than 100 companies. Gartner predicts that in a few years, a quarter of all candidates will be fakers. North Korean hackers used to be the main source of the fakers, trying to get their spies hired at crypto and other tech companies. If you follow this link to a post that I wrote three years ago and scroll to my added comments, you’ll see that things have gotten much worse. Many companies now field numerous deepfake prospects daily, and security vendors such as KnowBe4 and Hypr have unwittingly hired them.

One contributing factor has been the development of AI tools that enable deepfakery at scale. Another is that remote work has become the norm, making the simplest test — having a candidate physically show up in your office — no longer possible. This also renders the traditional background check workflow useless, because that process assumed the candidate was a real person with a legitimate identity.

And AI isn’t just deployed to manage the deepfake supply pipeline; it is also used to create resumes and flood the hiring zone with them. Soon we will have nothing but AI on both ends: will my AI screening tool spot your AI submission? It is probably already happening.

One issue with the supply of deepfake candidates is that it crosses three formerly separate security domains: the initial candidate credential submission, the network and other digital footprint of the computers used for the submission, and more general population characteristics. This is where the better protective products can help flag a deepfake: for example, when a device used to submit a resume and headshot is running on a free VPN with mismatched time zone and geolocation parameters, and with a newly minted social media account. The general threat intel products are good at flagging malware that employs recently created domains or social accounts, for example, but these tools have been slow to adapt to the candidate deepfake arena.
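To make the signal-fusion idea concrete, here is a minimal sketch of how those three domains could be scored together. The field names, weights, and threshold are entirely illustrative assumptions on my part, not taken from any vendor’s actual product:

```python
from dataclasses import dataclass

# Hypothetical signal set for one candidate submission; every field name
# and weight below is made up for illustration.
@dataclass
class SubmissionSignals:
    uses_free_vpn: bool
    device_timezone: str          # e.g. "America/Chicago"
    geoip_timezone: str           # time zone implied by the IP's geolocation
    social_account_age_days: int  # age of the linked social media account

def deepfake_risk_score(s: SubmissionSignals) -> int:
    """Sum simple heuristic points; a higher score is more suspicious."""
    score = 0
    if s.uses_free_vpn:
        score += 2
    if s.device_timezone != s.geoip_timezone:
        score += 2  # device clock and network location disagree
    if s.social_account_age_days < 30:
        score += 1  # newly minted social media presence
    return score

# All three signals fire here, so the score is 5.
signals = SubmissionSignals(True, "America/Chicago", "Asia/Pyongyang", 5)
print(deepfake_risk_score(signals))  # → 5
```

Real products correlate far more signals than this, of course; the point is that no single signal is damning, but the combination across domains is.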

But it is only a matter of time before AI can paper over these inconsistencies. That will leave hiring managers even more dependent on AI-based security tools.

CSO: How to make your multicloud security more effective

The days of debating whether cloud or on-premises is the best location for your servers are thankfully far behind us. But lately, more enterprises are shifting their workloads as they realize that security and simplicity matter. This movement isn’t uniform, thanks to the richness and complexity of modern multicloud computing.

In this piece for CSOonline, I look at ways to be more purposeful about cloud security and focus on containing and managing tool sprawl with recommended courses of action that you can take.

CSOonline: Threat Intelligence Platforms Buyer’s Guide

The bedrock of a solid enterprise security program begins with choosing an appropriate threat intelligence platform (TIP) and then using it to design the rest of your program. Without the TIP, most security departments have no way to integrate their various component tools and develop the appropriate tactics and processes to defend their networks, servers, applications and endpoints.
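That integration role — normalizing indicators from multiple feeds into one place and matching events against them — can be sketched in a few lines. The feed contents and event fields here are invented for illustration, not drawn from any particular TIP:

```python
# Two hypothetical intelligence feeds, already parsed into dicts.
feed_a = [{"type": "ip", "value": "203.0.113.9"}]
feed_b = [{"type": "domain", "value": "evil.example"}]

# Normalize both feeds into a single set of (type, value) indicators.
indicators = {(i["type"], i["value"]) for i in feed_a + feed_b}

def flag_event(event: dict) -> bool:
    """Return True if any attribute of the event matches a known indicator."""
    return any((k, v) in indicators for k, v in event.items())

print(flag_event({"ip": "203.0.113.9", "domain": "ok.example"}))   # → True
print(flag_event({"ip": "198.51.100.1", "domain": "ok.example"}))  # → False
```

A production TIP does this across millions of indicators with scoring, aging, and context attached, but the core idea is the same cross-feed correlation.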

What is newsworthy is that the threat universe has gotten a lot more complex and focused. For example, the Verizon DBIR found that threats aimed at VPNs and edge devices have surged to more than eight times what was reported last year.

The early TIPs were very unsophisticated products, often just cobbled-together intelligence feeds of the latest exploits, with little or no detail. Today’s TIPs carry much richer information, including underlying complexities and specifics about how each threat operates. I talk about what some of these are in my latest post for CSOonline, along with short summaries of several TIPs from Bitsight, Cyware, Greynoise, Kela, Palo Alto Networks, Recorded Future, SilentPush and SOCRadar.

CSOonline: Top tips for successful threat intelligence usage

Enterprises looking to stem the tide of breaches and attacks usually end up purchasing a threat intelligence platform (TIP). These can take one of several forms, including a managed cloud-based service or a tightly coupled tool collection that provides a wider risk management profile by tying together threat detection, incident response and vulnerability management. More than a dozen vendors offer TIPs, and in a few weeks I will be posting my buyer’s guide, which goes into more detail on some of them. In the meantime, you can examine my top tips for selecting a TIP here on CSOonline.

Privacy perils of the connected car

The connected car has become the latest casualty in the war on personal privacy. This is because your car’s “subscription-based features drastically increase the amount of data that can be accessed during law enforcement investigations,” Dell Cameron wrote for Wired magazine recently. And while most car makers state that they can’t obtain access to this data without some kind of court order, that isn’t the final answer. What congressional investigators found is that some car makers will divulge this data when contacted by law enforcement. And then there is this: there is no hard and fast rule about what data can be collected, because it varies by the make and model of your car, whether you once had any connected car subscription service (such as GM’s OnStar), and what broadband provider you use.

Re-read that last sentence. Even if you cancelled your OnStar subscription, your Chevy might still be recording when you took it to the levee. There is some direct evidence of this, based on data found in police documents from several investigations that Wired and the ACLU reviewed.

I wrote about connected car issues almost three years ago, but not from a privacy POV. That post shows that car companies have embraced subscription services, thanks to Tesla’s early lead (so much for that) and a realization that they could extract recurring revenue that had a better aura than so-called “extended warranties.” Figuring out the costs of the various subscription options is still not easy. For example, GM’s OnStar has a confusing series of different plans. With BMW, you can get an idea of what connected service is available here, but to get actual prices you will first have to become a BMW customer. Some features are free and some require the latest car OS v9 or are only available on particular vehicles. And for those of you still interested in Tesla, they have a free basic plan – which just includes GPS. If you want more features you will have to sign up for its premium plan that includes dozens of other features for $10 a month. And we found out the hard way that all Teslas are really roving reality video studios – meaning that they are constantly recording from their numerous cameras — when one of their cars blew up outside a Vegas casino.

Think of the data originating from your connected car as the hidden browser pixel: you know there is something fishy going on. Whether or not you are paranoid enough to worry about it, or just accept it as another part of modern life, is up to you.

CSOonline: CNAPP buyer’s guide: Top cloud-native app protection platforms compared

It is time to re-examine my review of cloud-native application protection platforms, commonly known as CNAPPs. The category has expanded to include more devsecops coverage, such as API and supply chain security, and more posture management tools for tracking data and SaaS apps.

The category is also under scrutiny because the CNAPP vendor landscape has shifted, most notably around Wiz, which was recently purchased by Google and will be maintained as a separate division. Check Point Software has formed a strategic partnership with Wiz, has discontinued selling its own CloudGuard CNAPP, and will migrate its customers to Wiz. Lacework has been purchased by Fortinet and is now called Lacework FortiCNAPP. Palo Alto Networks has rebranded and reconstituted its CNAPP offering as part of its Cortex Cloud product line.

My review for CSOonline has been updated to include 11 CNAPP vendors. 

CSOonline: Agentic AI is both boon and bane for security pros

AI agents are predicted to cut time-to-exploit in half within two years. Here is what you need to know to figure out whether your business needs agentic AI and how to find the right one. Agentic AI has proved to be a huge force multiplier and productivity boon. But while powerful, agentic AI isn’t dependable, and that is the conundrum. In this post for CSOonline, I describe some of the issues and make recommendations for how to safely and productively deploy this tech.

A new type of disinformation campaign based on LLM grooming

Most of us are familiar with the Russian state-sponsored Internet Research Agency. The group has been featured in numerous fictional spy movies and is responsible for massive misinformation campaigns that center around weaponizing political social media posts.

But the Russian misinformation network is branching out into the world of AI, specifically around poisoning or grooming the training models used by western AI chatbots. A recent report by NewsGuard documents this latest insidious move. 

Called Pravda — not to be confused with the print propaganda Cold War “newspaper” of the former Soviet Union — it targets these chatbots by flooding search results and web crawlers. It doesn’t generate any original content. Instead, it aggregates a variety of Russian propaganda and creates millions of posts of false claims and other news-like items. The Pravda network serves as a central hub to overwhelm the model training space. As a result, many of the most popular chatbots reference these fictions a third of the time in their replies. In effect, they have turned chatbots into misinformation laundering machines. “All 10 of the chatbots repeated disinformation from the Pravda network, and seven chatbots even directly cited specific articles from Pravda as their sources,” NewsGuard wrote. Many of the responses its researchers found included direct links to the Pravda-based stories, and in many cases the AI citations don’t distinguish between reliable and unreliable sources.

What is curious about the Pravda network is that it isn’t concerned with influencing ordinary organic searches. Its component domains have few if any visitors to their websites or followers on Telegram or other social media channels. Instead, its focus is on saturating search results for automated content scanners, such as the crawlers that feed AI training models. On average, the network posts more than 10,000 pieces of content daily.

Researchers at the American Sunlight Project call this LLM grooming and go into further detail on how it works and why the Pravda network isn’t designed around human content consumption or any interaction. They show how Pravda makes extensive use of machine translation of its content into numerous languages, which produces awkwardly worded pages. “The top objective of the network appears to be duplicating as much pro-Russia content as widely as possible,” they wrote.

The NewsGuard researchers examined 10 leading large language model chatbots: OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and PerplexityAI.

NewsGuard has been around for several years now and provides various auditing and transparency services. They found Pravda uses more than 150 different domains to spread more than 200 false claims in more than 40 languages, such as claims about Zelensky’s personal fortune and secret U.S.-operated bioweapons labs in Ukraine, to pick just two. The company, founded by Court TV’s Steven Brill and former Wall Street Journal publisher Gordon Crovitz, began tracking AI-based misinformation last summer. The American Sunlight Project is run by Nina Jankowicz, who has held fellowships at the Wilson Center and other NGOs and worked for a Homeland Security disinformation board during the Biden years.

The risks are high: “There are few apparent guardrails that major companies producing generative AI platforms have deployed to prevent propaganda or disinformation from entering their training datasets,” writes the Sunlight team. And as training data is flooded with this garbage, it will get harder for AI models to distinguish genuine human content in the future.

Beware of evil twin misinformation websites

Among the confusion over whether the US government is actively working to prevent Russian cyberthreats comes a new present from the folks who brought you last year’s Doppelganger attacks. At least two criminal gangs are involved, Struktura and the Social Design Agency; as you might guess, both have Russian state-sponsored origins. Sadly, they are back in business after being taken down by the US DoJ last year, back when we were more clear-headed about stopping Russian cybercriminals.

Doppelganger got its name because the attack combines a collection of tools to fool visitors into thinking they are browsing a legit website when they are actually looking at a malware-laced trap. These tools include cybersquatting domain names (names that are close replicas of the real websites) and various cloaking services, plus posts on discussion boards along with botnet-driven social media profiles, AI-generated videos and paid banner ads to amplify their content and reach. The targets are news-oriented sites, and the goal is to gain your trust and steal your money and identity. A side bonus is that they spread a variety of pro-Russian misinformation along the way.
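One common way defenders flag cybersquatting of this kind is to measure how close a candidate domain is to a known legitimate name. Here is a minimal sketch using plain Levenshtein edit distance; the legitimate-domain list and the threshold are my own illustrative assumptions, and real detection systems also check homoglyphs, registration age, and more:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# A hypothetical allow-list of real news domains to protect.
LEGIT = ["washingtonpost.com", "lemonde.fr"]

def looks_like_squat(domain: str, max_dist: int = 2) -> bool:
    """Flag domains within a small edit distance of a legit name
    (but not the legit name itself)."""
    return any(0 < edit_distance(domain, real) <= max_dist for real in LEGIT)

print(looks_like_squat("washingtonpost.pm"))  # → True (near-replica TLD swap)
print(looks_like_squat("example.org"))        # → False
```

The catch, as the takedown-and-return cycle shows, is that attackers can register new lookalikes faster than blocklists update, so distance checks are a tripwire rather than a cure.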

Despite the fall 2024 takedowns, the group is once again active, this time after hiring a bunch of foreign speakers of several languages, including French, German, Polish, and Hebrew. DFRLab has this report about these activities. They show a screencap of a typical post, which often has four images with captions as its page style:

These pages are quickly generated. The researchers found sites with hundreds of them created within a few minutes, along with appending popular hashtags to amplify their reach. They found millions of views across various TikTok accounts, for example. “During our sampling period, we documented 9,184 [Twitter] accounts that posted 10,066 of these posts. Many of these accounts were banned soon after they began posting, but the campaign consistently replaces them with new accounts.” Therein lies the challenge: this group is very good at keeping up with the blockers.

The EU has been tracking Doppelganger but hasn’t yet updated its otherwise excellent page here with these latest multi-lingual developments.

The Doppelganger group’s fraud pattern is a bit different from other misinformation campaigns that I have written about previously, such as fake hyperlocal news sites that are primarily aimed at ad click fraud. My 2020 column for Avast has tips on how you can spot these fakers. And remember back in the day when Facebook actually cared about “inauthentic behavior”? One of Meta’s reports found these campaigns linked to the Wagner Group, Russia’s no-longer-favorite mercenaries.

It seems so quaint viewed in today’s light, when the jobs of content moderator — and apparently government cyber defender — have gone the way of the digital dustbin.

Don’t fall for this pig butchering scam

A friend of mine recently fell victim to what is now called pig butchering. Jane, as I will call her, lives in St. Louis. She is a well-educated woman with multiple degrees and decades of management experience. But Jane is also out more than $30,000 and has had her life upended as a result of this experience, having to change bank accounts and email addresses and obtain a new phone number.

The term refers to a complex cybercrime operation that has at its heart the ability to control victims and compel them to withdraw cash from their own bank accounts and send it via bitcoin to the scammer. The scam works because the victims take the money out of their accounts themselves; the various fraud laws don’t cover that kind of mistake. I will explain the details in a moment.

Many of us are familiar with the typical ransomware attacks, where the criminals receive the funds directly from their victims: these transactions might be anonymous, but they are sometimes traceable and even recoverable. So let’s back up for a moment and track Jane’s actions leading up to the scam.

In Jane’s situation, the attack began when her computer displayed a warning message that it had been hacked, instructing her to call a phone number to disinfect it. Somehow this malware was transmitted, typically via a phishing email. This is the weak point of the scam. Every day I get suspicious emails — most are caught by the spam filters, but occasionally things break through. As I was helping Jane get her life back on track, my inbox was flooded with email confirmations of an upcoming stay at a hotel. At one point, I think I had a dozen such “confirmations.” Perhaps the guest made a legitimate mistake and used my email address — but more likely, as these emails piled up, this was an attempted phishing scam.

Anyway, back to Jane. She called the number, and the attacker proceeded to convince her that she was the victim of a scammer — which ironically was true at the time, and probably the first and last true thing he said. He claimed her computer was infected with all sorts of child porn and that she could be legally liable. She believed him, and over the course of several hours stayed on the phone with him as she got in her car, drove to her bank and withdrew her cash.

Now, in the cold light of a different day, Jane understands her mistake. “I was a lawyer. I should have recognized this was all a fabrication,” she told me, rather abashedly. “I should have known better, but I was caught up in the high emotional drama of the moment and wasn’t thinking clearly.” Eventually, her attacker directed her to a bitcoin “ATM” where she could feed in her $100 bills and turn them into electrons of cybercurrency. Her attacker had thoughtfully sent her a QR code that contained his address. Think about that — she is standing in a convenience store, feeding $100 bills into this machine. That takes time. That takes determination.

Jane is computer literate, but doesn’t bank online. She manages her investments the old-fashioned way: by calling her advisors or visiting them in person. She has a cellphone and a computer, and while I was helping her get her digital life back in order we were remembering where we were when we first used email many decades ago and how new and shiny it was before scammers roamed the interwebs.

So how did the scam unravel? After spending all afternoon on the phone, the scammer got greedy and wanted more fat on the pig, so to speak. Jane called him back on her special hotline number, and he asked her to withdraw more money from her bank account. She went back to her bank, and fortunately got the same teller that she had the day before. He questioned her withdrawal, and when she revealed that she was being directed by the scammer, the butcher shop operation came to a halt.

But now comes the aftermath, the digital cleanup in Aisle 7. That will take time and effort on Jane’s part to ensure that she has appropriate security and that her new contact info reaches the right places and people. But she is still out the funds. She now knows not to get caught up in the moment just because an email or a popup message tells her something.

Avoiding pig butchering scams means paying attention when you are reading your email and texts. Don’t multitask; focus on each individual message. And when in doubt, just delete.