Podcast: with Sam Whitmore on offensive agentic AI tactics

This week I spoke to Sam Whitmore of MediaSurvey about two reports that came out this month, one from the Google Threat Intelligence Group and one from Anthropic, the makers of Claude AI.

The Google report says that “adversaries are no longer leveraging AI just for productivity gains, they are deploying novel AI-enabled malware in active operations. Malware threat groups are using LLMs during their execution to dynamically generate scripts on demand and hide their own code from detection.” They are also using social engineering pretexts to bypass security guardrails. That is pretty scary stuff.

The Anthropic report found ways that threat actors manipulated Claude Code to orchestrate reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The researchers claim this is the first documented large-scale attack carried out with minimal human intervention or control, and showed how Claude agents were able to decompose these multiple attack stages into smaller parts. One small issue: the events depicted in this report happened about a year ago, using tools that now seem ancient given the rapid pace of the AI world.

The key to the behavior chronicled in both reports was some pretty human role-play: the human operators claimed to be employees of legitimate cybersecurity firms and convinced Claude that it was playing a capture-the-flag exercise, a common white-hat pretext.

Both reports show just how the bad guys can use agentic AI to steal data more effectively than any group of human operators. The challenge will be stopping these and even more advanced threats going forward.

Watch out for browser cache smuggling

Browser caches can be difficult to secure, because our insatiable hunger for web content means our browsers often deposit files there that could turn out to be trouble. In the past, malware actors would try to poison web server caches — these were holding areas that the servers put aside to deliver frequently requested pages or pieces of content, such as large image files.

“Think of cache poisoning as poisoning a town’s shared well—everyone who draws from it is affected,” said Satnam Narang, senior staff research engineer at Tenable. “Browser cache smuggling, however, is like getting a meal kit with a hidden poisonous ingredient. It sits harmlessly in your private kitchen until you are tricked into following the recipe and cooking it yourself.” Cooked, indeed. The attacker hides an executable program inside a misnamed file that appears to be storing an image in the cache. Marcus Hutchins wrote about this recently.

Cache smuggling has been around for years, but lately it is being paired with zero-click malware that makes the deposit and then the activation without any user intervention. Or, as Marcus documents, a misleading pop-up instructs a user to run a series of Windows commands that bring this all about in the background. Or a phishing email tells you that a large reward is just waiting for your click to approve.
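The core trick, an executable masquerading as a cached image, is easy to illustrate. Here is a minimal Python sketch (my own illustration, not code from either write-up) that compares a cached file's claimed image extension against the bytes it actually starts with:

```python
import pathlib

# What a file claims to be (extension) vs. how those formats actually begin.
IMAGE_MAGICS = {
    ".jpg": b"\xff\xd8\xff",
    ".png": b"\x89PNG",
    ".gif": b"GIF8",
}
PE_MAGIC = b"MZ"  # Windows executables start with the DOS "MZ" header


def looks_smuggled(path: pathlib.Path) -> bool:
    """Flag a cached 'image' whose contents begin like a Windows executable."""
    expected = IMAGE_MAGICS.get(path.suffix.lower())
    if expected is None:
        return False  # not claiming to be an image type we know about
    header = path.read_bytes()[:8]
    return not header.startswith(expected) and header.startswith(PE_MAGIC)
```

A defender could run this over a browser cache directory with `for p in cache_dir.iterdir(): ...`; real detection tools do much deeper content inspection, but the mismatch between name and magic bytes is the tell.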

I recently got one of these emails from the Facebook User Privacy Settlement, asking me to activate a debit card. I was about to hit the delete key when I thought I should investigate further, and found out that I was wrong: the email offer was legit, and moments later I was about $38 richer. Woo-hoo!

One way to fix this across the enterprise is to use one of a class of enterprise browsers that encrypt the cache or can apply global policies whenever a user opens a browser. Island.io and Authentic8.com are two of these vendors. Consumer alternatives from Opera or Brave provide various content blockers, which can stop the smuggling route.

Another mechanism is to make use of various network defensive tools (such as those available from one of my clients, Corelight). These can monitor odd network flows, such as unexpected uses of PowerShell, which often are clues that some hanky-panky is going on.

Three new malware variants you might BOLO

Of all men’s miseries the bitterest is this: to know so much and to have no power.

That was attributed to the Greek historian Herodotus, who lived in what is now Turkey and Italy more than 2,400 years ago. It is a fitting name for a new kind of Android banking trojan that is making the rounds. The trojan works by inserting a small but randomly variable delay between keystrokes, to make them appear to be typed by a (relatively poor) human typist. It has other features, such as being able to steal 2FA codes sent via SMS (yet another reason not to use this transport method), intercept everything that's displayed on the screen, grab the lockscreen PIN or pattern, and install executable files. The malware looks like an ordinary mobile banking app, but there is nothing ordinary about it.
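The keystroke-timing trick is simple to picture. This Python sketch (my illustration of the concept, not the trojan's actual code, and with made-up timing values) shows how uniform machine input can be jittered to look human:

```python
import random


def humanlike_timing(text: str, base_ms: float = 80, jitter_ms: float = 120):
    """Pair each character with a randomly varied delay in seconds.

    This is the evasion idea the reports describe: injected input that
    arrives at a uniform, machine-perfect pace is easy for banking fraud
    detectors to spot, so the malware randomizes the gap per keystroke.
    """
    return [
        (ch, (base_ms + random.uniform(0, jitter_ms)) / 1000.0)
        for ch in text
    ]
```

Behavioral anti-fraud systems counter this by looking at richer signals than raw inter-key gaps, which is part of why the cat-and-mouse game continues.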

But Herodotus isn’t the only bad news bear out there. How about RedTiger, malware that steals data by flooding targeted systems with hundreds of processes and random files to confuse forensic examiners. That essentially buries any warnings, making it harder for security personnel to figure out where the pony is in the massive alert pile. And another piece of malware goes by the name CoPhish: it hides Microsoft Copilot commands within the HTML text of phishing emails. That text is designed not to be displayed if you are just reading the message in your browser or email client.

What these three attack methods show is that the bad guys are getting better at hiding in plain sight, using AI methods and more subtle mechanisms to distribute their malware and then remain out of sight for months while they move about, looking for the soft center of your network to compromise.

So you have been warned. Pick a better MFA method than SMS texts to get your PIN codes. (My favorite is Authy, but there are plenty of others.) Make sure to carefully vet any app you download to your phone before you start using it, and at install time, pay attention to the warnings about what permissions it requires to ensure that it isn’t grabbing everything it can. And don’t reply to any text message involving money that comes out of the blue, whether from your bank, your long-lost cousin traveling abroad, or someone who is acting friendly (want to join me for dinner?). It’s a jungle out there, and sadly an old Greek guy was spot on about how much we know but still have no power to do anything about it.

Deleting your private data will get easier: thanks, California

Most of us have seen those annoying pop-up screens when browsing the web that ask us to accept some turgid privacy policy or approve the use of cookies to track our sessions. California and a few other states are trying to make things more secure and protect our privacy by introducing new regulations that will go into effect in the coming months or years. One of these technologies is called a universal opt-out preference signal, which sadly carries the acronym OOPS. California’s explanation can be found here.

The universal part of the deal is that many websites will recognize these signals, so users don’t have to opt out of tracking individually at each website they visit to buy something online or share their personal information (such as a social network). CalOOPS will make this mandatory in January 2027. That is a long way to wait for this convenience. Several other states are moving to enact similar laws, although it is a long road ahead. The OOPS signals are not required in six of the 19 states that have privacy protections, just showing how much of a crazy quilt our privacy picture is and will continue to be.
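For the curious, the best-known universal opt-out signal today is Global Privacy Control, which browsers transmit as a simple HTTP request header, `Sec-GPC: 1`. A minimal server-side check might look like this (a sketch of the mechanism, not production compliance code):

```python
def honors_opt_out(headers: dict[str, str]) -> bool:
    """Return True if the request carries the Global Privacy Control signal.

    Per the GPC proposal, a participating browser sends the request header
    'Sec-GPC: 1' on every request; a site that recognizes universal opt-out
    signals should then suppress sale/sharing of that visitor's data.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"
```

The whole point of the laws is that this one check replaces the per-site cookie-banner ritual: the browser says "no" once, and every covered site is obliged to listen.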

The OOPS law is just one of a triad of regulations enacted earlier this month in California. The second requires major social media platforms to provide users with a clear way to delete their accounts and to ensure that the data in those accounts is completely wiped. The third holds data brokers to more stringent standards, including how deletion requests are handled by a new service called DROP. Those two go into effect in January 2026. Husch Blackwell (who does an excellent job tracking state privacy laws) has more info on this page describing the three laws.

DROP stands for Data Removal and Opt-Out Platform, and it will be a central place where consumers can begin the process of removing their data from multiple data brokers. If you have ever tried this on your own, you probably know how frustrating the process can be: first, the brokers are numerous, and many are companies that you have probably never heard of. Here is a list of more than 600 of them. Then, once you find one, they make this deletion action as obscure as possible, or put you through various pathways (download a special app, submit a web form) that don’t inspire confidence. And realistically, how many brokers are you going to do this with anyway? And finally, is Facebook et al. a broker, a social network, or just all-around evilness?

Remember the do-not-track settings on your phone? Probably not, because these were for the most part ineffective, and not mandatory. These new laws have enforcement provisions. We’ll see if that matters in the end.

Browser vendors with privacy controls are one answer, such as Brave or DuckDuckGo, or extensions such as PrivacyBadger (which I wrote about here). I have been using Opera Air, which has an ad blocker built in. There are two problems. First, these browser-based tools don’t always work: some websites require pop-ups as part of a normal workflow, or don’t want you to run ad blockers because they lose revenue from displaying ad banners. And second, as you might have guessed, there are no federal data privacy laws, and given the state of our Congress, chances are slim that we will see any soon. That means that state laws could be enacted that work at cross-purposes.

I would be interested in hearing any strategies that work for you.

 

CSOonline: 12 Attack Surface Management tools reviewed

Potential Attack Surface Management buyers need to understand how various network and other infrastructure changes happen and how they can neutralize them.

Periodic scans of the network are no longer sufficient for maintaining a hardened attack surface. Continuous monitoring for new assets and configuration drift is critical to ensure the security of corporate resources and customer data.

New assets need to be identified and incorporated into the monitoring solution, as they could potentially be part of a brand attack or shadow IT. Configuration drift could be benign and part of a design change, but it also has the potential to be the result of human error or the early stages of an attack. Identifying these changes early allows the cybersecurity team to react appropriately and mitigate any further damage.
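At its simplest, that continuous monitoring boils down to diffing successive asset-inventory snapshots. A toy Python sketch (illustrative only; real ASM products enrich this with discovery, fingerprinting, and risk scoring):

```python
def diff_assets(previous: dict[str, str], current: dict[str, str]):
    """Compare two inventory snapshots (asset name -> config fingerprint).

    Returns assets that newly appeared (possible shadow IT or brand
    attack infrastructure) and assets whose configuration changed
    (possible drift, human error, or early-stage compromise).
    """
    new_assets = sorted(set(current) - set(previous))
    drifted = sorted(
        name
        for name in set(current) & set(previous)
        if current[name] != previous[name]
    )
    return new_assets, drifted
```

Running a diff like this continuously, rather than on a quarterly scan cadence, is the behavioral shift the ASM category is built around.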

I review 12 different ASM tools and also provide some questions to ask your team and the vendors about their ASM offerings in this updated article for CSOonline.

 

CSOonline: 5 steps for deploying agentic AI red teaming

Building secure agentic systems requires more than just securing individual components; it demands a holistic approach where security is embedded within the architecture itself. In my latest article for CSOonline, I delve into the world of using agentic AI for red teaming exercises. It is very much a work in progress. Many vendors of defensive AI solutions are still in their infancy when it comes to protecting the entirety of a generative AI model, and the attack space is enormous.

CSOonline: Seven ASPM products compared

Having a central protection platform for application security requires a deep understanding of the issues and product capabilities. Protecting your enterprise application collection requires near-constant vigilance and careful choice of the right collection of defensive tools. As threats continue to become more complex and difficult to discover, applications have also become more complex, bridging the worlds of cloud, containers, and on-premises. This presents all sorts of challenges for tools that have struggled to keep pace.

The latest category of products goes by the moniker of application security posture management, or ASPM. I review seven different tools from these vendors in my latest post for CSOonline:

  • ArmorCode
  • CrowdStrike
  • Cycode
  • Ivanti
  • Legit Security
  • Nucleus Security
  • Wiz

 

How hackers can live inside your network for months

You might have seen this week’s story about how Ukrainian and other anti-Russian hackers brought down parts of Aeroflot’s networks, resulting in massive flight delays and cancellations. It turns out these hackers have had access to the airline’s systems for a year or more, and only recently began to play their hand. The hackers coordinated their efforts with numerous drone attacks on civilian airports and other Russian military targets, attacks that have prompted Russia to disrupt internet services across the country in an attempt to disconnect the drones from their commanders.

Despite sanctions, a predicted dearth of spare parts, and other restrictions, Aeroflot has flown millions of passengers in the past year. A report from Finland recently found about $1B in parts being purchased through cut-outs and other third parties located in China and the UAE. It also didn’t hurt that at the onset of the war and subsequent sanctions, Russia seized about 500 planes that were present in the country and once owned by other airlines. (One crashed shortly after I wrote this post; the cause could be a lack of parts.)

As I was researching this story, I came across a tale from one of my IT contacts. He told me about a situation that happened about ten years ago at a mortgage services company that he was working with as a consultant. “On my first day I found most of their 2000 servers hadn’t been patched, for years! Many were running out of support for their operating systems and applications. The place was a cyber nightmare waiting to happen.” He eventually got the company to agree to patching and upgrading their servers. “Thankfully, we got everything fixed and put in a good security monitoring and incident management system. But then, a few weeks after the new security systems went online, the company detected an attempted breach.”

What happened was that the attackers had spent months accumulating intelligence and researching the corporate management chart by dialing into various public phone numbers and taking note of any names, departments, and other info attached to those phone numbers. “Essentially, they built a phone book of the company. They then searched names to identify the execs, their admins, and anyone who would have elevated access to the company’s systems.” Thus began their second phase: spoofing caller IDs to the company’s help desk and phishing their targets, sending malware-laced emails under the guise of fixing some made-up cyber problem. The assembled phone book was used to give the phishing more cred.

“That morning four people took the bait and ran the attached file. Our security tools quickly spotted the problem. If this had happened a few weeks sooner it would have been very, very bad.”

Lesson learned: hackers can take their time to learn your vulnerabilities and map your weaknesses. You have to play the long game too.

Deepfakes are rapidly on the rise

I have written about the deepfake problem for many years, including this piece posted almost two years ago. The practice has reached epidemic proportions. A new report from VentureBeat cites deepfake abuse in candidate hiring. While the data originates from one of the numerous deepfake prevention vendors, it is still an indication of the widespread threat this has become. The vendor claims to have blocked 75M attempts in 2024, a 50x increase.

Hiring someone remotely has never been more pervasive or more difficult. Last year, CrowdStrike identified a North Korean state-sponsored actor that infiltrated more than 100 companies with fake new hires using fabricated identities. Gartner predicts that in a few years, a quarter of all candidates will be fakers. It used to be that North Korean hackers were the main source of the fakers, trying to get their spies hired at crypto and other tech companies. If you follow this link to a post that I wrote three years ago and scroll to my added comments, you’ll see that things have gotten much worse. Many companies receive numerous deepfake prospects daily, and security vendors such as KnowBe4 and Hypr have unwittingly hired them.

One contributing factor has been the development of AI tools that enable deepfakery at scale. Another is that remote work has become the norm, making the initial test, having a candidate physically show up in your office, no longer possible. This also makes the traditional background check workflow useless, because that workflow assumed the candidate was a real person with a legitimate identity.

And AI isn’t just deployed to manage the deepfake supply pipeline; it also creates the resumes that flood the zone. Soon we will have nothing but AI on both ends: will my AI screening tool spot your AI submission? It is probably already happening.

An issue with the deepfake candidate supply is that it crosses three layers that were once separate security domains: the initial candidate credential submission, the network and other digital footprint of the computers used for the submission, and more general population characteristics. This is where the better protective products can help flag a deepfake: for example, when a device used to submit a resume and headshot is running on a free VPN with mismatched time zone and geolocation parameters, along with a newly minted social media account. The general threat intel products are good at flagging malware code that employs recently created domains or social accounts, for example, but these tools have been slow to work in the candidate deepfake arena.
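Combining those cross-domain signals is essentially a weighted red-flag tally. Here is a hypothetical Python sketch (the signal names and weights are my own inventions, not from any real screening product) of how the layers might be scored together:

```python
# Illustrative weights spanning the three domains discussed above:
# device/network footprint, account provenance, and credential signals.
RED_FLAG_WEIGHTS = {
    "free_vpn": 3,                      # device/network footprint
    "timezone_geo_mismatch": 3,         # device/network footprint
    "new_social_account": 2,            # account provenance
    "recently_registered_email_domain": 2,  # account provenance
    "reused_headshot": 4,               # credential submission
}


def candidate_risk_score(signals: dict[str, bool]) -> int:
    """Sum the weights of every red flag observed for a candidate.

    A real product would calibrate weights against labeled data; this
    just shows why correlating formerly separate domains matters: no
    single flag is damning, but several together are.
    """
    return sum(
        weight
        for flag, weight in RED_FLAG_WEIGHTS.items()
        if signals.get(flag)
    )
```

The point of the sketch is the correlation, not the numbers: a free VPN alone is common, but a free VPN plus a geolocation mismatch plus a week-old social account is the profile the better detection products key on.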

But it is only a matter of time before AI can conquer these inconsistencies. That will leave hiring managers even more dependent on AI-created security tools.