This week at SiliconANGLE

I had an unusually productive week here at SA. This is the rundown.

First and foremost is my analysis of Kubernetes and container security, which describes the landscape, the challenges, and the opportunities for security vendors to fill the numerous gaps. There is a lot going on in this particular corner of the infosec universe, and I think you will find this piece interesting and helpful.

I also wrote some shorter pieces:

This week in SiliconANGLE

In addition to my AI data leak story (which you should read), here are several other posts I wrote this week that might interest you:

— A new kind of hackathon was held last month, which prompted me to talk to several nerds who are working to improve the machinery that runs our elections. Fighting disinformation is sadly an ever-present specter. The hackathon brought together, for the first time, security researchers and the vendor reps who make the equipment, united by a common goal: squashing bugs before the machines are deployed around the country.

— Managing your software secrets, such as API tokens, encryption keys and the like, has never been a pleasant task. A new tool from GitGuardian works much the way HaveIBeenPwned does for leaked emails, so you can lock these secrets down before you are compromised.

— The FBI has taken down 17 websites that were used to prop up the identities of thousands of North Korean workers who posed as IT job candidates. This crew funneled their paychecks back to the government and spied on their employers as an added bonus. Thousands of “new hires” were involved in the scheme, dating back years.
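The GitGuardian tool mentioned above follows the same general pattern HaveIBeenPwned uses for password checks: hash the secret locally and reveal only a short hash prefix to the service, so the secret itself never leaves your machine. A minimal sketch of that k-anonymity lookup, with a hypothetical `fetch_suffixes` function standing in for the real service API:

```python
import hashlib

def hash_secret(secret: str) -> str:
    """SHA-1 the secret locally; the raw value never leaves the machine."""
    return hashlib.sha1(secret.encode("utf-8")).hexdigest().upper()

def is_leaked(secret: str, fetch_suffixes) -> bool:
    """k-anonymity check: send only the first 5 hex chars of the hash,
    then compare the returned suffixes locally."""
    digest = hash_secret(secret)
    prefix, suffix = digest[:5], digest[5:]
    return suffix in fetch_suffixes(prefix)

# Hypothetical stand-in for the service's range endpoint, seeded with
# one "known leaked" example key for illustration:
def fake_fetch(prefix: str) -> set:
    leaked = {hash_secret("AKIAEXAMPLEKEY123")}
    return {d[5:] for d in leaked if d.startswith(prefix)}

print(is_leaked("AKIAEXAMPLEKEY123", fake_fetch))   # True
print(is_leaked("some-unleaked-token", fake_fetch))  # False
```

The point of the prefix trick is that the service only ever learns five hex characters, which match many unrelated hashes, so it cannot tell which secret you were checking.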

SiliconANGLE: How companies are scrambling to keep control of their private data from AI models

This week I got to write a very long piece on the current state of data leak protection tools designed specifically to safeguard enterprise AI usage.

Ever since artificial intelligence and large language models became popular earlier this year, organizations have struggled to prevent the accidental or deliberate exposure of the data they use as model inputs. They aren’t always succeeding.

Two notable cases have splashed into the news this year that illustrate each of those types of exposure: A huge cache of 38 terabytes’ worth of customer data was accidentally made public by Microsoft via an open GitHub repository, and several Samsung engineers purposely put proprietary code into their ChatGPT queries.

I covered the waterfront of vendors that have added this feature across their security products, along with startups focusing on this area.

What differentiates DLP between the before-AI times and now is a fundamental shift in how the protection works. The DLP of yore involved checking network packets for patterns that matched high-risk elements, such as Social Security numbers, as they were about to leave a secure part of your data infrastructure. But in the new world order of AI, you have to feed the beast up front, and if that diet includes all sorts of private information, you need to be more proactive in your DLP.
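As a rough illustration of that proactive, prompt-side approach (a sketch of the general idea, not any particular vendor’s implementation), a scrubber might look for high-risk patterns before text ever reaches a model:

```python
import re

# Patterns a traditional DLP engine might flag. The SSN and AWS-key
# regexes here are simplified illustrations, not production-grade rules.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(prompt: str) -> str:
    """Replace high-risk matches with placeholders before the prompt
    leaves the secure perimeter for an external model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Customer 123-45-6789 reported an issue."))
# -> Customer [REDACTED-SSN] reported an issue.
```

The old model inspected traffic on the way out; here the scrubbing happens before the data is ever handed to the model, which is the shift the piece describes.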

I mentioned this to one of my readers, who had this to say about how our infrastructures have evolved over the years since we both began working in IT:

“In the late 90’s we had mostly dedicated business networks and systems, a fairly simple infrastructure setup. Then we went through web hosting and the needs to build DMZ networks. Then came shared web hosting facilities and shared cloud service offerings. Over the years cloud services have built massive API service offerings. Each step introduced an order of magnitude of complexity. Now with AI we’re dealing with massive amounts of data.”

If you are interested in how security vendors are approaching this issue, I would love to hear your comments after reading my post in SA. You can post them here.

This week in SiliconANGLE

One of the stories I wrote this week for SiliconANGLE chronicles the start of the Israeli/Hamas war. As many of you know, my daughter has been living there for several years; she met her husband there and now has two small boys. The horrors of this catastrophe are too much for me to describe. My story is about the cyber dimension of the war and what we know so far about hacking attempts on various institutions. For those of you interested, I have begun writing my thoughts about what is happening to my family there and sending them out via email; LMK if you would like to see these remarks.

Today’s story is about the hopeful demise of Microsoft’s NTLM protocol. Well, sort of. Microsoft has been trying — not too hard — to rid itself of this protocol for decades. Many IT managers probably weren’t born when it was invented back in the 1980s, and few of them even realize it is still running on their networks.

I have written several times about the hackers behind the Magecart malware, which is used to compromise ecommerce servers such as those running WooCommerce and Magento. This week’s story is about how the latest versions conceal the code inside a web 404 status page. Talk about hiding in plain sight. Most of us — probably all of us — haven’t given a 404 page much of a second thought, but maybe now you will.

This week also saw a new uptick in DDoS threats observed by several of the major online operators. What is particularly troubling is that this botnet isn’t all that big — maybe 20,000 endpoints — yet it is generating enormous traffic loads, in some cases more than a fifth of a normal day’s web traffic. I write about how these attacks happen, using a new technique called rapid reset that abuses the HTTP/2 protocol.
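To see why such a small botnet can generate outsized load, consider a toy cost model of the rapid-reset trick: the client opens an HTTP/2 stream and cancels it immediately, which costs almost nothing, while the server has typically already started the backend work, and cancelled streams don’t count against the server’s concurrent-stream limit. The numbers below are illustrative assumptions, not measurements, and this models only the cost asymmetry, not the protocol itself:

```python
# Toy model of HTTP/2 "rapid reset" (CVE-2023-44487) amplification.
# All constants are illustrative assumptions.

MAX_CONCURRENT_STREAMS = 100  # typical server-advertised stream limit
RESPONSE_TIME = 0.2           # seconds a normal request stays in flight
WINDOW = 1.0                  # measurement window in seconds

# A well-behaved client is capped by the concurrency limit and must wait
# for responses, so per window it can issue at most:
normal_rate = MAX_CONCURRENT_STREAMS * (WINDOW / RESPONSE_TIME)

# A rapid-reset client cancels each stream immediately, so streams never
# accumulate toward the limit; it is bounded only by how fast it can send
# HEADERS + RST_STREAM frame pairs (assume 50,000 pairs/sec per connection).
FRAME_PAIRS_PER_SEC = 50_000
attack_rate = FRAME_PAIRS_PER_SEC * WINDOW

print(normal_rate)                # 500.0 requests per window
print(attack_rate / normal_rate)  # 100.0x more server work triggered
```

Under these assumed numbers, each endpoint triggers a hundred times more server-side work than a normal client could, which is how a modest botnet produces record-setting traffic.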

Finally, one more chilling story, about a new type of spyware called Predator. It is another multinational journalistic endeavor with similarities to the Pegasus files from 2021. What makes this spyware so lethal is that no click is required to activate it, and it has been seen all across the planet.

Thanks for reading my work, and stay safe out there.

SiliconANGLE: The Hamas-Israeli war is also being fought in cyberspace

The war between Hamas and Israel is also raging across the cybersecurity realm, with various malware exploits, disinformation campaigns and recruitment of citizen hackers seen on both sides of the conflict. Security researchers are seeing an increase in cyberattacks targeting Israeli businesses and government agencies.

In my story for SiliconANGLE, I document some of the hacker groups involved.

SiliconANGLE: How the International Red Cross aims to make civilian wartime hacking more humanitarian

The role of civilian hackers during warfare continues to expand, and now at least one group is trying to set up some rules of engagement. But whether the proposal from the International Committee of the Red Cross announced Wednesday will gain any traction and make these attempts more humane is anyone’s guess. In this story for SiliconANGLE, I review the roles that civilian hackers have played in previous conflicts, how the Russian/Ukrainian war has escalated civilian participation, and what this new proposal means for future conduct.

SiliconANGLE: This week’s news

I have known John Kindervag for many years, going back to the days when Novell Netware was a major power and Interop a must-see international conference. Yes, those dinosaurs have become extinct, but John soldiers on, promoting zero trust networking far and wide. Now he is with Illumio, which seems like a great fit. I interviewed him for a post here.

Have you heard the term “purple team” in reference to IT security? There is yet another new vendor on the purple scene, and the trend is catching on, albeit slowly. The notion is to have defenders and attackers collaborate and learn from each other. Here is my take on the situation.

Finally, there has been yet another NFT hack, this time at one of the OG NFT marketplaces, OpenSea. It is not the first time funds have been stolen there. You would hope by now they would have gotten their act together. Here is my post about the situation.

SiliconANGLE: Security threats of AI large language models are mounting, spurring efforts to fix them

A new report on the security of artificial intelligence large language models, including OpenAI LP’s ChatGPT, shows a series of poor application development decisions that create weaknesses in enterprise data privacy and security. The report is just one of many recent examples of mounting evidence of security problems with LLMs, demonstrating the difficulty of mitigating these threats. I take a deeper dive into a few different sources and suggest ways to mitigate the threats of these tools in my post for SiliconANGLE here.

SiliconANGLE: California stays ahead on state privacy protection

California has become the latest state to enact a law regulating how consumers can remove themselves from data brokers. The Delete Act was passed this week, and it’s now up to Governor Gavin Newsom to sign it into law. But it has already inspired similar laws and bills proposed in other states for next year’s legislative sessions.

My summary of the past summer’s privacy laws enacted across the country, what makes California stand out, and the problem with data brokers all can be found in my latest piece for SiliconANGLE here.

SiliconANGLE: Deepfake cyberthreats keep rising. Here’s how to prevent them

As expected, this summer has seen a rise in various cybersecurity threats based on deepfake audio and video impersonations.

Despite warnings from the Federal Bureau of Investigation in June, it’s now quite common to encounter these types of threats. The fakes are used to lend credibility to larger exploits, such as a phishing email lure or a fake request from a superior. These run the gamut from executive impersonation to various forms of financial fraud and the theft of account credentials. My story for SiliconANGLE provides some perspective.