This week in SiliconANGLE

Here are this week’s stories in SiliconANGLE. My most interesting one is about one man’s effort to improve the power grid in Ukraine, thanks to a very clever collection of Cisco networking gear that provides backups when GPS signals are jammed by the Russians.

Two stories of intrepid Red Cross volunteers

The American Red Cross responds quickly when disaster strikes. News programs are filled with striking scenes of disaster relief — shelters housing hundreds of survivors, the distribution of thousands of meals and disaster assessment volunteers at work across the affected area. But these efforts would be impossible without the support of the Operations Department working behind the scenes.

For one story, I interviewed Randy Whitehead and Dan Stokes about their various roles as volunteers. Both have transported a Red Cross emergency response vehicle from one location to another. That effort doesn’t capture news headlines, but it is essential to the mission.

For a second story, I spoke to the people behind an effort to help lawyers better understand international humanitarian law, something very much in the news these days. Lori Arnold-Ellis, the Executive Director of the Greater Arkansas chapter, and Wes Manus, an attorney and Red Cross board member, have expanded and extended a course first assembled by the International Red Cross called Even War Has Rules and are teaching it in our region to lawyers and non-lawyers alike. I took one of the courses and learned a lot too!

That is one of the reasons why I keep coming back to volunteer at the Red Cross: there are so many places to help out and you meet the most interesting people. It is terrific to get to talk to them and hear their stories.

This week in SiliconANGLE

Here are four stories that I wrote this week.

This week in SiliconANGLE

Happy holidays! Here are my stories for the week:

  • The group behind LockBit ransomware is now exploiting the Citrix Bleed vulnerability, which made big news last month and still puts thousands of devices around the world at risk. US and Australian cybersecurity officials released a security advisory this week that provides the details, and my article follows up with what is going on with this very dangerous and prolific ransomware operation.
  • The group behind the Phobos ransomware is stepping up its game too.
  • I examine a series of recent cloud security reports, some based on surveys of IT managers and some drawn from actual network telemetry of customers and public sources, that together paint a not very rosy picture of the situation. Among the secondary issues: security alerts take too long to resolve, and risky behaviors fester without any real accountability to prevent or change them.

The latest ransomware ploy

Say your company has just been attacked by a ransomware gang, and they are demanding payment or they will commit various further criminal acts. So whom do you call first?

  1. The corporate security manager, to lock down your network and begin figuring out how the attackers got in, what damage they have caused, and what your company needs to do to get back to normal operations.
  2. The chief legal officer, to bring in law enforcement.
  3. Your insurance agent, to find out the specifics of your cybersecurity policy and to begin the claims process.
  4. The chief compliance officer, to begin notifying the various regulatory authorities that a breach has occurred.

Ideally, you should make all of these calls in quick succession. But a ransomware attack on a financial services firm earlier this month has added a new wrinkle to what is now called multipoint extortion. The term refers to ransomware gangs using more than just encrypting your data as a way to motivate a company to pay up. Now they file a complaint with the SEC.

Say what? You mean the folks who caused the breach are now letting the feds know? How is this possible? Read this story by Ionut Ilascu in Bleeping Computer for the deets. He has the victim on the record that it was breached, and information from the ransomware group seems to match up with a complaint filed with the SEC at about the same time. Just how annoyed was the gang that it decided on this course of action? The victim says it has contained the attack. The one trouble? Apparently the rule that makes breach disclosure mandatory doesn’t come into effect until next month. Someone needs to provide legal assistance to the bad guys and at least let them know their rights. (JK)

But seriously, if you have a corporate culture that prevents breach disclosure to your customers — at a minimum — now is the time to fix that and become more transparent, before you lose your customers along with the data that the ransomware folks supposedly grabbed.

This week on SiliconANGLE, I covered major security announcements adding AI features to the product lines of Microsoft, Palo Alto Networks, and Wiz. All are claiming — incorrectly — to be the first to do so.

This week at SiliconANGLE

I had an unusually productive week here at SA. This is the rundown.

First and foremost is my analysis of Kubernetes and container security, which describes the landscape, the challenges, and the opportunities for security vendors to fill the numerous gaps. There is a lot going on in this particular corner of the infosec universe, and I think you will find this piece interesting and helpful.

There were some shorter pieces that I also wrote:

SiliconANGLE: Biden’s AI executive order is promising, but it may be tough for the US to govern AI effectively

President Biden signed a sweeping executive order yesterday covering numerous generative AI issues, and it’s comprehensive and thoughtful, as well as lengthy.

The EO contains eight goals along with specifics of how to implement them, which on the surface sounds good. However, it may turn out to be more inspirational than effective, and it faces a series of intrinsic challenges that could prove insurmountable. Here are six of my top concerns in a post that I wrote for SiliconANGLE today.

All in all, the EO is still a good initial step toward understanding AI’s complexities and how the feds will find a niche that balances all these various — and sometimes seemingly contradictory — issues. If it can evolve as quickly as generative AI has done in the past year, it may succeed. If not, it will be a wasted opportunity to provide leadership and move the industry forward.

This week in SiliconANGLE

In addition to my AI data leak story (which you should read), here are several other posts that I wrote this week that might interest you:

— A new kind of hackathon was held last month, which prompted me to talk to several nerds who are working to improve the machinery that runs our elections. Disinformation is sadly an ever-present specter. The hackathon brought together, for the first time, a group of security researchers and the vendor reps who make the equipment, all in pursuit of a common goal: squashing bugs before the machines are deployed around the country.

— Managing your software secrets, such as API tokens, encryption keys and the like, has never been a pleasant task. A new tool from GitGuardian works much the way Have I Been Pwned does for leaked emails, so you can lock these secrets down before you are compromised (see the sketch after this list for the general idea).

— The FBI has taken down 17 websites that were used to prop up the identities of thousands of North Korean workers who posed as IT job candidates. This crew then funneled their paychecks back to the North Korean government and spied on their employers as an added bonus. Thousands of “new hires” were involved in this scheme, dating back years.
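For the curious, here is a minimal sketch of the k-anonymity lookup pattern that Have I Been Pwned popularized with its Pwned Passwords range API and that secrets-checking services broadly echo: hash the value locally, send only a short prefix of the hash, and do the final comparison on your own machine so the secret itself never travels anywhere. The endpoint below is HIBP’s public password API, used only to illustrate the idea; GitGuardian’s own tool and API work differently in the details.

```python
import hashlib
import urllib.request

# Sketch of a k-anonymity lookup: hash the secret locally, send only the
# first five hex characters of the hash, and compare the returned suffixes
# on your own machine, so the secret itself never leaves it. The endpoint
# is HIBP's public Pwned Passwords range API, used here to show the pattern.

def range_check(secret: str) -> int:
    """Return how many times this value appears in the HIBP password corpus."""
    digest = hashlib.sha1(secret.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(range_check("password123"))  # non-zero: this one has leaked many times
```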

SiliconANGLE: How companies are scrambling to keep control of their private data from AI models

This week I got to write a very long piece on the state of data leak protection tools specifically designed for enterprise AI usage.

Ever since artificial intelligence and large language models became popular earlier this year, organizations have struggled to keep the data they feed into these models from being accidentally or deliberately exposed. They aren’t always succeeding.

Two notable cases splashed into the news this year that illustrate each of those types of exposure: a huge cache of 38 terabytes’ worth of private data was accidentally made public by Microsoft via an open GitHub repository, and several Samsung engineers purposely put proprietary code into their ChatGPT queries.

I covered the waterfront of vendors that have added this feature across their security products, along with a few startups focusing on this area.

What differentiates the DLP of the pre-AI era from today’s is a fundamental shift in how this protection works. The DLP of yore involved checking network packets for patterns that matched high-risk elements, such as Social Security numbers, as they were about to leave a secure part of your data infrastructure. But in the new world order of AI, you have to feed the beast up front, and if that diet includes all sorts of private information, you need to be more proactive in your DLP.
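To make the contrast concrete, here is a minimal sketch of what that up-front screening can look like: scan any text headed into a model prompt for high-risk patterns and mask them before the prompt leaves your infrastructure. The patterns, names and redaction format are my own illustrations, not any vendor’s implementation.

```python
import re

# Minimal sketch of prompt-side screening: scan text bound for an LLM for
# high-risk patterns (a US Social Security number and a generic API-key
# shape here) and mask them before the prompt ever leaves your network.
# Patterns and names are illustrative only, not any vendor's product.

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Return the text with matches masked, plus a list of what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

prompt = "Summarize this record: Jane Doe, SSN 123-45-6789, token sk-abc123def456ghi789jkl012"
clean, hits = redact(prompt)
print(clean)  # masked copy, safe to hand to the model
print(hits)   # ['ssn', 'api_key']
```

Commercial tools go well beyond simple regular expressions, but the basic move is the same: inspect the model’s diet before it is served, rather than watching packets on the way out.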

I mentioned this to one of my readers, who had this to say about how our infrastructures have evolved over the years since we both began working in IT:

“In the late 90’s we had mostly dedicated business networks and systems, a fairly simple infrastructure setup. Then we went through web hosting and the needs to build DMZ networks. Then came shared web hosting facilities and shared cloud service offerings. Over the years cloud services have built massive API service offerings. Each step introduced an order of magnitude of complexity. Now with AI we’re dealing with massive amounts of data.”

If you are interested in how security vendors are approaching this issue, I would love to hear your comments after reading my post in SA. You can post them here.

This week in SiliconANGLE

One of the stories that I wrote this week for SiliconANGLE chronicles the start of the Israel-Hamas war. As many of you know, my daughter has been living there for several years; she met her husband there and now has two small boys. The horrors of this catastrophe are too much for me to describe. My story is about the cyber dimension of the war, and what we know so far in terms of hacking attempts on various institutions. For those of you interested, I have begun writing my thoughts about what is happening to my family there and sending them out via email; let me know if you would like to see these remarks.

Today’s story is about the hopeful demise of Microsoft’s NTLM protocol. Well, sort of. Microsoft has been trying, not too hard, to rid itself of this protocol for decades. Many IT managers probably weren’t born when it was invented back in the 1980s, and fewer still remember that it is still running on their networks.

I have written several times about the hackers behind the Magecart malware, which is used to compromise e-commerce servers such as those running WooCommerce and Magento. This week’s story is about how the latest versions conceal their code inside a web 404 status page. Talk about hiding in plain sight. Most of us, probably all of us, haven’t given a 404 page a second thought, but maybe now you will.
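If you want a quick gut check on your own storefront, here is a rough heuristic of my own devising (not something from the research): fetch a page you know will return a 404 and flag long base64-looking blobs or calls to atob(), the kind of decoding scaffolding a skimmer hidden in an error page tends to rely on.

```python
import re
import urllib.error
import urllib.request

# Rough, illustrative heuristic (my own, not from the Magecart research):
# fetch a page that should return a 404 and flag content a hidden skimmer
# often needs, such as a very long base64-looking blob or an atob() call.

SUSPICIOUS = {
    "atob() call": re.compile(r"\batob\s*\(", re.IGNORECASE),
    "long base64 blob": re.compile(r"[A-Za-z0-9+/]{300,}={0,2}"),
}

def check_404_page(base_url: str) -> list[str]:
    """Return the names of any suspicious patterns found in the 404 body."""
    url = base_url.rstrip("/") + "/this-page-should-not-exist-12345"
    try:
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:  # the 404 response still has a body
        body = err.read().decode("utf-8", errors="replace")
    return [name for name, pattern in SUSPICIOUS.items() if pattern.search(body)]

if __name__ == "__main__":
    print(check_404_page("https://example.com"))  # [] means nothing obviously odd
```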

This week also saw a new uptick in DDoS threats observed by several of the major online operators. What is particularly troubling is that this botnet isn’t all that big, maybe 20,000 endpoints, yet it is amplifying and generating enormous traffic loads, in some cases more than a fifth of a normal day’s web traffic. I write about how these attacks happen, using a new technique called rapid reset that abuses the HTTP/2 protocol.

Finally, one more chilling story about a new type of spyware called Predator. It is the subject of another multinational journalistic endeavor with similarities to the Pegasus files from 2021. What makes this spyware so lethal is that you don’t have to click on anything to activate it, and it has been spotted all across the planet.

Thanks for reading my work, and stay safe out there.