This week in SiliconANGLE

Happy holidays! Here are my stories for the week:

  • The group behind LockBit ransomware is now exploiting the Citrix Bleed vulnerability, which made big news last month and still puts thousands of devices around the world at risk. US and Australian cybersecurity officials released a security advisory this week that provides the details, and my article follows up with what is going on with this very dangerous and prolific ransomware operation.
  • The group behind the Phobos ransomware is stepping up its game too.
  • I examine a series of recent cloud security reports, some surveys of IT managers and some taken from actual network telemetry of customers and public sources, to show a not-very-rosy picture of the situation. Among the secondary issues: security alerts take too much time to resolve, and risky behaviors fester without any real accountability to prevent or change them.

The latest ransomware ploy

Say your company has just been attacked by a ransomware gang, and they are demanding payment or else they will carry out various criminal acts. So whom do you call first?

  1. The corporate security manager, to lock down your network and begin figuring out how the attackers got in, what damage they have caused, and what your company needs to do to get back to normal operations.
  2. The chief legal officer, to engage law enforcement.
  3. Your insurance agent, to find out the specifics of your cybersecurity policy and to begin the claims process.
  4. The chief compliance officer, to begin the process of letting the various regulatory authorities know that a breach has occurred.

Ideally, you should make all of these calls in quick succession. But a finserv firm’s ransom attack earlier this month has brought a new wrinkle to what is now called the multipoint extortion game. The term refers to ransomware gangs using more than just the encryption of your data to motivate a company to pay up. Now they also file a complaint with the SEC.

Say what? You mean that the folks who caused the breach are now letting the feds know? How is this possible? Read this story by Ionut Ilascu in Bleeping Computer for the deets. They have the victim on the record confirming it was breached, and information from the ransomware group seems to match up with a complaint filed with the SEC at about the same time. So how annoyed was the ransomware gang that it decided on this course of action? The victim says it has contained the attack. The one trouble? Apparently the breach notification law that requires the mandatory disclosure doesn’t come into effect until next month. Someone needs to provide legal assistance to the bad guys and at least let them know their rights. (JK)

But seriously, if you have a corporate culture that prevents breach disclosure to your customers — at a minimum — now is the time to fix that and become more transparent, before you lose your customers along with the data that the ransomware folks supposedly grabbed.

This week on SiliconANGLE, I covered major security announcements adding AI features to the product lines of Microsoft, Palo Alto Networks, and Wiz. All are claiming — incorrectly — to be the first to do so.

This week at SiliconANGLE

I had an unusually productive week here at SA. This is the rundown.

First and foremost is my analysis of Kubernetes and container security, which describes the landscape, the challenges and the opportunities for security vendors to fill the numerous gaps. There is a lot going on in this particular corner of the infosec universe, and I think you will find this piece interesting and helpful.

I also wrote some shorter pieces:

My 30-year love affair with TCP/IP

Is it possible to fall in love with a protocol? I mean, really? I know I am a nerd, and I guess this is yet further evidence of my nerdom. But to properly tell this story, we have to climb into the Wayback Machine with Mr. Peabody and go back 50 years, to when Vint Cerf and Bob Kahn were working at Stanford and inventing these protocols. I was too young to appreciate the events at the time, but later my life would change drastically as I learned more about TCP/IP and how to get it working in my professional life.

You can read the original 1974 paper here as well as watch an interview with both men that was recorded earlier this year.

In the mid-1990s I met Vint, and so began a correspondence that has lasted to this day. I posted an interview with him in 2005 here that is still one of my favorite profiles. This was when he was about to start at Google and when I was running Tom’s Hardware. I asked him to recall the most significant moments of TCP/IP’s development:

  • 1/1/1983 – The cutover of the Arpanet to TCP/IP
  • 6/1986 – The beginning of NSFNET
  • 1994 – Netscape’s browser brings HTTP over TCP/IP to the masses, building on the TCP/IP support that shipped in the Berkeley BSD 4.2 Unix release
  • 2007 – The introduction of the iPhone

That is a pretty broad piece of computing history.

TCP/IP spent its first couple of decades growing up. Few people used it, and those who did were more akin to members of a secret society, the keepers of the flame called Unix. (Unix would evolve into Linux, as well as the MacOS, and then into containers.) But then something called the Internet caught hold in the early 1990s. I wrote a blog post not too many years ago about the early tools we had to suffer with during that era to get TCP/IP working on other platforms, such as DOS, Windows and NetWare. It was far from easy, and many businesses had all sorts of pain points getting TCP/IP working properly. BTW, that link also has a hilarious clip about “the internet” that has held up well.

NetWare is actually where my love for the protocols blossomed. Many of you might recall how powerful this early network operating system was, and how it could run multiple protocols with relative ease. Novell saw the importance of TCP/IP and invested heavily in bringing it to ordinary desktops, and by ordinary I mean the versions of Windows that we had to suffer with back then. Setting up a computer to connect to something else was made a lot easier by NetWare’s TCP/IP support.

But it wasn’t just NetWare: the web is what really turbocharged TCP/IP. That also took off during the 1990s, going from curiosity to standard practice seemingly overnight. The web really changed how we interacted with information. In my own case, I saw publications that were making millions of dollars selling printed magazines shrink to much reduced online forms, and editorial staffs drop dozens of people from their mastheads. Now it is rare for a publication to have more than a single full-time editor, which is great if you are a freelancer (which I am), but budgets continue to shrink too, which is not great.

But in spite of these cataclysmic moments, I still say that I love TCP/IP. I don’t blame the protocol for the transformation of my industry. Au contraire, it made my computing life so much easier. Its beauty was its extensibility, its universal connectedness that was useful in so many different situations. And it also enabled so many apps, both then and now. And every app tells another story, which is after all my bread and butter.

This week, I bought a lighting controller that supports TCP/IP, for example. And that brings up another point. Today, we don’t give TCP/IP much attention, because it has been woven into the fabric of our computing systems so well. It is pervasive: you would be hard pressed to name a computer that doesn’t support TCP/IP. And by computer, I mean our smart TVs and other home appliances, our cable modems, our networks, our cars.

Vint wrote me after he read this essay: “TCP/IP has been improved over the years by people like Van Jacobson and David Taht among others. Google introduced QUIC which provides TCP-like functionality with some additional features. But it has certainly been a workhorse for the world wide web and its applications.” Note what he is doing here: giving credit to the other innovators who have built interesting extensions on what he and Kahn came up with 50 years ago. A class act.

So much love to spread around. I count myself lucky to have been present for the last 30 years of TCP/IP’s tenure, and to have chronicled its growth and popularity.

The decline of Skype

About 20 years ago, Skype was the backbone of my telecoms. I used it to stay in touch with a worldwide collection of editors when I was running Tom’s Hardware and to make all of my international calls for pennies per minute. Some of you are old enough to remember when these calls cost dearly, if they could be made at all.

When you think about this broad stretch of time, and that you can now reach people on the other side of the world, with usually solid audio (and in some cases video) quality, it is pretty amazing. And it is nice to have lots of choices for your comms too.

If you want some perspective on how much this tech has changed since 2006, check out this piece that I wrote for the NY Times about business instant-messaging use. Remember Lotus Sametime? Jabber? AOL? Yahoo?

I wrote about this most recently in 2020 here, where I laid out the entire messaging interoperability problem, back when Teams was just muscling into the market.

This week I gave up my subscription and last remaining Skype credits of some $3. I haven’t used the thing in months, and it was time to say goodbye. Since being absorbed by the Redmond Borg, it has gotten less usable and useful. I almost always get stuck trying to figure out how to authenticate myself into live.com among my numerous accounts.

My choices for international communications are now plentiful. If I have to actually talk to someone, the most used is WhatsApp, which works reasonably well and is almost universal among the people I connect with. In second place is texting, either via SMS/iMessage or sometimes with Facebook Messenger. If I were younger I would probably put texting in first place. I use Microsoft Teams or Slack to communicate with my business colleagues, depending on which platform they are using. Sometimes I use Google Talk to make a few calls from my computer. My mother-in-law has an Echo Show, which makes for yet another channel to use.

Juggling all this tech can be tiresome to be sure. But it meant that Skype was gradually marginalized as time went on.

SiliconANGLE: Biden’s AI executive order is promising, but it may be tough for the US to govern AI effectively

President Biden signed a sweeping executive order yesterday covering numerous generative AI issues, and it’s comprehensive and thoughtful, as well as lengthy.

The EO contains eight goals along with specifics of how to implement them, which on the surface sounds good. However, it may turn out to be more inspirational than effective, and it has a series of intrinsic challenges that could prove insurmountable. Here are six of my top concerns in a post that I wrote for SiliconANGLE today.

All in all, the EO is still a good initial step toward understanding AI’s complexities and how the feds will find a niche that balances all these various — and sometimes seemingly contradictory — issues. If it can evolve as quickly as generative AI has done in the past year, it may succeed. If not, it will be a wasted opportunity to provide leadership and move the industry forward.

Ten Biggest PR Blunders of the year

I wrote this some time ago. Can you guess when?

So it is that time of year, when we think back on all of our past successes and failures. Here are the most notable PR blunders that we’ve seen this year. We have removed the actual names of the offending parties, just to make it a more sporting game.

  1. Berating the reporter for non-responsive emails. This includes: cc’ing my boss about my behavior, intimating that I was in bed with one of the client’s competitors, and USING ALL CAPS. Totally not cool. Focus on building a relationship with me and my colleagues.
  2. Calling after emailing some news. See above about berating. Once is enough for contact. Twice is annoying. Thrice means you go to the back of the bus. I do look at my emails. Assume no response from me means I am not interested. Realize that every day I get dozens upon dozens of requests to “have the CEO brief you on this amazing trend.” Also, if you email multiple people here at my website, don’t expect any of them to answer. More is not merrier.
  3. Stating this is the “first ever thing” when it most certainly isn’t. Don’t you think I would check? Shouldn’t you challenge your client to provide more details and specifics? You’ll find out they really aren’t the first. And don’t argue with me. If I don’t think it is the first, accept this and move on. We always have the last word.
  4. Not answering a direct question for more information with specifics. I am on deadline. Seconds count. Get your ducks in a row before calling me. You would be amazed how many emails and press releases omit basic information, such as pricing. “We don’t publish pricing because we are a Web service and every deal is custom.” Still not an excuse.
  5. Starting a conference call with more than three people on it: you (the PR rep), me and the client are all that is needed. Actually, we don’t really need you on the call. But more than that isn’t going to end well. It is hard to ask a question when so many people are on a conference call. And speaking of which, don’t just read me a script either. Interact and ask me real questions about what I am interested in. Don’t know what I am interested in? Try reading my clips, and more than just the one that the client is berating you for not appearing in.
  6. Insisting on going slide by slide through the entire 57-slide PPT deck. Three slides should be enough. Or none at all. See above. The less scripted your presentation, the more I will actually listen. Calls shouldn’t last more than 20 minutes.
  7. Don’t schedule a Webex to show me slides without any demo, particularly after I said that I wanted to see a demo. Listen to me, please. Better yet, give me an eval account to your client’s new whizbang Web service so I can try it out on my own and not tie up everyone’s time. If it really requires hand-holding, then perhaps it isn’t ready for the press to look at either.
  8. Don’t send me an analyst’s report without a URL where I can actually download it and read it. I don’t want your summary; if I am interested, I want to read the report. A link to a lead-gen capture page doesn’t count. Same goes for the press release: you would be amazed how many releases aren’t posted on the client’s website.
  9. If you want me to do an embargo, play fair with all of my competitors. And be specific about times and dates. Yes, I can get confused sometimes. Put the expiration date information on each piece of correspondence, because sometimes I forget. Better yet, forget embargoes entirely. And understand that embargoes also complicate my ability to reference something that won’t appear on your client’s website until the due date. We like to actually check our outbound links before we post the article containing them.
  10. Remember that we have a comments/discussion forum, which you are free and welcome to use for the following things:
    – Your client was not mentioned in my article, but does offer these amazing things and you want me to write a separate piece on them. Use the comments.
    – You would like me to make additional points that I somehow forgot to mention in my article. Use the comments.
    – The CEO has a different take on things than I do. S/he is entitled to that opinion, and welcome to post it as a comment.
    – You have this great case study about a customer using your client’s tech. Post a link to the website where you have more info.

The decline of online shopping

I have been writing about online shopping for more than 25 years, starting in the mid-1990s when I became so enmeshed in it that I taught classes for IT folks to implement it in practice in their companies. I reviewed that history in an earlier post here.

Back in those early days, I had fun assignments like trying to figure out how long it took staff from an online storefront to respond to me-as-a-customer email queries, or documenting how hard it was to actually buy stuff online. Yes, someone was actually paying me to write articles about online shopping, which would then be published in a printed magazine weeks later. It seems so quaint now.

I also ran a two-day seminar at various international trade shows about understanding internet commerce, payment systems, and installing and operating your own web storefront. One group of attendees was from the US Postal Service, which was trying to put up a storefront selling stamps. Seems simple, right? But what happens when your inventory can’t reflect the actual real-time situation? You end up with a lot of angry stamp collectors. As I said, fun times.

Today I want to vent about a more basic issue: why has the online storefront become such a shopping hellscape? Let me explain.

Last week I wasted about an hour of my life trying to purchase two toiletries: shaving cream and deodorant. For many things, I am not brand-sensitive, but for these two items I am. Being a Prime Family, I went first to Amazon, where I was presented with dozens of online merchants that would try to sell me the exact item that I wanted. Except they weren’t actually Amazon itself, but third parties. Many of them had “only 2 items left” warning labels — the latest come-on employed by online scammers everywhere. Create that sense of urgency, fueled by Covid supply chain issues, and get the customer to commit NOW! I moved on.

Next was Target.com, which first made sure I remembered my account password before I could attempt to buy anything. Then I had to decide among three methods of getting my stuff: by mail, pickup at the nearest store (what was my zip code, since I had neglected — deliberately — to put that in my account profile), or same-day delivery. Each had a raft of options depending on how quickly I needed my items. And I hadn’t yet gotten to where I could actually search for my two precious toiletries. Forget Target.

The Walgreens and CVS websites weren’t much better. I almost bought something here — I can’t recall which drug store — that would have one item mailed and one that I could pick up. Only the pickup wasn’t at the nearest store, but at one a few miles away. What was I doing? That was when I came to my senses.

I closed my computer in disgust and got on with my day.

Yesterday, I resumed my quest. There is a local drugstore a few blocks from my house, and I happened to walk by and thought, let’s just go in and see what they have in stock. Now, this is a small family operation not affiliated with the big chains. But that is a good thing, for three reasons. First, if you call them, you can actually talk to a live pharmacist within a moment, without having to wait on hold for 20 minutes or more. Second, they don’t lock away their stuff because of theft problems, like the big chains do. They get around that with an interesting twist: their shelves look very bare, because only one unit of a given product is put out. That is their solution to shoplifters, and the effect is initially quite eerie. But they didn’t have my brands, so I went away empty-handed. (The third reason is that I have gone there to get my shots, because again they are easy to deal with.)

I came home frustrated. Then I thought I would try the small grocery store literally across the street from my home. Finally — and ironically — success. After running around in circles, the solution was simple, and the prices were just a little higher, a fair trade for the convenience of not having to navigate a series of lengthy menus and other effluvia.

Mission accomplished.

So what has happened to online storefronts over the past 25 or so years? In the quest to let everyone buy just about anything, they have become unusable. Menus are inscrutable, choices confound, and delivery mechanisms are so plentiful that they can paralyze consumers. So as I was looking through my slide deck from those c.1997 seminars that I taught around the world, I happened upon this summary of the implications of ecommerce:

  • Consumer control of privacy is essential  — most folks simply want the choice of opting out
  • The granularity of control must be fine, e.g.,
    • over number and frequency;
    • over categories of interests; and/or
    • over (indirect) dissemination to third-parties

In some respects, we have come a long way since those early days. In others, we are still learning these basic concepts. And next time I need something, I will head across the street to my local shop first.

This week in SiliconANGLE

In addition to my AI data leak story (which you should read), here are several other posts that I wrote this week that might interest you:

— A new kind of hackathon was held last month, which prompted me to talk to several nerds who are working to improve the machinery that runs our elections. Fighting disinformation is sadly an ever-present specter. The hackathon brought together, for the first time, a group of security researchers and reps from the vendors that make the equipment, all in pursuit of a common goal: squashing bugs before the machines are deployed around the country.

— Managing your software secrets, such as API tokens, encryption keys and the like, has never been a pleasant task. A new tool from GitGuardian works roughly the same way HaveIBeenPwned does for leaked emails, so you can lock these secrets down before you are compromised; a rough sketch of that style of lookup follows this list.

— The FBI has taken down 17 websites that were used to prop up the identities of thousands of North Korean workers who posed as potential IT job candidates. This crew then funneled their paychecks back to the government and spied on their employers as an added bonus. Thousands of “new hires” were involved in this scheme, dating back years.
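Since the GitGuardian item above leans on the HaveIBeenPwned comparison, here is a minimal sketch of the k-anonymity lookup pattern that HaveIBeenPwned popularized: hash the secret locally, send only a short hash prefix, and do the final match on your own machine. The endpoint shown is HIBP’s public Pwned Passwords API, used purely to illustrate the pattern; GitGuardian’s service has its own API, and this is not it.

```python
# Sketch of a k-anonymity leak check, modeled on HIBP's Pwned Passwords API.
import hashlib
import urllib.request

def secret_was_leaked(secret: str) -> bool:
    """Return True if the secret's SHA-1 hash appears in the leaked corpus."""
    sha1 = hashlib.sha1(secret.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the five-character hash prefix ever leaves this machine.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH-SUFFIX:COUNT"; a matching suffix means the
    # full hash, and therefore the secret, is in the corpus.
    return any(line.split(":")[0].strip() == suffix for line in body.splitlines())

if __name__ == "__main__":
    print(secret_was_leaked("hunter2"))  # a famously compromised example
```

The appeal of this design is that the service never sees the full secret or even its full hash, only a prefix shared by hundreds of other entries.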

SiliconANGLE: How companies are scrambling to keep control of their private data from AI models

This week I got to write a very long piece on the state of current data leak protection specifically designed to protect enterprise AI usage.

Ever since artificial intelligence and large language models became popular earlier this year, organizations have struggled to keep the data they use as model inputs from being accidentally or deliberately exposed. They aren’t always succeeding.

Two notable cases have splashed into the news this year that illustrate each of those types of exposure: A huge cache of 38 terabytes’ worth of customer data was accidentally made public by Microsoft via an open GitHub repository, and several Samsung engineers purposely put proprietary code into their ChatGPT queries.

I covered the waterfront of vendors that have added this feature across their security products (or, in some cases, startups focusing on this area).

What differentiates the DLP of the pre-AI era from today’s is a fundamental shift in how this protection works. The DLP of yore involved checking network packets for patterns that matched high-risk elements, such as Social Security numbers, as they were about to leave a secure part of your data infrastructure. But in the new world order of AI, you have to feed the beast up front, and if that diet includes all sorts of private information, you need to be more proactive in your DLP.
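To make that contrast concrete, here is a toy sketch of the old-school pattern matching applied at the new choke point: scanning text for high-risk patterns before it is ever fed to a model. The patterns and the redact_before_prompt() helper are simplified stand-ins of my own, not any vendor’s actual product.

```python
# Toy pre-prompt DLP filter: redact risky-looking strings before they reach a model.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # crude card-number shape
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"), # generic token shape
}

def redact_before_prompt(text: str) -> str:
    """Replace high-risk matches with placeholders before the text leaves your perimeter."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact_before_prompt(
    "Customer 123-45-6789 reported an issue with key sk-abc123def456ghi789jkl0"
))
```

Real products go well beyond simple regexes, but the placement is the point: the check happens proactively, before the prompt is sent, rather than on packets already headed out the door.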

I mentioned this to one of my readers, who had this to say about how our infrastructures have evolved over the years since we both began working in IT:

“In the late 90’s we had mostly dedicated business networks and systems, a fairly simple infrastructure setup. Then we went through web hosting and the needs to build DMZ networks. Then came shared web hosting facilities and shared cloud service offerings. Over the years cloud services have built massive API service offerings. Each step introduced an order of magnitude of complexity. Now with AI we’re dealing with massive amounts of data.”

If you are interested in how security vendors are approaching this issue, I would love to hear your comments after reading my post in SA. You can post them here.