Do you really know where your XP lurks?

I was visiting an industrial firm this week and had a chance to walk around their shop floor to see their equipment. It was a mix of high and low tech: machines costing many thousands of dollars sitting alongside some very primitive pieces of hardware. Unfortunately, those primitive things were PCs running Windows XP.

Now, I have a soft spot in my heart for XP. Just playing that startup sound sends chills up my spine (well, almost). I spent a lot of time running it for various tests that I got paid to do back in the day when IT pubs paid for that sort of thing. I had a stack of VMs running various configurations, along with a couple of real PCs with different versions of XP that I maintained for years. It was only with some reluctance that I eventually gave them up. Since then I have rarely run XP on anything, because it has been superseded by several newer (and supported) versions of Windows. It appears I am not alone in hanging on: according to this report, XP can still be found on about 3% of consumer desktops, and I am sure that number doesn’t include machines in industrial and embedded environments such as I witnessed this week. BTW, Microsoft ended support for XP five years ago, although earlier this year it did create a patch to fix the BlueKeep flaw for XP.

The XP PCs that I saw were used by the firm to control some of their pricey industrial machines. I have no idea about the network infrastructure at this shop, nor how much protection was put in place to continue using XP in their environment. But it almost doesn’t matter: if you have XP, you are basically hanging a sign outside your virtual door that says, “come on in and hack me.” It is just a matter of time before some bad actor finds and exploits these PCs; it is like leaving a jar of honey out. This post, written to help consumers use XP more safely, recommends that you “stop using IE or go offline.” That is harder to do than you might think.

Most likely, replacing this equipment with a more modern version of Windows isn’t all that simple. The machinery has to be tested, and probably has code that needs to be rewritten to work on the newer Windows. You will say that is the entire point, and you would be right. But the firm isn’t going to stop using XP, because then it would be out of business. So they are caught between a rock and a hard place, to be sure.

So here is a simple security test that you can try in your business. How many endpoints do you have that are still running XP? Take a census, using whatever automated tool you have. Now walk around and see if you can find a few others hidden inside industrial equipment, a print server, or some other likely location. Do you have the right network isolation and protections in place? Can these PCs do without an internet connection? Why did your automated scanners fail to identify these devices? Can you get rid of them completely, or does the vendor still insist on XP for its equipment? I think you will be surprised, and not in a good way, by what the answers are.
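If you want to jump-start that census, here is a minimal sketch in Python that shells out to nmap’s OS fingerprinting and flags anything that looks like XP. The subnet is a placeholder for your own address range, the scan needs root privileges, and it goes without saying that you should only probe networks you are authorized to scan.

```python
#!/usr/bin/env python3
"""Rough census of possible Windows XP endpoints on a subnet.

A sketch only: nmap's OS detection (-O) requires root privileges and is
a best guess, so treat hits as leads to walk down, not a definitive list.
"""
import subprocess

SUBNET = "192.168.1.0/24"  # placeholder: substitute your own range

def find_xp_hosts(subnet: str) -> list[str]:
    # -O turns on OS fingerprinting; "-oG -" sends greppable output to stdout
    scan = subprocess.run(
        ["nmap", "-O", "-oG", "-", subnet],
        capture_output=True, text=True, check=True,
    )
    # In greppable output, each live host is a "Host:" line; the OS guess,
    # when nmap has one, appears on that same line.
    return [
        line.split()[1]
        for line in scan.stdout.splitlines()
        if line.startswith("Host:") and "Windows XP" in line
    ]

if __name__ == "__main__":
    for ip in find_xp_hosts(SUBNET):
        print(f"Possible XP endpoint: {ip}")
```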

And for those of you who are running XP at home, do yourself a favor and take a trip this weekend to MicroCenter (or your local computer store) and buy yourself a new computer, then dispose of your old one (after first removing the hard drive). And if needed, conduct an appropriate memorial service to bid this OS a fond farewell.


RSA blog: Taking hybrid cloud security to the next level

RSA recently published this eBook on three tips to secure your cloud. I like the direction the authors took but want to take things a few steps further.  Before you can protect anything, you first need to know what infrastructure you actually have running in the cloud. This means doing a cloud census. Yes, you probably know about most of your AWS and Azure instances, but probably not all of them. There are various ways to do this – for example, Google has its Cloud Deployment Manager and Azure has an instance metadata service to track your running virtual machines. Or you can employ a third-party orchestration service to manage instances across different cloud platforms.
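To give you a flavor of what that census looks like in practice, here is a minimal sketch for the AWS piece of it, using the boto3 library. It assumes your credentials are already configured; Azure and Google have analogous inventory APIs.

```python
#!/usr/bin/env python3
"""Census of running EC2 instances across every AWS region.

A sketch only: assumes AWS credentials are already configured (via
environment variables or ~/.aws/credentials) with describe permissions.
"""
import boto3

def cloud_census() -> int:
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    total = 0
    for region in regions:
        client = boto3.client("ec2", region_name=region)
        for page in client.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    total += 1
                    print(region, inst["InstanceId"], inst["State"]["Name"])
    return total

if __name__ == "__main__":
    print(f"{cloud_census()} instances found; were they all on your list?")
```

Anything that shows up here but not in your inventory records is a conversation you need to have.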

Here are my suggestions for improving your cloud security posture.

CSOonline: Evaluating DNS providers: 4 key considerations

The Domain Name System (DNS) is showing signs of strain. Attacks leveraging DNS protocols used to be fairly predictable and limited to the occasional DDoS flood. Now attackers use more than a dozen different ways to leverage DNS, including cache poisoning, tunneling and domain hijacking. DNS pioneer Paul Vixie has bemoaned the state of DNS and says that these attacks are just the tip of the iceberg. This is why you need to get more serious about protecting your DNS infrastructure, and various vendors have products and services to help. You have four key options; here’s how to sort them out in this piece that I wrote for CSOonline.
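To pick one of those attack styles, DNS tunneling is straightforward to illustrate: the exfiltrated data rides in the leftmost labels of the query name, so suspect queries tend to be long and nearly random. Here is a toy sketch of that heuristic; the thresholds are made up for illustration, not tuned for production.

```python
#!/usr/bin/env python3
"""Toy heuristic for spotting possible DNS tunneling in query logs."""
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    # Bits per character; random base64-ish labels score well above
    # ordinary hostnames like "www" or "mail".
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_tunneling(qname: str) -> bool:
    first_label = qname.split(".")[0]
    # Illustrative thresholds: long names with a high-entropy first label.
    return len(qname) > 60 and shannon_entropy(first_label) > 3.5

queries = [
    "www.example.com",
    "dGhpcyBsb29rcyBsaWtlIGV4ZmlsdHJhdGVkIGRhdGE.tunnel.example.net",
]
for q in queries:
    print(q, "->", "suspect" if looks_like_tunneling(q) else "ok")
```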

Dark Reading: Understanding & Defending Against Polymorphic Attacks

I first wrote about polymorphic malware four years ago. I recall having a hard time getting an editor to approve publication of my piece because he claimed none of his readers would be interested in the concept. Yet in the time since then, polymorphism has gone from virtually unknown to standard practice among malware writers. Indeed, it has become so common that most descriptions of attacks don’t even call it out specifically. Webroot, in its annual threat assessment from earlier this year, reported that almost all of the malware it saw demonstrated polymorphic properties. You can think of it as the chameleon of malware.
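If you want to see why the old signature-matching model struggles with this, here is a benign sketch: the same payload, repacked with a fresh XOR key, produces a different file hash on every run even though its behavior never changes. (The payload and packer here are stand-ins for illustration, obviously, not real malware.)

```python
#!/usr/bin/env python3
"""Why hash-based signatures fail against polymorphic code (benign demo)."""
import hashlib
import os

PAYLOAD = b"stand-in for the unchanging malicious code body"

def repack(payload: bytes) -> bytes:
    # A trivial "packer": prepend a random one-byte key, XOR the body.
    # Real polymorphic engines are far fancier, but the effect is the same.
    key = os.urandom(1)[0]
    return bytes([key]) + bytes(b ^ key for b in payload)

for i in range(3):
    variant = repack(PAYLOAD)
    print(f"variant {i}: sha256 {hashlib.sha256(variant).hexdigest()}")
```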

In this post for Dark Reading, I describe how polymorphism has become popular with attackers and defenders alike, the different approaches that vendors have taken, and some suggestions on keeping it out of your infrastructure.

What becomes an online museum most?

Those of you of a certain age might remember a print ad campaign for the Blackglama fur company that ran for many years, beginning in the 1960s with this image of Lauren Bacall wearing one of their mink coats.


I am riffing on this theme after visiting the National Cryptologic Museum outside the NSA offices in suburban Maryland this week. I remembered the ads because of my overall experience with the museum, and the relationship between its physical plant and the online and other publications that the historical arm of the NSA has produced.

As long-time readers may recall, last summer I visited Bletchley Park in the UK. It was a great day spent at the complex and I learned a lot. Sadly, the NSA’s museum was a disappointment. And it made me realize that the qualities that strike you when you first walk into a great physical museum are part of what makes for a great online museum experience too. Unfortunately, the NSA museum has neither.

Many of the world’s greatest museums have played catch-up when it comes to their websites. This is about more than getting their catalogs digitized, then redoing them with higher-resolution or newer imaging technologies. It is more than organizing their collections for visitors, academics and other specialists who want to search them for their own research or just personal interest. It is also more than having something visually attractive that leverages the latest curatorial trends.

These great museums have also had to embrace technology in their actual buildings, something that I first wrote about for the NY Times when I visited the Abe Lincoln museum in Springfield, Ill. back in 2008. On that visit, I got to see first-hand a variety of techniques normally reserved for theatrical productions or rock concerts, such as spotlights, one-way mirrors and sophisticated sound systems, used to tell the story of Lincoln’s life and times. Building on that piece, I wrote for HPE’s blog about how the best code developers are learning to hone their craft and improve their user experience by studying these innovative museum designers: augmenting the visuals with other sensory experiences, understanding the consequences of context switching when it comes to telling your story, and so forth.

That is why the NSA museum stands out, but not in a good way. Its subject, cryptology and its origins and use in the modern era, is near and dear to my heart. Check. It is located near the NSA, an interesting place in its own right. Check. It has plenty of classic stories about key developments, going back to the Revolutionary War and how codes and encryption played a role in the birth of our country. Check. It has several Enigma units on display, showing the evolution of the machine, that you can actually touch. Check. But it is dull as dishwater, with exhibits that look like they were created back when the Apollo program was in its heyday. Big fail.

The best part of the museum wasn’t any of the exhibits but a tour that I happened upon, led by a docent. It turns out he was a former Russian linguist who worked for the NSA for many years. His stories were great, and he answered all my questions with interesting personal insights (and correctly, I might add). I only wish he had a better physical plant to show his visitors.

For example, one exhibit is about how the Soviets bugged our buildings in Moscow. It begins with this object that is on display in the museum: it looks like a nicely crafted wooden replica of the US government seal. It was given to then U.S. Ambassador Averell Harriman back in 1945 as a gift from Soviet children. It hung in the Ambassador’s residence for seven years, until a bug was found inside the carving. While what is shown is a replica, you can open a special hinge that was installed by the museum so you can see where the bug was located.

This story was a nice precursor to a major operation called Project GUNMAN, which came to a head in 1984. That was when we found out the Soviets had stepped up their bugging program and had put technology into 16 different IBM typewriters in our Moscow embassy offices to record the documents that were being prepared on them. Besides seeing the Great Seal replica (and opening and closing it) at the museum, I took home a pamphlet about Gunman that I read avidly on the flight home. When I tried to find an online copy of this document, I did find the text here. The pamphlet was nicely produced and I learned a lot from it. Now contrast that information with this link to another Gunman story, this one produced by two private Dutch crypto enthusiasts. It is a much better explanation; even with the pictures included in the original NSA pamphlet, this latter piece is 1000% better and more engaging.

So if you are interested in the history of crypto, my suggestion is to forgo the actual visit. The NSA is working on building a new museum, but that could take years. In the meantime, read some of the supporting materials on its website or, better yet, check out other entries at the online Crypto Museum. And if you are ever going to design a new museum, think about how the online and physical presences have to work together to build the best visitor experience.

HPE Enterprise.nxt: Ways to expose your business to ransomware


No computing professional wants to encounter a ransomware attack. But these six poor IT decisions can make that scenario more likely to occur. Ransomware attacks are not the result of an isolated security incident but the consequence of a series of IT missteps. Moreover, they often expose poor decision-making that indicates deeper management issues that must be fixed. In this article for HPE’s Enterprise.nxt website, I discuss how these missteps can be corrected before you become the subject of the next attack.

If we do our job, nothing happens

There is a line in a recent keynote speech by Mikko Hypponen, the CRO of F-Secure, that goes something like this: “If we do our job in cyber security, then nothing happens.” It is so true, and it made me think of the times when various corporate executives challenge their investments in cyber security, wanting to see something tangible. Mikko makes this point by asking the execs to look around the conference room where these conversations are taking place: is the room cleaned to their satisfaction? If so, perhaps they should fire their cleaning staff, because clearly they are no longer needed.

Now, the difference between your security engineering staff and your janitors is obvious. You can’t see all the virtual dirt that is building up across your network, the cruft of old software that needs updating and polishing, and the garbage that your users download onto their PCs that will leave them susceptible to attack. And that is part of the problem with cyber security: most things are invisible to mere mortals, and even some specialists can’t always agree on the best cyber hygiene techniques. Most of us have an innate sense that mopping the floor before dusting the shelves above it is the wrong way to go about cleaning a room. That is because we all understand (at least on a basic level) how gravity operates. But when it comes to cyber, should we be changing our passwords regularly (some say yes, some say nay)? Or using complex strings of a certain length (some say 10 characters is fine, others say longer ones are needed)?
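At least the length question has some arithmetic behind it. A quick back-of-the-envelope sketch (the guessing rate below is an assumption, roughly an offline GPU cracking rig, not a benchmark):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope password keyspace arithmetic.

Entropy is length * log2(alphabet size), so each additional character
buys you more than a slightly bigger alphabet does.
"""
import math

GUESSES_PER_SECOND = 1e10  # assumption: offline GPU-class cracking rig

def entropy_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

for alphabet, length, label in [
    (10, 10, "10 digits"),
    (62, 10, "10 alphanumerics"),
    (62, 16, "16 alphanumerics"),
]:
    bits = entropy_bits(alphabet, length)
    years = (2 ** bits) / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"{label}: {bits:.0f} bits, ~{years:.1e} years to exhaust")
```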

Mikko ends his talk by saying that we must assume we are all targets of someone, whether a hacker who is still in high school or a government spy who is eager to get inside our company’s network. He says, “The times of building walls are over, because eventually someone will get in our enterprise. Breach detection is key, and we all have to get better at it.”

I agree with him completely. We must get better at seeing the virtual dirt on our networks. Building a better or bigger wall won’t stop everyone and will just foster a false sense of cyber security. And just because nothing happens, that doesn’t mean the cyber security folks aren’t hard at work. They are the cleaners we never see, unless one day they leave someone’s mess behind.


RSA blog: Risk analysis vs. assessment: understanding the key to digital transformation

When it comes to looking at risks, many corporate IT departments tend to get their language confused. This is especially true in understanding the difference between risk analysis, the raw data collection about your risks, and risk assessment, the conclusions drawn and the resources allocated to do something about those risks. Let’s talk about why this confusion exists and how we can avoid it as we move along our various paths of digital transformation. Part of the confusion has more to do with the words we choose than with any actual activity. When an IT person says some practice is risky, oftentimes what our users hear us say is, “No, you can’t do that.” That gets to the heart of the historical IT/user conflict.

In my latest blog post for RSA, I discuss how this is more than choosing the proper words, but goes towards a deeper understanding of how we evaluate digital risks.

HPE Enterprise.nxt: Six security megatrends from the Verizon DBIR

Verizon’s 2019 Data Breach Investigations Report (DBIR) is probably this year’s second-most anticipated report after the one from Robert Mueller. Now in its 12th edition, it contains details on more than 2,000 confirmed data breaches in 2018, drawn from more than 70 different reporting sources, and analyzes more than 40,000 separate security incidents.

What sets the DBIR apart is that it combines breach data from multiple sources using the common industry collection called VERIS – a third-party repository where threat data is uploaded and anonymized. This gives it a solid, authoritative voice, and is one reason why it’s so frequently quoted.

I describe six megatrends from the report, including:

  1. The C-suite has become the weakest link in enterprise security.
  2. The rise of the nation state actors.
  3. Careless cloud users continue to thwart even the best-laid security plans.
  4. Whether insider or outsider threats are more important.
  5. The rate of ransomware attacks isn’t clear. 
  6. Hackers are still living inside our networks for a lot longer than we’d like.

I’ve broken these trends into two distinct groups: the first three are where there is general agreement between the DBIR and other sources, and the last three are where this agreement isn’t as apparent. Read the report to determine what applies to your specific situation. In the meantime, here is my analysis for HPE’s Enterprise.nxt blog.

RSA blog: Managing the security transition to the truly distributed enterprise

As your workforce spreads across the planet, you must now support a completely new collection of networks, apps and endpoints. We all know this increased attack surface is more difficult to manage. Part of the challenge is having to create new standards and policies to protect your enterprise and reduce risk as you make the transformation to become a more distributed company. In this blog post for RSA, I examine some of the things to look out for. My thesis is that you’ll want to match the risks with the approaches, so that you focus on the optimal security improvements to make the transition to a distributed staffing model.