The legalities of hacking back (presentation)

There is a growing trend in information security toward hacking back, or using various direct measures to attack your attackers. There are several issues:

  • attributing an attack to the right source,
  • understanding the attacker’s intent, and
  • developing the right red team skills.

In this talk, given at Secure World St. Louis this month, I cover the ways that an enterprise can defend itself and how to go about this process.

How Tachyon brings a fresh perspective on keeping your endpoints healthy

If you run IT security for your organization, you are probably feeling two things these days. First, you might be familiar with the term “box fatigue,” meaning that you have become tired of purchasing separate products for detecting intrusions, running firewalls, and screening endpoints for malware infections. Second, you are probably more paranoid, as the number of data breaches continues unabated despite all these disparate tools meant to keep attackers at bay.

I spent some time last month with the folks behind the Tachyon endpoint management product. The vendor is 1E, which isn’t a name that you often see in the press. They are based in London with a NYC office, and have several large American corporations as customers. While they paid me to consult with them, I came away from my contact with their product genuinely impressed with their approach, which I will try to describe here.

A lot of infosec products push the metaphor of searching for a needle (such as malware) in a haystack (your network). That notion is somewhat outdated, especially as malware authors are getting better at hiding their infections in plain sight, reusing common code that is part of the Windows OS or chaining what seem like innocuous routines into a very destructive package. These blended threats, as they are known, are very hard to detect and often live inside your network for days or even months, eluding most security scanners. This is one of the reasons why the number of breaches continues to make news.

Tachyon doesn't try to find that needle; instead, it starts from the idea that you first need to look for something that doesn't appear to be a piece of hay. That is an important distinction. In the memorable words of Donald Rumsfeld, there are unknown unknowns that you can't necessarily anticipate. He was talking about the fog of war, which is a good analogy for tracking down malware.

The idea behind Tachyon is to help you discover all sorts of ad hoc and serendipitous things about your collection of computers and networks that you may not even have known required fixing. Often, issues that start out as security problems end up being general IT operations matters by the time they need to be fixed. Tachyon can help bridge that gap.

Today's enterprise has an increasingly complex infrastructure. As companies move to more virtual and cloud-based servers and more agile development, there are more moving parts that can be very brittle. Some cloud-based businesses have hundreds of thousands of servers running: if just a small fraction of a percent of that gear has a bug, it becomes almost impossible to ferret out and fix. This post on LinkedIn's engineering blog is a good case in point. “Any service that is live 24/7 is in a state of change 24/7, and with change comes failures, escalations, and maybe even sleepless nights spent firefighting.” And that is just dealing with production systems, rather than any deliberate infections.

Unlike more narrowly focused endpoint security products, Tachyon operates in a wider arena, responding to a lot of different events across the entire spectrum of IT operations, not just those related to your security posture. Does it matter whether you have been infected with malware or have a problem because of an honest mistake someone made in setting up their machine? Not really: your environment isn't up to par in either situation.

So how does Tachyon do this? It is actually quite simple to explain; let me show you their home screen:

Does that query box at the top remind you of something? Think about Tachyon as what Google was trying to do back in the late 1990s. Back then, no one knew about search engines. But we quickly figured out that its simple query interface was more than an affectation when we got some real utility out of those queries. That is where we are today with Tachyon: think of it as the search tool for finding out the health of your network. You can ask it a question, and it will tell you what is happening.

Many security products require specialized operators who need training to navigate their numerous menus and interpret their results. Tachyon instead uses a question-and-answer rubric that almost anyone, even a line manager, can employ to figure out what is ailing your network.

But having a plain Jane home page is just one element of the product. The second important difference with Tachyon is how it automates finding and updating that peculiar piece of hay in the stack. I won’t get into the details here, but Tachyon isn’t the only tool in the box that has automation. While there are many products that claim to be able to automate routine IT functions, they still require a lot of manual intervention. Tachyon takes its automation seriously, and puts in place the appropriate infrastructure so it can automate the non-routine as well, to make it easier for IT staffs to do more with fewer resources. Given the reduced headcounts in IT, this couldn’t come at a better time.

Now I realize that having 1E as a client could bias my thinking. But I think they are on to something worthwhile here. If you are looking for a way to respond to and resolve network and endpoint problems at scale, they deserve a closer look.

CSOonline: New ways to protect your AWS infrastructure

Properly testing your virtual infrastructure has been an issue almost since VMs and AWS first appeared. Lately, the tool sets have gotten better. Part of the problem is that to adequately test your AWS installation, you need to know a lot about how it is constructed. CPUs can come and go, and storage blocks are created and destroyed in the blink of an eye. And as the number of AWS S3 data leaks rises, there have to be better ways to protect things. Rhino Security and Amazon both offer tools to improve visibility into your AWS cloud environments, making it easier to find configuration errors and vulnerabilities. I write about the Pacu and CloudGoat tools, as well as various AWS services to test your VMs, in my article for CSOonline here.
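To give a flavor of the kind of configuration check these tools automate, here is a minimal sketch of my own (not taken from the article or from Pacu) that uses the boto3 library to flag S3 buckets whose ACLs grant access to all users. It assumes AWS credentials are already configured; a real audit would also check bucket policies and public access block settings.

```python
# Minimal sketch: flag S3 buckets whose ACLs grant access to everyone.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

PUBLIC_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    # Collect any permissions granted to the AllUsers group
    public_perms = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") == PUBLIC_URI
    ]
    if public_perms:
        print(f"{name}: publicly accessible ({', '.join(public_perms)})")
```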

My visit to Bletchley Park and Colossus

I have been a fan of the WWII effort to build special-purpose machines to break German codes for many years, and last wrote about Colossus here, the two-room-sized digital computer that was the precursor to the modern PC era. I first found out about this remarkable machine and the effort behind it from a 2007 book that you can still purchase from Amazon.

But there is no substitute for actually visiting the hallowed ground where this all happened, which I finally did last weekend when I was in London on a consulting assignment. I was fortunate to have a colleague (and avid reader) who lived nearby and was willing to take me around; he hadn't been there in a while. I have included some photos that I took during my day at Bletchley Park, and it was great to finally see Colossus in all of its mechanical glory. As you can see from this photo, it looks more like an attic of used spare parts, but I can assure you it is quite a special place.

When most people think of decrypting codes, they think of a “Matrix”-style special effect where gibberish is turned into readable text (German in this instance), or of opening a file and hitting a button that automatically decrypts the message. This is far from what happened in the 1940s. Back then, it was a herculean effort that involved recording Morse code radio signals, transferring them to paper tape, using various cribs and cheat sheets to guess at the codes, and then processing the paper tapes through Colossus. What we also don't realize is that these two rooms full of gear were built without anyone actually seeing the German Lorenz coding machine that was used to encrypt the messages in the first place. (The Bombe, a much simpler device, was used to decrypt Enigma codes.) The Lorenz looks like a very strange machine, but obviously something designed to be carted around in the field to send and receive messages.

Sadly, the reconstructed computer is not doing very well. This is no surprise, given that it is made of thousands of vacuum tubes (what the Brits call valves, which evokes an entire steampunk ethos). Only a few segments of the decryption process could be demonstrated, and it seems a collection of volunteer minders is kept very busy keeping the thing in some semblance of working order. Here you can see an illustration of what it took to use the Bombe machines, which were more mechanical and didn't involve true digital computing.

If you decide to visit Colossus, you will need to go to two separate places that are only a short walk apart. The first is the Bletchley Park estate itself, where there are several outbuildings that contain curated exhibits about the wartime effort, including tributes to some of the thousands of men and women who worked there during the war. One of the most famous was Alan Turing, and you can see a mock-up of his office here. After the movie The Imitation Game came out, his popularity rose, and the park was quite crowded, although it was a holiday weekend. There are copies of some of his mathematical papers (shown below), a brick wall that honors many of the park's contributors, and the formal apology letter from the British government that cleared his name. Turing's 1950 paper was one of the seminal works in the history of digital computing and was also shown in one of the exhibits. What I found fascinating was how much of this stuff was being soaked up by the ordinary folks wandering around the park. I mean, I am a geek, but there were school kids absorbed in all of this stuff.

One of the lesser-known individuals honored at the park was a double agent known as Garbo, so called because he was such an impressive actor. I read this book not too long ago about his exploits, and he played key roles in the war effort that had nothing to do with computers. The reports he filed with the Germans invented entire networks of imaginary spies and were so convincing that the Germans moved their troops before D-Day, saving countless Allied lives.

But the entry fee to the park doesn't get you to the reconstructed Colossus; for that you have to walk down the road and pay another fee to gain access to the British computer history museum. It has numerous other exhibits of dusty old gear, including the first magnetic disks, which held a whopping 2 MB of data (the label said 250 MB, but I think it was mislabeled) and were the size of a small appliance. It was interesting, although not as much fun nor as comprehensive as the museum in San Jose, California. I hope you get a chance to visit both of the Bletchley places and see for yourself how computing history was made.

Internet Protocol Journal: Understanding fileless malware

I have written occasionally for this excellent 20-year-old publication. My article in this issue is about fileless malware.

Malware authors have gotten more clever and sneaky over time to make their code more difficult to detect and prevent. One of the more worrying recent developments goes under the name “fileless.” There is reason to worry because these kinds of attacks can do more damage, and the malware can persist on your computers and networks for weeks or months until it is finally neutralized. Let's talk about what this malware is and how to understand it better so we can try to stop it from entering our networks to begin with.

The goal of most malware is to leave something behind on one of your endpoints: one or more files that contain an executable program that can damage your computer, corral your PC as part of a botnet, or make copies of sensitive data and move them to an external repository. Over the years, various detection products have gotten better at finding these residues, as they are called, and blocking them.
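To illustrate why defenders have to look at behavior rather than files, here is a small sketch of my own (not from the article) that scans running processes for the sort of encoded or hidden PowerShell command lines that fileless attacks frequently abuse. The indicator strings are illustrative only and nowhere near a complete detection rule.

```python
# Illustrative sketch: look for command-line patterns often abused by
# fileless attacks (encoded or hidden PowerShell). Requires psutil.
import psutil

# Hypothetical, simplified indicators; real detections use many more signals.
SUSPICIOUS = ["-encodedcommand", "-enc ", "-windowstyle hidden", "downloadstring("]

for proc in psutil.process_iter(["pid", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or []).lower()
    if "powershell" in cmdline and any(s in cmdline for s in SUSPICIOUS):
        print(f"Suspicious process {proc.info['pid']}: {cmdline}")
```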

You can read my article here, along with other fine pieces on the state of the Internet in this month’s edition.

In-house blogging at RSA Archer Summit in Nashville

Last week I was in Nashville, covering the RSA Archer Summit conference. Here are my posts about the show:

Watch that browser add-on

This is a story about how hard it is for normal folks to keep their computers secure. It is a depressing but instructive one. Most of us take for granted that when we bring up our web browser and go to a particular site, we are safe and what we see is malware-free. However, that isn't always the case, and staying safe is getting harder.

Many of you make use of browser add-ons for various things. Right now I am running a bunch of them from Google, to view online documents and launch apps. One extension that I rely on is my password manager. I used to have a lot of other ones, but found that after the initial excitement (or whatever you want to call it; I know I live a sheltered life) wore off, I didn't really take advantage of them.

So my story today is about an add-on called Web Security. It is oddly named, because it does anything but what it says. And this is the challenge for all of us: many add-ons and smartphone apps have misleading names, because their authors want you to think they are benign. Mozilla wrote a recommendation for this add-on earlier this month. Then they started getting complaints from users and security researchers, and it turns out they had made a big mistake. Web Security tries to track what you are doing as you browse around the Internet, and it could compromise your computer. When Mozilla add-on analyst (that is his real job) Rob Wu looked into this further, he found some very nasty behavior that finally made it clear to him that the add-on was hiding malicious code. Mozilla basically turned off the extension for the hundreds of thousands of users who had installed it and would have been vulnerable. This story on Bleeping Computer provides more details.

In the process of researching this one add-on's behavior, Wu found 22 other add-ons that did something similar, and they were also disabled and removed from the add-on store. More than half a million Firefox users had at least one of these add-ons installed.

So what can we learn from this tale of woe? One sobering thought is that even security experts have trouble identifying badly behaving programs. Granted, this one was found and fixed quickly. But it does give me (and probably you too) pause.

Here are some suggestions. First, take a look at your extensions. Each browser does this slightly differently; Cisco has a great post here to help you track them down in Chrome and IE v11. Make sure you don't have anything more than you really need to get your work done. Second, keep your browser version updated. Most modern browsers will warn you when it is time for an update, so don't tarry when you see that warning. Finally, be aware of anything odd when you bring up a web page: look closely at the URL and any popups that are displayed. Granted, this can get tedious, but you are ultimately safer.

CSO Online: Mastering email security with DMARC, SPF and DKIM

Phishing and email spam are the biggest opportunities for hackers to enter the network. If a single user clicks on a malicious email attachment, it can compromise an entire enterprise with ransomware, cryptojacking scripts, data leakages, or privilege escalation exploits. Despite some progress, a trio of email security protocols has seen a rocky road of deployment in the past year. Known by their acronyms SPF, DKIM and DMARC, the three are difficult to configure and require careful study to understand how they interrelate and complement each other's protective features. The effort, however, is worth the investment in learning how to use them.
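To make the acronyms a little more concrete, here is a small sketch of my own (not from the CSO story) that uses the dnspython library to pull the DNS TXT records where a domain publishes its SPF and DMARC policies, plus one DKIM selector. The domain example.com and the "default" selector are placeholders, since DKIM selector names vary by mail provider.

```python
# Sketch: query the DNS TXT records where SPF, DMARC and DKIM live.
# Requires dnspython; the domain and DKIM selector are placeholders.
import dns.resolver

def txt_records(name):
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"
print("SPF:  ", [r for r in txt_records(domain) if r.startswith("v=spf1")])
print("DMARC:", txt_records(f"_dmarc.{domain}"))
print("DKIM: ", txt_records(f"default._domainkey.{domain}"))  # selector varies
```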

In this story for CSO Online, I explain the trio and how to get them set up properly across your email infrastructure. Spoiler alert: it isn't easy, and it will take some time.

The story has been updated and expanded since I first wrote about it earlier this year, to include some new surveys about the use of these protocols.

The wild and wacky world of cyber insurance (+podcast)

If you have ever tried to obtain property insurance, you know you have a “project” cut out for you. Figuring out what each insurer’s policies cover — and don’t cover — is a chore. When you finally get to the point where you can compare premiums, many of you just want the pain to end quickly and probably pick a carrier more out of expediency than economy.

Now multiply this by two factors: first, you want to get business insurance, and then you want to get business cyber insurance. If you are a big company, you probably have specialists who can handle these tasks (maybe). The problem is that insurance specialists don't necessarily understand the inherent cyber risks, and IT folks don't know how to talk to the insurance pros. And to make matters more complex, the risks are evolving quickly as criminals get better at plying their trade.

My first job after college was in the keypunch department of a large insurance company in NYC. We filled out forms for the keypunch operators to cut the cards that were used to program our mainframe computers. It was strictly a clerical position, and it motivated me to go back and get a graduate degree. I had no idea what the larger context of the company was, or anything really about insurance. I was just writing numbers on a pad of paper.

Years later, I worked in the nascent IT department of another large insurance company in downtown LA. This was back in the mid-1980s. We didn't know from cyber insurance back then; indeed, we didn't even have many PCs in the building. At least not when I started: I was hired into an end-user support department that was bringing in PCs by the truckload.

So those days are thankfully behind me, and behind most of us too. Cyber insurance is becoming a bigger market, mainly because companies want to protect themselves against financial losses that stem from hacking or data leaks. So far, this kind of insurance has been met with mixed success. Here is one recent story about a Virginia bank that was hit with two different attacks. They had cyber insurance, filed a claim, and ended up in a court battle with their insurer, who (surprise!) didn't want to pay out, citing some fine print in the policy.

Sadly, that is where things stand today. Cyber insurance is still a very immature market, and there are many insurers who frankly shouldn't be writing policies because they don't know what they are doing, what the potential risks are, or how to evaluate their customers. If you live in a neighborhood with a high rate of car theft, your auto premiums are going to be higher than in a safer neighborhood. But there is no single metric, or even a set of metrics, that can be used to evaluate the cyber risk context.

I talk about these and other issues with two cyber insurance gurus on David Senf’s 40 min. podcast Threat Actions This Week here. I am part of a panel with Greg Markell of Ridge Canada and Visesh Gosrani of Guidewire. If you are struggling with these issues, you might want to give it a listen.

Why adaptive authentication matters for banks

The typical banking IT attack surface has greatly expanded over the past several years. Thanks to more capable mobile devices, social networks, cloud computing, and unofficial or shadow IT operations, authentication now has to be portable, persistent, and flexible enough to handle these new kinds of situations. Banks have also realized that they aren't just defending themselves against external threats, and that authentication challenges have become more complex as IoT has expanded the potential sources of attack.

That is why banks have moved toward adopting more adaptive authentication methods, using a combination of multi-factor authentication (MFA), passive biometrics, and other continuous monitoring efforts that can more accurately spot fraudulent use. It used to be that adaptive authentication forced a trade-off between usability and security, but that is no longer the case. Nowadays, adaptive authentication can improve the overall customer experience and help with regulatory compliance, as well as simplify a patchwork of numerous legacy banking technologies.
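To illustrate the basic idea of risk-based step-up, here is a deliberately simplified sketch of my own (not anything from the white paper or from any vendor's product): each login is scored from contextual signals, and an extra factor is demanded only when the score crosses a threshold. The signal names and weights are hypothetical.

```python
# Simplified sketch of risk-based ("adaptive") step-up authentication.
# Signal names and weights are illustrative, not a production model.
from dataclasses import dataclass

@dataclass
class LoginContext:
    new_device: bool
    unusual_location: bool
    impossible_travel: bool
    high_value_transaction: bool

def risk_score(ctx: LoginContext) -> int:
    # Weight each contextual signal and sum into a single score.
    score = 0
    score += 30 if ctx.new_device else 0
    score += 25 if ctx.unusual_location else 0
    score += 40 if ctx.impossible_travel else 0
    score += 20 if ctx.high_value_transaction else 0
    return score

def required_auth(ctx: LoginContext) -> str:
    score = risk_score(ctx)
    if score >= 60:
        return "deny and send for review"     # too risky even with MFA
    if score >= 30:
        return "password plus second factor"  # step up to MFA
    return "password only"                    # low-friction path for low risk

# A known location but a new device and a large transfer: step up to MFA.
print(required_auth(LoginContext(True, False, False, True)))
```

The point of the sketch is the design choice, not the numbers: the same credentials produce different authentication demands depending on context, which is what distinguishes adaptive authentication from a binary login/logout decision.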

In this white paper I wrote for VASCO (now OneSpan), I describe the current state of authentication and its evolution toward adaptive processes. I also talk about the migration from a simple binary login/logout decision to the more nuanced states that banks can deploy, and why MFA needs to be better integrated into a bank's functional processes.