The legalities of hacking back (presentation)

There is a growing trend in information security toward hacking back, or using various direct measures to attack your attackers. Several issues arise:

  • attributing an attack to the right source,
  • understanding the attacker’s intent, and
  • developing the right red team skills.

In this talk, given at Secure World St. Louis this month, I discuss the ways that an enterprise can defend itself and how to go about the process.

Brian NeSmith, providing SOC-as-a-Service with Arctic Wolf Networks

Brian NeSmith is the CEO of Arctic Wolf Networks, which he started back in 2012 to provide Security Operations Center-as-a-Service. I have known him since the days when he started a quirky company called CacheFlow, which eventually became part of Blue Coat, where he was also CEO. I asked him a few questions.

Q: What has changed in enterprise infosec compared to when you first started at AWN six years ago?

Back when we started the company, breaches were smaller, with little lasting damage. The stakes are much higher now. We started the company before Target, Equifax and Petya, major attacks that put cybersecurity on the evening news. Nowadays cybersecurity is a boardroom topic, and a company's brand and business are affected by how good its security is.

Q: How does SOC-as-a-service differ from an MSP that just sells managed SOC services?

SOC-as-a-service provides experienced security analysts doing real security work. MSPs selling managed SOC services are usually just managing the infrastructure or forwarding alerts; they are not doing the actual security work. The pressing issue in our industry today is how we detect and respond to threats, not just how we manage the infrastructure more cost-effectively. SOC-as-a-service provides that, and managed SOC services from an MSP do not.

Q: What portion of the resources you monitor for your current customers is on premises vs. in the cloud? How has that changed from six years ago?

The portion of cloud resources we monitor has been steadily increasing over the past six years. But the largest resource we monitor in most companies is still the employees and their endpoints. Many view people as the weakest link in the chain, and we find that still to be the case. Most security incidents are still due to some sort of human error or mistake, even when the best security products are in place.

Q: You ran Blue Coat through some very turbulent times, when it was first called CacheFlow. How have web apps changed from those early days and will enterprises ever feel secure deploying them?

It is a completely different world today than when I first started leading CacheFlow. There is not a company out there that does not rely on a web app to operate or serve its customers. Companies do not have a choice but to embrace web apps, so they need to figure out what it takes to feel secure deploying them.

Q: Is ransomware or fileless malware more of a threat today from your POV?

I don’t think they are any more of a threat than other types of malware. Ransomware is different in that it can literally bring your business to a halt. That is very different from traditional malware. When it comes to fileless malware, the increased danger comes from how openly information on how to exploit these techniques is shared. We have seen malware become commercialized, so you can literally purchase the malware you want to use and even get technical support. This means that anyone can become a hacker, and it will result in more attacks.

How Tachyon brings a fresh perspective on keeping your endpoints healthy

If you run the IT security for your organization, you are probably feeling two things these days. First, you might be familiar with the term “box fatigue,” meaning that you have become tired of purchasing separate products for detecting intrusions, running firewalls, and screening endpoints for malware infections. Second, you are probably more paranoid, too, as the number of data breaches continues unabated despite all these disparate tools meant to keep attackers at bay.

I spent some time last month with the folks behind the Tachyon endpoint management product. The vendor is 1E, which isn’t a name that you often see in the press. They are based in London with an NYC office, and have several large American corporations as customers. While they paid me to consult with them, I came away from my contact with their product genuinely impressed with their approach, which I will try to describe here.

A lot of infosec products push the metaphor of searching for a needle (such as malware) in a haystack (your network). That notion is somewhat outdated, especially as malware authors are getting better at hiding their infections in plain sight, reusing common code that is part of the Windows OS or chaining together what seem like innocuous routines into a very destructive package. These blended threats, as they are known, are very hard to detect and often live inside your network for days or even months, eluding most security scanners. This is one of the reasons why the number of breaches continues to make news.

Tachyon doesn’t try to find that needle; instead, it starts from the premise that you first need to look for anything that doesn’t appear to be a piece of hay. That is an important distinction. In the memorable words of Donald Rumsfeld, there are unknown unknowns that you can’t necessarily anticipate. He was talking about the fog of war, which is a good analogy for tracking down malware.

The idea behind Tachyon is to help you discover all sorts of ad hoc and serendipitous things across your collection of computers and networks that you may not even have known required fixing. Often, issues that start out as security problems end up as general IT operations matters by the time they need to be fixed. Tachyon can help bridge that gap.

Today’s enterprise has an increasingly complex infrastructure. As companies move to more virtual and cloud-based servers and more agile development, there are more moving parts that can be very brittle. Some cloud-based businesses have hundreds of thousands of servers running: if just a small fraction of a percent of that gear has a bug (a tenth of a percent of 200,000 servers is still 200 machines), it becomes almost impossible to ferret out and fix. This post on LinkedIn’s engineering blog is a good case in point: “Any service that is live 24/7 is in a state of change 24/7, and with change comes failures, escalations, and maybe even sleepless nights spent firefighting.” And that is just dealing with production systems, rather than any deliberate infections.

Unlike more narrowly focused endpoint security products, Tachyon operates in a wider arena, responding to many different events across the entire spectrum of IT operations, not just those related to your security posture. Does it matter if you have been infected with malware or have a problem because of an honest mistake by someone setting up their machine? Not really: your environment isn’t up to par in either situation.

So how does Tachyon do this? It is actually quite simple to explain, and let me show you their home screen:

Does that query box at the top remind you of something? Think about Tachyon as what Google was trying to do back in the late 1990s. Back then, few people understood what a search engine could do. But we quickly figured out that its simple query interface was more than an affectation once we got some real utility out of those queries. That is where we are today with Tachyon: think of it as the search tool for finding out the health of your network. You can ask it a question, and it will tell you what is happening.

Many security products require specialized operators who need training to navigate their numerous menus and interpret their results. Tachyon instead offers a question-and-answer rubric that almost anyone, even a line manager, can use to figure out what is ailing your network.

But having a plain-Jane home page is just one element of the product. The second important difference with Tachyon is how it automates finding and updating that peculiar piece of hay in the stack. I won’t get into the details here, and Tachyon isn’t the only tool in the box that has automation: many products claim to automate routine IT functions, but they still require a lot of manual intervention. Tachyon takes its automation seriously and puts in place the appropriate infrastructure to automate the non-routine as well, making it easier for IT staff to do more with fewer resources. Given the reduced headcounts in IT, this couldn’t come at a better time.

Now I realize that having 1E as a client could bias my thinking. But I think they are on to something worthwhile here. If you are looking for a way to respond to and resolve network and endpoint problems at scale, they deserve a closer look.

SaltStack: beyond application configuration management

When it comes to building online applications, you can build them with old tools and attitudes or with new methods that are purpose-built for solving today’s problems and infrastructures. Back in the days when mainframes still walked the earth, setting up a series of online applications used some very primitive tools. And while we now have more integrated development environments that embrace SaaS apps running in the cloud, it is more of a half-hearted acceptance. Few tools really have what it takes to handle and automate online apps.

I wrote this white paper, which covers typical use cases for the SaltStack Enterprise product and Salt’s key features.
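To give a flavor of the kind of automation Salt enables, here is a minimal sketch (my own illustration, not taken from the white paper) using Salt’s Python client API. It assumes you are running on a Salt master with connected minions, and the state name “webserver” is hypothetical.

```python
# Minimal sketch of driving Salt from Python on the master node.
# Assumes salt is installed and minions are connected; the
# "webserver" state name is hypothetical.
import salt.client

local = salt.client.LocalClient()

# Ping every minion to verify connectivity.
alive = local.cmd('*', 'test.ping')
print("Responding minions:", sorted(alive))

# Apply the (hypothetical) "webserver" state to Ubuntu minions,
# selected with a grains-based target.
results = local.cmd('os:Ubuntu', 'state.apply', ['webserver'],
                    tgt_type='grain')
for minion, states in results.items():
    print(minion, "returned", len(states), "state results")
```

The same calls can just as easily gather facts or run ad hoc commands across the fleet, which is what makes Salt more than a configuration management tool.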

Ten tips towards better collaboration with your consultants

One of the more fun things that I do is working as a consultant with different project teams around the world, helping them to make their products better and more secure. I got inspired when I read some of these horror stories posted on The Freelancer blog. Over the years I have learned a few things about how to best collaborate. I thought I would share ten specific lessons. I have removed any identifying details to spare any of the potential guilty parties.

1.     How to have virtual meetings.  The best tools for doing this are Webex and GotoMeeting, and both offer free versions. Next best is Zoom.us. The problem is that oftentimes a company has more than one meeting product, and sometimes they use tools such as Lync/Skype for Business, which is great for internal meetings but breaks down when used by outside contractors who don’t have domain credentials. Best to pick one standard and use it: in some engagements, we couldn’t start the meeting on time because the participants got multiple invites with different products (including one phone bridge, just to make matters worse). This creates all sorts of confusion as to where the “real” meeting is taking place.

2.     Part of a meeting is to show a presentation or review documents. That is great, but real-time group editing can get tedious. Better to collect comments offline and appoint one person to be in charge of that process. The times that I have done real-time line editing, it hasn’t been very efficient, and often the loudest voice in the virtual room dominates over lesser ones that could have important points to make. Yes, there is a time and place for real-time line editing, but only when a team is used to working this way and everyone knows each other really well.

3.     Another way to do joint line editing is to send out an email with a link to an online document in GDocs or O365, and allow everyone to post their suggested changes over a fixed time period. If you go this route, make sure all the participants have the correct access rights to the document, especially if you are using contractors outside your corporate domain. Also, if you do send out emails, send out the link and not the actual document as an attachment – that could be counter-productive too.

4.     Avoid endless edit cycles. I have had my stories go through several edit passes, and often after the first one the edits aren’t adding any value to the piece and instead are more political nods to a manager’s whims. While everyone thinks she or he is a great editor, few actually have the right skills. It also helps to be clear on who is going to be doing the editing, and who just needs to see the document prior to any final distribution. Sometimes you get stuck in a seemingly endless loop between two editors: one undoes the other’s changes.

5.     Appoint one person to collect all comments and resolve them if possible. Doodle did a survey a while back that triaged meeting participants into three types: initiators, herders, and loners. It is worth reviewing their study to see how it applies to your particular team. You don’t have to go full-on Myers-Briggs, but it helps to know whom you are dealing with.

6.     Another tip: don’t schedule any meeting until you are sure you have deliverables in hand to actually discuss. I had this happen to me a few times: someone would schedule a series of weekly meetings, and nothing transpired during the week so the meeting was pointless.

7.     This brings up another tip. Part of running a great meeting is sending out an agenda in the meeting invite so everyone can start with the same points to cover. And then making sure you stick to the agenda.

8.     If you need to have audio conference calls, you should pick a single conferencing product and stick to it, and ensure that it can be accessed from international numbers if you have clients overseas. Many companies that I have worked for have multiple conferencing vendors, which gets confusing when you are trying to schedule one.

9.     Don’t have a final project meeting without inviting the contractors who worked on it. This seems like common sense, but you would be surprised how often this happens.

10.     Use Calendly.com (or equivalent) to schedule your appointments. If you have several clients who need to book your time in advance, this is a great tool that removes the need for phone tag and a human appointment-taker. They have a free basic account, with premium accounts at $8/user/month that add custom branding, URL links and reporting options.

Feel free to share some of your own collaboration or freelancing horror stories here too.

Getting rid of Facebook

One of my readers asked me how to go about removing Facebook completely from their online life. After I pulled together the various links that you’ll see below, I thought I would share them with you all. Now, I am not saying that I am contemplating doing this: sadly, my online professional life requires that I continue to be a part of Facebook, whether I like it or not. But that doesn’t mean I have to agree with its corporate policies, as I have made clear in several posts earlier this summer. But read on or save this column somewhere, just in case you are thinking about de-Facing your life. And be prepared to spend a few hours going through the numerous steps.

Your first to-do is to download all of your data that Facebook has on you. I wrote about this process earlier (and covered the other social networks too) in this post. But if you just want the Facebook archive download, go to this page.  You might have to wait a few days until your archive is ready: don’t worry, you will be notified.

Next, decide whether you want a trial separation or a total divorce. Facebook refers to the former as deactivating your account. This keeps your data in their grubby digital hands, but at least you will disappear from your friends’ social networks. You can change your mind in the future and re-activate your account just by logging back in, so if you are somewhat serious about this but don’t want to inadvertently log in, make sure you delete the login details from your password manager and from any saved websites on your various browsers and computers.

Before you opt for the total divorce, take a look at the connected apps that you once allowed access to your Facebook account. You might not have remembered doing this, and in another column I spoke about what you should do for a social media “spring cleaning” for the other networks and for your various privacy settings. You should spend some time doing this app audit for the other networks as well.

Why do you want to deal with your connected apps before total account deletion? Because you might still want to access one or more of these apps, and if you delete your Facebook presence, your access goes away for any app that depends on your Facebook login. For example, a web portal that my doctors use to communicate with me could depend on my Facebook login. (It doesn’t, but that is because I decided to use a login mechanism other than Facebook.) By going to the connected apps page, you can see the complete list of whom you have authorized.

Still with me? I realize that it seems as if the scope of this project continues to widen, but that is to be expected. Let’s continue.

Mashable has this nice article that will walk you through the steps of both deactivation and a complete deletion process. I won’t repeat the numerous steps here, but you should take the time to review their post.

If you opt for deletion, remember you have to cleanse your entire computing portfolio of everything Facebook: this means all your browsers, your mobile devices, and your mobile messenger apps too. I don’t particularly like the mobile messenger app, as one friend described it accurately as a “rabid dog” that just grabs your contacts and other data. Indeed, if you have examined your downloaded archive you can see that for yourself.

Now for the final step, the actual deletion. The Mashable piece has a long list of what you have to do, aside from hitting the delete button in the Facebook interface. If you want a more visual aid, check out this screencast that shows you these first steps.

I realize this is a lot of effort, and Facebook has very nicely put in a number of “Are you sure” checks along the path, just in case you aren’t completely ready for the divorce. I would be interested in hearing from you if you do go through the entire process and what your reasons are for doing it.

FIR B2B PODCAST #103: WHY MARKETERS SHOULDN’T FEAR DATA ANALYTICS

This week our guest is Adam Jones, who is the head of marketing insights at Springer Nature, a publisher of many well-respected periodicals, including Nature and Scientific American. Jones is probably one of the few digital marketers who doesn’t hate click-through rates and page-view numbers. Rather, he thinks we have to reinterpret them in new contexts to better understand what readers do after they click or view a page. “We get tons of data from every click, and create stronger calls to action as a result,” he told us during our interview.

Jones talks about why marketers are scared of data and analytics, but says you have to build a solid foundation in these techniques if you are going to be successful in marketing these days. He also discusses the unique challenges Springer faces catering to a highly educated and technical audience. Loyalty and longevity of readership are two of the company’s greatest assets.

Also on the podcast, I recount my recent trip to the Bletchley Park museums where modern digital computing was born during WWII. I blogged about it here.

Listen to our podcast.

CSOonline: New ways to protect your AWS infrastructure

Properly testing your virtual infrastructure has been an issue almost since there were VMs and AWS. Lately, the tool sets have gotten better. Part of the problem is that to adequately test your AWS installation, you need to know a lot about how it is constructed. CPUs can come and go, and storage blocks are created and destroyed in the blink of an eye. And as the number of AWS S3 data leaks rises, there have to be better ways to protect things. Rhino Security and Amazon both offer tools to improve visibility into your AWS cloud environments, making it easier to find configuration errors and vulnerabilities. I write about the Pacu and CloudGoat tools, as well as various AWS services to test your VMs, in my article from CSOonline here.
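As a taste of the kind of configuration check these tools automate, here is a minimal sketch of my own (not from the article, and not how Pacu or CloudGoat work internally) that uses AWS’s boto3 library to flag S3 buckets whose ACLs grant access to the world. It assumes AWS credentials are already configured locally.

```python
# Minimal sketch: flag S3 buckets with ACL grants open to the world.
# Assumes boto3 is installed and AWS credentials are configured
# (e.g., via ~/.aws/credentials). Illustrative only; real audits
# should also check bucket policies and public access block settings.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        grantee = grant["Grantee"]
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
            print(f"WARNING: {name} grants {grant['Permission']} to {grantee['URI']}")
```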

My visit to Bletchley Park and Colossus

I have been a fan of the WWII effort to build special-purpose machines to break German codes for many years, and last wrote about Colossus here, the two-room-sized digital computer that was the precursor to the modern PC era. I first found out about this remarkable machine and the effort behind it through a 2007 book that you can still purchase from Amazon.

But there is no substitute for actually visiting the hallowed ground where this all happened, which I finally did last weekend when I was in London on a consulting assignment. I was fortunate to have a colleague (and avid reader) who lived nearby and was willing to take me around; he hadn’t been there in a while. I have included some photos that I took during my day at Bletchley Park, and it was great to finally see Colossus in all of its mechanical glory. As you can see from this photo, it looks more like an attic of used spare parts, but I can assure you it is quite a special place.

When most people think of decrypting codes, they think of a “Matrix”-style special effect where gibberish is turned into readable text (German in this instance), or of opening a file and hitting a button that automatically decrypts the message. This is far from what happened in the 1940s. Back then, it was a herculean effort that involved recording intercepted radio signals, transferring them to paper tape, using various cribs and cheat sheets to guess at the codes, and then processing the paper tapes through Colossus. What we also don’t realize is that these two rooms full of gear were built without anyone having actually seen the German Lorenz coding machine that was used to encrypt the messages to begin with. (The Bombe, a much simpler device, was used to decrypt Enigma codes.) The Lorenz looks like a very strange machine, but obviously something that was designed to be carted around in the field to send and receive messages.
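For a sense of the underlying math: the Lorenz machine was an additive cipher, XORing each 5-bit teleprinter character with a key stream generated by its wheels, and XORing the ciphertext with the same key stream recovers the plaintext. Here is a toy sketch of that principle in Python; the key stream is just random bits, nothing like the actual wheel patterns Bletchley had to reconstruct.

```python
# Toy illustration of an additive (XOR) stream cipher, the principle
# behind the Lorenz machine. Real Lorenz key streams came from 12
# pinwheels; here we fake one with random 5-bit values.
import random

def keystream(length, seed=42):
    rng = random.Random(seed)
    return [rng.randrange(32) for _ in range(length)]  # 5-bit values

def xor_stream(values, key):
    return [v ^ k for v, k in zip(values, key)]

plaintext = [7, 19, 3, 24, 11]           # stand-ins for 5-bit Baudot codes
key = keystream(len(plaintext))
ciphertext = xor_stream(plaintext, key)   # encrypt
recovered = xor_stream(ciphertext, key)   # decrypt: XOR is its own inverse

assert recovered == plaintext
print("cipher:", ciphertext, "recovered:", recovered)
```

Breaking the Lorenz traffic amounted to deducing those wheel patterns statistically from intercepted ciphertext alone, which is the work Colossus was built to accelerate.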

Sadly, the reconstructed computer is not doing very well. This is no surprise, given that it is made of thousands of vacuum tubes (what the Brits call valves, which evokes an entire Steampunk ethos). Only a few segments of the decryption process could be demonstrated, and a collection of volunteer minders is kept very busy keeping the thing in some working order. Here you can see an illustration of what it took to use the Bombe machines, which were more mechanical and didn’t involve true digital computing.

If you decide to visit Colossus, you will need to go to two separate places that are only a short walk apart. The first is the Bletchley Park estate itself, where there are several outbuildings that contain curated exhibits about the wartime effort, including tributes to some of the thousands of men and women who worked there during the war. One of the most famous was Alan Turing, and you can see a mock-up of his office here. After the movie The Imitation Game came out, his popularity rose and the park was quite crowded, albeit on a holiday weekend. There are copies of some of his mathematical papers (shown below), a brick wall that honors many of the park’s contributors, and the formal apology letter from the British government that cleared his name. Turing’s 1950 paper, one of the seminal works in the history of digital computing, was also shown in one of the exhibits. What I found fascinating was how much of this stuff was being soaked up by the ordinary folks wandering around the park. I mean, I am a geek, but there were school kids absorbed in all of this stuff.

One of the lesser-known individuals honored at the park was a double agent known as Garbo, so named because he was such an impressive actor. I read this book not too long ago about his exploits, and he played key roles in the war effort that had nothing to do with computers. He invented entire networks of imaginary spies and filed reports with the Germans that were so convincing that they moved their troops before D-Day, thus saving countless Allied lives.

But the entry fee to the park doesn’t get you to the reconstructed Colossus; for that you have to walk down the road and pay another fee to gain access to the British computer history museum. It has numerous other exhibits of dusty old gear, including early magnetic disks that held a whopping 250 MB of data (or was it 2 MB? I think the display was mislabeled) and were the size of a small appliance. It was interesting, although not as much fun nor as comprehensive as the Computer History Museum in Mountain View, California. I hope you get a chance to visit both of the Bletchley places and see for yourself how computing history was made.

Internet Protocol Journal: Understanding fileless malware

I have written for this excellent 20-year-old publication occasionally. My article in this issue is about fileless malware.

Malware authors have gotten more clever and sneaky over time to make their code more difficult to detect and prevent. One of the more worrying recent developments goes under the name “fileless.” There is reason to worry because these kinds of attacks can do more damage, and the malware can persist on your computers and networks for weeks or months until it is finally neutralized. Let’s talk about what this malware is and how to understand it better so we can try to stop it from entering our networks to begin with.

Usually, the goal of most malware is to leave something behind on one of your endpoints: one or more files that contain an executable program that can damage your computer, corral your PC as part of a botnet, or make copies of sensitive data and move them to an external repository. Over the years, various detection products have gotten better at finding these residues, as they are called, and blocking them.
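Because fileless attacks leave no executable on disk, defenders often fall back on inspecting what is actually running in memory. As a rough illustration (my own sketch, not from the article), here is a Python heuristic that scans process command lines for the encoded-PowerShell markers fileless attacks frequently abuse; real endpoint products rely on far deeper telemetry than this.

```python
# Rough heuristic sketch: flag running processes whose command lines
# show markers commonly abused by fileless attacks. Requires the
# third-party psutil package; the indicator list is illustrative only
# and will produce false positives.
import psutil

SUSPICIOUS_MARKERS = [
    "-encodedcommand",   # encoded PowerShell payloads
    "downloadstring",    # in-memory script fetch
    "invoke-expression", # executing strings as code
]

for proc in psutil.process_iter(["pid", "name", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or []).lower()
    hits = [m for m in SUSPICIOUS_MARKERS if m in cmdline]
    if hits:
        print(f"PID {proc.info['pid']} ({proc.info['name']}): matched {hits}")
```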

You can read my article here, along with other fine pieces on the state of the Internet in this month’s edition.