In-house blogging at RSA Archer Summit in Nashville

In August 2018 I was in Nashville, covering the RSA Archer Summit annual customer conference. Here are my posts about the show:

Watch that browser add-on

This is a story about how hard it is for normal folks to keep their computers secure. It is a depressing but instructive one. Most of us take for granted that when we bring up our web browser and go to a particular site, we are safe and what we see is malware-free. However, that isn’t always the case, and staying safe is only getting harder.

Many of you make use of browser add-ons for various things: right now I am running a bunch of them from Google, to view online documents and launch apps. One extension that I rely on is my password manager. I used to have a lot of other ones, but found that after the initial excitement (or whatever you want to call it, I know I live a sheltered life) wore off, I didn’t really take advantage of them.

So my story today is about an add-on called Web Security. It is oddly named, because it does anything but what it says. And this is the challenge for all of us: many add-ons and smartphone apps have misleading names, because their authors want you to think they are benign. Mozilla initially wrote a recommendation for this add-on earlier this month. Then it started getting complaints from users and security researchers, and it turned out the recommendation was a big mistake. Web Security tries to track what you are doing as you browse the Internet, and could compromise your computer. When Mozilla add-on analyst (that is his real job) Rob Wu looked into this further, he found some very nasty behavior that made it clear to him that the add-on was hiding malicious code. Mozilla basically turned off the extension for the hundreds of thousands of users who had installed it and would have been vulnerable. This story on Bleeping Computer provides more details.

In the process of researching this one add-on’s behavior, Wu found 22 other add-ons that did something similar, and they were also disabled and removed from the add-on store. More than half a million Firefox users had at least one of these add-ons installed.

So what can we learn from this tale of woe? One sobering thought is that even security experts have trouble identifying badly behaving programs. Granted, this one was found and fixed quickly. But it does give me (and probably you too) pause.

Here are some suggestions. First off, take a look at your extensions. Each browser does this slightly differently; Cisco has a great post here to help you track them down in Chrome and IE v11. Make sure you don’t have anything more than you really need to get your work done. Second, keep your browser version updated. Most modern browsers will warn you when it is time for an update, and don’t tarry when you see that warning. Finally, be aware of anything odd when you bring up a web page: look closely at the URL and any popups that are displayed. Granted, this can get tedious, but you are ultimately safer.
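
If you want to go a step further than clicking through the browser’s own extensions page, a short script can enumerate what is installed. Here is a minimal sketch for Chrome on Windows; the profile path is an assumption that varies by operating system and profile name, so adjust it before running.

    import json
    from pathlib import Path

    # Typical Chrome extensions folder on Windows; macOS and Linux keep it under
    # a different path (e.g. ~/Library/Application Support/Google/Chrome/...).
    EXT_DIR = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"

    if not EXT_DIR.exists():
        raise SystemExit(f"No extensions folder at {EXT_DIR}; adjust the path for your OS and profile")

    for ext_dir in sorted(p for p in EXT_DIR.iterdir() if p.is_dir()):
        # Chrome keeps one subfolder per installed version of each extension.
        for version_dir in (p for p in ext_dir.iterdir() if p.is_dir()):
            manifest = version_dir / "manifest.json"
            if not manifest.exists():
                continue
            data = json.loads(manifest.read_text(encoding="utf-8"))
            name = data.get("name", "?")  # may be a __MSG_...__ localization placeholder
            perms = data.get("permissions", [])
            print(f"{ext_dir.name}  {data.get('version', '?')}  {name}")
            print(f"    permissions: {', '.join(map(str, perms)) or 'none declared'}")

Anything in that list you don’t recognize, or that asks for broader permissions than its job requires, is a candidate for removal.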

CSO Online: Mastering email security with DMARC, SPF and DKIM

Phishing and email spam are the biggest opportunities for hackers to enter the network. If a single user clicks on a malicious email attachment, it can compromise an entire enterprise with ransomware, cryptojacking scripts, data leakages, or privilege escalation exploits. A trio of email security protocols can help, but despite some progress they have seen a rocky road of deployment in the past year. Going by their acronyms SPF, DKIM and DMARC, the three are difficult to configure and require careful study to understand how they inter-relate and complement each other with their protective features. The effort, however, is worth the investment in learning how to use them.

In this story for CSO Online, I explain the trio and how to get them set up properly across your email infrastructure. Spoiler alert: it isn’t easy and it will take some time.
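
As a small taste of what is involved, here is a sketch that checks whether a domain already publishes SPF and DMARC policies, using the dnspython package. The domain is just a placeholder, and DKIM is left out because verifying it also requires knowing the selector the sending service uses.

    import dns.resolver  # pip install dnspython

    def txt_records(name):
        """Return the TXT strings published at a DNS name, or [] if there are none."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    domain = "example.com"  # placeholder: substitute your own sending domain

    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]

    print("SPF:  ", spf[0] if spf else "none published")
    print("DMARC:", dmarc[0] if dmarc else "none published")

If the DMARC record comes back with p=none, the domain is only monitoring rather than enforcing, which is where many deployments stall.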

The story has been updated and expanded since I first wrote about it earlier this year, to include some new surveys about the use of these protocols.

The wild and wacky world of cyber insurance (+podcast)

If you have ever tried to obtain property insurance, you know you have a “project” cut out for you. Figuring out what each insurer’s policies cover — and don’t cover — is a chore. When you finally get to the point where you can compare premiums, many of you just want the pain to end quickly and probably pick a carrier more out of expediency than economy.

Now multiply this by two factors: first, you want to get business insurance, and then you want to get business cyber insurance. If you are a big company, you probably have specialists that can handle these tasks — maybe. The problem is that insurance specialists don’t necessarily understand the inherent cyber risks, and IT folks don’t know how to talk to the insurance pros. And to make matters more complex, the risks are evolving quickly as criminals get better at plying their trade.

My first job after college was in the keypunch department of a large insurance company in NYC. We filled out forms for the keypunch operators to cut the cards that were used to program our mainframe computers. It was strictly a clerical position, and it motivated me to go back and get a graduate degree. I had no idea what the larger context of the company was, or anything really about insurance. I was just writing numbers on a pad of paper.

Years later, I worked in the nascent IT department of another large insurance company in downtown LA. This was back in the mid-1980s. We didn’t know from cyber insurance back then: indeed, we didn’t even have many PCs in the building. At least not when I started: I joined an end-user support department that was bringing in PCs by the truckload.

So those days are thankfully behind me, and behind most of us too. Cyber insurance is becoming a bigger market, mainly because companies want to protect themselves against financial losses that stem from hacking or data leaks. So far, this kind of insurance has met with mixed success. Here is one recent story about a Virginia bank that was hit with two different attacks. It had cyber insurance and filed a claim, and it ended up in a court battle with its insurer, which (surprise!) didn’t want to pay out, citing some fine print in the policy.

Sadly, that is where things stand for the present day. Cyber insurance is still a very immature market, and there are many insurers who frankly shouldn’t be writing policies because they don’t know what they are doing, what the potential risks are, or how to evaluate their customers. If you live in a neighborhood with a high rate of car thefts, your auto premiums are going to be higher than in a safer neighborhood. But there is no single metric — or even a set of metrics — that can be used to evaluate a company’s cyber risk in the same way.

I talk about these and other issues with two cyber insurance gurus on David Senf’s 40 min. podcast Threat Actions This Week here. I am part of a panel with Greg Markell of Ridge Canada and Visesh Gosrani of Guidewire. If you are struggling with these issues, you might want to give it a listen.

Why adaptive authentication matters for banks

The typical banking IT attack surface has greatly expanded over the past several years. Thanks to more capable mobile devices, social networks, cloud computing, and unofficial or shadow IT operations, authentication now has to be portable, persistent, and flexible enough to handle these new kinds of situations. Banks have also realized that they aren’t just defending themselves against external threats, and that authentication challenges have become more complex as IoT has expanded the potential sources of attacks.

That is why banks have moved toward more adaptive authentication methods, using a combination of multi-factor authentication (MFA), passive biometrics, and other continuous monitoring efforts that can more accurately spot fraudulent use. It used to be that adaptive authentication forced a trade-off between usability and security, but that is no longer the case. Nowadays, adaptive authentication can improve the overall customer experience and help meet compliance regulations, as well as simplify a patchwork of legacy banking technologies.
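
To make the idea concrete, here is a toy sketch of the kind of risk scoring an adaptive authentication engine performs before deciding whether to let a login through silently, ask for a second factor, or block it. The signals, weights, and thresholds are all invented for illustration and do not reflect any particular vendor’s model.

    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        known_device: bool      # device fingerprint seen before for this customer
        usual_geo: bool         # geolocation consistent with recent history
        velocity_ok: bool       # no "impossible travel" since the last login
        behavior_match: float   # 0..1 passive-biometric similarity (typing cadence, swipe patterns)

    def risk_score(ctx: LoginContext) -> float:
        """Combine the signals into a 0..1 risk score (higher means riskier)."""
        score = 0.0
        score += 0.0 if ctx.known_device else 0.35
        score += 0.0 if ctx.usual_geo else 0.25
        score += 0.0 if ctx.velocity_ok else 0.25
        score += (1.0 - ctx.behavior_match) * 0.15
        return min(score, 1.0)

    def decide(ctx: LoginContext) -> str:
        """Step up authentication only when the context looks risky."""
        r = risk_score(ctx)
        if r < 0.2:
            return "allow"           # frictionless login, no extra prompt
        if r < 0.6:
            return "challenge-mfa"   # ask for a second factor
        return "deny-and-review"     # block and route to fraud review

    print(decide(LoginContext(known_device=False, usual_geo=True,
                              velocity_ok=True, behavior_match=0.9)))

The point of the exercise is the shape of the decision: most customers sail through, and only the anomalous sessions pay the MFA tax.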

In this white paper I wrote for VASCO (now OneSpan), I describe the current state of authentication and its evolution toward more adaptive processes. I also talk about the migration from a simple binary login/logout model to the more nuanced states that banks can deploy, and why MFA needs to be better integrated into a bank’s functional processes.

 

RSA Blog: New ways to manage digital risk

Organizations are becoming increasingly digital in their operations, their products and services, and their business methods. This means they are introducing more technology into their environment. At the same time, they have shrunk their IT shops – in particular, their infosec teams – and have less visibility into their environment and operations. While they are trying to do more with fewer staff, they are also falling behind in tracking potential security alerts and understanding how attackers enter their networks. Unfortunately, threats have become more complex as criminals use a variety of paths such as web, email, mobile, cloud, and native Windows exploits to insert malware and steal a company’s data and funds.

In this post for RSA’s blog, I talk about how organizations have to become better at managing their digital risk through using more advanced security and information event management systems and adaptive authentication tools. Both of these use more continuous detection mechanisms to monitor network and user behaviors.

The Russians are coming! The Russians are coming!

There has been a great deal of misinformation about Russian hackers lately in the news. Let me try to set the record straight.

Earlier this week the Wall Street Journal reported on a briefing given by the Department of Homeland Security about attempts at compromising electric utility control rooms to bring down our power grid. These attempts were actually documented by another US government entity called CERT here back in March.

According to the WSJ piece, “Hackers compromised US power utility companies’ corporate networks with conventional approaches, such as spearphishing emails and watering-hole attacks. After gaining access to vendor networks, hackers turned their attention to stealing credentials.”

However, as this Twitter stream describes, the claims made in the WSJ article are somewhat misleading. The reporters claim the control centers operate with air gaps, meaning that their computers aren’t directly connected to the Internet. That isn’t quite true. DHS and CERT both learned about these hacks from private security firms.

But that isn’t the only hacking effort that the Russian government has been involved in. Mueller’s GRU indictment was announced earlier this month, naming 12 individuals involved in hacking various political organizations’ networks. That document makes for interesting reading and shows the lengths the Russian spies went to in penetrating the DNC and the Clinton campaign.

Here are just some of their techniques mentioned in the indictment:

  • Spearphishing emails and watering-hole attacks, using URL shorteners to hide malicious webpages; in one case the phony sender account mimicked a Clinton staffer’s address but differed by a single character (see the sketch after this list)
  • Stealing account credentials to obtain emails from DNC and Clinton staffers
  • Entering the DNC network using open source tools to install various RATs and keyloggers and harvest additional credentials
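
That single-character trick is easy to miss by eye but straightforward to catch in code. Here is a minimal sketch that flags sender addresses sitting one edit away from a known-good address; the addresses are made up, and a real mail filter would of course check much more than this.

    # Made-up examples of trusted sender addresses to protect.
    KNOWN_GOOD = {"staffer@hillaryclinton.com", "press@dnc.org"}

    def one_edit_away(a: str, b: str) -> bool:
        """True if a and b differ by exactly one insertion, deletion, or substitution."""
        if a == b or abs(len(a) - len(b)) > 1:
            return False
        if len(a) > len(b):
            a, b = b, a  # make a the shorter string
        i = j = edits = 0
        while i < len(a) and j < len(b):
            if a[i] == b[j]:
                i += 1
                j += 1
                continue
            edits += 1
            if edits > 1:
                return False
            if len(a) == len(b):
                i += 1  # substitution
            j += 1      # skip the extra character in the longer string
        return True     # any leftover trailing character counts as the single edit

    def looks_spoofed(sender: str) -> bool:
        """Flag senders suspiciously close to, but not equal to, a trusted address."""
        sender = sender.lower()
        return any(one_edit_away(sender, good) for good in KNOWN_GOOD)

    print(looks_spoofed("staffer@hilaryclinton.com"))  # True: one 'l' is missing
    print(looks_spoofed("newsletter@example.com"))     # False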

These three techniques were also used in compromising the utility networks. But wait, there is more:

  • Spoofing Google security notification email messages
  • Using the malware-infested document hillaryclinton-favorable-rating.xlsx that linked to a GRU-created website
  • Copying and exfiltrating documents via encrypted connections to a GRU computer in Illinois
  • Using PowerShell scripting attacks on Exchange email servers
  • Deleting log files and other traces deliberately to hide their presence
  • Setting up various websites: some mimicked a typical political fundraising page, others appeared to be news sites with negative stories about the DNC
  • Making cloud-based site backups and then using them to create their own accounts to steal additional DNC data
  • Creating fake Facebook and Twitter accounts to leak DNC data and promote the leakers’ websites

Some in our administration debate whether the Russians were behind both of these attacks, but the evidence is pretty clear to me. If you want to see the data firsthand, you might want to first take a look at an analysis of the Russian troll farm’s tweets by academic researchers here, and then download their data on GitHub if you want to do your own analysis.

The indicted members of the GRU were first seen in the political networks in June 2016, at which point the DNC hired CrowdStrike to investigate further. However, the GRU spies continued to operate their RAT tools and persist on the DNC network until October 2016.

These efforts have been known for some time: Motherboard ran a story in April 2016, and then came out in July with this piece from Thomas Rid that offered a detailed technical explanation, arguing that the forensic evidence pointing to Russia is very strong. A December 2016 story in the New York Times shows one of the rack-mounted servers breached by the GRU sitting in the DNC offices. The Times documents the “series of missed signals, slow responses and a continuing underestimation of the seriousness of the cyberattack.”

As many security analysts well know, you don’t remove the physical servers anymore. That is strictly old school. Instead, forensic investigators make digital copies of their hard drives and memory so that they can preserve their state and detect in-memory exploits that would be gone if the machines are unplugged. This is called imaging and has been around for decades.

It is time to get more serious about protecting your email

Did you get a strange email last week from someone you didn’t know, with one of your old passwords in the subject line? I did, and I heard many others were targeted by this criminal extortion campaign. Clearly, the messages were sent out with some kind of automated mailing list that made use of a huge list of hacked passwords. (You can check if your email has been leaked on this list.) It really annoyed me, and I got a few calls from friends wanting to know how this criminal got ahold of their passwords. (BTW: you shouldn’t respond to this email, because then you become more of a target.)
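
If you are curious whether a particular password of yours has shown up in these dumps, the Pwned Passwords service lets you check without ever sending the full password: you hash it locally and submit only the first five characters of the SHA-1 hash. Here is a minimal sketch using the requests package.

    import hashlib
    import requests  # pip install requests

    def times_pwned(password: str) -> int:
        """Return how many times a password appears in the Pwned Passwords corpus.

        Only the first five hex characters of the SHA-1 hash leave your machine
        (the service's k-anonymity range API); the matching happens locally.
        """
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
        resp.raise_for_status()
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(times_pwned("correct horse battery staple"))  # a non-zero result means retire it

A non-zero count doesn’t mean your account was breached, only that the password itself is circulating and should never be reused.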

But the question that I asked my friends was this: Do you still have logins that make use of that password? You probably do.

Email is inherently insecure. Sorry, it has been that way since its invention, and it still is. Most of us don’t give its security the attention it needs and deserves. So if you got one of these messages, or if you are worried about your exposure to a future one, I have a few suggestions.

First, you need to read this piece by David Koff on rethinking email and security. It brought to mind the many things that folks today have to do to protect themselves. I would urge you to review it carefully. Medium calculates it will take you 17 minutes, but my guess is that you need to budget more time. There is a lot to unpack in his post, so I won’t repeat it here.

Now Koff suggests a lot of tools that you can use to become more secure. I am going to give you just four of them, listed from most to least important.

  1. Set up a password manager and start protecting your passwords. This is probably the biggest thing that you can do to protect yourself. It will make it easier to use stronger and unique passwords. I use LastPass.com, which is $2 per month. For many of my accounts, I don’t even know my passwords anymore because they are just some combination of random letters and symbols. If you don’t want to pay, there are many others, which I reviewed at the link here, that are free for personal accounts.
  2. Create disposable email accounts for all your mailing lists. Koff suggests using 33mail.com, but there are many other services including Mailinator.com, temp-mail.org, and throwawaymail.com. They all work similarly. The hard part is unsubscribing from mailing lists with your current address, and adding the new disposable addresses.
  3. Even with a password manager, you need to make use of some additional authentication mechanism, such as a one-time code from an authenticator app, for your most sensitive logins (see the sketch after this list). Use this for as many accounts as you can.
  4. Finally, if you are still looking for something to do, at least try encrypted email. Protonmail.com is free for low-end accounts and very easy to use.
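
For the third item, the most common extra mechanism is a time-based one-time password (TOTP) from an authenticator app. Here is a minimal sketch of how those codes are generated and checked, using the pyotp package; the secret is randomly generated for illustration rather than taken from a real enrollment.

    import pyotp  # pip install pyotp

    # A site that supports authenticator apps hands you a Base32 secret (usually
    # shown as a QR code) when you enroll; this one is made up for illustration.
    secret = pyotp.random_base32()

    totp = pyotp.TOTP(secret)
    print("Current one-time code:", totp.now())       # changes every 30 seconds
    print("Code verifies:", totp.verify(totp.now()))  # True within the validity window

The server keeps the same secret and runs the same calculation, which is why a stolen password alone is no longer enough to get in.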

There is a lot more you can do to make yourself more secure. Please take the time to do the above before someone tries to steal your money, your identity, or both.

Cyber Security Threat Actions This Week (podcast)

If your organization is not using the MITRE ATT&CK framework yet, it’s time to start. Katie Nickels from MITRE, Travis Farral from Anomali and I join host David Senf from Cyverity to talk about ATT&CK tactics, techniques and tools. You can listen to this 45-minute podcast here. We discuss what ATT&CK is and isn’t, how it can be used to help defenders learn more about how exploits work and how to become better at protecting their enterprises, some of the third-party tools (such as MITRE’s own Caldera, shown here) that leverage ATT&CK, and some of the common scenarios where the framework can be used.
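
As a trivial illustration of how defenders put the framework to work, here is a sketch that tags observed behaviors with ATT&CK technique IDs. The handful of mappings is hard-coded for illustration; real tooling pulls the full technique catalog from MITRE’s published STIX data, so treat the IDs below as examples to verify against attack.mitre.org.

    # A few observation-to-technique mappings, hard-coded for illustration only.
    TECHNIQUE_MAP = {
        "phishing email with malicious link": "T1566.002",  # Phishing: Spearphishing Link
        "powershell launched by office app":  "T1059.001",  # Command and Scripting Interpreter: PowerShell
        "login with stolen credentials":      "T1078",      # Valid Accounts
        "windows event logs cleared":         "T1070.001",  # Indicator Removal: Clear Windows Event Logs
    }

    def tag_observations(observations):
        """Map free-text observations to ATT&CK technique IDs where a rule exists."""
        return {obs: TECHNIQUE_MAP.get(obs, "unmapped") for obs in observations}

    alerts = ["powershell launched by office app",
              "windows event logs cleared",
              "usb device inserted"]
    for obs, technique in tag_observations(alerts).items():
        print(f"{technique:>10}  {obs}")

Once your detections carry technique IDs like these, you can lay them over the ATT&CK matrix and see at a glance where your coverage is thin.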

I did two stories for CSOonline about ATT&CK earlier this year:

 

Watch that keyboard!

We are using our mobile phones for more and more work-related tasks, and the bad guys know this and are getting sneakier about ways to compromise them. One way is a third-party keyboard that can capture your keystrokes and send your login info to a criminal who then steals your accounts, your money, and your identity.

What are these third-party keyboards? You can get them for nearly everything – sending cute GIFs and emojis, AI-based text prediction, personalized suggestions, drawing and swiping instead of tapping, and even typing in a variety of colored fonts. One of the most popular iOS apps last year was Bitmoji, which lets you create an avatar and adds an emoji-laden keyboard. Another popular Android app is SwiftKey. These apps have been downloaded by millions of users, and there are probably hundreds more available on the Play and iTunes stores.

Here is the thing. In order to install one of these keyboard apps, you have to grant it access to your phone. This seems like common sense, but sadly it also grants the app access to pretty much everything you type, every piece of data on your phone, and every contact of yours too. Apple calls this full access, and it requires these keyboards to ask explicitly for this permission after they are installed and before you use them for the first time. Many of us don’t read the fine print, just click yes, and go about our merry way.

On Android phones, the permissions are a bit more granular, as you can see in this screenshot. This is actually just half of the overall permissions that are required.

An analysis of Bitmoji in particular can be found here, and it is illuminating.

Security analysts have known about this problem for quite some time. Back in July 2016, there was an accidental leak of data from millions of users of the ai.type third-party keyboard app. Analyst Lenny Zeltser looked at this leak and examined the privacy disclosures and configurations of several keyboard apps.

So what can you do? First, you probably shouldn’t use these apps, but try telling that to your average millennial or teen. You can try banning the keyboards across your enterprise, which is what this 2015 post from Synopsys recommends. But many enterprises today no longer control what phones their users purchase or how they are configured.

You could try to educate your users and have them pay more attention to what permissions these apps require. We could try to get keyboard app developers to be more forthcoming about their requirements, and have some sort of trust or seal of approval for those that actually play by the rules and aren’t developing malware, which is what Zeltser suggests. But good luck with either strategy.

We could place our trust in Apple and Google to develop more protective mobile OSs. This is somewhat happening: Apple’s iOS will automatically switch back to the regular keyboard when it senses that you are typing in your username, password, or credit card data.

In the end though, users need to understand the implications of their actions, and particularly the security consequences of installing these keyboard replacement apps. The more paranoid and careful ones among you might want to forgo these apps entirely.