If we do our job, nothing happens

There is a line in a recent keynote speech by Mikko Hypponen, the CRO of F-Secure, that goes something like this: “If we do our job in cyber security, then nothing happens.” It is so true, and it made me think of the times when various corporate executives challenge their investments in cyber security, wanting to see something tangible. Mikko makes this point by asking the execs to look around the conference room where these conversations are taking place: is the room cleaned to their satisfaction? If so, perhaps they should fire their cleaning staff, because they are no longer needed.

Now the difference between your security engineering staff and your janitors is obvious. You can’t see all the virtual dirt that is building up across your network, the cruft of old software that needs updating and polishing, and the garbage that your users download onto their PCs that will leave them susceptible to attack. And that is part of the problem with cyber security: most things are invisible to mere mortals, and even some specialists can’t always agree on the best cyber hygiene techniques. Most of us have an innate sense that mopping the floor before dusting the shelves above is the wrong way to go about cleaning a room. That is because we all understand (at least on a basic level) how gravity operates. But when it comes to cyber, should we be changing our passwords regularly (some say yes, some say nay)? Or using complex strings of a certain length (some say 10 characters is fine, others say longer ones are needed)?

Mikko ends his talk by saying that we must assume that we are all targets of someone, whether they be a hacker who is still in high school or a government spy who is eager to get inside our company’s network. He says, “The times of building walls are over, because eventually someone will get in our enterprise. Breach detection is key, and we all have to get better at it.”

I agree with him completely. We must get better at seeing the virtual dirt on our networks. Building a better or bigger wall won’t stop everyone and will just foster a false sense of cyber security. And just because nothing happens, this doesn’t mean that cyber security folks aren’t hard at work. They are the cleaners that we don’t ever see, unless one day they leave someone’s mess behind.

 

RSA blog: Risk analysis vs. assessment: understanding the key to digital transformation

When it comes to looking at risks, many corporate IT departments tend to get their language confused. This is especially true in understanding the difference between risk analysis, which is the raw data-gathering you do to learn about your risks, and risk assessment, which draws conclusions from that data and allocates resources to do something about those risks. Let’s talk about why this confusion exists, the challenges it creates, and how we can avoid it as we move along our various paths of digital transformation.

Part of this confusion has more to do with the words we choose than with any actual activity. When an IT person says some practice is risky, oftentimes what our users hear is “No, you can’t do that.” That gets to the heart of the historical IT/user conflict. We must do a better job of moving away from being the “party of no” toward understanding what our users want to do and enabling their success. This means that if they are suggesting doing something that is inherently risky, we have to work with them and guide them to the more secure promised land.

IT also has to change its own vocabulary from techno-babble and start talking in the language of business – meaning talking about money and the financial impacts of their actions – if they want the rest of the company to really grok what they are talking about. Certainly, I am not the first (nor will I be the last) person to say this. This is a common complaint from David Froud when he talks to the C-suite: “If I can’t show how a risk to the assets at my level can affect an entire business at theirs, how can I possibly expect them to understand what I’m talking about?”

Certainly, it isn’t just about proper word choice, and many times users don’t necessarily see the risky consequences of their actions – nor should they, as that really isn’t part of their job description. Here is a recent example. Look at this tweet stream from SwiftOnSecurity about what is going on in one corporation. Its users pick evergreen user ID accounts for their VPN sign-ons. Rather than have unique IDs that match a specific, actual person, they reuse the same account name (and of course, password) and pass it along to the various users that need access. Needlessly risky, right? The users don’t see it quite in this light. Instead, they do this because of IT’s failure to deliver a timely solution, one that is convenient and simple. I imagine the thinking behind this decision went something like this:

IT person: “You have to use our VPN if you are going to connect to our network from a remote location. You need to fill out this form and get it approved by 13 people before we can assign you a new logon.”

User: “Ok, but that is too much work. I will just use Joe’s logon and password.”

Granted, IT security is often the enemy of the convenient, and that is a constant battle – which is why we have these reused passwords and why our adversaries can always rely on this flaw to infiltrate our networks. The onus is on us, as technologists, to make our protection methods more convenient while reducing risk at the same time.

There are some bright signs of how far we have all come. In the second Dell survey of digital transformation attitudes, a third of the subjects said that concerns about data privacy and security were their biggest obstacle to digital progress. This was the top concern in this year’s survey – two years ago, it was much further down the list. Fortunately, security technology investments also topped the list of planned improvements in the survey. Two years ago, these investments didn’t even make the top ten, which shows the heightened awareness and priority that infosec has taken on. Nevertheless, half of the respondents feel they will continue to struggle to prove that they are a trustworthy organization.

So where do we go from here? Here are a few suggestions.

 

First, as I mentioned in my earlier blog post, Understanding the Trust Landscape, RSA CTO Dr. Zulfikar Ramzan advocates replacing the zero trust model with one focused on managing “zero risk.” That is an important distinction, and it gets at the rework needed to build a common vocabulary that any business executive can understand.

 

Second, we must do a better job with sharing best practices between our IT security and risk management teams. Many companies deliberately keep these two groups separate, which can backfire if they start competing for budget and personnel.

 

Finally, listen carefully to what you are saying from your users’ perspective. “Technologists show up with a basket of cute little kittens to business leaders with a cat allergy,” said Salesforce VP Peter Coffee. Think carefully about how you assess risk and how you can sell managing its reduction in the language of money.

HPE Enterprise.nxt: Six security megatrends from the Verizon DBIR

Verizon’s 2019 Data Breach Investigations Report (DBIR) is probably this year’s second-most anticipated report after the one from Robert Mueller. Now in its 12th edition, it contains details on more than 2,000 confirmed data breaches in 2018, is taken from more than 70 different reporting sources, and analyzes more than 40,000 separate security incidents.

What sets the DBIR apart is that it combines breach data from multiple sources using the common industry framework called VERIS – a third-party repository where threat data is uploaded and anonymized. This gives the report a solid authoritative voice, and is one reason why it’s so frequently quoted.

I describe six megatrends from the report, including:

  1. The C-suite has become the weakest link in enterprise security.
  2. The rise of the nation state actors.
  3. Careless cloud users continue to thwart even the best laid security plans.
  4. Whether insider or outsider threats are more important.
  5. The rate of ransomware attacks isn’t clear. 
  6. Hackers are still living inside our networks for a lot longer than we’d like.

I’ve broken these trends into two distinct groups — the first three are where there is general agreement between the DBIR and other sources, and the last three are where this agreement isn’t as apparent. Read the report to determine what applies to your specific situation. In the meantime, here is my analysis for HPE’s Enterprise.nxt blog.

RSA blog: Managing the security transition to the truly distributed enterprise

As your workforce spreads across the planet, you now have to support a completely new collection of networks, apps, and endpoints. We all know that this increased attack surface area is more difficult to manage. Part of the challenge is that you have to create new standards, policies, and so forth to protect your enterprise and reduce risk as you make this transformation to become a more distributed company. In this blog post, I will examine some of the things to look out for. My thesis is that you want to match the risks with the approaches, so that you focus on the optimal security improvements as you make this transition to a distributed staffing model.

There are two broad brush items to think about: one has nothing to do with technology, and one that does. Let’s take the first item. Regardless of what technologies we deploy, the way we choose them is really critical. Your enterprise doesn’t have to be very large before you have different stakeholders and decision-makers influencing what gets bought and when. This isn’t exclusively a technology decision per se – but it has huge security and risk implications. If you buy the wrong gear, you don’t do yourself any favors and can increase your corporate risk profile rather than reduce it. The last thing any of us need is to have different departments with their own incompatible security tools. This different stakeholder issue is something that I spoke about in my last blog post on managing third-party risk.

Why is this important now? Certainly, we have had “shadow IT” departments making independent computer purchases almost since corporations first began buying PCs in the early 1980s. But unlike that era, when corporations were merely worried about choosing Compaq vs. IBM, shadow IT now has more serious implications and greater risk, because of the extreme connectivity of the average business. One weak link in your infrastructure, one infected Android phone, and your risk can quickly escalate.

But there is another factor in the technology choice process, and that is because getting security right is hard. It isn’t just a matter of buying something off the shelf; more likely you will need several items, which means you have to fit them together in the right way to provide the most protection and to address all the various vectors of compromise and risk. This makes sense, because as the attack surface area increases, we add technologies to our defensive portfolio to match and step up our game. But here’s the catch: what we choose is as important as the way we choose it.

Assuming you can get both of these factors under control, let’s next talk about some of the actual technology-related issues. They roughly fall into three categories: authentication/access, endpoint protection and threat detection/event management.

Authentication, identity and access rights management.  Most of us immediately think about this class of problems when it comes to reducing risk, and certainly there are a boatload of tools to help us do so. For example, you might want to have a tool to enable single sign-ons, so that you can reduce password fatigue and also improve on- and off-boarding of employees. No arguments there.

But before you go out and buy one or more of these products, you might want to understand how out of date your Active Directory is. And by this, I mean quantify the level of effort you will need to make it accurate and representative of the current state of your users and network resources. The Global Risk Report from Varonis found that more than half of their customers had more than a thousand stale user accounts that weren’t removed from the books. That is a lot of housecleaning before any authentication mechanism is going to be useful. Clearly, many of us need to improve our offboarding processes to ensure that access rights are terminated at the appropriate moment – and not six months down the road, after an attacker has seized control of a terminated user’s still-active account.
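As a rough illustration of that housecleaning, here is a minimal Python sketch of a stale-account sweep, assuming you can export last-logon timestamps from your directory service (the account names and dates below are invented for the example):

```python
from datetime import datetime, timedelta

# Hypothetical export: (account, last_logon) pairs pulled from your
# directory, e.g. via an LDAP query or a PowerShell dump.
accounts = [
    ("jsmith", datetime(2019, 4, 1)),
    ("old_contractor", datetime(2017, 8, 15)),
    ("svc_backup", datetime(2018, 1, 3)),
]

def stale_accounts(records, now, max_idle_days=180):
    """Flag accounts with no logon in the last max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [name for name, last in records if last < cutoff]

now = datetime(2019, 6, 1)
print(stale_accounts(accounts, now))  # → ['old_contractor', 'svc_backup']
```

The idle threshold is a policy choice; tightening `max_idle_days` widens the sweep, which is a useful way to size the cleanup effort before rolling out a new authentication tool.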

This level of accuracy means that organizations will also have to match identity assurance mechanisms with the right levels of risk. Otherwise, you aren’t protecting the right things with the appropriate level of security. You’ll want to answer questions such as:

  • Do you know where your most critical business assets are and how to protect them properly?
  • How will your third-party partners and others outside your immediate employ authenticate themselves? Will they need (or should they use) a different system from your full-time staff?
  • Can you audit your overall portfolio of access rights for devices and corporate computing resources to ensure they are appropriate and offer the best current context? At many firms, everyone has admin access to every network share: clearly, that is a very risky path to take.

Endpoint protection. This topic understandably gets a lot of attention, especially these days as threats are targeting vulnerabilities of specific endpoints such as Windows and Android devices. Back in the days when everyone worked next to each other in a few physical office locations, it was relatively easy to set this up and effectively screen against incoming malware. But as our corporate empire has spread around the world, it is harder to do. Many endpoint products were not designed for the kinds of latencies that are typical across wide-area links, for example. Or can’t produce warnings in near-real-time to be effective. Or can’t handle endpoints as effectively without pre-installed agents.

That is bad enough, but there is another complicating factor: few products do equally well at protecting mobile devices, PCs and endpoints running embedded systems. You will often need multiple products to cover your complete endpoint collection. As the malware writers get smarter at hiding their activities in plain sight, we must do a better job of figuring out when they have compromised an endpoint and shutting them down. And how these multiple products play together can introduce more risk.

Threat detection and event management. Our third challenge for the distributed workforce is being able to detect and deter abuses and attacks in a timely and efficient manner across your entire infrastructure. This is much harder, given that there is no longer any hard division between corporate-owned devices and servers and non-owned devices, including personal endpoints and cloud workloads. Remember when we used to refer to “bring your own device”? That seems so last year now: most corporations just assume that remote workers will use whatever they already have. That places a higher burden on security teams, who must be able to detect and prevent threats that could originate on these devices.

The heterogeneous device portfolios of the current era also place a bigger burden – and higher risk – on watching and interpreting the various security event logs. If malware has touched any of these devices, something will appear on a log entry and this means security analysts need to have the right kinds of automated tools to alert them about any anomalies.
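As a toy illustration of the kind of automated alerting this implies, here is a hedged Python sketch that flags users exceeding a failed-login threshold in a parsed event feed (the entries and event names are invented for the example; a real tool would baseline behavior far more carefully):

```python
from collections import Counter

# Hypothetical parsed log entries: (user, event) tuples from an
# aggregated security event feed.
events = [
    ("alice", "login_failed"), ("alice", "login_ok"),
    ("mallory", "login_failed"), ("mallory", "login_failed"),
    ("mallory", "login_failed"), ("mallory", "login_failed"),
]

def flag_anomalies(log, threshold=3):
    """Alert on users at or above a failed-login threshold."""
    failures = Counter(u for u, e in log if e == "login_failed")
    return sorted(u for u, n in failures.items() if n >= threshold)

print(flag_anomalies(events))  # → ['mallory']
```

Even a crude threshold rule like this shows why automation matters: no analyst can eyeball every log line from a heterogeneous device fleet, but a machine can surface the handful of entries worth a human look.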

As I have said before, managing risk isn’t a one-and-done decision, but a continuous journey. I hope the above items will stimulate your own thinking about the various touchpoints you’ll need to consider for your own environment as you make your journey towards improving your enterprise security.

AI is both a boon and a bane for IT security

Next week I am giving a speech at the Inside AI/LIVE event in San Francisco. I have been working for Inside.com for nearly three years, producing a daily email newsletter on infosec topics. The speech will cover the current trends in how AI is both the bane and the boon of IT security. In my talk, I will point to some of the innovators in this space that I have found in my travels. I thought I would touch on what I will be talking about here.

Usually, when we first hear about AI, we tend to go towards what I call the “Skynet scenario.” For those of you who haven’t seen any of the Terminator movies, this is that point in the future where the machines take over and kill all of the humans, and we are left with Arnold-as-robot and Kyle Reese to save us all from extinction. That isn’t a great place to start thinking about the relationship between AI and security to be sure.

Certainly, we have heard about many of the more recent notable AI fails, such as the gender bias of Amazon’s AI-based HR recruiting tool, the self-driving Uber car that killed a pedestrian, and the time Google Photos confused a skier with a mountain peak. But we need to get beyond these scenarios.

Perhaps a better place to start is to understand the workflow of machine learning (ML). Here we see that AI isn’t all that well suited to infosec. Why? Because the typical ML process tries to collect data, build an algorithm to model something that we think we know, and then use the model to predict some outcomes. That might work well for certain situations, but the infosec world is far too chaotic and too reliant on human interpretation of the data to work well with AI techniques.
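That collect-model-predict loop can be sketched in a few lines. This is purely illustrative, using invented feature vectors (say, connection count and bytes sent per network flow) and a trivial nearest-centroid “model” rather than anything a real security product would ship:

```python
# Step 1: collect labeled data (made-up feature vectors).
benign    = [(2, 10), (3, 12), (2, 11)]
malicious = [(40, 300), (38, 290), (42, 310)]

# Step 2: build a model -- here, just the mean point of each class.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

model = {"benign": centroid(benign), "malicious": centroid(malicious)}

# Step 3: predict by assigning a new sample to its nearest centroid.
def classify(sample, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

print(classify((41, 305), model))  # → malicious
```

The sketch also shows the weakness: the model can only recognize patterns resembling its training data, which is exactly why a chaotic, adversarial domain like infosec strains this workflow.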

On top of this, the world of malware is undergoing a major transformation these days. Hackers are moving from being mere nuisances like script kiddies to professional criminals who are interested in making money from their exploits. Malware is getting more complex, and the hackers are getting better at hiding their craft so that they can live longer inside our corporate networks and do more targeted damage. Adversaries are moving away from “spray and pray,” where they just blanket the globe with malware, and towards “target and stay,” where they are more selective and parsimonious with their attacks. This parsimony also helps hide them from detection.

One issue for using AI techniques is that malware attribution is hard, something that I wrote about in a blog post for IBM’s Security Intelligence last year. For example, the infamous WannaCry ransomware was eventually attributed to the North Koreans, although at first it seemed to come from Chinese agents. It took a lot of research to figure this out, and one tell was the metadata in the code which showed the Korean time zone. AI can be more of a hindrance than help sometimes.

Another problem for security-related AI is that oftentimes developers don’t think about security until they have written their code and they are in their testing phase. Certainly, security needs to be top-of-mind. This post makes some solid reasons why this needs to change.

In the past several years, Amazon, Google, (most recently) Microsoft and many other IaaS players have come out with ML toolkits that are pretty impressive. For a few bucks a month, you can rent a very capable server and build your own ML models for a wide variety of circumstances. That assumes that a) you know what you are doing and b) you have a solid-enough dataset that you can use for creating your model. Neither of those circumstances may match your mix of skills or situation.

So there is some hope in the AI/security space. Here are a few links to vendors that are trying to make better products using AI techniques.

First is a group that is using what is called homomorphic encryption. This solves the problem where you want to share different pieces of the same database with different data owners yet keep the entire database encrypted, so that no one can inadvertently compromise it. This technology has been the darling of academia for many years, but there are now several startups in the field, including ICE Cybersecurity, Duality Technologies’ SecurePlus, Enveil’s ZeroReveal, Capnion’s Ghost PII, and Preveil’s email and file security solutions. A good example of this is the San Diego-based Community Information Exchange, where multiple social service agencies can share data on their clients without revealing personal information.
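To make the homomorphic idea concrete: textbook RSA with no padding happens to be multiplicatively homomorphic, meaning you can multiply two ciphertexts and the result decrypts to the product of the plaintexts. This toy Python demo (with a deliberately tiny, insecure classroom key) only illustrates the property; the startups above use far more sophisticated schemes:

```python
# Tiny classroom RSA key (p=61, q=53); insecure, for illustration only.
n, e, d = 3233, 17, 2753

def enc(m):
    return pow(m, e, n)  # textbook RSA encryption: m^e mod n

def dec(c):
    return pow(c, d, n)  # textbook RSA decryption: c^d mod n

a, b = 4, 5
# Multiply the ciphertexts without ever decrypting them...
product_ct = (enc(a) * enc(b)) % n
# ...and the decrypted result is the product of the plaintexts.
print(dec(product_ct))  # → 20
```

That is the essence of computing on encrypted data: a third party could perform the multiplication without ever seeing `a` or `b`, which is the trick that lets multiple agencies work over shared records without exposing them.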

Google’s Chronicle business has a new security tool it calls Backstory. While still in limited release, it has the ability to ingest a great deal of data from your security logs and find patterns of compromise. In several cases, it identified intrusions that happened years ago for its clients – intrusions that had not been detected by other means. That is showing the power of AI for good!

Coinbase is using ML techniques to detect fraudulent users, such as those that upload fake IDs to try to open accounts. It matches patterns in these uploads, such as if someone uses a fake photo or makes a copy of someone else’s ID.  And Cybraics has developed an AI engine that can be used to scan for vulnerabilities across your network.

Probably one of the more interesting AI/security applications is being developed by ZeroEyes. While not quite in production, it will detect weapons in near-real time, hopefully identifying someone before they commit a crime. This isn’t too far afield from the thesis of Minority Report’s pre-crime activities. We have certainly come a long way from those early Skynet days.

You can view the slide deck for my presentation at the conference below:

 

Endgame white paper: How to replace your AV and move into EPP

The nature of anti-virus software has radically changed since the first pieces of malware invaded the PC world back in the 1980s. As the world has become more connected and more mobile, the criminals behind malware have become more sophisticated and gotten better at targeting their victims with various ploys. This guide will take you through this historical context before setting out the reasons why it is time to replace AV with newer security controls that offer stronger protection, delivered at a lower cost and with less of a demand on skilled security operations staff to manage and deploy. In this white paper I co-wrote for Endgame Inc., I’ll show you what is happening with malware development, why you should switch to a more modern endpoint protection platform (EPP) to protect your network from it, and how to do so successfully.

Security Intelligence: How to Defend Your Organization Against Fileless Malware Attacks

The threat of fileless malware and its potential to harm enterprises is growing. Fileless malware leverages what threat actors call “living off the land,” meaning the malware uses code that already exists on the average Windows computer. When you think about the modern Windows setup, this is a lot of code: PowerShell, Windows Management Instrumentation (WMI), Visual Basic (VB), Windows Registry keys that have actionable data, the .NET framework, etc. Malware doesn’t have to drop a file to use these programs for bad intentions.
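One common defensive tactic is to hunt for living-off-the-land indicators in process command lines pulled from your logs. This Python sketch uses a few illustrative (and far from exhaustive) patterns; a real detection rule set would be much richer and tuned against false positives:

```python
import re

# Illustrative indicators of "living off the land" abuse in process
# command lines: encoded PowerShell, download cradles, WMI process
# creation. Not a complete or production-grade rule set.
INDICATORS = [
    r"powershell.*-enc(odedcommand)?\b",
    r"downloadstring\s*\(",
    r"wmic\s+process\s+call\s+create",
]

def suspicious(cmdline):
    """Return True if a command line matches any indicator pattern."""
    line = cmdline.lower()
    return any(re.search(pat, line) for pat in INDICATORS)

print(suspicious("powershell.exe -NoP -Enc SQBFAFgA"))  # → True
print(suspicious("notepad.exe report.txt"))             # → False
```

Since fileless malware leaves no binary for an AV scanner to hash, watching *how* legitimate tools are invoked, rather than *what* files land on disk, is the natural place for detection to shift.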

Given this growing threat, I provide several tips on what security teams can do to help defend their organizations against these attacks in my latest post for IBM’s Security Intelligence blog.

How to protect your mobile apps using Zimperium’s zIAP SDK (screencast)

If you are looking for a way to protect your Android and iOS apps from malware and other mobile threats, you should look at Zimperium’s In-App Protection (zIAP) SDK. It supports both Apple Xcode for iOS apps and Android Studio for Android apps. One of the advantages of zIAP is that you don’t have to redeploy your code, because changes are updated dynamically at runtime and automatically pushed to your devices. zIAP ensures that mobile applications remain safe from cyber attacks by providing immediate device risk assessments and threat alerts. Organizations can minimize exposure of their sensitive data and prevent their customers’ and partners’ data from being jeopardized by malicious and fraudulent activity. I tested the product in April 2019.

Pricing starts at $12,000 per year for 10K Monthly Active Devices, with steep quantity discounts available.

https://go.zimperium.com/david-strom-ziap

Keywords: strom, screencast review, webinformant, zimperium, mobile security, app security, Android security, iOS security

RSA blog: Third-party risk is the soft underbelly of cybersecurity

In the past several weeks, we have seen the effects of ignoring the risks of our third-party vendors. They can quickly put your enterprise in peril, as this story about a third-party provider to the airline industry shows. In this case, a back-end database supplier grounded scheduled flights because of a computer outage. And then there is this story about how two third-party providers to Facebook exposed more than 500M records in unsecured online databases. These are just the more notable incidents. Hackers are getting more clever about how and when they attack us, and often our third-party apps and vendors are the soft underbelly of our cyber security. Witness the various attacks on point-of-sale vendors, back-end database vendors, payment providers, ecommerce plug-ins and the like. And then there are system failures, such as what happened to the airline databases.

This isn’t a new problem, to be sure, but we have to assume these attacks and system failures are going to happen more frequently, and expose more data. A study of 40 incident responders found that half of them have seen attacks that target their entire supply chain, and more attackers are trying to move laterally across an enterprise network to find better opportunities to ply their trade.

We mention the issue with third-party risk management in this post for the RSAC blog. I want to follow up where this post leaves off and talk about more specific and actionable suggestions to protect your third-party flank. The problem is that we are too trusting when it comes to these third-party apps. We have developed complex infrastructures that combine the best of the online and on-premises worlds, with a nice sprinkling of browser-based interfaces riding over all of this. That is great for building some powerful apps, but not as great when it comes to making sure that all of this gear is properly protected.

Here are a few practical things to check as you formulate your own strategies for mitigating these third-party risks:

– Vet your providers as if they were your own employees. As this post describes in more detail, you should do due diligence on any prospective vendor, partner or third-party supplier. This is helpful not just for cyber security, but also for compliance reasons. For example, when did your prospective supplier perform its last pen test or business continuity exercise? Speaking of which, does your breach response plan take into account the posture of all of your key suppliers?

– With any of your security tools, don’t just set and forget, but regularly examine your third-party apps to see if any of them have had recent exploits or are behind on a software version. NotPetya depended on this lazy upgrade behavior: its authors sent out their malware by taking advantage of a bug that Microsoft had patched only a few weeks before it was exploited. There are numerous other examples. Hackers are paying attention to the timing of these bugs and zero-days, and that means you have to be ever-vigilant with your upgrades. Your network is only as good as everyone’s equipment, whether you own it, connect to its servers or just share data with it. When you purchase a new third-party app, make sure you understand your supplier’s update policies and audit this equipment regularly.
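One simple way to operationalize that vigilance is an inventory check comparing deployed component versions against the minimum patched versions you track. This Python sketch uses hypothetical component names and version tuples; in practice the inputs would come from your asset inventory and a vulnerability feed:

```python
# Hypothetical inventory: each third-party component's deployed
# version vs. the minimum patched version you track for it.
deployed = {"pos-terminal": (2, 1, 0), "db-connector": (1, 4, 2)}
patched  = {"pos-terminal": (2, 3, 1), "db-connector": (1, 4, 2)}

def lagging(installed, baseline):
    """List components running below the patched baseline version."""
    return sorted(
        name for name, ver in installed.items()
        if ver < baseline.get(name, ver)  # tuples compare element-wise
    )

print(lagging(deployed, patched))  # → ['pos-terminal']
```

Running something like this on a schedule turns “be ever-vigilant with your upgrades” from a slogan into a recurring report you can hand to the owner of each third-party relationship.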

– Speaking of audits, you should regularly perform data inventories too. By this I mean cataloging what data sets your third parties have access to or store on their own computing infrastructures. Part of this assessment is understanding what data from your third-party provider is available online and how it is protected at rest and in motion. This will come in handy if any of your third-party providers is hit with a breach, and can minimize the blowback to your own computing environment.

– Segment your network and put each vendor’s servers behind its own firewall. VLANs are not anything new, but they are the first line of defense when trying to stop a hacker from roaming around your network at will. A number of third-party attacks happened because the attackers were able to move about an internal flat network from their point of entry until they found a victim’s PC that they could leverage for a breach.

– Review your user access policies and know who has administrative and other more privileged rights. This becomes more of an issue when you have numerous third parties attached to your network, and you may not know that a trusted employee of theirs has been terminated but still has an account with those higher-level access rights. Indeed, as I was writing this post, I got an email from one of my co-workers who told me that I had admin rights to our CMS. I can’t remember why I had these rights, but I am glad that someone was checking.

–  Have good management techniques on third-party devices and treat them as if they were your own. This is especially important for internet-connected devices.  As our networks become more complex, it becomes harder to lock every endpoint down. This means a network-connected printer at one of your suppliers could become infected and used to move onto your own network.

– Recognize that these attacks could be a symptom of a bigger problem with your security, and structure your protection accordingly. Have you lagged behind in keeping up with your network documentation? Do you vet all your suppliers consistently? Do you have representatives from your suppliers working on premises – and do you know what physical access restrictions they have?

– Finally, know who stores your secrets (keys, certs), where they put them and how they protect them. Just because a third party uses encryption doesn’t mean they are using it effectively, or universally. The recent Facebook incident, in which plaintext passwords were discovered unprotected, is a good case in point.

As you can see, managing third-party risks isn’t anything fancy, but just the basic block-and-tackling of issues that we have known about for decades. If you follow these best practices, you will go a long way towards protecting your business.

CSOonline: How to improve container security

Gartner has named container security one of its top ten concerns for this year, so it might be time to take a closer look at this issue and figure out a solid security implementation plan. While containers have been around for a decade, they are becoming increasingly popular because of their lightweight and reusable code, flexible features and lower development cost. In this post for CSOonline, I’ll look at the kinds of tools needed to secure the devops/build environment, tools for the containers themselves, and tools for monitoring/auditing/compliance purposes. Naturally, no single tool will do everything.