RSA blog: Managing the security transition to the truly distributed enterprise

As your workforce spreads across the planet, you now have to support a completely new collection of networks, apps, and endpoints. We all know that this increased attack surface area is more difficult to manage. Part of the challenge is that you have to create new standards, policies, and so forth to protect your enterprise and reduce risk as you make this transformation to become a more distributed company. In this blog post, I will examine some of the things to look out for. My thesis is that you want to match the risks with the approaches, so that you focus on the optimal security improvements as you make this transition to a distributed staffing model.

There are two broad-brush items to think about: one that has nothing to do with technology, and one that does. Let’s take the first item. Regardless of what technologies we deploy, the way we choose them is really critical. Your enterprise doesn’t have to be very large before you have different stakeholders and decision-makers influencing what gets bought and when. This isn’t exclusively a technology decision per se – but it has huge security and risk implications. If you buy the wrong gear, you don’t do yourself any favors and can increase your corporate risk profile rather than reduce it. The last thing any of us need is different departments running their own incompatible security tools. This stakeholder issue is something that I spoke about in my last blog post on managing third-party risk.

Why is this important now? Certainly, we have had “shadow IT” departments making independent computer purchases almost since corporations first began buying PCs in the early 1980s. But unlike that era, when the big worry was buying Compaq vs. IBM, today’s shadow IT carries more serious implications and greater risk because of the extreme connectivity of the average business. One weak link in your infrastructure, one infected Android phone, and your risk can quickly escalate.

But there is another factor in the technology choice process: getting security right is hard. It isn’t just a matter of buying something off the shelf; more likely you will need several items, which means you have to fit them together in the right way to provide the most protection and to address all the various vectors of compromise and risk. This makes sense, because as the attack surface area increases, we add technologies to our defensive portfolio to match and step up our game. But here’s the catch: what we choose is just as important as how we choose it.

Assuming you can get both of these factors under control, let’s next talk about some of the actual technology-related issues. They roughly fall into three categories: authentication/access, endpoint protection and threat detection/event management.

Authentication, identity and access rights management.  Most of us immediately think about this class of problems when it comes to reducing risk, and certainly there are a boatload of tools to help us do so. For example, you might want a tool to enable single sign-on, so that you can reduce password fatigue and also improve on- and off-boarding of employees. No arguments there.

But before you go out and buy one or more of these products, you might want to understand how out of date your Active Directory is. By this, I mean quantify the level of effort you will need to make it accurately represent the current state of your users and network resources. The Global Risk Report from Varonis found that more than half of their customers had more than a thousand stale user accounts that were never removed from the books. That is a lot of housecleaning before any authentication mechanism is going to be useful. Clearly, many of us need to improve our offboarding processes to ensure that access rights are terminated at the appropriate moment – and not six months down the road, after an attacker has seized control of a terminated user’s still-active account.
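If you want to quantify that housecleaning, a script can make a first pass. Here is a minimal sketch, assuming a Python environment with the open source ldap3 library; the server address, bind credentials and base DN are placeholders you would replace with your own domain values:

```python
# Minimal sketch: flag AD accounts with no logon in the past 90 days.
# Assumes the ldap3 library; host, bind DN, password and base DN are placeholders.
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, ALL, SUBTREE

STALE_DAYS = 90

# lastLogonTimestamp is a Windows FILETIME: 100-nanosecond intervals
# counted from January 1, 1601 (UTC).
cutoff = datetime.now(timezone.utc) - timedelta(days=STALE_DAYS)
epoch_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)
filetime_cutoff = int((cutoff - epoch_1601).total_seconds() * 10_000_000)

server = Server("ldap://dc.example.com", get_info=ALL)
conn = Connection(server, "CN=auditor,OU=svc,DC=example,DC=com",
                  "change-me", auto_bind=True)

# Find user accounts whose last logon predates the cutoff.
conn.search("DC=example,DC=com",
            f"(&(objectClass=user)(lastLogonTimestamp<={filetime_cutoff}))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "lastLogonTimestamp"])

for entry in conn.entries:
    print(entry.sAMAccountName, entry.lastLogonTimestamp)
```

A list like this is only the start, of course: each hit still needs a human to decide whether the account is truly dead or just belongs to someone on leave.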

This level of accuracy means that organizations will also have to match identity assurance mechanisms with the right levels of risk. Otherwise, you aren’t protecting the right things with the appropriate level of security. You’ll want to answer questions such as:

  • Do you know where your most critical business assets are and how to protect them properly?
  • How will your third-party partners and others outside your immediate employ authenticate themselves? Will they need (or should they use) a different system from your full-time staff?
  • Can you audit your overall portfolio of access rights for devices and corporate computing resources to ensure they are appropriate and offer the best current context? At many firms, everyone has admin access to every network share: clearly, that is a very risky path to take.

Endpoint protection. This topic understandably gets a lot of attention, especially these days as threats target vulnerabilities of specific endpoints such as Windows and Android devices. Back in the days when everyone worked next to each other in a few physical office locations, it was relatively easy to set this up and effectively screen against incoming malware. But as our corporate empires have spread around the world, it is harder to do. Many endpoint products were not designed for the kinds of latencies that are typical across wide-area links, for example. Or they can’t produce warnings in near-real-time to be effective. Or they can’t protect endpoints effectively without pre-installed agents.

That is bad enough, but there is another complicating factor: few products do equally well at protecting mobile devices, PCs and endpoints running embedded systems. You will often need multiple products to cover your complete endpoint collection. As malware writers get smarter at hiding their activities in plain sight, we must do a better job of figuring out when they have compromised an endpoint and shutting them down. How these multiple products play together can introduce more risk.

Threat detection and event management. Our third challenge for the distributed workforce is being able to detect and deter abuses and attacks in a timely and efficient manner across your entire infrastructure. This is much harder, given that there is no longer any hard division between corporate-owned devices and servers and non-owned devices, including personal endpoints and cloud workloads. Remember when we used to refer to “bring your own device”? That seems so last year now: most corporations just assume that remote workers will use whatever they already have. That places a heavier burden on security teams, which must be able to detect and prevent threats that could originate on these devices.

The heterogeneous device portfolios of the current era also place a bigger burden – and higher risk – on watching and interpreting the various security event logs. If malware has touched any of these devices, something will appear in a log entry, which means security analysts need the right kinds of automated tools to alert them to any anomalies.
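As a flavor of what even a simple automated log watcher looks like, here is a minimal sketch in Python that counts failed SSH logins per source address in a standard Linux auth log; the path, pattern and alert threshold are illustrative placeholders:

```python
# Minimal sketch: alert on source IPs with repeated failed SSH logins.
# Log path, regex and threshold are illustrative placeholders.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"
THRESHOLD = 10

failed = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
counts = Counter()

with open(LOG_PATH) as log:
    for line in log:
        match = failed.search(line)
        if match:
            counts[match.group(1)] += 1

for ip, hits in counts.most_common():
    if hits >= THRESHOLD:
        print(f"ALERT: {hits} failed logins from {ip}")
```

Real SIEM tools do far more, but the principle is the same: the anomaly is in the logs, if something is watching for it.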

As I have said before, managing risk isn’t a one-and-done decision, but a continuous journey. I hope the above items will stimulate your own thinking about the various touchpoints you’ll need to consider for your own environment as you make your journey towards improving your enterprise security.

AI is both a boon and a bane for IT security

Next week I am giving a speech at the Inside AI/LIVE event in San Francisco. I have been working for Inside.com for nearly three years, producing a daily email newsletter on infosec topics. The speech will cover the current trends in how AI is both the bane and the boon of IT security. In my talk, I will point to some of the innovators in this space that I have found in my travels. I thought I would touch on what I will be talking about here.

Usually, when we first hear about AI, we tend to go towards what I call the “Skynet scenario.” For those of you who haven’t seen any of the Terminator movies, this is that point in the future where the machines take over and kill all of the humans, and we are left with Arnold-as-robot and Kyle Reese to save us all from extinction. That isn’t a great place to start thinking about the relationship between AI and security to be sure.

Certainly, we have heard about many of the more notable recent AI fails, such as the gender bias of Amazon’s AI-based HR recruiting tool, the self-driving Uber car that killed a pedestrian, and Google Photos confusing a skier with a mountain peak. But we need to get beyond these scenarios.

Perhaps a better place to start is to understand the workflow of machine learning (ML). Here we see that AI isn’t all that well suited to infosec. Why? Because the typical ML process collects data, builds an algorithm to model something that we think we know, and then uses the model to predict outcomes. That might work well for certain situations, but the infosec world is far too chaotic and too reliant on human interpretation of the data to work well with AI techniques.
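To make that collect-model-predict workflow concrete, here is a minimal sketch using scikit-learn; the features and labels are invented placeholders, not a real detection model:

```python
# Minimal sketch of the typical ML workflow: collect data, fit, predict.
# The features and labels below are invented placeholders.
from sklearn.ensemble import RandomForestClassifier

# "Collected" data: [bytes_sent, failed_logins, off_hours_activity]
X = [[1200, 0, 0], [90000, 12, 1], [800, 1, 0], [150000, 30, 1]]
y = [0, 1, 0, 1]   # 0 = benign, 1 = suspicious (hand-labeled)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Predict on a new, unseen event.
print(model.predict([[50000, 8, 1]]))
```

The sketch also shows the weakness: the model is only as good as its labels and the assumption that tomorrow’s attacks will look like yesterday’s data.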

On top of this, the world of malware is undergoing a major transformation these days. Hackers are moving from being mere nuisances like script kiddies to professional criminals who are interested in making money from their exploits. Malware is getting more complex, and the hackers are getting better at hiding their craft so that they can live longer inside our corporate networks and do more targeted damage. Adversaries are moving away from “spray and pray,” where they just blanket the globe with malware, and towards “target and stay,” where they are more selective and parsimonious with their attacks. This also helps hide them from detection.

One issue for using AI techniques is that malware attribution is hard, something that I wrote about in a blog post for IBM’s Security Intelligence last year. For example, the infamous WannaCry ransomware was eventually attributed to the North Koreans, although at first it seemed to come from Chinese agents. It took a lot of research to figure this out, and one tell was the metadata in the code which showed the Korean time zone. AI can be more of a hindrance than help sometimes.

Another problem for security-related AI is that oftentimes developers don’t think about security until they have written their code and are in their testing phase. Certainly, security needs to be top-of-mind. This post makes a solid case for why this needs to change.

In the past several years, Amazon, Google and (most recently) Microsoft, among many other IaaS players, have come out with ML toolkits that are pretty impressive. For a few bucks a month, you can rent a very capable server and build your own ML models for a wide variety of circumstances. That assumes that a) you know what you are doing and b) you have a solid-enough dataset to use for creating your model. Neither of those circumstances may match your mix of skills or situation.

Still, there is some hope in the AI/security space. Here are a few links to vendors that are trying to make better products using AI techniques.

First is a group that is using what is called homomorphic encryption. This solves the problem where you want to share different pieces of the same database with different data owners, yet encrypt the entire database so that no one can inadvertently compromise things. The technology has been the darling of academia for many years, but there are now several startups in the field, including ICE Cybersecurity, Duality Technologies’ SecurePlus, Enveil’s ZeroReveal, Capnion’s Ghost PII, and Preveil’s email and file security solutions. A good example of this is the San Diego-based Community Information Exchange, where multiple social service agencies can share data on their clients without revealing personal information.
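As a taste of what computing on encrypted data looks like, here is a minimal sketch using the open source python-paillier (phe) library. Paillier is only partially homomorphic – it supports adding ciphertexts and multiplying them by plaintext constants – which is simpler than what the fully homomorphic vendors above offer, but the idea is the same:

```python
# Minimal sketch: arithmetic on encrypted values with the Paillier scheme.
# phe supports adding ciphertexts and multiplying by plaintext constants.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Two agencies encrypt their client counts before sharing them.
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# A partner aggregates the shared data without ever decrypting it.
enc_total = enc_a + enc_b

# Only the key holder can read the result.
print(private_key.decrypt(enc_total))   # 42
```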

Google’s Chronicle business has a new security tool it calls Backstory. While still in limited release, it has the ability to ingest a great deal of data from your security logs and find patterns of compromise. In several cases, it identified intrusions that happened years ago for its clients – intrusions that had not been detected by other means. That is showing the power of AI for good!

Coinbase is using ML techniques to detect fraudulent users, such as those that upload fake IDs to try to open accounts. It matches patterns in these uploads, such as if someone uses a fake photo or makes a copy of someone else’s ID.  And Cybraics has developed an AI engine that can be used to scan for vulnerabilities across your network.

Probably one of the more interesting AI/security applications is being developed by ZeroEyes. While not quite in production, it will detect weapons in near-real time, hopefully identifying someone before they commit a crime. This isn’t too far afield from the thesis of Minority Report’s pre-crime activities. We have certainly come a long way from those early Skynet days.

You can view the slide deck for my presentation at the conference below:

 

Sometimes, the tin-foil hat types are right

A recent story in the NY Times caught my attention. It is about a block in the Cleveland area where wireless car key fobs and garage door openers stopped working. The block is near a NASA research facility, so that was an obvious first suspect. But it wasn’t the cause. The actual source of the problem turned out to be an inventor who was flooding the radio spectrum at the same frequency the fobs use: 315 MHz. Once the radio emitter was turned off, the fobs and garage doors started working again. The inventor’s radio signal was so strong it was preventing anything else from transmitting on that frequency, and he had no idea that he was the source of the interference.

This story reminds me of an experience that I had back in 1991 or so. At the time, I was the editor-in-chief of Network Computing magazine for what was then called CMP. It was a fun and challenging job, and one day I got a call from one of my readers, who was the IT manager for the American Red Cross headquarters in DC. This was Jon Arnold, who spent a long career in IT before sadly dying of a heart attack several years ago. Turns out they had a chapter in Norfolk, Va. that was having networking issues. Their office was a small one, of about 25 or so staffers as I recall. Every day their network would start slowing down and then eventually go kaput for several hours. It happened at a different time each day, so it wasn’t the Cleaning Person Problem (I will get to that in a moment). It would come back online sometimes by itself, sometimes with a server reboot. The IT manager asked if I would be willing to lend a hand, and the first person I thought of to help me was Bill Alderson.

I first met Bill when he was a young engineer for Network General, which made the fabulous Sniffer protocol analyzer. Many of you who are not from that generation may not realize what this tool was or how big a deal it was to have a device that could record packet traffic and examine it bit by bit. Today we have open source tools that do the same thing for free, but back then the Sniffer cost four or five figures and came with a great deal of training. Bill cut his teeth on this product and now has his own company, HopZero, which has an interesting way to protect your servers by restricting their hop count.
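The hop-count idea is easy to illustrate. Here is a minimal sketch of the underlying building block: capping the IP time-to-live on a socket so its packets expire after a handful of router hops. HopZero’s actual product is considerably more sophisticated than this.

```python
# Minimal sketch: cap how many router hops a connection's packets survive
# by lowering the IP time-to-live. Packets expire past the limit, so the
# conversation physically can't reach far across the internet.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 4)   # at most 4 hops

sock.connect(("192.0.2.10", 443))   # placeholder address within 4 hops
```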

Bill and I first met back in 1989 when I worked at PC Week and we wanted to test the first local area network topologies. We set up three networks, running Ethernet, Token Ring and Arcnet in a networked classroom at UCLA during spring break. All were connected to the same Novell Netware server. Ethernet won the day (as you can see in this copy of the story), and the other topologies died of natural causes. But I digress.

Jon, Bill and I flew to Norfolk and spent a day with the Red Cross staff to try to figure out what was happening with their Novell network. We did all sorts of packet captures that weren’t conclusive. Our first thought was that it had to be something wrong with the server, but we didn’t see anything wrong. Our second thought was more insidious. Being in Norfolk, we were directly down the road from the naval base (you could say that about much of the town, it is a big base). We actually managed to get through to the base commander to find out if their radar was active when they were coming into port. Imagine making that phone call these days in our post-9/11 world? Anyway, the answer we got was negative. Eventually, after hours of shooting down various theories, we figured out the cause of the problem was a wonky network adapter card in someone’s PC. It usually operated just well enough that it didn’t interfere with the network most of the time. Once we replaced the card, everything went swimmingly, and we could put away our tin-foil hats.

Okay, so what is the Cleaning Person Problem? This sounds like folklore, but another reader told me about a problem they had on their network years ago. The reader was periodically disconnected from his network at the same time every night. He was one of the few people online at the time in their office, so it wasn’t like there was high traffic across the network. Eventually, after several evenings he figured out the problem: The cleaning crew was vacuuming the rug in the server room, and the network cable to the server was being run over by the vacuum. Because the cable wasn’t properly crimped and because it was run under the carpet (who knows why this was done), it was shaken just loose enough to disconnect it from the server. When the crew was finished, the cable would operate just fine. Thankfully, they made a better cable and ran it elsewhere where no one could step on it.

The Cleveland folks who had their car fobs disabled actually had it easy: the fault was a very deliberate emitter that – while initially difficult to trace – was a binary, on-or-off cause. Their challenge was that not every car fob and garage door was affected. The two scenarios that I mention here were not so cut-and-dried, which made troubleshooting them more difficult. So keep these stories in mind when you are troubleshooting your next computer or networking problem, and don’t be so quick to blame user error. It could be something not as obvious as an odd radio transmission.

FIR B2B podcast #120: Voice search, a survey rant and great tips for engaging mobile visitors

Paul Gillin kicks off with a short rant about the lack of rigor and news value of surveys, and how marketers should spend more time vetting their results to determine what questions/objections they’re likely to get. With the release next week of the Verizon Data Breach Report setting a high bar, it is a timely topic.

This week we saw a tweet from Chase bank that not only fell flat but incurred many folks’ antipathy. It was so tone deaf that it was hard to even understand how the bank could have put it out there. This incident, combined with the offensive NY Times International edition political cartoon that was published last week,  reinforces the need to be more careful about what your brand shares socially.

Speaking of social shares, this article by Bloomberg’s BHive research outfit has a lot to say about ways they found to increase the sharing of their news articles from mobile devices. As more and more news is read on these devices, content providers have to do a better job of not cluttering up small screens with extraneous ads and other diversions. Bloomberg was able to improve engagement significantly just by making a few simple tweaks to their stories. One key point: they interviewed actual readers.

The Workamajig blog’s post on how Voice Search is Changing B2B Marketing (And What You Can Do About It) is well worth your time. Consider that voice searches are by definition conversational. People don’t speak in keywords. They ask “What’s the height of the Empire State building?” not “Empire State building height”. Voice opens up new opportunities for content marketing. Republishing your content as an Alexa skill, for instance, can bring you a whole new set of listeners. In fact, if you look at the best reviewed Business & Finance skills on the Alexa store right now, you’ll see content-focused skills dominate the list. David met one vendor called VoiceXP that can help you create your own voice apps. Clearly this will grow in importance in the near future.

Finally, we note this analysis by our colleague Mike Vizard in the Barracuda blog about how the Russian hacking of the DNC back in 2016 went down, as documented in the Mueller report. It all started with a spear phishing email. You have been warned. Listen to our 17-minute episode here:

Endgame white paper: How to replace your AV and move into EPP

The nature of anti-virus software has radically changed since the first pieces of malware invaded the PC world back in the 1980s. As the world has become more connected and more mobile, the criminals behind malware have become more sophisticated and gotten better at targeting their victims with various ploys. This guide will take you through that historical context before setting out the reasons why it is time to replace AV with newer security controls that offer stronger protection, delivered at a lower cost and with less demand for skilled security operations staff to deploy and manage. In this white paper I co-wrote for Endgame Inc., I’ll show you what is happening with malware development, why you should switch to a more modern endpoint protection platform (EPP), and how to do it successfully.

Thoughts on being a digital nomad

When the first personal computers were purchased by businesses back in the early 1980s, I was a freshly-minted engineer that was working in Washington, DC. I was trying to change the world, like so many other 20-somethings that were living there, working in and around the federal government. Little did I know that my love affair with PCs would become my career, and that they would change the world on their own, without much effort on my part.

I was thinking about this arc of my own humble life when considering the concept of digital nomads: those folks who have embraced the ubiquitous technology that has infused our lives over the past 40 years and that we all now take for granted. Most of you inherently know what this means: the ability to travel and work anywhere in the world, as if you were sitting at your desk. Essentially, your desk becomes wherever you are: thanks to Wifi, the cloud and a truckload of communications technologies, you can be present globally.

While I am not one of them, I can certainly understand the appeal. In this edition of Web Informant, I want to highlight some of the folks who have interesting lives as digital nomads.

The concept of nomadic technology certainly has changed since I first started reporting for PC Week. Back then, the payphone was my go-to tool. Actually, let me revise that: Having a phone charge card was the killer app that enabled me to make calls without having any coins. Now we use a bunch of smartphone apps and the appropriate SIM card for our nomad connections. Of course, things aren’t always that easy, but still it is pretty amazing how far we have come since those early days.

My earliest memory of the prototype nomadic lifestyle is Steve Roberts. He is definitely into hardware, and his first experiment was to equip a recumbent bike with all sorts of tech that enabled him to ride 17,000 miles around the country and report on his travels. He started in the 1980s, just as the PC was taking hold, and the bike is now in the Silicon Valley Computer History Museum. Back then, you had to have a strong back and a lot of knowledge to cobble together the tools to report on the road. I also consider him one of the early “makers,” as he has what has to be the most well-equipped travel workshop, which he now uses to build his hi-tech boats. He is still active in his nomadness, just from dockside.

Another deep resource on nomadic tips and tricks comes from Jodi Ettenberg of LegalNomads. As you might assume from the name, she is a former corporate lawyer who took to the road back in 2008 and started writing about food. But then her lawyerly training took over and she dug deeper. She has a very extensively curated page of meta-things such as international visa requirements, the philosophical differences and motivations among nomads, and links to numerous discussion forums and other nomads that you can follow. Sadly, she is no longer traveling due to health issues.

A few years ago, I came across Nikki and Jason Wynn, a 30-something couple that has been on the road since 2011. They initially sold their Dallas home and contents and bought an RV. They drove around the country for six years and then traded their RV for a sailboat. They are now somewhere in Polynesia, taking on the world.  Their YouTube channel gets 200K views with a wide range of upbeat videos that show lots of hands-on insight into the gear they use to stay connected when in the middle of an ocean, along with how they keep fresh water and live off the grid with all their electronics. The videos also have the usual travelogues about what they are up to and where they are.

Boating is also a big motivation behind Cruising the Cut, a solo effort from David Johns, a 50-year-old former British TV journalist. For the past three years, he has been living aboard his narrowboat and navigating the extensive British canal network. I found him appealing: a combination of understated British irony (think of some of the characters played by John Cleese) with some boating-flavored HGTV “tiny house” design shows thrown in. Johns averages about 60K views across 170 different episodes. He also has an extensive curated list linking to other narrowboaters documenting their nomadic existence, if you want to take a deeper dive into this subculture.

Let’s move on from boating to flying. Kara and Nate Buchanan are another young couple who took to the skies three years ago with a goal of visiting 100 countries. They are currently making their way through the Middle East and have produced more than 500 videos of their exploits for 600K followers. They are all about the people and the food, and are very upbeat (sometimes bordering on the twee) about their adventures. They are also the most transparent about exactly how much money they make from their efforts, providing monthly reports of their expenses and income. If you want help accumulating your own travel miles, they will freely share the best travel credit cards and other tips that they use to get around.

Chris Dodd has put together some very practical tips on how to become a nomadic freelancer, complete with flowcharts on where to find online training for the skills that you lack. This training orientation is something of his specialty, and he provides details on selecting the right coworking space among other things. He has been on the road for the past two years.

Mike Elgan has been writing about tech as long as I have, and now he has turned his nomadic leanings into a viable business. He and his wife Amira run Gastronomad, where they offer foodie tours to satisfy those who can’t afford to go 100% nomadic but still want to travel to interesting places and get off the beaten path. Given his background as a product reviewer, their site has a lot of info on camera choices, among other things.

One perspective you don’t always see online is from Matt Karsten, who started the ExpertVagabond blog back in 2010 and eventually gained millions of followers. Earlier this month, he wrote about quitting traveling due to burnout, and after meeting the woman he would eventually marry. “Trying to juggle a normal work routine when you’re also trying to figure out where to sleep next week just isn’t ideal. Often, I never wrote much about the places I was living because I was too busy catching up with work after months of traveling.” They have moved to LA.

I have just scratched the surface of these nomads’ stories, and have tried to give you a sampling, including a few who are still wandering the planet in search of new adventures. As you can see, some have given this life up and “settled down,” whatever that means. Some travel as couples, others as singles. Some are into the hardware, some are more about learning their craft. You may follow or know of others, or count yourself among them; feel free to share your own stories and recommendations in my blog comments. And good luck if you decide to pursue your own nomadic dream.

FIR B2B podcast #119: Our favorite email newsletter tips

Paul Gillin and I are old hands at email newsletters. Paul had his own for several years and has produced several for his clients. I currently publish two: my own Web Informant, which I have been doing almost weekly since 2003, and Inside Security which is part of a group of newsletters. We share a few tips from our years of experience.

The first is to know your audience and segment them for best results. This post in Marketing Week documents how marketers are segmenting the audiences at a much finer level than they previously did thanks to an explosion in behavioral data from third parties. One bottled water vendor was able to dramatically boost the response rate of its YouTube ads with an email newsletter sliced by 16 different segments. The survey found that behavior and location are the most effective segmentation methods, with the old stalwarts like age and gender being the least effective.

We discuss how to craft your subject line and choose a coherent theme, as well as how to pick the optimal length and number of hyperlinks to include. If you do use links, beware of URL shortening services, since many spam filters block them. There’s also the question of whether to make your newsletters text-only or to go the HTML route. If you choose the latter, be sure to test each newsletter with different browsers and different screen depths. Finally, we cover how to choose the right tool for the mailings. We’ve used a variety of them over the years, and each has different strengths and weaknesses. Some of these topics are mentioned in this piece for Marketing360.

We’d love to hear from you about your favorite email newsletters and tips for creating your own. You can listen to our 16 min. podcast here:

Security Intelligence: How to Defend Your Organization Against Fileless Malware Attacks

The threat of fileless malware and its potential to harm enterprises is growing. Fileless malware leverages what threat actors call “living off the land,” meaning the malware uses code that already exists on the average Windows computer. When you think about the modern Windows setup, this is a lot of code: PowerShell, Windows Management Instrumentation (WMI), Visual Basic (VB), Windows Registry keys that have actionable data, the .NET framework, etc. Malware doesn’t have to drop a file to use these programs for bad intentions.
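One practical defensive habit is to watch how those built-in tools are invoked. Here is a minimal sketch using the cross-platform psutil library to flag PowerShell processes whose command lines carry markers often associated with living-off-the-land abuse; the marker list is illustrative, not a complete detection rule:

```python
# Minimal sketch: flag running PowerShell processes with command-line
# markers commonly seen in living-off-the-land abuse.
# The marker list is illustrative, not exhaustive.
import psutil

SUSPICIOUS_MARKERS = ["-encodedcommand", "-enc ", "downloadstring",
                      "bypass", "-windowstyle hidden"]

for proc in psutil.process_iter(["name", "cmdline"]):
    name = (proc.info["name"] or "").lower()
    if "powershell" not in name:
        continue
    cmdline = " ".join(proc.info["cmdline"] or []).lower()
    hits = [m for m in SUSPICIOUS_MARKERS if m in cmdline]
    if hits:
        print(f"PID {proc.pid} ({name}): suspicious markers {hits}")
```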

Given this growing threat, I provide several tips on what security teams can do to help defend their organizations against these attacks in my latest post for IBM’s Security Intelligence blog.

How to protect your mobile apps using Zimperium’s zIAP SDK (screencast)

If you are looking for a way to protect your Android and iOS apps from malware and other mobile threats, you should look at Zimperium’s In-App Protection (zIAP) SDK. It supports both Apple Xcode for iOS apps and Android Studio for Android apps. One of the advantages of zIAP is that you don’t have to redeploy your code, because changes are updated dynamically at runtime and automatically pushed to your devices. zIAP ensures that mobile applications remain safe from cyber attacks by providing immediate device risk assessments and threat alerts. Organizations can minimize exposure of their sensitive data, and prevent their customers’ and partners’ data from being jeopardized by malicious and fraudulent activity. I tested the product in April 2019.

Pricing starts at $12,000 per year for 10K monthly active devices, with steep quantity discounts available.

https://go.zimperium.com/david-strom-ziap

Keywords: strom, screencast review, webinformant, zimperium, mobile security, app security, Android security, iOS security

RSA blog: Third-party risk is the soft underbelly of cybersecurity

In the past several weeks, we have seen the effects of ignoring the risks of our third-party vendors. They can quickly put your enterprise in peril, as this story about a third-party provider to the airline industry shows. In this case, a back-end database supplier grounded scheduled flights because of a computer outage. And then there is this story about how two of Facebook’s third-party providers exposed more than 500M records in unsecured online databases. These are just the more notable incidents. Hackers are getting more clever about how and when they attack us, and often our third-party apps and vendors are the soft underbelly of our cybersecurity. Witness the various attacks on point-of-sale vendors, back-end database vendors, payment providers and ecommerce plug-ins. And then there are system failures, such as what happened to the airline databases.

This isn’t a new problem, to be sure, but we have to assume these attacks and system failures are going to happen more frequently and expose more data. A study of 40 incident responders found that half of them have seen attacks targeting their entire supply chain, and more attackers are trying to move laterally across an enterprise network to find better opportunities to ply their trade.

We mention the issue with third-party risk management in this post for the RSAC blog. I want to follow up where this post leaves off and talk about more specific and actionable suggestions to protect your third-party flank. The problem is that we are too trusting when it comes to these third-party apps. We have developed complex infrastructures that combine the best of the online and on-premises worlds, with a nice sprinkling of browser-based interfaces riding over all of this. That is great for building some powerful apps, but not as great when it comes to making sure that all of this gear is properly protected.

Here are a few practical things to check as you formulate your own strategies for mitigating these third-party risks:

– Vet your providers as if they were your own employees. As this post describes in more detail, you should do due diligence on any prospective vendor, partner or third-party supplier. This is helpful not just for cybersecurity, but also for compliance reasons. For example, when did your prospective supplier perform its last pen test or business continuity exercise? Speaking of which, does your breach response plan take into account the posture of all of your key suppliers?

– With any of your security tools, don’t just set and forget, but regularly examine your third-party apps to see if any of them have had recent exploits or are behind on a software version. NotPetya depended on this lazy upgrade behavior: its authors spread their malware by taking advantage of a bug that Microsoft had patched only a few weeks before it was exploited. There are numerous other examples. Hackers are paying attention to the timing of these bugs and zero-days, which means you have to be ever-vigilant with your upgrades. Your network is only as good as everyone’s equipment, whether you own it, connect to its servers or just share data with it. When you purchase a new third-party app, make sure you understand your supplier’s update policies, and audit this equipment regularly. (I sketch one small way to automate this kind of version check after this list.)

– Speaking of audits, you should regularly perform data inventories too. By this I mean knowing what data sets your third parties have access to or store on their own computing infrastructure. Part of this assessment is understanding what data from your third-party provider is available online and how it is protected at rest and in motion. This will come in handy if any of your third-party providers is hit with a breach, and can minimize the blowback to your own computing environment.

– Segment your network and put each vendor’s servers behind their own firewall. VLANs aren’t anything new, but they are the first line of defense when trying to stop a hacker from roaming around your network at will. A number of third-party attacks happened because the attackers were able to move about an internal flat network from their point of entry until they found a victim’s PC that they could leverage for a breach.

– Review your user access policies and know who has administrative and other more privileged rights. This becomes more of an issue when you have numerous third parties attached to your network: you may not know that a trusted employee of theirs has been terminated but still has an account with those higher-level access rights. Indeed, as I was writing this post, I got an email from one of my co-workers telling me that I had admin rights to our CMS. I can’t remember why I had those rights, but I am glad that someone was checking.

– Have good management practices for third-party devices and treat them as if they were your own. This is especially important for internet-connected devices. As our networks become more complex, it becomes harder to lock every endpoint down, which means a network-connected printer at one of your suppliers could become infected and be used to move onto your own network.

– Recognize that these attacks could be a symptom of a bigger problem with your security, and structure your protection accordingly. Have you fallen behind on your network documentation? Do you vet all your suppliers consistently? Do you have representatives from your suppliers working on premises – and do you know what physical access restrictions they have?

– Finally, who stores your secrets (keys, certs), and where do they put and protect them? Just because a third party uses encryption doesn’t mean they are using it effectively, or universally. The recent discovery of unprotected plaintext passwords at Facebook is a good case in point.
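As one small, concrete example of the “don’t set and forget” advice above, here is a minimal sketch that compares a pinned list of Python dependencies against the latest releases published on PyPI. It assumes the requests and packaging libraries, and the pin list is a placeholder; it illustrates the habit, not a full patch-management program:

```python
# Minimal sketch: report pinned dependencies that lag the latest PyPI release.
# Assumes the requests and packaging libraries; the pins are placeholders.
import requests
from packaging.version import Version

PINNED = {"requests": "2.28.0", "cryptography": "39.0.0"}

for package, pinned in PINNED.items():
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    latest = resp.json()["info"]["version"]
    if Version(pinned) < Version(latest):
        print(f"{package}: pinned {pinned} is behind latest {latest}")
```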

As you can see, managing third-party risks isn’t anything fancy, just the basic blocking and tackling of issues that we have known about for decades. If you follow these best practices, you will go a long way towards protecting your business.