I spoke to Krishnan Chellakarai about his thoughts. He is currently the Director, IT Security & Privacy at Gilead Sciences and has been a security manager at several biotech firms in the past. One thing he is concerned about is the increasing threats from IoT. He gave me a theoretical example. “What happens if you are reading your emails on your Apple Watch and you click on a phishing link? This could lead to a hacker gaining access to credentials and using them to steal information from your network.” As users bring more Fitbits and other Internet-connected devices into corporations, “every company needs to worry about this threat vector because it is a foot in the door.” This is part of a bigger trend, where “we have less data stored on individual devices, but there is more access” across the corporation. What this means is that there is “less visibility for IT security pros in case of an exploit.”
Certainly, some of the responsibility for keeping a firm’s infrastructure secure has to lie with each individual user. Chellakarai asks whether “people ever look at their Gmail last account activity in the right bottom corner?” Or do we ever click on the security link that pops up when we are signed in to our account from multiple places? This is food for thought. “IT managers need to put some common sense controls in place so they can have better network visibility,” he says. Another example: when was the last time anyone checked their printer firmware or other legacy devices to ensure that they have been brought up to the latest versions? “It is time to stop thinking of security after an app is built, and start thinking about security from the beginning, when you are planning your architecture and building your apps.”
Chellakarai says, “One of the first things I do when I start working for a new company is a data analysis and network baseline, so that I can understand what is going on across my infrastructure. It is so critical to do this, especially when you join a company. I look at policies that aren’t being enforced and other loopholes too. Then I can prioritize and focus on the risks that I find.”
Lenny Zeltser has been teaching security classes at SANS for more than 15 years now and has earned the prestigious GIAC Security Expert professional designation. He is not some empty suit but a hands-on guy who developed REMnux, a Linux toolkit used by malware analysts throughout the world. He is frequently quoted in the security trade press, and recently became VP of Products at Minerva Labs. He spoke to me about his approach to incident response, endpoint protection, and digital forensics.
“I can’t think about malware in the abstract,” he said. “I have to understand it in terms of its physical reality, such as how it injects code into a running process and uses a command and control network. This means I have to play with it to learn about it.”
“Malware has become more elaborate over the past decade,” he said. “It takes more effort to examine it now. Which is interesting, because at its core it hasn’t changed that much. Back a decade or so, bots were using IRC as their command and control channel. Now of course there are many more HTTP- and HTTPS-based connections.”
One interesting trend is that “malware is becoming more defensive, as a way to protect itself from analysis and automated tools such as sandboxes. This makes sense because malware authors want to derive as much value as they can and try to hide from discovery. If a piece of malware sees that it is running on a test machine or inside a VM, it will just shut down or go to sleep.”
Why has he made the recent move to working for a security vendor? “One reason is because I want to use the current characteristics of malware to make better protective products,” he said. Minerva is working on products that try to trick malware into thinking that it is running in a sandbox when it is actually sitting on a user’s PC, as a way to shut down the infection. Clever. “Adversaries are so creative these days. So two can play that game!”
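To make the trick concrete, here is a minimal sketch of the kind of environment probe that evasive malware performs before detonating. The indicator lists below are illustrative examples I chose, not any vendor’s actual detection logic. A defensive product in the vein Zeltser describes can plant exactly these artifacts on ordinary PCs so the malware concludes it is being analyzed and shuts itself down.

```python
# Illustrative sketch of an evasive malware environment check.
# The indicator values are examples only, not an authoritative list.

VM_MAC_PREFIXES = {
    "00:05:69", "00:0c:29", "00:50:56",  # OUIs commonly assigned to VMware
    "08:00:27",                          # OUI commonly assigned to VirtualBox
}
ANALYSIS_PROCESSES = {"wireshark.exe", "procmon.exe", "vboxservice.exe"}

def looks_like_sandbox(mac: str, running_processes: set) -> bool:
    """Return True if the environment shows common analysis-VM artifacts."""
    if mac.lower()[:8] in VM_MAC_PREFIXES:       # check the NIC vendor prefix
        return True
    # Any well-known analysis tool running is another giveaway
    return bool(ANALYSIS_PROCESSES & {p.lower() for p in running_processes})
```

On a real user’s PC with an ordinary NIC and no analysis tools, this check returns False and the malware proceeds; planting a decoy artifact flips the answer and the malware stands down.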
Another current trend for malware is what is called “fileless,” or the ability to store as little as possible in the endpoint’s file system. While the name is somewhat misleading – you still need something stored on the target, whether it be a shortcut or a registry key – the idea is to have minimal and less obvious markers that your PC has been infected. “Something ultimately has to touch the file system and has to survive a reboot. That is what we look for.”
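As a hypothetical illustration of what hunting for those surviving markers can look like, the sketch below flags autorun-style entries whose commands resemble common fileless techniques: script hosts launched with encoded or in-memory payloads. The patterns are examples chosen for illustration, not an exhaustive or authoritative detection list.

```python
import re

# Example patterns that often show up in "fileless" persistence entries:
# script interpreters launched with encoded or in-memory payloads.
SUSPICIOUS = [
    re.compile(r"powershell.*-enc(odedcommand)?", re.I),
    re.compile(r"mshta.*(javascript|vbscript):", re.I),
    re.compile(r"rundll32.*javascript:", re.I),
]

def flag_autoruns(entries: dict) -> list:
    """Given {entry_name: command_line}, return names that look fileless."""
    return [name for name, cmd in entries.items()
            if any(p.search(cmd) for p in SUSPICIOUS)]
```

Fed a dump of Run-key values or scheduled-task commands, a check like this surfaces the “minimal and less obvious markers” the quote describes, while a plain executable path passes through unflagged.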
Still, no matter how sophisticated a piece of malware is, there is always user error that you can’t completely eliminate. “I still see insiders who inadvertently let malware loose – maybe they click on an email attachment or they let macros run from a Word document. Ultimately, someone is going to try to run malicious code someplace, and they will get it to where they want to.”
“People realize that threats are getting more sophisticated, but enterprises need more expertise too, and so we need to train people in these new skills,” he said. One challenge is being able to map out a plan post-infection. “What tasks do you perform first? Do you need to re-image an infected system? You need to see what the malware is doing, and where it has been across your network, before you can mitigate it and respond effectively,” he said. “It is more than just simple notification that you have been hit.”
I asked him to share one of his early missteps with me, and he mentioned when he worked for a startup tech company that was building web-based software. The firm wanted to make sure their systems were secure, and paid a third-party security vendor to build a very elegant and complex series of protective measures. “It was really beautiful, with all sorts of built-in redundancies. The only trouble was we designed it too well, and it ended up costing us an arm and a leg. We ended up overspending to the point where our company ran out of money. So it is great to have all these layers of protection, but you have to consider what you can afford and the business impact and your ultimate budget.”
Finally, we spoke about the progression of technology and how IT and security professionals are often unsure when it comes to the shock of the new. “First there were VLANs,” he said. “Initially, they were designed to optimize network performance and reduce broadcast domains. And they were initially resisted by security professionals, but over time they were accepted and used for security purposes. The same thing happened with VMs and cloud technologies. And we are starting to see containers become more accepted as security professionals get used to them. The trick is to stay current and make sure the tools are advancing with the technology.”
His earliest memory of a security issue was with managing people: “I have found that no matter how comprehensive our policies may be, if you don’t have the right culture among your workforce they won’t matter. Education, understanding, and inclusion are the ways to build the right security environment.”
He is drawn to tools that provide useful analytics. “With terabytes of data available to your team, trying to find the needle in the haystack can be a challenge. Each tool has its place in your security architecture so picking one is difficult, but those that are capable of providing me good information for analysis are the ones I prefer. That said, knowing your use cases and setting up your tools properly probably has the biggest impact on any security organization.”
His best advice for dealing with insider threats is first to start with the basics. “Many companies have not taken adequate measures to protect their information or environments. At the lowest level, access provisioning, data classification, and updated antivirus and firewalls are all mandatory, but when new systems or services get introduced into your environment the effects are often not well known. Protect against the drift.”
He sees MDM as a careful balance between protecting the employee and preventing unauthorized access. “At the core of the issue, no one wants their data put at risk and most users and organizations are willing to conform to a good policy in order to protect themselves.”
Paul Lanzi is the COO and co-founder of Remediant, an IT security startup that has created a product to protect privileged accounts. Prior to this startup, he worked for many years as an IT manager in the biotech field, managing various engineering teams for Genentech and then Roche.
Back 11 years ago when he started at Genentech, the first security problem he helped tackle was managing multiple accounts. “Everyone had multiple accounts and multiple passwords, and we built our own home-grown system to consolidate these accounts and make it easier for everyone to use a single username and password to get all of their work done. That actually improved security, since it lessened the chance that someone would have to write down their multiple passwords somewhere — but it also made it easier to ensure that every employee had the right access to do their job.”
Of course, today we have both single sign-on products to federate identities, such as Okta and Ping Identity, and identity governance products such as SailPoint and RSA Archer. But back then this was hard work.
Lanzi’s best security tool has been multi-factor authentication. “I turn it on wherever I can, it is truly one of the most under-appreciated tools around. While it isn’t perfect, this technology sits in that rare sweet spot between simplicity and security,” he said. In his present firm he uses a combination of Google Authenticator and Yubikey Nano devices for this purpose. “I am amazed at how much crypto they can cram into that Nano form factor,” he said of the latter, which is about the size of a thumbnail.
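For the curious, the six-digit codes Google Authenticator displays come from the TOTP algorithm standardized in RFC 6238: an HMAC-SHA1 over the current 30-second time step, dynamically truncated to a short decimal code. A minimal sketch:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code (the HMAC-SHA1 variant)."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                     # number of 30-second steps
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The published RFC 6238 test vectors confirm the math: with the ASCII secret `12345678901234567890` and a timestamp of 59 seconds, the eight-digit code is 94287082.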
A decade or so ago, Lanzi was involved in rolling out 110,000 iPads globally at Genentech/Roche. “At the time, it was the largest non-education deployment of iPads in the world, and we used MobileIron’s MDM software to protect our data both at rest and in flight. Their MDM-based security capabilities gave us the ability to remotely wipe the fewer than 20 devices that were lost or misplaced each month. Its combined capabilities gave us assurance that when those devices were lost, the data on them was still secure. We could also enforce minimum OS version standards, to ensure that users were keeping them up to date with OS security updates.”
Genentech/Roche had a very unusual security staff, composed of folks from different departments. “We had separate teams for patching desktops, maintaining our network infrastructure, an IT Security policy writing group, an account provisioning engineering group for maintaining that piece, and an overall Security Architect as well. They contributed to an overall defense in depth because they were mutually supportive and worked together. That isn’t going to be possible in every enterprise, but we had terrific coverage across the various skills and potential threat areas. And given that we had personnel split across South San Francisco, Madrid and Basel, Switzerland, it was pretty impressive.”
How has security changed among his various employers over the years? “It really depends on the level of support at the executive level. At Genentech/Roche, we had executives who understood the risks and the investment needed to minimize the security risks. Other places were behind the curve and more focused on creating policies and lagged with their investment in security infrastructure. Part of the issue is that unlike in the retail or government sectors, biotech hasn’t had the big-news breaches to motivate organizations towards security improvements.”
Bryan Doerr has been involved with tech companies for decades, most recently leaving Savvis/CenturyLink as their CTO before agreeing to help bootstrap Observable Networks. I asked him to reflect back on his career and where the infosec industry is headed in general. “There is a lot of security industry maturation still to come, a lot of wood left to chop,” he told me in a phone interview last week. “While there are still some pockets of maturity here and there, they usually are only found with the largest companies who can afford it.”
Looking back more than a decade, the biggest change has been being able to deliver security as a subscription service, he said. “First we had pre-built security appliances, but lately we have seen managed detection and response services,” such as what his company delivers. “And it isn’t just a change in how protection is delivered, but how the subscription service can be more affordable for mid-market customers.”
Another big change is how end user customers finally are getting some benefit from sharing threat intelligence. “No one wanted to talk about where or how they were attacked and share these specifics with anyone else,” he said. This intelligence sharing has made the subscription service vendors more potent and compelling and has boosted the ability to respond effectively to threats.
“Ten years ago security was built on a simple idea: that we know about our attackers and threats, and through some means we could prevent those bad guys from getting inside our networks. Back then, we had a limited number of threats, so we could more readily recognize and block them. That is so far from where we are today. The fundamental nature of what is a threat and how attacks use technology has changed completely. The idea of tracking attack signatures makes a lot less sense when every attack is unique.”
Doerr agrees that the days of the perimeter being the sole point of defense are also long over. As an example, he points out the recent IoT botnet attacks.
One benefit from the last decade has been the move towards increasing virtualization. “This absolutely was a positive influence, and helped us to better design and operate more secure systems and more complex infrastructure,” he said. “Before virtualization, we had too many different fiefdoms dedicated to particular circumstances. Each one had different configurations and staffs who were maintaining them. All of that variation left us vulnerable.”
But with virtual machines, “a lot of automation has been brought to bear to keep a consistent environment running. That means we can provision VMs, kill them off, and recreate them easily. This makes it more efficient to scale up and down and we don’t have to spend our time patching systems.”
Another issue is the nature of modern network traffic. “Our networks are becoming increasingly encrypted; we can’t even see what is going on over the wire or view the payloads, and this adds another layer of difficulty. Right now less than half of all traffic is encrypted, but it won’t be long before it becomes 100%. We won’t be able to readily examine any of this traffic, which will make networks harder to defend and exploits harder to detect.”
When he was at Savvis, one memorable experience was upgrading one of their data centers. Thanks to a routing bug, the entire data center couldn’t come back online. “We tripped over it on a Saturday, and didn’t immediately understand what was happening. It was easy to miss the single use case that caused the problem. That was a humbling experience and gave me an appreciation of the magnitude of the business that we had running. You don’t feel it until something terrible happens and you see how significant these outages are.” The situation drove home the point that he needed to stay in touch with his technology and understand that it is not just an abstraction, but also a very real entity.
I asked him who had the better job, the CTO or the CIO? He was firmly behind the CTO position. “CTOs will have jobs forever, because they help organizations understand the evolution of technology and anticipate the direction of that evolution. The CIOs still have some soul searching to do.”
David O’Berry is a former CIO at a state agency with 1,000 employees; now he works for a security vendor. To give you an idea of his credentials, he has the CISSP‑ISSAP, ISSMP, CCSP, CRISC, CSSLP, MCNE, CSPM and CRMP!
He met his wife in college when a virus erased his senior thesis text and backups: luckily she was both a fast typist and a good sport. “That was by far the most expensive virus of my entire career!” Later on he had to attack another floppy-borne virus, which was difficult because he had to run around the office finding infected disks and literally destroying them. He also faced down the Nachi/Welchia worm, which infected a PC that was not patched because the user was out on maternity leave.
“When I was a CIO, imaging software probably saved us the most time and had the strongest impact initially along with mail filtering products and endpoint management tools for remote control. Besides these products, I believe that standardization of what we did and how we did it had the single largest impact on our organization being able to progress as rapidly as we did with as limited resources as we had.”
For fighting insider threats, “you have to have contextually aware DLP and scanning products as well as what I call ‘Distributed Peer Review’ by the nodes that attach to the environment. Each node has to contribute to the survival of the organism by being a sensor in the larger scheme of things.” He has seen plenty of ransomware, and feels that “first and foremost it is a test of backup and recovery plans. Having a known state in that area fell out of vogue for a while but now it is more important than ever, even if it seems like boring blocking and tackling.”
At his current employer, “we do use MDM and we also allow BYOD. As a former CIO, we had not adopted BYOD when I left but had made the entire workforce mobile and managed it accordingly. We also had implemented Imprivata for its single sign-on package.”
When it comes to securing the cloud and his cloud-based servers, “there are similar challenges to what we have been pursuing since the dawn of time. Visibility is king. Constructs that give you real-time visibility give you the edge over any other type of product when coupled with real-time mitigation and resilience.”
Now that he is on the vendor side, “I would say that the state of cybersecurity has gotten a lot worse since I made the jump because the pace of innovation and change has gone vertical and never stops. Malware creators have become more and more adept at attacking the exploding number of devices. I believe we have a chance to get out in front of the next phase of this, but to do so we have to share information in real-time as well as allow companies to participate without artificial barriers to entry. However, our window of opportunity is closing rapidly.”
Being the CIO of a non-profit gives you an entirely different perspective in terms of managing people, resources, and technologies.
David Goodman would know. He has been involved with managing IT operations for different non-profits for most of his professional career. He used to be the CIO of International Rescue Committee, and currently is the CIO-in-Residence at NetHope, an umbrella organization that is a resource for some of the world’s largest non-profit aid organizations.
“The biggest challenge for non-profits with IT is that few people understand it in that context. We usually don’t have a roadmap for implementing any new technology, or a sizable staff to do it. Many organizations don’t have any dedicated infosec staff, or if they do they have only one person for this task.”
Often, IT takes a hit from unplanned consequences that have more to do with where the non-profit is located than anything related to the technology itself. For example, he tells the story of a nonprofit that opened an office in a very insecure country. “We opened an office there to help benefit refugees, which is our mission. We made connections with the local militia to make sure that we were permitted to do this, and didn’t have any issues until one day our office was overrun by the militia and our people were taken hostage. They didn’t like what we were doing. While that doesn’t happen too often, it was pretty scary for our staff and volunteers. They took all of our computing equipment. Eventually, we were able to get them to release everyone, although two Americans were held in a hotel for a few extra weeks.”
Planning for this situation is a challenge, as you might expect. But the office had no incident response frameworks, no security policies. “There were passwords written on whiteboards. There were staffers using personal Skype accounts to communicate with headquarters. Because all the laptops were stolen, the rebels were using the staff’s personal Skype accounts that were set to autologin and were sending messages impersonating the staff. They couldn’t easily shut down these personal accounts.” Eventually all personnel returned safely and everyone was accounted for. But they lost all their equipment: “that was never seen again.”
Few IT managers or CIOs have to deal with this kind of situation. “It is pretty nasty stuff, and it is because of the nature of how many international nonprofits operate and the places they have their offices are often in conflict areas. This means we don’t just worry about IT security, but the safety of the staff too.”
Here is another example. At one international nonprofit, he wanted to improve the organization’s password policies. The issue was that many of the staffers were scattered around the world and didn’t regularly log in to their enterprise Active Directory domain controller, which meant that they didn’t get regular notifications of expiring passwords. “So for the field staff, we set their domain passwords not to expire. As you might imagine, this wasn’t great infosec policy, so I tried to implement a better one that had complexity and change management built in. I got buy-in from senior management and approval from the CEO. We were ready to implement it, and I sent a reminder email to some of the affected parties, including the CEO.”
Suddenly the CEO scuttled the whole idea: “He told me that he had been using the same password for more than 30 years and wasn’t about to change it now. So the very straightforward and approved password policy was shelved, and there are probably still hundreds of people using non-expiring passwords around the organization.” Goodman couldn’t get him to understand why a better password policy matters.
All is not gloom and doom however. At NetHope, he is working with a number of major donors, including the Gates Foundation and MasterCard International, to create non-profit specific security controls that can be used for guiding IT auditing and compliance. “We will have a set of best practices on how to appropriately secure critical data, all based on existing standards like ISO, NIST, and PCI. We will also provide implementation guidance so that nonprofits without dedicated info sec staff — which is nearly all of them — will know how to implement these controls.”
Ravi Ravishanker is the CIO and Associate Provost at Wellesley College in Massachusetts. He has been in IT for many years, and supports an organization with more than 1,400 faculty and staff. I spoke to him in September 2016. “Information security has continued to be one of the highest priorities for every one of the IT organizations I have worked for. The only difference is that it has become harder, and its relative importance compared to the other things we have to do has grown, which results in much higher resource allocation to security across the entire institution.”
He recalls back in 1986, when he began his IT career writing code in assembler on a VAX running VMS, which was done to make programs execute faster. “However, we made a programming error in a routine that let one user send a file to another using TCP/IP. Because of this internal security lapse, the students found out they could send someone else’s files using our program. It didn’t take long to fix the problem, fortunately.” Coming into the modern day, he finds that vulnerability scanners are one of his most important security tools. “This is because they expose vulnerabilities such as network ports that shouldn’t be open. Similarly, scanners that test our web apps for a range of vulnerabilities are also essential.”
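At their simplest, the network-facing half of such scanners boils down to a TCP connect sweep. Here is a minimal sketch; a real scanner layers on service fingerprinting, vulnerability checks, rate limiting, and much more, and of course you should only scan hosts you are authorized to test.

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection,
            # an errno (e.g. ECONNREFUSED) otherwise
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

Run against your own hosts, the ports this turns up that you didn’t expect are exactly the “ports that shouldn’t be open” Ravishanker is talking about.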
“We realize that given our limited resources, we have to be very diligent. First and foremost, data and network security needs to be a priority for everyone in the IT organization, not just a select group of security administrators. Also, security is a joint partnership between IT and our users; it is a shared responsibility of the entire enterprise. If our users aren’t following best practices, they can expose our enterprise to data security issues. Security is a critical part of everything that we do.”
To date, he hasn’t seen much in the way of insider threats at the college. “People in higher education have a sense of loyalty to the institution, and we place a lot of trust in our employees. While insider threats are always a potential issue, we are in a space where it is minimal.”
The college has moved into the cloud and continues to increase its cloud footprint. “We try to do as much due diligence when we sign up with a new provider and make sure that they are giving us the security that we need. We thoroughly review the contracts and agreements from security and compliance perspectives before signing up with a provider.”
“We are a fairly small IT organization and currently our user services group, which manages desktop support, and the systems and network groups are all under one director. This works really well in terms of information exchange between the groups and easy access to the systems and network engineers. However, we recently decided to reorganize this group, and we hope that this relationship will be preserved because it is critical from an information security perspective.”
I spoke to the IT Manager of a 65-person trade association in the DC area. I have known this manager, whom I will call John, for decades through various IT positions, mostly in non-profits and trade associations.
(He has asked that I not use his name or the name of his association.)
Things have changed since he first began working at the association eight years ago. “When I was just a few months into my current position, we had about 15 laptops stolen from their docking stations by (what we believe was) the night-time cleaning crew. People came in to work and their laptops were gone. My logistical response was executed pretty well – I had folks up and running very quickly. But we never treated the incident as a serious information breach. These days we think about things differently.”
One of the biggest impacts that John has had was to hire a network management VAR to help set up and monitor their firewalls. He uses a combination of tools such as NetWrix for auditing their Active Directory logs (“I can unlock a user before they even realize it,” he said) and Sophos for antivirus, full-disk encryption, and its web appliance.
He uses another VAR and an additional monitoring tool that is industry-specific. “They have a monitoring appliance in our environment that sends a ton of alerts that tend to be very non-actionable – like someone used a cleartext password on a website. Well, there’s only so much I can do about that. The value is that they aggregate our data with our members’ data to look for unusual trends across the country so they can alert us to industry-wide attacks.” This VAR also performs annual vulnerability scans that he says are very disruptive to his storage array, but useful. “For example, did you know that APC products (UPSs and PDUs) have three factory default login IDs and passwords? We knew about the first. Didn’t know about the second and third. So, I’m changing those asap.”
When it comes to dealing with insider threats, he says “a big win for us has been KnowBe4.com. It is a very affordable training program that allows me to spam and phish my own staff. Plus they offer videos and a learning management system that we hope to implement next year with HR’s approval. They also send me a ‘scam of the week’ which I repackage and send to staff. It’s both entertaining and educational.” Another classic phishing situation was when one of his VPs sent out member email addresses to a Yahoo address he thought was the CEO’s. “It happened on a weekend and the VP was on his phone and couldn’t really see the whole message on the screen. It was quickly discovered that the CEO did not have a Yahoo address. That was our first real cyber security incident. Calls were made. The board was notified. It was only names and email addresses, but those two items are considered personally identifiable information. This happened about a week before I implemented KnowBe4. If I had gotten approval for it earlier and set it up earlier, this might have been avoided.”
John also deploys a BYOD policy for some of the staff, and is still evaluating mobile device management strategies. They just migrated their email to Office 365 and haven’t yet implemented any two-factor authentication.
John’s total staff is a help desk technician and his VARs, one of whom is on site two days a month.
“Security is a bigger part of my job today because of the increased emphasis and because our association represents a high profile industry where security is also a high profile issue. Our CEO wants us to walk the walk if we’re telling our members to do the same.”
With all the changes to infosec technology, here is a not-so-outrageous idea: maybe you should take a page from the US Secret Service playbook in how you run your IT security department. Actually, this idea didn’t come from me, but from someone who actually is familiar with both roles. Nathaniel Gleicher is trained as a computer scientist and a lawyer, and currently is the Head of Cybersecurity Strategy at Illumio, a security vendor. Previously, he prosecuted cybercrime at the US DOJ and served as Director for Cybersecurity Policy at the White House National Security Council. While he worked at the White House, he saw multiple data breaches. “Every breach relies on lateral movement, and instead of attackers being at risk once they get inside, they’re able to take all the time that they need to identify high value information and cause damage.”
He thinks organizations need to take a different, simplified approach and go back to the basics: get visibility inside the data center and cloud and then be able to truly lock the doors inside.
In a blog post for his firm, he writes: “Like the Secret Service, cybersecurity defenders face a similar problem: they are defending high-value assets that must be protected, but also have to speak to hundreds or thousands of other servers. You have to have visibility, and reduce your attack surface, and focus on the security consequences for your most valuable assets. Shutting down the attack surface constrains attackers, makes lateral movement harder, forces attackers to risk exposure, and makes other security tools more effective.”
Sadly, most organizations focus their cybersecurity spend today at the perimeter, making no effort to secure or even understand the interior of their data centers. After reading Gleicher’s post, I asked him if there is a difference between interior and exterior networks any longer. He told me in a phone interview, “Everything is a potential threat. One difference is that you can have greater control around an interior network. And your network visibility is much more limited with exterior ones. But that’s missing the point. An intruder can find something once they are inside your network and can look around. Organizations are trying to layer defenses at the fortress wall, while the cyber attackers are parachuting inside and then free to move around as they want inside the data center and cloud.”
He continued, “I still have conversations with CISOs that don’t know how their devices are connected to their networks. And I don’t mean just a list of these devices, but how they are related to each other, both logically and operationally. This is the kind of information that attackers can exploit.”
His work with the Secret Service has him focused on understanding some of these lessons from providing physical security to protect the President. “People don’t see the Secret Service advance work that was done months before any presidential visit. They had to map the location and understand the physical space. The same is true for cybersecurity, because we need to identify the attacker quickly and respond fast too. This means that any cybersecurity effort should start months before any potential attacker actually shows up.” In other words, it isn’t just about stopping someone from getting across the White House fence, but understanding what will happen once they enter the grounds and what they might end up doing.
He agrees that good security isn’t easy. And he started early in his career, with his first IT job for the Peace Corps. There he created a campus-wide network to connect 85 machines located in the different buildings of a college on a Caribbean island. Less than five minutes after it was first connected to the Internet, it was breached. It took him several tries to close various ports and other vulnerabilities before he could defend the network properly. “This was an early lesson in how hard it is to do security properly: there are way more people trying to get in than keeping them out. It also showed me that the steps to strengthen data security aren’t rocket science and are very straightforward. It is a lot more about how to orchestrate them and use them efficiently across the enterprise.”
Instead of focusing on the lack of response, he says we should be doing a better job of evaluating the highest-value targets, which is another lesson he learned from watching the Secret Service in action. He said, “You shouldn’t be in the business of protecting the app that handles your employees’ lunch requests.” Nor should everything in the data center be treated equally. “There are some things in your data center that are more valuable, and you have to focus on what needs the most protection. If a burglar gets into your house and gets into your basement, that is different from him getting into your bedroom where you keep your jewelry.”