Hiring a chief information security officer (CISO) is a tricky process. The job title is in the limelight, especially these days, when breaches are happening to so many businesses. The job turnover rate is high, with many CISOs quitting or getting fired over security incidents or management frustration. And the supply of qualified candidates is low. According to the ISACA report State of Cyber Security 2017, 48 percent of enterprises get fewer than 10 applicants for cybersecurity positions, and 64 percent say that fewer than half of their cybersecurity applicants are qualified. And that’s just for rank-and-file IT security positions, not the top jobs. So here are some things to consider when you need to find a CISO and you don’t want to hire a “chief impending sacrifice officer.”
Most of us remember the Stuxnet worm that infected the Iranian Natanz nuclear enrichment plant back in the late 2000s. I spoke with some of the researchers at Symantec who worked on decoding it, and wrote this piece for ReadWrite in 2011 after being briefed about their efforts. Now you can watch the movie Zero Days, directed and produced by Alex Gibney, on Netflix. It was released last year, and it goes into a lot more detail about how the worm came to be.
Gibney interviews a variety of computer researchers and intelligence agency officials, including one NSA source who is portrayed by an actress to disguise her identity. This person has the most interesting things to say in the movie, such as: “at the NSA we would laugh because we always found a way to get across the airgap.” She admits that a combination of state-sponsored agencies from around the world collaborated on the worm’s creation and detonation at the plant. (Maybe that isn’t the best word to use, given it was an enrichment plant.) She also gives some insight into how the NSA and the Mossad negotiated changes to the worm. Sadly and ironically, the actions surrounding Stuxnet motivated Iran to build a more advanced nuclear program and assemble its own cyber army.
Many tech documentaries either oversell, undersell, or are just plain wrong about the details. Zero Days has none of these issues; it is a solid film that can be enjoyed by techies and the lay public alike. The role of cyber weapons and how we proceed in the future goes beyond Stuxnet, which, as the NSA source says, “is just a back alley compared to what we can really do.”
At his Synergy conference keynote, Citrix CEO Kirill Tatarinov mentioned that IT “needs a software defined perimeter (SDP) that helps us manage our mission critical assets and enable people to work the way they want to.” The concept is not a new one, having been around for several years.
An SDP replaces the traditional network perimeter, which most of us think of as a firewall. Those days are long gone, although you can still find a few IT managers who cling to the notion.
The SDP uses a variety of security software to define which resources are protected and to block entry points using a combination of protocols and methods. The working group at the Cloud Security Alliance, for example, has settled on a control-channel architecture built from standard components such as SAML, PKI, and mutual TLS connections to define this perimeter.
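To make the control-channel idea a bit more concrete, here is a minimal sketch in Python of the mutual TLS piece. The file names and the notion of a “gateway” service are my own illustration, not part of the CSA specification; the point is simply that the server demands a client certificate signed by the organization’s own CA, so unauthorized endpoints can’t even complete a handshake.

```python
import ssl

def make_gateway_context(ca_file="corp-ca.pem",
                         cert_file="gateway-cert.pem",
                         key_file="gateway-key.pem"):
    """Build a TLS server context for a hypothetical SDP gateway.

    File names are placeholders; in practice these come from your PKI.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # The gateway presents its own certificate, as any TLS server does.
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    # Require the *client* to authenticate too -- this is what makes the
    # connection mutual TLS rather than ordinary server-only TLS.
    ctx.verify_mode = ssl.CERT_REQUIRED
    # Only certificates signed by the corporate CA are acceptable.
    ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

A client without a corporate-issued certificate fails during the handshake, before it ever reaches an application, which is the “hide the entry points” property the SDP model is after.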
Working groups such as these move slowly – the CSA group has been hard at work since 2013 – but I am glad to see Citrix adding its voice here and singing the SDP tune.
But perhaps a better way to explain the SDP is what is being called a “zero trust” network. An article in Network World earlier this year described the efforts at Google to move to this kind of model, in which everyone on the network is essentially guilty until proven innocent, or at least harmless. Every device is checked before being allowed access to resources. “Access is granted based on what Google knows about the end user and their device. And all access to services must be authenticated, authorized and encrypted,” according to the article.
This is really what an SDP is about, because all of these access evaluations are handled in software: one piece checks identity, another examines whether a device has the right credentials, and another makes sure that traffic is encrypted across the network. Because Google is Google, they built their own solution, and it took them years to implement across 20 different systems. What I liked about the Google implementation was that they installed the new systems across Google’s worldwide network and had them inspect traffic for many months before turning them on, to ensure that nothing broke their existing applications.
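Here is a minimal sketch, in Python, of what such a default-deny zero-trust access check looks like. The user list, device inventory, and posture fields are invented for illustration; in a real deployment each check would be a call out to an identity provider and a device-management system rather than an in-memory set.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_id: str
    device_patched: bool   # device posture, e.g. from an inventory service
    tls_encrypted: bool    # was the request carried over an encrypted channel?

# Stand-ins for an identity provider and a managed-device inventory.
KNOWN_USERS = {"alice", "bob"}
MANAGED_DEVICES = {"laptop-042", "laptop-077"}

def allow(req: AccessRequest) -> bool:
    """Default-deny: every condition must pass, regardless of where
    on the network the request originates."""
    return (req.user in KNOWN_USERS
            and req.device_id in MANAGED_DEVICES
            and req.device_patched
            and req.tls_encrypted)

# A known user on a managed, patched device over TLS gets in;
# the same user on an unmanaged device does not.
print(allow(AccessRequest("alice", "laptop-042", True, True)))   # True
print(allow(AccessRequest("alice", "byod-phone", True, True)))   # False
```

The useful property is that there is no “inside the firewall” shortcut: being on the corporate LAN grants nothing, which is exactly the inversion the Google model describes.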
You probably don’t have the same “money is no object” philosophy and will want something more off-the-shelf. Either way, you should start building your own SDP sooner rather than later.
As part of my duties writing and editing this email newsletter for Inside.com, I am always on the lookout for new security products. When I was at the Citrix Synergy show last week, I wanted to see the latest offerings. One of the booths drawing crowds was Bitdefender’s. They have a Hypervisor Introspection product that sits on top of XenServer v7 hypervisors. It is completely agentless and simply runs memory inspections of the hosted VMs. Despite the crowds, I was less enamored of their solution than of others I have reviewed in the past for Network World, such as Trend Micro’s and HyTrust’s. (Note: that review is more than three years old, so take my recommendations with several spoonfuls of your favorite condiment.)
Nevertheless, having some protection riding on top of your VMs is essential these days, and you can be sure there were lots of booths scattered around the show floor claiming to stop WannaCry in its tracks, given the publicity of that recent attack. Whether they actually would have done so is another matter entirely; I am just saying.
The Kaspersky booth was nearly empty, but they actually have a better mousetrap and have offered their Virtualization Security products for several years. Kaspersky supports a wider range of hypervisors (they run on top of VMware and Hyper-V as well as Xen). They offer an agentless solution for VMware that works with the vShield technology, and lightweight agents that run inside each VM for the other hypervisors. While you have to deploy the agents, you get more visibility into how the VMs operate. One company not here in Orlando but that I am familiar with in this space is Observable Networks: they don’t need agents because they monitor the network traffic and system logs produced by the hypervisor. So don’t make a decision based solely on the agent-vs.-agentless argument; look closely at what the security tool is monitoring and what kinds of threats it can really prevent. Pricing on Kaspersky starts at $110 per virtual server for a single VM and $39 per virtual desktop in quantities of 10 to 14 VMs. Volume discounts apply.
IGEL was another crowded booth. They have developed thin clients in the form of a small-form-factor USB drive. If you have an Intel-based client with at least 2 GB of RAM and 2 GB of disk storage (such as an old Windows XP desktop or Wyse thin client), you can run a Citrix Receiver client that will basically extend the life of your aging desktop. A major health IT provider just placed a $2M order for more than 9,000 of these USB clients, saving itself millions in upgrades to its old Wyse terminals. I got to see a demo of the management interface at the show. “It looks like Active Directory with a policy-based tool, and it is super easy to manage and keep track of thousands of desktops,” their CEO, Jed Ayres, told me during the demo. Their product starts at $169 per device.
Another booth featured an interesting biometric solution called Veridium ID. They have recently been verified as Citrix Ready, but have been around for a couple of years developing their product. I have seen several biometric products, and this one looked very interesting. For phones that have a fingerprint sensor, they use it as the additional authentication factor. If your phone doesn’t have such a sensor, it uses the camera to take a picture of four of your fingers (as you can see here). It works with any SAML ID provider, and at their booth they showed me a demo of it working with an ordinary website and with a Xen-powered solution. Their product starts at $25 per user, which is about half of what the traditional multi-factor vendors charge for their hardware or smart tokens.
When you hear about an IT staff that has to build their infrastructure from scratch to support a new business, you think, “That couldn’t be that hard – they had no legacy infrastructure to support. What a dream job.” Well, it wasn’t a piece of cake for the crew at the Okada Manila resort hotel, and in an interview with Dries Scott, the SVP of IT for Okada, I got to see why.
Okada was built on a huge site and is similar to the resort-style properties found in Las Vegas and Macau. It will house 2,300 guest rooms when fully built out and employ 10,000 people. Scott’s IT department accounts for at least 100 of them full-time — plus contractors — to support 2,000 endpoints and numerous physical and virtual servers in two separate datacenters on the property.
Scott worked for a few of the Macau resort hotels before coming to Manila, and he wanted to create the ideal IT environment for a five-star luxury hotel. “The biggest decision we had to make was to try to steer clear of having actual desktop PCs as our workstations,” he said when he sat down for an interview yesterday. “When you are starting from a clean sheet of paper, you want something that could last 10 to 20 years, and you want products that could evolve over this time period.” He chose VDI for his endpoints. “I wanted to move away from the usual desktop PC environment, although we ended up having a few of them for our staff. PCs are a pain to manage: hard drives crash, getting updates and patches distributed isn’t easy, and so on.” To support the VDI deployment they purchased a variety of products, including XenDesktop, XenApp, NetScaler, HP thin clients and Dell servers.
One of the key enabling technologies is FSLogix Office 365 Container. “This makes Outlook running on XenApp and XenDesktop able to mount users’ profiles as if they were on a local C: drive, so Windows acts normally and Outlook works like it is running on a regular PC desktop,” he said. This means you get the performance of the virtual workspace and the ease of management too.
Having a VDI solution meant some initial support hurdles. “We had to have a lot of patience with our users, some of whom were using VDI workstations for the first time,” he told me. “I could have taken the easy way out and just bought desktops for everyone, but I knew eventually VDI will pay off and benefit us in the long run.”
One concern Scott had was keeping corporate data secure. Given his resort’s clientele, he wanted to ensure that customers’ information stayed on corporate systems. “It is one of our most critical assets,” he said. “Users don’t have the ability to remove any corporate data from the company.” His thin clients lock out USB access, for example, and he has also set up appropriate data-leakage policies. Through ShareFile, he has other policies for how files can be shared across his staff, and he prevents access to public SaaS repositories, like consumer file-sharing services, whenever possible. Finally, he figured out ways to keep data from his construction contractors on his servers. “I didn’t want them to pack up their PCs and leave with my data on them,” he said.
Building a new resort’s IT infrastructure wasn’t as easy as I was assuming, mainly because some IT elements needed to be in place during the construction phase to support the workers on the job site. This meant erecting temporary buildings and networks and then migrating those resources to the production environment once the hotel was built. “That migration wasn’t easy, but we are just about through that process,” he said. “We have certainly been through a bit of a bumpy road.” One of his recommendations is to use Citrix consulting services to set up your environment and help define the appropriate computing architecture. “They can help make everything stable from the beginning and figure out your app and server configurations.”
What helped him pull off this project? Executive buy-in. “Our chairman is an engineer and very much into technology. It was a massive help that he supported our decisions from day one. All he wanted was to implement my vision and he gave me the ability to implement it.”
The first decision you need to make in your smart home journey is selecting the right ecosystem. By ecosystem, I mean the voice-activated smart hub that delivers audio content from the Internet (such as news, weather, and answers to other queries) and serves as the main interface to a variety of other smart home devices, such as lighting, thermostats and TVs. In this review I look at two of the three main hubs, from Google (the white-topped taller unit on the right) and Amazon (the smaller black unit on the left), and how they stack up.
This is the second in a series of articles on how to successfully and securely deploy smart home technology. The first one can be found here.
As more banking customers make use of mobile devices and apps, the opportunities for fraud increase. Mobile apps are also harder to secure than desktop apps because they are often written without any built-in security measures. Plus, most users are used to just downloading an app from the major app stores without checking to see if they are downloading a legitimate version.
Besides security, mobile apps have a second challenge: to be as usable as possible. Part of the issue is that the usability bar is continuously being raised, as consumers expect more from their banking apps.
In this white paper for VASCO, I show a different path. Mobile banking apps can satisfy the twin goals of usability and security. Usability doesn’t have to come at the expense of a more secure app, and security doesn’t have to make an app more complex to use. Criminals and other attackers can be neutralized with the right choices, ones that are both usable and secure.
As the Internet of Things (IoT) becomes more popular, state and local government IT agencies need to play more of a leadership role in understanding the transformation of their departments and their networks. Embarking on any IoT-based journey requires governments and agencies to go through four key phases, which should be developed in the context of creating strategic partnerships between business lines and IT organizations. Here is more detail on these steps, published in StateTech Magazine this month.
Today I begin a series of reviews in Network World around smarter home products. Last year we saw the weaponized smart device, as the Mirai botnet compromised webcams and other Internet-connected things. Then earlier this year Vizio admitted to monitoring its connected TVs, and more recently there was this remote TV exploit; even dishwashers aren’t safe from hackers.
Suddenly, the smart home isn’t smart enough, or maybe it is too smart for its own good. We need to take better care of securing our homes from digital intruders. The folks at Network World asked me to spend some time trying out various products, using a typical IT manager’s eye to make sure they are set up securely.
Those of you who have read my work know that I am very interested in home networking: I wrote a book on the topic back in 2001 called The Home Networking Survival Guide and have tried out numerous home networking products over the years. My brief for the publication is broadly defined, and I will look at all sorts of technologies the modern home would benefit from, including security cameras, remote-controlled sensors, lighting, thermostats, and more.
Smart home technology has certainly evolved since I wrote my book. Back then, wireless was just getting started and most homeowners ran Ethernet through their walls. We didn’t have Arduino and Pi computers, and many whole house audio systems cost tens of thousands of dollars. TVs weren’t smart, and many people were still using dial-up and AOL to access the Internet.
Back in the early 2000s, I visited John Patrick’s home in Connecticut. A former IBMer, he designed his house like an IBM mainframe, with centralized control and distributed systems for water, entertainment, propane gas, Internet and other service delivery. He was definitely ahead of his time in many areas.
When I wrote about the Patrick house, I said that for many people, defining the requirements for a smart home isn’t always easy, because people don’t really know what they want. “You get better at defining your needs when you see what the high-tech toys really do. But some of it is because the high-tech doesn’t really work out of the box.” That is still true today.
My goal in writing these reviews is to make sure that your TV or thermostat doesn’t end up compromised and part of some Russian botnet down the road. Each article will examine one aspect of the secure connected home so you can build out your network with some confidence, or at least know what the issues are and what choices you will need to make in supporting your family’s IT portfolio of smart Things.
Since I live in a small apartment, I asked some friends who live in the suburbs if they would be interested in being the site of my test house. They have an 1,800-square-foot, three-bedroom house on one level with a finished basement, and are already on their second smart TV. One of them is an avid gamer with numerous gaming consoles. Over the past several months (and continuing throughout the remainder of this year), we have tried out several products. My first article, posted today, covers some of the basic issues involved and sets the scene.
So you have read The Lean Startup. Suffered through following several agile blogs (such as this one). You think you are ready to join the cool kids and have product scrums and stand-up meetings and all that other stuff. Now you need an implementation plan.
Maybe it is time to read this post by Paul Adams on the Intercom blog. He describes some of the lessons he and his development team have learned from building software and scaling it up as the company grows. I asked a few of my contacts at startup software firms what they thought of the post, and there was general agreement with his methodology.
Here are some of Adams’ main points to ponder.
Everyone has a different process, and the process itself changes as the company matures and grows. Adams’ description reflects his current team size of four product managers, four software designers, and 25 engineers. As he says: “it’s not how we worked when we had a much smaller team, and it may not work when we have doubled our team.”
Create a culture where you can make small and incremental steps with lots of checkpoints, goals, and evaluations. “We always optimize for shipping the fastest, smallest, simplest thing that will get us closer to our objective and help us learn what works.” They have a weekly Friday afternoon beer-fueled demo to show how far they have gotten in their development for the week. Anyone can attend and provide comments.
Face time is important. While a lot of folks can work remotely, they find productivity and collaboration increase when everyone is in the same room in a “pod.” Having run many remote teams myself, I’d say local pods can certainly be better, but if you have the right managers, you can pull off remote teams too. It appears IBM is moving in this “local is better” direction lately.
Have small teams and make them strictly accountable. Adams has a series of accountability rules for when something goes wrong. Create these rules and teams and stick by them. “We never take a step without knowing the success measurement,” said one friend of mine, who agrees with much of what Adams says in his post. My friend also mentions that when using small teams, “not all resources have a one-to-one relationship in terms of productivity; we find that the resources we use for prototyping new features can generally float between teams.”
Have a roadmap but keep things flexible and keep it transparent. “Everything in our roadmap is broken down by team objective, which is broken down into multiple projects, which in turn are broken down into individual releases,” said Adams. They use the Trello collaboration tool for this purpose, something that can either be a terrific asset or a major liability, depending on the buy-in from the rest of the team and how faithful they are to keeping it updated.
However, caution is advised: “The comprehensive approach that Adams describes would be entirely too much overhead for most startups,” says my friend. This means you should evaluate what it will take to produce the kind of detail that you really need. Which brings up one final point:
Don’t have too many tools, though. “Using software to build software is often slower than using whiteboards and Post-it notes. We use the minimum number of software tools to get the job done. When managing a product includes all of Google Docs, Trello, Github, Basecamp, Asana, Slack, Dropbox, and Confluence, then something is very wrong.”