Why You Need to Deploy IPv6: It Is All about Performance and Security

You have heard the arguments for using IPv6 for decades, but here is a novel reason: it is all about getting better network performance. A recent study from Cloudflare’s network operations shows that an IPv6 network can operate 25ms to 300ms faster than an IPv4 network. That isn’t theory: that is what they actually observed. These numbers are corroborated by studies from LinkedIn and Facebook, and Sucuri ran a test last year that showed about the same results for web surfing.

Part of the debate here has to do with what constitutes performance. As Geoff Huston, chief scientist at APNIC (the regional Internet address registry for the Asia-Pacific), writes, you have to look carefully at what is actually being tested. He mentions two factors: overall network reliability (meaning connection attempts, dropped packets and so forth) and round-trip time over the resulting network paths of the two protocols. Just because you use the same network endpoints for your tests doesn’t mean that your IPv4 packets will travel over the same path as your IPv6 ones.

Huston shows that IPv6 reliability rates have been steadily increasing, especially as native IPv6 implementations have grown, replacing tunneling and other compromises that have had higher failure rates in the past. And round-trip times have been improving, with IPv6 being faster than IPv4 about half of the time.
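Huston’s distinction between the two protocols’ paths is easy to explore yourself. Here is a minimal Python sketch (the hostname and port in the commented example are placeholders, not taken from any of the studies above) that times a TCP handshake over one address family at a time:

```python
import socket
import time

def connect_rtt(host, port, family):
    """Time a TCP handshake to (host, port) over one address family,
    in milliseconds. A rough stand-in for the per-protocol round-trip
    comparisons described above."""
    addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
    start = time.perf_counter()
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.settimeout(5)
        s.connect(addr)                 # handshake completes on return
    return (time.perf_counter() - start) * 1000.0

# Compare a dual-stack host over both protocols (hostname is a placeholder):
# v4_ms = connect_rtt("www.example.com", 443, socket.AF_INET)
# v6_ms = connect_rtt("www.example.com", 443, socket.AF_INET6)
```

Note that the two calls resolve different addresses and may be routed differently, which is exactly why Huston cautions against assuming the two measurements cover comparable paths.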

Cloudflare also observed that some smartphones can save on IPv4/v6 translation times if they can connect over IPv6 directly. Such phones are becoming the norm on T-Mobile and Orange mobile networks, for example.  This agrees with Huston’s research: the more native IPv6 implementations on your endpoints and routers you can use, the better your overall performance.

But there is a second reason why you should consider IPv6, and that has to do with security. After all, this is a security-related blog, so let’s talk about that for a moment. In the past, there have been articles such as this one in Security Intelligence that warn, “The thing is, despite IPv6 having been around for almost 20 years now, few security professionals truly understand it.” Other bloggers point out that enabling native IPv6 will make your network less secure, because more embedded devices (like webcams and industrial controls) can become compromised (think Mirai and WannaCry). One such post suggests that most network administrators should turn off native IPv6 to reduce the potential attack surface.

I disagree. IPv6 brings several key technological innovations over the older IPv4: it avoids NAT, offers stateless (serverless) address autoconfiguration, has a streamlined protocol header that minimizes processing time, simplifies administration of IPsec conversations, is more efficient with QoS implementations, has better multicast and anycast support, and uses other more modern technologies. Taken together, the good news is that you could also get some big security improvements, if you deploy IPv6 properly across your enterprise.
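To make one of those items concrete: with stateless address autoconfiguration (SLAAC), a host derives its own address from a router-advertised /64 prefix plus an identifier built from its MAC address, with no DHCP server involved. Here is a minimal sketch of the classic EUI-64 derivation (the prefix and MAC are made-up examples, and modern stacks often prefer randomized interface IDs per RFC 7217 instead):

```python
import ipaddress

def slaac_address(prefix, mac):
    """Derive an EUI-64 SLAAC address from a /64 prefix and a MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return str(net[iid])                             # prefix + interface identifier

print(slaac_address("fe80::/64", "00:1a:2b:3c:4d:5e"))  # → fe80::21a:2bff:fe3c:4d5e
```

The point for security is that address assignment becomes deterministic and serverless, removing both the DHCP server and the NAT box as things to misconfigure or attack.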

Supporting IPv6 isn’t a simple matter of turning on the protocol across your network: you have to migrate segments, servers, routers and endpoints carefully and understand how you can establish a full end-to-end native implementation of the protocol. But if you do it correctly, you could have a better performing and more secure network as a result.

CSO Online: As malware grows more complex, protection strategies need to evolve

The days of simple anti-malware protection are mostly over. Scanning and screening for malware has become a very complex process, and most traditional anti-malware tools only find a small fraction of potentially harmful infections. This is because malware has become sneakier, more evasive, and more complex.

In this post for CSO Online sponsored by PC Pitstop, I dive into some of the ways that malware can hide from detection, including polymorphic methods, avoiding dropping files on a target machine, detecting VMs and sandboxes, or using various scripting techniques. I also make the case for using application whitelisting (which is where PC Pitstop comes into play), something prevention vendors are paying more attention to as it gets harder to detect the sneakier types of malware.

CSOonline: Review of Check Point’s SandBlast Mobile — simplifies mobile security

There is a new category of startups — like Lookout Security, NowSecure, and Skycure — that have begun to provide defense in depth for mobile devices. Another player in this space is Check Point Software, which has rebranded its Mobile Threat Protection product as SandBlast Mobile. I took a closer look at this product and found that it fits in between mobile device managers and security event log analyzers. It makes it easier to manage the overall security footprint of your entire mobile device fleet. While I had a few issues with its use, overall it is a solid protective product.

You can read my review in CSOonline here.

How Citrix Fuels Red Bull Racing

Few businesses create a completely different product every couple of weeks, not to mention take their product team on the road and set up a completely new IT system on the fly. Yet, this is what the team at Red Bull Racing do each and every day.

A video featuring the racing company’s staff and its ultra-fast pace was shown at the Citrix Synergy conference in Orlando earlier this summer. When I saw it, I wanted to find out more about what happens behind the scenes. Indeed, during the week of Synergy, the team was busy at one of the 20 different races it participates in around the world each year. “We design and manufacture all of our cars and go through some 30,000 engineering changes a year,” says Matt Cadieux, CIO of Red Bull Racing. “We create new car parts for every race because all of our cars get torn apart and rebuilt to new specifications that are designed to make the car go faster.”

The typical race week starts out with the team showing up several days ahead of the green flag, ready to unload 40 tons of equipment on the first day. The next day, the communications networks are connected and the cars assembled. “We don’t have time to debug things, we have to be up and running and do that in 20 different places around the world,” he said.

The company is based in the UK, where it has a very space-age data center. And it should look the part: they designed it based on the look and feel of NASA’s mission control room in Houston. “We have one of the largest video walls in Europe,” said Cadieux. The data center houses three Linux clusters and a Windows cluster, and uses IBM Spectrum Storage to manage all their resources.

All that video isn’t just for showing pretty pictures. They have some serious data visualizations, where they are tracking in real time more than 200 different car metrics dealing with its performance. “It is very graphically intensive, and behind the graphics are some pretty large databases too,” Cadieux said.

Getting these graphically-rich screens sent to the racing team at the track requires some serious computing horsepower and a very dependable remote desktop experience, and that is where Citrix comes into play. “If we didn’t have Citrix, there is no way we could manage these apps effectively,” he said. The racing company uses a combination of XenDesktop and XenServer along with NetScaler to provide firewalls and load balancing. As their infrastructure has gotten more complex, they have been able to add functionality from this collection of products easily.

That is one of the reasons why Red Bull Racing has worked with Citrix for many years. “We always had great tech support from them and they always exceed our expectations. We have access to their developers and early code releases. For our use cases, we are always pushing technical boundaries, because we are not afraid to try new things. We give a lot of blunt and honest feedback to their product teams, too.”

As you might imagine, using all these graphical screens places a burden on having the lowest possible network latencies when those bits are shipped all over the planet. “We needed to take some heavyweight apps and use them in remote places, and that means we need some solid networks too.” They make use of AT&T VPN technologies to provide their global connectivity.

For example, when they are in Australia, they can get less than 380 ms round trip latencies back to their data center in the UK. “Sometimes, users can feel that delay, but they still can use our apps. When we are running races in Europe, you almost can’t tell the difference between the remote session and being back inside our data center,” he said.

The support team that they bring to each race used to be bigger, but thanks to XenDesktop they have managed to leave several key team members back in the UK. “We still can have the collaboration between people at the track and at home with XenDesktop. The result is that we can make better and more informed decisions,” he said. There is also a limit on the number of team members that can be present on the track as part of the racing staffing regulations.

Red Bull Racing has been around for more than a decade and, as you might assume, their current technology portfolio is “massively different” when compared to what they used back then. “When we first started, our operations room was the size of a broom closet and our simulations had almost no correlation to what we observed in the real world. Back then it was more of a science project than something that could make appropriate business decisions. Now, our math models are much more complex and we do a better job with visualizing our data. The size of our models and the infrastructure required to run them has also exploded over time.” As he mentions in this Network World article, “It allows us to get a very under-the-covers view of the health of the car, where you can understand forces, or you can see things overheating, or you can see, aerodynamically, what’s happening and whether our predictions in computational fluid dynamics and the wind tunnel are what really happens in the real world.”

Red Bull Racing have also been able to be more responsive; now, they can instrument a problem, track down a solution, and have it ready for race day all in a matter of hours. “We can now react to changes and surprises and deal with them in a more data-driven way,” he said.

Cadieux came from a Detroit-area traditional automaker before he moved across the pond and started supporting race cars. The two businesses have more in common than you might think, because ultimately they are all about making engineering decisions around the cars themselves. “We build our cars for performance, rather than for the mass market, that’s all. The software problems are similar.” Well, maybe he is just being modest.

Does Cadieux have one of the best jobs in IT? Certainly. “It is a very cool job and I am very lucky to have it. We get to solve some very interesting problems and be at the forefront of using tech to improve our business. It is great to work with such a cohesive team and be able to move very quickly, and merge engineering with our business needs.”

Speech: How to make your mobile phone safe from hackers

While the news about laptop camera covers can make any of us paranoid, the real cyber threat comes from the computer we all carry in our pockets and purses: our mobile phones. In this speech I am giving at Venture Cafe STL, I will describe some of the more dangerous cyber threats that can turn your phone into a recording device and launch pad for hackers, and how you can try to prevent these in your daily life.

iBoss blog: The new rules for MFA

In the old days — perhaps one or two years ago — security professionals were fond of saying that you need multi-factor authentication (MFA) to properly secure login identities. But that advice has to be tempered by the series of man-in-the-middle and other malware exploits that nullify the supposed protection of those additional factors. Times are changing for MFA, to be sure.
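For background, the most widely deployed additional factor today is the time-based one-time password (TOTP) generated by authenticator apps, and the man-in-the-middle problem is precisely that such a code can be phished and replayed within its short validity window. A minimal sketch of how the codes themselves are computed (RFC 6238, HMAC-SHA1 variant):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890" at T=59 seconds
print(totp(b"12345678901234567890", now=59, digits=8))  # → 94287082
```

Note that nothing in this computation binds the code to the site being visited, which is why a proxying attacker can relay it in real time.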

I wrote a three-part series for the iBoss blog about this topic. Here is part 1, which introduces the issues. Part 2 covers some of the new authentication technologies. If you are responsible for protecting your end users’ identities, you want to give some of these tools careful consideration. A good place to start your research is the site TwoFactorAuth, which lists which sites support MFA logins. (The Verge just posted their own analysis of the history of MFA that is well worth reading too.)

And part 3 goes into detail about why a multi-layered approach to MFA is best.

Enterprise.nxt: What to look for in your next CISO

Hiring a chief information security officer (CISO) is a tricky process. The job title is in the limelight, especially these days, when breaches are happening to so many businesses. The job turnover rate is high, with many CISOs quitting or getting fired because of security incidents or management frustration. And the supply of qualified candidates is low. According to the ISACA report, State of Cyber Security 2017, 48 percent of enterprises get fewer than 10 applicants for cybersecurity positions, and 64 percent say that fewer than half of their cybersecurity applicants are qualified. And that’s just the rank-and-file IT security positions, not the top jobs. So here are some things to consider when you need to find a CISO and you don’t want to hire a “chief impending sacrifice officer.”

Read my article in HPE’s Enterprise.nxt.

Behind the scenes at creating Stuxnet

Most of us remember the Stuxnet worm that infected the Iranian Natanz nuclear plant back in the late 2000s. I was privy to briefings from some of the researchers at Symantec who worked on decoding it, and wrote this piece for ReadWrite in 2011 about their efforts. Now you can watch the movie Zero Days, written and produced by Alex Gibney, on Netflix. It was released last year and goes into a lot more detail about how the worm came to be.

Gibney interviews a variety of computer researchers and intelligence agency officials, one of whom, an NSA source, is portrayed by an actress to disguise her identity. This person has the most interesting things to say in the movie, such as, “at the NSA we would laugh because we always found a way to get across the airgap.” She says that a combination of state-sponsored agencies from around the world collaborated on the worm’s creation and detonation at the plant. (Maybe that isn’t the best word to use, given it was an enrichment plant.) She also gives some insight into the interactions between the NSA and the Mossad over how changes to the worm were made. Sadly and ironically, the actions surrounding Stuxnet motivated Iran to build a more advanced nuclear program and assemble its own cyber army.

Many tech documentaries either oversell, undersell, or are just plain wrong about the details. Zero Days has none of these issues; it is a solid film that can be enjoyed by techies and the lay public alike. The role of cyber weapons and how we proceed in the future goes beyond Stuxnet, which, as the NSA source in the film says, “is just a back alley compared to what we can really do.”

Building a software-defined network perimeter

At his Synergy conference keynote, Citrix CEO Kirill Tatarinov mentioned that IT “needs a software defined perimeter (SDP) that helps us manage our mission critical assets and enable people to work the way they want to.” The concept is not a new one, having been around for several years.

An SDP replaces the traditional network perimeter, which was usually thought of as a firewall. The days when a firewall alone defined the perimeter are long gone, although you can still find a few IT managers who cling to this notion.

The SDP uses a variety of security software to define which resources are protected and to block entry points using standard protocols and methods. For example, the working group at the Cloud Security Alliance has decided on a control-channel architecture using standard components such as SAML, PKI, and mutual TLS connections to define this perimeter.
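Of those components, mutual TLS is the one that inverts the usual web model: the server verifies a client certificate before any application traffic flows. Here is a minimal sketch using Python’s standard ssl module (the certificate file paths are placeholders you would supply from your own PKI):

```python
import ssl

def mtls_server_context(certfile=None, keyfile=None, client_ca=None):
    """Server-side TLS context that also authenticates the client
    (mutual TLS). The certificate paths are placeholders; pass None
    to inspect the bare policy object without loading any files."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2       # no legacy TLS versions
    ctx.verify_mode = ssl.CERT_REQUIRED                # reject cert-less clients
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)  # server identity
    if client_ca:
        ctx.load_verify_locations(cafile=client_ca)    # CA that issues client certs
    return ctx
```

Wrapping an accepted socket with this context will fail the handshake for any client that cannot present a certificate signed by your CA, which is the SDP’s “deny by default” posture in miniature.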

Working groups such as these move slowly – this one has been hard at work since 2013 – but I am glad to see Citrix adding its voice here and singing the SDP tune.


But perhaps a better way to explain the SDP is as what is being called a “zero trust” network. An article in Network World earlier this year described the efforts at Google to move to this kind of model, whereby basically everyone on the network is guilty until proven innocent, or at least harmless. Every device is checked before being allowed access to resources. “Access is granted based on what Google knows about the end user and their device. And all access to services must be authenticated, authorized and encrypted,” according to the article.

This is really what an SDP is about: all of these access evaluations are based on software that checks for identity, software that examines whether a device has the right credentials, and software that makes sure traffic is encrypted across the network. Because Google is Google, they built their own solution, and it took them years to implement it across 20 different systems. What I liked about the Google implementation was that they installed the new systems across Google’s worldwide network and had them just inspect traffic for many months before turning enforcement on, to ensure that nothing broke their existing applications.

You probably don’t have the same “money is no object” philosophy and want something more off-the-shelf. But you probably want to start sooner rather than later on building your own SDP.

New security products of the week

As part of my duties writing and editing this email newsletter for Inside.com, I am always on the lookout for new security products. When I was at the Citrix Synergy show last week, I wanted to see the latest products. One of the booths drawing crowds was Bitdefender’s. They have a Hypervisor Introspection product that sits on top of XenServer v7 hypervisors. It is completely agentless and just runs memory inspections of the hosted VMs. Despite the crowds, I was less enamored of their solution than of others I have reviewed in the past for Network World, such as Trend Micro’s and HyTrust’s. (Note: that review is more than three years old, so take my recommendations with several spoonfuls of your favorite condiment.)

Nevertheless, having some protection riding on top of your VMs is essential these days, and you can be sure there were lots of booths scattered around the show floor that claimed to stop WannaCry in its tracks, given the publicity of this recent attack. Whether they actually would have done so is another matter entirely, I am just saying.

The Kaspersky booth was nearly empty, but they actually have a better mousetrap: their Virtualization Security products have been around for several years. Kaspersky supports a wider range of hypervisors (they run on top of VMware and Hyper-V as well as Xen). They offer an agentless solution for VMware that works with the vShield technology, and lightweight agents that run inside each VM for the other hypervisors. While you have to deploy the agents, you get more visibility into how the VMs operate. One company not here in Orlando but that I am familiar with in this space is Observable Networks: they don’t need agents because they monitor the network traffic and system logs produced by the hypervisor. So don’t make a decision based on the agents-versus-agentless argument alone; look closer at what the security tool is monitoring and what kinds of threats can really be prevented. Pricing on Kaspersky starts at $110 per virtual server with a single VM and $39 per virtual desktop that includes 10-14 VMs. Volume discounts apply.

IGEL was another crowded booth. They have developed thin clients in the form of a small-form-factor USB drive. If you have an Intel-based client with at least 2 GB of RAM and 2 GB of disk storage (such as an old Windows XP desktop or Wyse thin client), you can run a Citrix Receiver client that will basically extend the life of your aging desktop. A major health IT provider just placed a $2M order for more than 9,000 of these USB clients, saving itself millions in upgrades to its old Wyse terminals. I got to see a demo of their management interface at the show. “It looks like Active Directory with a policy-based tool, and it is super easy to manage and keep track of thousands of desktops,” their CEO, Jed Ayres, told me during the demo. Their product starts at $169 per device.

Another booth held an interesting biometric solution called Veridium ID. They have recently been verified as Citrix Ready, but have been around for a couple of years developing their product. I have seen several biometric products, but this one looked very interesting. Basically, for phones that have a fingerprint sensor, they make use of that as the additional authentication factor. If your phone doesn’t have such a sensor, it uses the camera to take a picture of four of your fingers (as you can see here). It works with any SAML ID provider, and at their booth they showed me a demo of it working with an ordinary website and with a Xen-powered solution. Their product starts at $25 per user, which is about half of what the traditional multi-factor vendors are selling their hardware or smart tokens for.