Should hacking back be legal?

Two reports, one recent and one from last year, have been published about the state of active cyber defense strategies.

The first, Into the Gray Zone: The Private Sector and Active Defense Against Cyber Threats, covers the work of a committee of government and industry experts convened by the Center for Cyber and Homeland Security at George Washington University, and it came out last October. The second, Private Sector Cyber Defense: Can Active Measures Help Stabilize Cyberspace?, was published this month by Wyatt Hoffman and Eli Levite, two fellows at the Carnegie Endowment, a DC think tank.

Both reports review the range of active cyber defense strategies. The techniques run from the familiar honeypot (a decoy server that appears to contain valuable information but actually serves as a lure to attract hackers) to botnet takedowns, white-hat (that is, legally sanctioned) uses of ransomware, and cyber ‘dye-packs’ that collect network information from a hacker and can possibly destroy his equipment, along with other hacking-back activities. The issue is where to draw the legal line for both government and private actors.

Active defense is nothing new: honeypots were used back in 1986 by Clifford Stoll, who created fake files promising military secrets to lure a spy onto his network, an effort he documented in his book The Cuckoo’s Egg. Of course, people have since gotten more sophisticated in their defense mechanisms, particularly as attacks have grown in both number and sophistication.
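
To make the idea concrete, here is a minimal honeypot sketch in the spirit of Stoll’s decoys: a fake login service that accepts connections, presents a tempting banner, and logs whatever an intruder sends. The port, banner text, and log file name are illustrative choices of mine, not anything taken from either report.

```python
import socket
from datetime import datetime

# Minimal honeypot sketch: pose as a telnet-like "secret document server"
# and record every connection attempt and whatever the visitor types.
HOST, PORT = "0.0.0.0", 2323  # illustrative port

with socket.create_server((HOST, PORT)) as server:
    print(f"honeypot listening on port {PORT}")
    while True:
        conn, addr = server.accept()
        with conn:
            conn.sendall(b"SDINET Secret Document Server\r\nlogin: ")
            data = conn.recv(1024)
        with open("honeypot.log", "a") as log:
            log.write(f"{datetime.now()} {addr[0]} sent {data!r}\n")
```

The value is not in the fake banner but in the log: every line is evidence of someone probing where no legitimate user has any business being.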

The first report dissects two active defense case studies that are available in the public literature: Google’s reaction to Operation Aurora in 2009, an attack that originated in China, and the takedown of the Dridex banking Trojan botnet in 2015. Google made use of questionably legal discovery technologies but was never prosecuted by any law enforcement agency. Dridex was neutralized through the cooperation of several government agencies and private-sector efforts, which resulted in the extradition and conviction of Andrey Ghinkul.

In both cases, the GWU report shows that attribution of the malware’s source was possible, but only with tremendous cooperation from a variety of private and government sources. That is the good news.

Speaking of cooperation, that is where the second report comes into play: it compares the cyber efforts with the commercial shipping industry’s experience with piracy on the high seas. After it became clear that governments’ military efforts were an insufficient response to the piracy problem, the demand for private-sector security services increased dramatically. While governments initially resisted their involvement, they begrudgingly accepted that the active defense measures deployed by shipowners, in consultation with insurance providers, were helping to deter attacks and that the tradeoffs in risk were unavoidable. The bottom line: the private sector filled a critical gap in protection by working together.

But here is the problem, as true now as last fall when the GWU report was published: a private business has no explicit right of self-defense when it comes to a cyber attack, and in most cases taking action could run afoul of US law. There are various legal remedies available to the government, but not to an ordinary business. As the GWU report states, “US law is commonly understood to prohibit active defense measures that occur outside the victim’s own network. This means that a business cannot legally retrieve its own data from the computer of the thief who took it, at least not without court-ordered authorization.” What makes matters worse is the number of unfilled cyber job openings in those government agencies: even though they have the authority, they are woefully understaffed to take any action.

The GWU report puts forth a risk-based framework for how government and the private sector can work together to solve this problem, and you can read their various recommendations if you are interested. 

It is a tricky situation. One of the GWU report’s authors is Nuala O’Connor, the president and CEO of the Center for Democracy & Technology. She notes that whether “more aggressive active defense measures might become lawful” would be “based on considerations like whether they were conducted in conjunction with the government and the intent of the actor,” and she sees problems with that. “I believe these types of measures should remain unlawful. Intent can be difficult to measure, particularly when on the receiving end of an effort to gain access.”

The Carnegie authors admit that their shipping analogy has its limitations, but they correctly point out that when the government is lacking in its efforts, the private sector will step in and fill the gap with its own solutions. As they say, “Malicious cyber actors motivated by geopolitical objectives, however, may have a far different calculus than cybercriminals, which affects whether and how they can be deterred.” In the meantime, my point in bringing up this issue is to get you thinking about active cyber defense strategies for your own business.

FIR B2B Podcast #75: BETH WINKOWSKI DOES B2B PR VERY WELL

There’s no shortage of people willing to bash bad PR practices, but we prefer to take a more positive tone. This week we speak with someone who does B2B PR very well.

Beth Winkowski has had her own PR firm for decades, after working for leading-edge tech companies back in the 1990s. Both Paul and I have tremendous respect for her, not just for the quality of her communications but for the very skillful way in which she handles journalist relationships. She sends out press releases for all events announced by her clients but asks for press briefings only occasionally, so both Paul and I know that when Beth asks for a briefing, the announcement is important. Her clients are well prepared, with PowerPoint decks that explain but don’t overwhelm. She sends a confirmation the morning of the call along with the final press release. She always includes graphics. These sound like small things, but it’s amazing how few agencies attend to these small details.

When we first contacted Beth about being a guest, she demurred, saying that she seeks publicity only for her clients and not for herself. She has no website because she doesn’t want to appear to be promoting her own interests ahead of those of her clients. It’s these kinds of philosophical details that are important. Beth had a lot of wisdom to impart on our show, and while most of it is common sense, that is a big part of why she is the consummate PR pro.

Listen to our 19 min. podcast here.

Stopping phishing

When IT professionals talk about phishing attacks, they are quick to blame uneducated users who aren’t really focused on processing their emails. While this is certainly one of the causes – and one of the reasons why phishing remains so popular among attackers – you can’t fault even the most eagle-eyed users, because several trends are making it harder to spot phony emails. A combination of more subtle attacks using non-Roman URL characters, more focus on mobile man-in-the-middle exploits, greater use of SSL certificates, and more mobile email usage has created new opportunities for phishers.

Homograph attacks. Even the sharpest-eyed observer will have a hard time detecting this latest phishing technique, known as a Punycode or IDN homograph typosquatting attack. The idea is simple: back in the day, the Internet standards bodies expanded domains and URLs to handle non-Roman alphabet characters. The trouble is that many of these characters look very similar to the ordinary ones that you and I use in our Roman alphabet. Spammers purchased domains that look just like all-Roman ones, with one or two characters swapped in from some other character set. This post from Wordfence shows how subtle these homographs really are, making them almost impossible for anyone to detect. There is further discussion on this site about how phishers operate.
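
You can see the trick for yourself in a few lines of Python. This is a minimal sketch of my own: it compares a domain whose first letter is the Cyrillic “а” (U+0430) against the genuine all-Latin name and shows the xn-- form that actually goes over the wire.

```python
# Minimal homograph demo: the first "а" in spoofed is Cyrillic U+0430,
# not the Latin letter, yet the two strings look identical in most fonts.
spoofed = "аpple.com"   # Cyrillic а + "pple.com"
genuine = "apple.com"   # all Latin letters

print(spoofed == genuine)        # False -- different code points
print(spoofed.encode("idna"))    # b'xn--pple-43d.com' is what DNS sees
print(genuine.encode("idna"))    # b'apple.com'
print(hex(ord(spoofed[0])), hex(ord(genuine[0])))  # 0x430 vs 0x61
```

The browser renders the pretty Unicode form; only the underlying xn-- encoding betrays the fake, and that is exactly what users never see.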

More mobile email usage. This makes it harder to see (and then vet) the URL bar when a browser session is opened on your phone. Mobile app designers want as much screen real estate as they can get to show a web page, which means the URL line is often hidden or quickly scrolls off the screen as you move down a page. Even if you wanted to pay attention, you probably don’t bother to scroll back up to see it. Making things worse, the criminals are producing better copies of real web pages. The crooks are getting better at using the exact same HTML code that a bank or retailer uses, which makes their pages harder to distinguish even when viewed on a full-sized PC screen.

More SSL encryption usage. Ironically, an effort begun several years ago by Google and the non-profit foundation behind the Let’s Encrypt website has made the problem worse. That website makes it dirt simple to obtain a free SSL certificate in a matter of seconds, so the warning signs that browsers show in the URL bar when you aren’t connecting to a secure website are almost moot now. While it is great that more than half of all web traffic is now encrypted, we need better mechanisms than just a red/green indicator to help users understand what they are viewing.
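
A short sketch shows why the padlock alone proves so little: any site, phishing or not, can present a perfectly valid certificate. This just pulls a site’s certificate and prints who issued it, to whom, and when it expires; the hostname is a placeholder for whatever you want to inspect.

```python
import socket
import ssl

# Fetch and inspect a site's certificate. A valid certificate only proves
# the connection is encrypted to that exact domain name -- not that the
# domain is legitimate, which is why a green padlock is a weak signal.
def get_cert_info(hostname: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

cert = get_cert_info("example.com")  # placeholder hostname
print("Issuer: ", dict(rdn[0] for rdn in cert["issuer"]))
print("Subject:", dict(rdn[0] for rdn in cert["subject"]))
print("Expires:", cert["notAfter"])
```

Run this against a lookalike phishing domain and everything checks out cryptographically; the certificate is doing its job, just for the wrong site.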

More frequent MITM attacks on mobile apps. Security researcher Will Strafach published a report earlier this year demonstrating numerous iOS apps that were vulnerable to man-in-the-middle attacks, which allow attackers to intercept data as it passes from a device to a server. That grabbed a few headlines, but apparently wasn’t enough: in a more recent report, he has continued to track these apps and shows that many of them are still vulnerable.
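
The underlying flaw in most of these apps boils down to making TLS connections without validating the server’s certificate. Here is a minimal Python sketch contrasting the vulnerable pattern with the correct one; the hostname is a stand-in for whatever API endpoint an app talks to.

```python
import socket
import ssl

# Vulnerable pattern: certificate validation disabled, so a
# man-in-the-middle can present any certificate and silently
# read or alter the traffic.
insecure_ctx = ssl._create_unverified_context()

# Correct pattern: verify the certificate chain AND the hostname
# against the platform's trust store.
secure_ctx = ssl.create_default_context()
assert secure_ctx.check_hostname
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED

host = "example.com"  # placeholder for an app's API endpoint
with socket.create_connection((host, 443), timeout=5) as sock:
    with secure_ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("verified connection,", tls.version())
```

The vulnerable version works fine in testing, which is precisely why it ships; only an active interception reveals the difference.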

So what is being done? The browser vendors are doing a better job of detecting homograph URLs (if you are not running at least Chrome 59 or Firefox 53, please do upgrade now). Many network security vendors are fine-tuning their tools to better detect compromised emails, track the reputations of malware control sites, or use other techniques to try to neutralize the phishers. Some enterprises are deploying secure browsers to limit the damage of a phished link.

Clearly, this will take a combination of approaches to fight this continued battle. Phishing is a war of attrition. All it takes is one less-attentive user and the game is on. And it requires constant vigilance — by all of us.

Enterprise.nxt: What to look for in your next CISO

Hiring a chief information security officer (CISO) is a tricky process. The job title is in the limelight, especially these days, when breaches are happening to so many businesses. The job turnover rate is high, with many CISOs quitting or getting fired because of security incidents or management frustration. And the supply of qualified candidates is low. According to the ISACA report State of Cyber Security 2017, 48 percent of enterprises get fewer than 10 applicants for cybersecurity positions, and 64 percent say that fewer than half of their cybersecurity applicants are qualified. And that’s just the rank-and-file IT security positions, not the top jobs. So here are some things to consider when you need to find a CISO and you don’t want to hire a “chief impending sacrifice officer.”

Read my article in HPE’s Enterprise.nxt.

Behind the scenes at creating Stuxnet

Most of us remember the Stuxnet worm that infected the Iranian Natanz nuclear plant back in the early 2000s. I was privy to briefings from some of the researchers at Symantec who worked on decoding it, and I wrote this piece for ReadWrite in 2011 about their efforts. Now you can rent the movie Zero Days, written and produced by Alex Gibney, on Netflix. It was released last year and goes into a lot more detail about how the worm came to be.

Gibney interviews a variety of computer researchers and intelligence agency officials, one of whom, an NSA source, is portrayed by an actress to disguise her identity. This person has the most interesting things to say in the movie, such as “at the NSA we would laugh because we always found a way to get across the airgap.” She admits that a combination of state-sponsored agencies from around the world collaborated on the worm’s creation and detonation at the plant. (Maybe that isn’t the best word to use, given it was an enrichment plant.) She also gives some insight into the interactions between the NSA and the Mossad over how changes to the worm were made. Sadly and ironically, the actions surrounding Stuxnet motivated Iran to build the more advanced nuclear program it has today and to assemble its own cyber army.

Many tech documentaries either oversell, undersell, or are just plain wrong about the details. Zero Days has none of these issues; it is a solid film that can be enjoyed by techies and the lay public alike. The role of cyber weapons and how we proceed in the future goes beyond Stuxnet, which, as the NSA manager says, “is just a back alley compared to what we can really do.”

FIR B2B Podcast #74: IN THE ‘CIRCULAR ECONOMY,’ SUSTAINABILITY IS GOOD BUSINESS

The “circular economy” is about more than just sustainability or preserving the environment. It’s a new economic model based upon the idea of maximizing the lifetime value of resources for as long as possible, whether through recycling, reuse or sharing. It’s a concept that underscores the growth of the so-called “sharing economy” and is paying benefits in the form of new product concepts and improved customer engagement.

The Conference Board recently published a report that defines the circular economy and offers examples of how it’s changing the way some businesses work. Thomas Singer, who authored the report, joins Paul Gillin and me to summarize its findings and discuss the long-term impact on businesses and marketers.

Thomas is a principal researcher in corporate leadership at The Conference Board and author of numerous publications, including “Driving Revenue Growth through Sustainable Products and Services” and the comprehensive corporate sustainability benchmarking report “Sustainability Practices.” You can listen to our 19m podcast here:

Document your network

Over the weekend, I had an interesting experience. Normally I don’t go into my office, which is across the street from my apartment, on weekends, but yesterday the cable guy was coming to try to fix my Internet connection. During the past week, my cable modem would suddenly “forget” its connection. It was odd because all the idiot lights were solidly illuminated, and there seemed to be no physical event associated with the break. After I power-cycled the modem, my connection would come back up.

I was lucky: I got a very knowledgeable cable guy, and he worked hard to figure out my issue. I will spare you most of the details and just tell you that he ended up replacing a video splitter that was causing my connection to drop. Cable Internet uses a shared connection, so my problem could have had multiple causes, such as a chatty neighbor or a misbehaving modem down the block. But once we replaced the splitter, I was good to go.

Now, I have been in my office for several years, and indeed built it out from unfinished space when I first moved in. I designed the cable plant and told my contractor where to pull all the wires and terminate them. But that was years ago. I didn’t document any of this, or if I did, I have since misplaced that information. The cable tech took the time to make up for my oversight: he tracked down the misbehaving video splitter, which was buried inside one of my wall plates. And that is one of the morals of this story: always be documenting your infrastructure. It costs you less to do it contemporaneously, while you are building, than to come back after the fact and try to remember where your key network parts are located or how they are configured.

Part of this story was that I was using an Evenroute IQrouter, an interesting wireless router that can optimize for latency. It let me bring up a graph showing the last several minutes’ connection details, so I knew it wasn’t my imagination.


Now my network is puny compared to most companies’, to be sure. But I have been in some larger businesses that don’t do any better a job of keeping track of their gear. Oh, the wiring closets I have been in, let me tell you! They look more like spaghetti. For example, here I am in the offices of Checkpoint in Israel in January 2016. Granted, this was in one of their test labs, but still, it shouldn’t look like this (I am standing next to Ed Covert, a very smart infosec analyst):


Compare this with how they should look. This was taken in a demonstration area at Panduit’s offices. Granted, it was set up to show how neat and organized their cabling could be.

Documentation isn’t just about making pretty color-coded cables nice and neat, although you can start there. The real problem is when you have to change something, and then you need to keep track of what you did. This means being diligent when you add a new piece of gear, change your IP address range, or add a new series of protocols or applications. So many times you hear about network administrators who opened a particular port and didn’t remember to close it once the reason for the request was satisfied, or about a username that was still active months or years after the user had left the company. I had an email address on Infoworld’s server for years after I no longer wrote for them, and I tried to get it turned off to no avail.
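
In that spirit, here is a minimal sketch of what contemporaneous documentation can look like in automation: snapshot a server’s listening TCP ports and diff them against the last saved baseline, so a port opened for a one-off request doesn’t linger unnoticed. It assumes a Unix host with the ss utility; the baseline file name is arbitrary.

```python
import json
import subprocess
from datetime import date
from pathlib import Path

# Snapshot listening TCP ports and diff against the last baseline.
BASELINE = Path("port-baseline.json")

def listening_ports() -> set:
    # ss -tlnH: TCP, listening, numeric, no header; column 4 is addr:port
    out = subprocess.run(["ss", "-tlnH"], capture_output=True, text=True).stdout
    return {line.split()[3] for line in out.splitlines() if line.strip()}

current = listening_ports()
if BASELINE.exists():
    old = set(json.loads(BASELINE.read_text()))
    for port in sorted(current - old):
        print("NEW  listener:", port)
    for port in sorted(old - current):
        print("GONE listener:", port)
BASELINE.write_text(json.dumps(sorted(current)))
print(f"baseline updated {date.today()}")
```

Run it from cron and the diff output becomes a running changelog of your network, written while the changes are still fresh in your mind.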

So take the time and document everything. Otherwise you will end up like me, with a $5 part inside one of your walls that is causing you downtime and aggravation.

Building a software-defined network perimeter

At his Synergy conference keynote, Citrix CEO Kirill Tatarinov mentioned that IT “needs a software defined perimeter (SDP) that helps us manage our mission critical assets and enable people to work the way they want to.” The concept is not a new one, having been around for several years.

An SDP replaces the traditional network perimeter, which is usually thought of as a firewall. The days when a firewall alone defined the perimeter are long gone, although you can still find a few IT managers who cling to that notion.

The SDP uses a variety of security software to define which resources are protected and to block entry points using specific protocols and methods. For example, the working group at the Cloud Security Alliance has decided on a control-channel architecture using standard components such as SAML, PKI, and mutual TLS connections to define this perimeter.
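
To illustrate just the mutual TLS piece of that architecture, here is a minimal Python sketch of a control-channel server that refuses any client lacking a certificate signed by the organization’s own CA. The file names, port, and address are placeholders of mine, not anything from the CSA specification.

```python
import socket
import ssl

# Server side of a mutual TLS control channel: the client must present
# a certificate signed by our CA, or the handshake fails outright.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")  # our identity
ctx.load_verify_locations(cafile="org-ca.pem")  # trust only our own CA
ctx.verify_mode = ssl.CERT_REQUIRED             # client certificate mandatory

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()  # unauthenticated clients never get this far
        print("authenticated client:", conn.getpeercert()["subject"])
```

The point of the design is that the perimeter is no longer a place on the network; it is the possession of a valid credential, checked on every connection.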

Working groups such as these move slowly – this one has been hard at work since 2013 – but I am glad to see Citrix adding its voice here and singing the SDP tune.


But perhaps a better way to explain the SDP is what is being called a “zero trust” network. An article in Network World earlier this year described the efforts at Google to move to this kind of model, whereby basically everyone on the network is guilty until proven innocent, or at least harmless. Every device is checked before being allowed access to resources. “Access is granted based on what Google knows about the end user and their device. And all access to services must be authenticated, authorized and encrypted,” according to the article.

This is really what an SDP is about, because all of these access evaluations are based on software: software that checks identity, software that examines whether a device has the right credentials, and software that makes sure traffic is encrypted across the network. Because Google is Google, they built their own solution, and it took them years to implement it across 20 different systems. What I liked about the Google implementation was that they installed the new systems across Google’s worldwide network and had them just inspect traffic for many months before turning them on, to ensure that nothing broke their existing applications.
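
As a toy sketch of what such a per-request access evaluation might look like in code (the fields and policy here are illustrative inventions of mine, not Google’s actual schema), consider:

```python
from dataclasses import dataclass

# Zero-trust access check: every request is judged on what we know
# about the user AND the device, with no notion of a trusted internal
# network segment.
@dataclass
class Request:
    user_authenticated: bool
    user_groups: set
    device_id: str
    device_patched: bool
    tls_encrypted: bool

DEVICE_INVENTORY = {"laptop-4821"}  # known, managed devices (illustrative)

def allow(req: Request, resource_group: str) -> bool:
    return (
        req.user_authenticated                   # authenticated
        and resource_group in req.user_groups    # authorized for this resource
        and req.device_id in DEVICE_INVENTORY    # known device
        and req.device_patched                   # healthy device
        and req.tls_encrypted                    # encrypted transport
    )

req = Request(True, {"engineering"}, "laptop-4821", True, True)
print(allow(req, "engineering"))  # True only if every check passes
```

Notice there is no check for “is the request coming from inside the building” anywhere; that is the whole point.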

You probably don’t have the same “money is no object” philosophy and will want something more off-the-shelf. But you probably do want to start building your own SDP sooner rather than later.

New security products of the week

As part of my duties writing and editing this email newsletter for Inside.com, I am always on the lookout for new security products, so when I was at the Citrix Synergy show last week, I wanted to see the latest ones. One of the booths drawing crowds was Bitdefender’s. They have a Hypervisor Introspection product that sits on top of XenServer v7 hypervisors; it is completely agentless and just runs memory inspections of the hosted VMs. Despite the crowds, I was less enamored of their solution than of others I have reviewed in the past for Network World, such as TrendMicro’s and Hytrust’s. (Note: that review is more than three years old, so take my recommendations with several spoonfuls of your favorite condiment.)

Nevertheless, having some protection riding on top of your VMs is essential these days, and you can be sure there were lots of booths scattered around the show floor that claimed to stop WannaCry in its tracks, given the publicity of that recent attack. Whether they actually would have done so is another matter entirely; I am just saying.

Over at the Kaspersky booth, things were nearly empty, but they actually have a better mousetrap and have had their Virtualization Security products for several years. Kaspersky supports a wider range of hypervisors (they run on top of VMware and Hyper-V as well as Xen). They offer an agentless solution for VMware that works with the vShield technology, and lightweight agents that run inside each VM for the other hypervisors. While you have to deploy the agents, you get more visibility into how the VMs operate. One company not here in Orlando, but that I am familiar with in this space, is Observable Networks: they don’t need agents because they monitor the network traffic and system logs produced by the hypervisor. So don’t make a decision based solely on the agent-vs.-agentless argument; look closer at what the security tool is monitoring and what kinds of threats it can really prevent. Pricing on Kaspersky starts at $110 per virtual server with a single VM and $39 per virtual desktop that includes 10-14 VMs. Volume discounts apply.

IGEL was another crowded booth. They have developed thin clients in the form of a small-form-factor USB drive. If you have an Intel-based client with at least 2 GB of RAM and 2 GB of disk storage (such as an old Windows XP desktop or Wyse thin client), you can run a Citrix Receiver client that will basically extend the life of your aging desktop. A major health IT provider just placed a $2M order for more than 9,000 of these USB clients, saving itself millions in upgrades to its old Wyse terminals. I got to see a demo of the management interface at the show. “It looks like Active Directory with a policy-based tool, and it is super easy to manage and keep track of thousands of desktops,” their CEO, Jed Ayres, told me during the demo. Their product starts at $169 per device.

Another booth held an interesting biometric solution called Veridium ID. They have recently been verified as Citrix Ready, but have been around for a couple of years developing their product. I have seen several biometric products, but this one looked very interesting. Basically, for phones that have a fingerprint sensor, it uses that as the additional authentication factor. If your phone doesn’t have such a sensor, it uses the camera to take a picture of four of your fingers. It works with any SAML ID provider, and at their booth they showed me a demo of it working with an ordinary website and with a Xen-powered solution. Their product starts at $25 per user, which is about half of what the traditional multi-factor vendors charge for their hardware or smart tokens.

FIR B2B #73: WHAT’S GOOD ABOUT TODAY’S TRADE SHOWS

We all love to carp about trade shows, so this time we thought we’d take a different approach and highlight some of the noteworthy moments from our many years of covering and speaking at them. David went to two different shows in Orlando last month and compares how they were run and what he learned. Paul has been to many vendor-focused shows over the years and offers his perspective. The best shows all have this in common:

  • Solid speakers who have compelling stories, often drawn from the end-user community. We realize that some shows are run for profit and sell sponsorships (which often include a speaking “slot”). Still, the better speakers will always generate more buzz, coverage, and attendee response. These speakers aren’t afraid of telling tales that mix positive and negative experiences with the vendor’s products.
  • The smaller, more vendor-driven shows will naturally collect the faithful and the boosters; there’s no need to amplify or oversell to this crowd.
  • User-run shows, such as those for VMware and Teradata, are often better than those that are vendor-run.
  • Having executives who “give good interview” is key: not all of them can (even with some training) do this.
  • PR teams who know what reporters like and tailor their schedules accordingly, rather than setting up too many “meet-n-greets” that keep us off the show floor.

Speaking of bad PR, Paul ends our episode with a tale of woe about one PR person who admitted that a news item was over a year old. Telling the truth is always a good operating philosophy.
Listen to the 18m podcast here: