Security

Learning from a great public speaker, Reuben Paul

I got a chance to witness a top-rated speaker ply his trade at a conference I attended this week here in St. Louis. The conference, called DoDIIS, was a gathering of several hundred people who work in IT for our intelligence agencies. When I signed up for press credentials, I didn’t know he was going to be speaking, but I was glad I could see him in action. As someone who speaks professionally to similar groups, I like to learn from the best, and he was certainly in that category.

The odd thing about this person is that he is still a kid, an 11-year-old to be exact. His name is Reuben Paul and he lives in Austin. Reuben has already spoken at numerous infosec conferences around the world, and he “owned the room,” as one of the generals who runs one of the security agencies put it in a subsequent speech. What made Reuben (I can’t quite bring myself to use his last name as common style dictates, sorry) so potent a speaker is that he was funny and self-deprecating as well as informative, both entertaining and instructive. He did his signature story, as we in the speaking biz like to call it: a routine where he hacks into a plush toy teddy bear (shown here sitting next to him on the couch along with Janice Glover-Jones, the CIO of the Defense Intelligence Agency) using a Raspberry Pi connected to his Mac.

The bear uses a Bluetooth connection to reach the Internet, along with a microphone to pick up ambient sound. In a matter of minutes, Reuben showed the audience how he could record a snippet of audio and play it back through the bear’s speaker, using some common network discovery tools and a few Python commands. Yes, the kid knows Python, something that impressed several of the parade of military generals who spoke afterwards. Those generals were semi-seriously vying to get the kid to work for their intelligence agencies once he is no longer subject to child labor restrictions.
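
For readers curious about what that kind of demo involves, here is a minimal sketch of Bluetooth Low Energy discovery in Python using the third-party bleak library. This is only an illustration of the general technique; the actual tools and commands Reuben used on stage aren’t documented here.

```python
# A minimal sketch of Bluetooth Low Energy discovery in Python, using the
# third-party "bleak" library (pip install bleak). This only illustrates the
# kind of scan such a demo starts with; it is not Reuben's actual code.
import asyncio
from bleak import BleakScanner

async def scan_for_toys():
    # Scan for a few seconds and print every advertising device we can see.
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(f"{d.address}  {d.name or '(unnamed device)'}")

if __name__ == "__main__":
    asyncio.run(scan_for_toys())
```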

The kid is also current with the security issues of the Internet of Things, and can show you how an innocent toy can become the leverage point for hackers to enter your home and take control without your knowledge. This has become very topical, given the recent wave of attacks such as WannaCry and Petya, along with others that specifically target these connected objects.

Reuben also managed to shame the IT professionals attending the conference. As the video monitors on stage showed him scrolling down the list of network addresses from phones that were broadcasting their Bluetooth signals, he told us, “if you see your phone listed here, you might remember next time to turn off your Bluetooth for your own protection.” That got a laugh from the audience. Yes, this kid was shaming us and no one got upset! We were in the presence of a truly gifted speaker. I had made a similar point about Bluetooth vulnerability in a speech of my own just a couple of weeks ago, and much less adroitly.

Reuben isn’t just a one-trick pony (or bear), either. The kid has already set up several businesses, which is impressive enough even without considering his public speaking prowess. One of them is a venture that helps teach kids basic cybersecurity concepts. Clearly, he knows his audience, which is another tenet of a good speaker. If you ever get a chance to see him in person, do make the effort.

Read More
iBoss blog: What Is the CVE and Why It Is Important

The Common Vulnerabilities and Exposures (CVE) program was launched in 1999 by MITRE to identify and catalog vulnerabilities in software and firmware, creating a free lexicon to help organizations improve their security. Since its creation, the program has been very successful and is now used to link together different vulnerabilities and to facilitate the comparison of security tools and services. You now see evidence of its work in the unique CVE number that accompanies a vulnerability announcement from a security researcher.
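
As a small illustration of how that numbering scheme works, here is a minimal sketch that validates and parses CVE identifiers, which follow the pattern CVE-<year>-<sequence> with a sequence of at least four digits. It uses only the Python standard library; the example IDs are just for demonstration.

```python
import re

# CVE identifiers follow the pattern CVE-<year>-<sequence>, where the
# sequence number is at least four digits (e.g. CVE-2017-0144).
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(cve_id: str):
    """Return (year, sequence) for a well-formed CVE ID, or None."""
    match = CVE_PATTERN.match(cve_id.strip().upper())
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_cve_id("CVE-2017-0144"))   # (2017, 144)
print(parse_cve_id("not-a-cve"))       # None
```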

In my latest blog post for iBoss, I look at how the CVE got started, where it is used, and the role it plays in sharing threat information.

Read More
When anonymous web data isn’t anymore

One of my favorite NY Times technology stories (other than, ahem, my own articles) is one that ran more than ten years ago. It was about a supposedly anonymous AOL user who was picked out of a huge database of search queries by researchers. They were able to correlate her searches and track down Thelma, a 62-year-old widow living in Georgia. The database was originally posted online by AOL as an academic research tool, but after the Times story broke it was removed. The data “underscore how much people unintentionally reveal about themselves when they use search engines,” the Times story said.

In the years since that story, tracking technology has gotten better and Internet privacy has all but disappeared. At the DEFCON conference a few weeks ago in Vegas, researchers presented a paper on how easy it can be to track down individuals based on their digital breadcrumbs. The researchers set up a phony marketing consulting firm and requested anonymous clickstream data to analyze. They were actually able to tie real users to the data through a series of well-known tricks, described in this report in Naked Security. They found that if they could correlate personal information across ten different domains, they could figure out who the common user visiting those sites was, as shown in the diagram published in the article.
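
To make the correlation idea concrete, here is a toy Python sketch of the intersection principle the researchers described: if you know a handful of domains that only one person is likely to visit together, the pseudonymous ID that shows up on all of them is probably that person. The data and domain names below are made up; this is not the researchers’ actual method or dataset.

```python
# Toy illustration of clickstream de-anonymization: find the pseudonymous ID
# that appears on every one of a target's known domains. All data is invented.
from collections import defaultdict

clickstream = [
    ("user-4821", "example-personal-blog.com"),
    ("user-4821", "analytics.twitter.com"),
    ("user-4821", "intranet.example-employer.com"),
    ("user-7730", "analytics.twitter.com"),
    ("user-7730", "news.example.com"),
]

# Domains only our target is likely to visit as a group.
known_domains = {
    "example-personal-blog.com",
    "analytics.twitter.com",
    "intranet.example-employer.com",
}

visits = defaultdict(set)
for pseudonym, domain in clickstream:
    visits[pseudonym].add(domain)

suspects = [p for p, d in visits.items() if known_domains <= d]
print(suspects)  # ['user-4821'] -- the "anonymous" ID is no longer anonymous
```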

The culprits are browser plug-ins and embedded scripts on web pages, which I have written about before here. “Five percent of the data in the clickstream they purchased was generated by just ten different popular web plugins,” according to the DEFCON researchers.

So is this just some artifact of gung-ho security researchers, or does it have real-world implications? Sadly, it is very much a reality. Last week Disney was served legal papers over secretly collecting kids’ usage data from its mobile apps; the suit claims the apps (which don’t ask for parents’ permission, which is illegal) can track kids across multiple games, all in the interest of serving up targeted ads. The full list of 43 apps that collect this tracking data can be found here.

So what can you do? First, review your plug-ins and delete the ones you really don’t need. In my article linked above, I tried out Privacy Badger and have continued to use it; it can be entertaining or terrifying, depending on your point of view. You could also regularly delete your cookies and always run private browsing sessions, although you give up some usability by doing so.

Privacy just isn’t what it used to be. And it is a lot of hard work to become more private these days, for sure.

Read More
Is iOS more secure than Android?

I was giving a speech last week, talking about mobile device security, and one member of my audience asked me this question. I gave the typical IT answer, “it depends,” and then realized I needed a little bit more of an explanation. Hence this post.

Yes, in general, Android is less secure than All The iThings, but there are circumstances where Apple has its issues too. A recent article in ITworld lays out the specifics. There are six major points to evaluate:

  1. How old is your device’s OS? The problem in both worlds comes when owners stick with older OS versions and don’t upgrade. As vulnerabilities are discovered, Google and Apple come out with updates and patches; the trick is in actually installing them. Compare user behavior across the two platforms: the most up-to-date Android version, Nougat, has less than 1% market share, while more than 90% of iOS users have moved to iOS v10. Maybe in your household or corporation the profiles are different. But as long as you use the most recent OS and keep it updated, both platforms are pretty solid right now.
  2. Who are the hackers targeting with their malware? Security researchers have seen a notable increase in malware targeting all mobile devices lately, but there seem to be more Android-based exploits. It is hard to say for sure, because there isn’t any consistent way to count. And the newer trend of targeting CEOs with “whale” phishing attacks, or targeting specific companies for infection, doesn’t really help: if a criminal is trying to worm their way into your company, all the statistics and trends in the universe don’t really matter. I’ve seen reports of infections that “only” resulted in a few dozen devices being compromised, yet because they were all from one enterprise, the business impact was huge.
  3. Where do the infected apps come from? Historically, Google Play certainly has seen more infected apps than the iTunes Store. Some of these Android apps (such as Judy and FalseGuide) have infected millions of devices. Apple has had its share of troubled apps, but typically they are more quickly discovered and removed from circulation.
  4. Doesn’t Apple do a better job of screening its apps? That used to be the case, but no longer; the two companies are roughly at parity now. Google, for example, has its Play Protect service that automatically scans your device to detect malware. Still, all it takes is one bad app and your network security is toast.
  5. Who else uses your phone? If you share your phone with your kids and they download their own apps, well, you know where I am going here. The best strategy is not to let your kids download anything to your corporate devices. Or even your personal ones.
  6. What about my MDM, shouldn’t that protect me from malicious apps? Well, having a corporate mobile device management solution is better than not having one. These tools can implement app whitelisting and segregate work and personal apps and data (a minimal sketch of the whitelisting idea follows this list). But an MDM won’t handle every security issue, such as preventing someone from using your phone to escalate privileges, detecting data exfiltration, or stopping a botnet running from inside your corporate network. Again, a single phished email and your phone can become compromised.
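
Here is the whitelisting sketch promised above: a minimal Python illustration of the check an MDM performs when it compares a device’s installed apps against an approved list. It is not the API of any particular MDM product, and the package names are invented.

```python
# A minimal sketch of app whitelisting as an MDM might enforce it.
# Not any vendor's actual API; all package names below are made up.
APPROVED_APPS = {
    "com.example.corp.mail",
    "com.example.corp.vpn",
    "com.example.corp.chat",
}

def check_device(installed_packages):
    """Return the set of installed packages that are not on the whitelist."""
    return set(installed_packages) - APPROVED_APPS

violations = check_device([
    "com.example.corp.mail",
    "com.example.freegame.judy",   # hypothetical unapproved app
])
print(violations)  # {'com.example.freegame.judy'} -> flag or block the device
```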

Is Android or iOS inherently more secure? As you can see, it really depends. Yes, you can construct corner cases where one or the other poses more of a threat. Just remember, security is a journey, not a destination.

Read More
Do real people want real encryption?

The short answer is a resounding Yes! Let’s discuss this topic which has spanned generations.

The current case in point has to do with terrorists using WhatsApp. For those of you who don’t use it, it is a text messaging app that also enables voice and video conversations. I started using it when I first went to Israel, because my daughter and most of the folks I met there professionally were using it constantly. It has become a verb, the way Uber and Google have for getting a ride and searching for stuff. Everything is encrypted end-to-end.

This is why the bad guys also use it. In a story that my colleague Lisa Vaas posted here in Naked Security, she quotes UK Home Secretary Amber Rudd on some remarks she made recently. For those of you who aren’t familiar with the UK government, this office covers a wide collection of duties, mixing what Americans would find in our Homeland Security and Justice Departments. Rudd said, “Real people often prefer ease of use and a multitude of features to perfect, unbreakable security.” She was making a plea for tech companies to loosen up their encryption, just a little bit mind you, because of her government’s inability to see what the terrorists are doing: “However, there is a problem in terms of the growth of end-to-end encryption” because police and security services aren’t “able to access that information.” Her idea is to serve warrants on the tech companies and get at least metadata about the encrypted conversations.

This sounds familiar: after the Charlie Hebdo attacks in Paris two years ago, then-Prime Minister David Cameron issued similar calls to break into encrypted conversations. They went nowhere.

Here is the problem: you can’t have just a little bit of encryption, just like you can’t be a little bit pregnant. Either a message (or an email or whatever) is encrypted, or it isn’t. If you want to selectively break encryption, you can’t guarantee that the bad guys won’t go down the same route. And if vendors have access to passwords (as some have suggested), that is a breach “waiting to happen,” as Vaas says in her post. “Weakening security won’t bring that about, however, and has the potential to make matters worse.”
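
A small code sketch makes the all-or-nothing point concrete. It uses the Python cryptography package’s Fernet recipe as a stand-in for any modern cipher (WhatsApp actually uses the Signal protocol, which is far more involved): without the key, the ciphertext is just noise, and there is no way to make it readable “just a little bit.”

```python
# Illustrating why encryption is all-or-nothing, using the third-party
# "cryptography" package (pip install cryptography). A stand-in for any
# modern cipher, not WhatsApp's actual protocol.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()               # only the two endpoints hold this
ciphertext = Fernet(key).encrypt(b"meet at the usual place")

print(Fernet(key).decrypt(ciphertext))    # b'meet at the usual place'

eavesdropper_key = Fernet.generate_key()  # anyone without the real key
try:
    Fernet(eavesdropper_key).decrypt(ciphertext)
except InvalidToken:
    print("Without the key, the message is just noise.")
```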

In her post, Vaas mentions security expert Troy Hunt’s tweet (reproduced here) showing all the online services that (surprise!) Rudd herself uses that rely on encryption, such as Wikipedia, Twitter and her own website. Jonathan Haynes, writing in the Guardian, says “A lot of things may have changed in two years but the government’s understanding of information security does not appear to be one of them.”

It isn’t that normal citizens, or real people, or whatever you want to call non-terrorists have nothing to hide. They have their privacy, and if we don’t have encryption, then everything is out in the open for anyone to abuse, lose, or spread around the digital landscape.

Read More
CSO Online: As malware grows more complex, protection strategies need to evolve

The days of simple anti-malware protection are mostly over. Scanning and screening for malware has become a very complex process, and most traditional anti-malware tools find only a small fraction of potentially harmful infections. This is because malware has become sneakier, more evasive and more complex.

In this post for CSO Online, sponsored by PC Pitstop, I dive into some of the ways that malware can hide from detection, including polymorphic code, fileless techniques that avoid dropping files on the target machine, VM and sandbox detection, and various scripting tricks. I also make the case for using application whitelisting (which is where PC Pitstop comes into play), something more prevention vendors are paying attention to as the sneakier types of malware get harder to detect.

Read More
CSOonline: Review of Check Point’s SandBlast Mobile — simplifies mobile security

There is a new category of startups, including Lookout Security, NowSecure and Skycure, that have begun to provide defense in depth for mobile devices. Another player in this space is Check Point Software, which has rebranded its Mobile Threat Protection product as SandBlast Mobile. I took a closer look at this product and found that it fits in between mobile device managers and security event log analyzers, making it easier to manage the overall security footprint of your entire mobile device fleet. While I had a few issues with its use, overall it is a solid protective product.

You can read my review in CSOonline here.

Read More
Warning: your mobile phone is not safe from hackers

The biggest cyber threat isn’t sitting on your desk: it is in your pocket or purse. I am talking of course about your smartphone. Our phones have become the prime hacking target, due to a combination of circumstances, some under our control, and some not.

Just look at some of the recent hacks that have happened to phones. There are bad apps that look benign, apps that claim to protect you from virus infections but are really what are called “fake AV” and harm your phone instead, and even malware that infects application construction tools. I will get to some of the specifics in a moment. If you are in St. Louis on August 3, you can come hear me speak here about this topic.

Part of the problem is that the notion of “bring your own device” has turned into “bring your own trouble”: as corporate users have become more comfortable using their own devices, they can infect the corporate network or get infected by it. And certainly mobile users are less careful and tend to click on email attachments that could infect their phones. But the fault really lies in the opportunity that mobile apps present.

For example, take a look at what security researcher Will Strafach did with this report earlier this year. He demonstrated dozens of iOS apps that were vulnerable to what is called a man-in-the-middle attack, which lets hackers intercept data as it passes from your phone across the Internet to someplace else. At the time, his report grabbed a few headlines, but apparently that wasn’t enough. In a more recent update, he found that very few of the app creators took the hint; most did nothing. He estimates that apps accounting for 18 million downloads still have this vulnerability. Security is just an afterthought for many app makers.
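
To show the class of bug involved (rather than any specific app Strafach examined), here is a minimal sketch using Python’s requests library. An app that disables certificate validation will happily talk to a man-in-the-middle proxy; leaving validation on makes the same interception fail loudly. The URL is a placeholder.

```python
import requests

def fetch_account(url: str, validate_tls: bool = True):
    # Vulnerable pattern: passing verify=False disables TLS certificate
    # checking, so any interception proxy with a self-signed certificate can
    # read and alter the traffic.
    # Safer pattern: leave verify at its default of True, and an interposed
    # certificate that doesn't chain to a trusted CA raises
    # requests.exceptions.SSLError instead of silently exposing the data.
    return requests.get(url, verify=validate_tls, timeout=10)

# fetch_account("https://api.example.com/account", validate_tls=False)  # don't do this
```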

Another issue is that many users just click on an app and download it to their phones without checking whether they have the right one. Few of us do any vetting or research to find out if an app is legit or if it is part of some hacker’s scheme, and to do so really requires a CS degree or a lot of skill. Take the case of the “fake AV” apps that infect rather than protect your phone: there are hundreds of them in the Google Play store. FalseGuide is another piece of malware that has been active since last November and has infected more than two million users.

The Judy malware has infected somewhere between 8.5 million and 36.5 million users over the past year, hiding inside more than 40 different apps. DressCode initially appeared around April 2016 and has since been downloaded hundreds of thousands of times. Both look like ordinary apps that your kids might want to download and play with. Making matters worse, hackers often take legit apps, insert malware, and then rename and relist them on the app stores.

Even the WannaCry worm, which was initially Windows-only, has been found in seven apps in the Google Play store and two in Apple’s App Store. Speaking of Apple, the XcodeGhost malware is notable in that it targeted iOS devices and resulted in some 300 malware-infected apps being created, although it infected Apple’s desktop development environment rather than the phones directly.

So what can you do? First, make sure your phone has a PIN to lock it, and if you have a choice of a longer PIN, choose that. At least ten percent of users still don’t lock their phones. Setting a PIN also encrypts the data on your phone.

Next, use encrypted messaging apps to send sensitive information, such as Signal or WhatsApp. Don’t trust SMS texts or ordinary emails for this.

Use a password manager, such as Lastpass, to store all your passwords and share them across your devices, so you don’t have to remember them or write them down.

When you are away from your home or office network, use a VPN to protect your network traffic.

Don’t automatically connect to Wi-Fi hotspots by name: hackers like to fool you into thinking a network is legitimate just because it is named “Starbucks Wi-Fi,” when it could be run by someone else entirely. Apple makes a Configurator app that can be used to further lock down its devices: use it.

Turn off radios that aren’t in use, such as Bluetooth and Wi-Fi.

Don’t do your online banking — or anything else that involves moving money around — when you are away from home.

Don’t let your kids download apps without vetting them first.

On Android devices especially, turn on the Verify Apps feature to prevent malicious or questionable apps from being installed.

Keep your devices’ operating systems updated, especially Android ones. Hackers often take advantage of phones running older OS versions.

I realize that this is a lot of work. Many of these tasks are inconvenient, and some will break old habits. But ask yourself if you want to spend the time recovering from a breach, and if it is worth it to have your life turned upside down if your phone is targeted.

Read More
iBoss blog: The new rules for MFA

In the old days, perhaps one or two years ago, security professionals were fond of saying that you need multi-factor authentication (MFA) to properly secure login identities. But that advice has to be tempered by the series of man-in-the-middle and other malware exploits against MFA that nullify the supposed protection of those additional factors. Times are changing for MFA, to be sure.
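
For readers who want to see what one common second factor actually computes, here is a standard-library Python sketch of a time-based one-time password (TOTP, RFC 6238), the kind of six-digit code authenticator apps generate. It illustrates the mechanism only; the base32 secret is a placeholder, and note that a phishing proxy can still relay a freshly generated code, which is exactly the man-in-the-middle weakness mentioned above.

```python
# A standard-library sketch of a time-based one-time password (TOTP,
# RFC 6238). The base32 secret below is a demo placeholder.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, interval: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of 30-second intervals since the Unix epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code, changes every 30 seconds
```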

I wrote a three-part series for the iBoss blog about this topic. Here is part 1, which introduces the issues.  Part 2 covers some of the new authentication technologies. If you are responsible for protecting your end users’ identities, you want to give some of these tools careful consideration. A good place to start your research is the site TwoFactorAuth, which lists which sites support MFA logins. (The Verge just posted their own analysis of the history of MFA that is well worth reading too.)

And part 3  goes into detail about why a multi-layered approach for MFA is best.

Read More
Should hacking back be legal?

Two reports, one recent and one from last year, have been published about the state of active cyber defense strategies.

The first, Into the Gray Zone: The Private Sector and Active Defense Against Cyber Threats, covers the work of a committee of government and industry experts put together by the Center for Cyber and Homeland Security at George Washington University and came out last October. The second report, Private Sector Cyber Defense: Can Active Measures Help Stabilize Cyberspace?, just came out this month. It was published by Wyatt Hoffman and Eli Levite, two fellows at the Carnegie Endowment, a DC think tank.

Both reports review the range of active cyber defense strategies. The techniques run from the more common honeypots (where IT folks set up a decoy server that looks like it contains important information but is really a lure to attract hackers), to botnet takedowns, to white-hat (that is, legal) uses of ransomware, to cyber ‘dye-packs’ that collect network information from a hacker and possibly destroy his equipment, to other hacking-back activities. The issue is where to draw the legal line for both government and private actors.

Active defense is nothing new: honeypots were used back in 1986 by Clifford Stoll, who created fake files promising military secrets to lure a spy onto his network, an effort he documented in his book The Cuckoo’s Egg. Of course, people have since gotten more sophisticated in their defense mechanisms, particularly as the number and sophistication of attacks have grown.
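
To show how simple the core idea is, here is a bare-bones Python sketch of a honeypot: a listener on a port you never advertise that records whoever connects. Real honeypots (and certainly anything Stoll built) are far more elaborate; this only demonstrates the principle, and the port number is arbitrary.

```python
# A bare-bones honeypot sketch: nobody legitimate should ever connect to
# this unadvertised port, so every hit is worth logging and investigating.
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222  # arbitrary unused port chosen for this sketch

def run_honeypot():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            with conn:
                print(f"{datetime.now(timezone.utc).isoformat()} "
                      f"connection attempt from {addr}:{port}")

if __name__ == "__main__":
    run_honeypot()
```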

The first report dissects two active defense case studies that are available in the public literature: Google’s reaction to Operation Aurora in 2009, which began in China, and the takedown of the Dridex banking Trojan botnet in 2015. Google made use of questionably legal discovery techniques but was never prosecuted by any law enforcement agency. Dridex was neutralized through the cooperation of several government agencies and private sector efforts, and resulted in the extradition and conviction of Andrey Ghinkul.

In both of these cases, the GWU report shows that attribution of the source of the malware was possible, but not without a tremendous amount of cooperation from a variety of private and government sources. That is the good news.

Speaking of cooperation, that is where the second report comes into play: it compares these cyber efforts with the commercial shipping industry’s experience with piracy on the high seas. After it became clear that governments’ military efforts were an insufficient response to the piracy problem, the demand for private sector security services increased dramatically. While governments initially resisted their involvement, they begrudgingly accepted that the active defense measures deployed by shipowners, in consultation with insurance providers, were helping to deter attacks and that the tradeoffs in risk were unavoidable. The bottom line: the private sector filled a critical gap in protection by working together.

But here is the problem, as true now as last fall when the first GWU report was published: a private business has no explicit right of self-defense when it comes to a cyber attack, and in most cases could be doing something that runs afoul of US law. There are various legal remedies that the government can pursue, but not an ordinary business. As the GWU report states, “US law is commonly understood to prohibit active defense measures that occur outside the victim’s own network. This means that a business cannot legally retrieve its own data from the computer of the thief who took it, at least not without court-ordered authorization.” What makes matters worse is the number of unfilled cyber jobs in those government agencies: even though they have the authority, they are woefully understaffed to take action.

The GWU report puts forth a risk-based framework for how government and the private sector can work together to solve this problem, and you can read their various recommendations if you are interested. 

It is a tricky situation. One of the GWU report’s authors is Nuala O’Connor, the President and CEO of the Center for Democracy & Technology. She warns that if more aggressive active defense measures were to become lawful based on considerations such as whether they were conducted in conjunction with the government and the intent of the actor, there could be problems. “I believe these types of measures should remain unlawful. Intent can be difficult to measure, particularly when on the receiving end of an effort to gain access.”

The Carnegie authors admit that their shipping analogy has its limitations, but correctly point out that when the government falls short, the private sector will step in and fill the gap with its own solutions. They say, “Malicious cyber actors motivated by geopolitical objectives, however, may have a far different calculus than cybercriminals, which affects whether and how they can be deterred.” In the meantime, my point in bringing up this issue is to get you to think about active cyber defense strategies for your own business.

Read More