iBoss blog: Understanding the Differences Between Anonymity and Privacy

Balancing anonymity and privacy isn’t an either/or situation. There are many shades of gray, and managing them is more of an art than a science. Making sure your users understand the distinction between the two terms, and setting appropriate expectations for both, should be a critical part of any IT security job.

Most users, when they say they want anonymity, are really saying that they don’t want anyone, whether it be the government or an IT department, to keep track of their web searches and conversations. They will say they want some amount of privacy when they are at work, whether they are using their computers and phones for work-related tasks or not.

Certainly, part of the problem is that people today over-share online: they post photos of themselves at various restaurants, or are tagged by their social media “friends” in awkward situations, or post their travel itineraries down to the exact hotels they stay at. How hard would it be to intercept their communications, break into an unoccupied home, or steal a laptop from their hotel room with this information?

But part of the problem is that controlling our privacy is complex: Take a look at the typical controls offered by Twitter. How can any normal person figure these out, let alone remember to change any of them as their needs change? It is hopeless.

As I wrote in another blog post, many enterprises are deleting their most sensitive data so they don’t have to worry about potential and embarrassing leaks. Some are also making sure they own their own encryption keys, rather than trusting them to some well-meaning third party. And Apple has recently announced changes in iOS 11 that will make it harder for law enforcement to extract your personal data.

Sometimes, the purported privacy controls only make things worse. Windows 10 comes with a series of “personalization” settings that are enabled by default for the maximum intrusion into our lives. One of them, letting ads access a specially coded ID stored on your computer to personalize messages for you, is presented as a way to “improve your experience.” In practice, it just increases the creepiness factor, as ads are served up online based on your browsing history.

As another example, technology often gives us a false sense of security. Just because your users enable private browsing or connect to the Internet through a proxy server doesn’t mean no one can figure out who they actually are or target ads to their browsing history. Recently, researchers have found flaws in the extension APIs of all major browsers that make it easier to fingerprint anyone. The WebExtensions API is supposed to protect browsers against attackers trying to enumerate installed extensions, using access control settings in the manifest.json file included in every extension. This file blocks websites from checking any of the extension’s internal files and resources unless it is specifically configured to allow access. The flaw lets attackers sidestep that control and work out which extensions are installed, which is enough to fingerprint a browser.

Even when this is patched, big data has made it almost absurdly easy to identify supposedly anonymous users. Remember this New York Times article? Reporters chose a single random user from this list of 20 million web search queries collected by AOL back in 2006. The Times was able to track her down: a 62-year-old widow who immediately recognized her own web searches. So much for being anonymous! And that was back in 2006; imagine the data repositories and tools available now to track down individuals with relative ease.

So, realize that privacy isn’t the same as anonymity. Just because you do not know who I am does not mean that I have any privacy. Someone who captures my face when I am out on a remote hiking trail can still expose my location and my name through the auspices of Facebook’s facial recognition algorithms, and I could be tagged without my knowledge.

IT needs to understand the differences between privacy and anonymity, and be able to clearly communicate this information to its users. Part of this is having a clearly stated privacy policy on the corporate webpage, and then following it. (This one from email vendor Mailpile is exemplary.) IT also needs to set policies for how the enterprise will track cookies, browsing sessions, metadata and the actual private details of its employees, if these items are tracked at all.

HPE Enterprise.Nxt: The rise of ransomware

Ransomware is a troubling trend. Novice criminals with little technical savvy and cheap software can generate big payouts and disrupt enterprise operations. Here’s what you need to know about the changing ransomware landscape. Ransomware is now the fifth most common form of malware, and is expected to see a 300 percent increase this year, according to MWR InfoSecurity.

You can read my analysis here on HPE’s Enterprise.Nxt site. I review some of its history, highlight a few of the recent innovations with ransomware-as-a-service (such as this web dashboard from Satan shown here), and make a few suggestions on how to prevent it from spreading around your company.

Estonia leads the way in digital innovation

(updated Feb 2, 2018)

My father’s father emigrated to America from Lithuania about a hundred years ago, and one day I intend to visit the Baltic region and see the land for myself, as my sister and I did earlier this year when we visited my mother’s homeland in northeast Poland. In my mind, the next best thing is to follow the activities of Estonia, a neighboring nation that is doing some interesting things online. (I know, my mind works in strange ways. But bear with me, I needed an intro for this essay.)

One reason why I am interested in Estonia is something that they have had in place for many years called the e-Resident program. Basically, this is an ID card issued by their government, for use by anyone in the world. You don’t have to ever live there, or even want to live there. More people have signed up for this ID than are actual residents of the country, so it was a smart move by their government to widen their virtual talent pool. Once you have this ID, you can register a new business in a matter of minutes. Thousands of businesses have been started by e-Residents, which also helps to bring physical businesses there too. In many countries, offshore businesses are required to have a local director or local address. Not Estonia.

So last week, after thinking more about this, I finally took the e-Resident plunge. It costs about $100; you need to take a picture of your passport and fill out a simple form. When the ID card is ready, you have to physically go and pick it up at a local Estonian embassy (either NYC or DC would be the closest for me; I chose NYC). You select your pickup site when you register and can’t change it.

Well, as usual, it was bad timing for me. I should have waited a little bit longer. This week we learned that there are potential exploits with the ID cards, at least the cards that have been circulating for the past several years. Almost 750,000 cards are affected. According to Estonian officials, the risk is a theoretical one and there is no evidence of anyone’s digital identity actually being misused. It might change how the IDs are used in next month’s national elections, although they haven’t decided on that. About a third of their voters do vote online.

Update: They have issued a fix, which requires you to update the certs that are attached to your ID. So last week (January 2018), I went to their embassy in NYC and got my kit, which includes the ID card, a nifty little USB reader, and some very bare-bones instructions. My first attempt at installing the software on my Windows 10 laptop ended in failure. A second attempt on another Windows PC eventually worked. My biggest complaint is the tech support: an email to one address took several days for a reply, which told me to send it to another email address (in three different languages, no less). The ID card software (shown here) installs four different programs on your computer, each with somewhat similar names. The cert update process requires your full attention and a solid Internet connection: lacking either, you will have to re-run it.

The ID card software is also somewhat finicky. It has a place for you to upload your photo, but that didn’t seem to work. You are assigned an official Estonian email address, which I wanted to forward to my actual email account, but instructions on how to do this were lacking.

So overall, I give Estonia a passing grade for e-Resident, but just barely. It is hard to use and still needs some work. If a member of the public isn’t computer savvy, they will have a hard time getting going with the whole process. Navigating the numerous help web pages is tedious and daunting. If you are thinking about the program, I would recommend waiting until they work out the bugs.

Now back to my earlier blog post for some other background info.

Estonia is leading the world in other digital matters too. Lots of companies have disaster recovery data centers located far from their headquarters, but that is an issue for Estonia, which is small enough that “far” is just a few minutes’ drive. So they came up with another plan: to make Estonia the first government to build an off-site data center in another country. The government will make backup copies of its critical data infrastructure and store them in Luxembourg, if agreements between the two countries are reached. My story in IBM’s Security Intelligence blog goes into more detail on what they call their “data embassy.” They have lots of other big digital plans too, such as using 100% digitized textbooks in their education system by the end of the decade, and a public sector data exchange facility with Finland that they are putting in place this year.

Earlier this year, I read about a course they offered called “Subversive Leverage and Psychological Defense” to master’s degree students at their Academy of Security Sciences. The students are preparing for positions in the Estonian Internal Security Service. The story from CSM Passcode goes into more detail about how vigilant they have to be to fight Russian propaganda. These aren’t isolated examples of how sophisticated they are: Estonia was also the first EU country to teach HTML coding in its elementary schools, back in 2012, and the Skype software was developed there.

Their former Prime Minister Taavi Rõivas has even appeared on The Daily Show with Trevor Noah to talk about these programs. Clearly, they have a strong vision, made all the more impressive by the fact that they had almost no Internet access a few decades ago, when they were still part of the Soviet empire. Certainly a place to keep an eye on.

iBoss blog: What is OAuth and why should I care?

The number of choices for automating login authentication is a messy alphabet soup of standards and frameworks, including SAML, WS-Federation, OpenID Connect, OAuth, and many others. Today I will take a closer look at OAuth and recent developments that favor this standard.

The idea behind all of these standards is to automate the login process, so your users don’t have to remember their many logins and passwords for connecting to various resources. That sounds great in theory, but getting the automation to work properly isn’t always easy or obvious. To pull this off, you have to conquer some technical challenges: just-in-time user provisioning, adapting to consumer-based SaaS services as well as supporting enterprise apps, and understanding exactly how these standards provide the automation itself.

OAuth began its life about seven years ago as an open standard created by Twitter and Google to handle authorization. It has seen a lot of revisions since then. OAuth now has two different versions in current usage; v2 is the more recent, more capable, and more widely used. The two aren’t compatible and rely on two different specifications (RFC 5849 for OAuth 1.0, superseded by RFC 6749 for OAuth 2.0). Today OAuth has dozens of supporters.

A good example of how OAuth is used is when two websites are trying to accomplish something on behalf of a user: both of them have to figure out how to approve the user and get that unit of work done. If you have to call it something, don’t call it a protocol: it is really the authorization plumbing inside the authentication protocols. A good explanation of the more technical underpinnings of OAuth and its relationship to authentication, OpenID and SAML can be found here.

Okay, so having gotten that out of the way, where does OAuth show up in security practice? Typically, enterprises adopt OAuth through a single sign-on tool, such as Ping Identity, Okta, or SecureAuth. These tools control the overall login process by connecting an identity provider, such as Active Directory, with a collection of applications. Instead of a user directly entering their username and password into an app’s login screen, they work with an identity provider that encrypts and then federates their credentials to the apps as part of the authentication process. Once this chain of events is set up, a user doesn’t really see what happens: they click on an app and they are logged in properly. Corporate security managers like this process to be hidden, because then they don’t have to worry about resetting individual users’ passwords.
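To make the plumbing a little more concrete, here is a minimal sketch of the OAuth 2.0 authorization code flow in Python. The provider URLs, client ID, and secret are placeholders rather than any particular vendor’s endpoints; real SSO products wrap all of this for you.

```python
# Minimal sketch of the OAuth 2.0 authorization code flow (RFC 6749).
# All URLs, client IDs, and secrets below are placeholders, not a real provider.
import secrets
from urllib.parse import urlencode

import requests  # third-party HTTP library: pip install requests

AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"   # identity provider
TOKEN_URL     = "https://idp.example.com/oauth2/token"
CLIENT_ID     = "my-app-client-id"
CLIENT_SECRET = "my-app-client-secret"
REDIRECT_URI  = "https://myapp.example.com/callback"

def build_authorization_url() -> tuple[str, str]:
    """Step 1: send the user's browser to the identity provider to log in."""
    state = secrets.token_urlsafe(16)  # anti-CSRF value we must verify later
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile email",  # ask only for the data you need
        "state": state,
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}", state

def exchange_code_for_token(code: str) -> dict:
    """Step 2: after the provider redirects back with ?code=..., swap it for tokens."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # typically contains access_token, expires_in, etc.
```

The point to notice is that the app never handles the user’s password: it only ever sees a short-lived authorization code and the tokens the identity provider issues in exchange.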

Another example is with iBoss’ Web Gateway Security. iBoss makes use of OAuth to integrate its security policies with users’ Google accounts to cover BYOD situations and manage guest wireless access. A customizable captive portal automatically binds these BYOD users to a variety of directory services including Active Directory, eDirectory, Open Directory, and LDAP.

Earlier this year, Google updated its G Suite with the ability to whitelist OAuth apps. This means that a site administrator can have more granular control over what third-party apps do with G Suite data. You can set up permissions for specific data types: for example, allowing access to your staff’s Google Drive documents but not their contact lists. This prevents rogue apps from accessing data unintentionally.

OAuth isn’t perfect: attackers can still phish a user’s credentials during the authentication process using man-in-the-middle attacks, which is one of the reasons why Google is providing more control over OAuth across its SaaS app suite. And OAuth also doesn’t provide encryption or client verification: you will need to employ Transport Layer Security for these protective features. Nevertheless, it is being used for more apps and gaining wider acceptance, and should be a part of your security toolkit.

Stopping malicious website redirects

In my work as editor of Inside Security’s email newsletter, I am on the lookout for ways that criminals can take advantage of insecure Internet infrastructure. I came across this article yesterday that I thought I would share with you and also take some time to explain the concept of the malicious redirect. This is how the bad guys turn something that was designed to be helpful into an exploit.

A redirect is when you put some code on a web page whose URL is no longer in service, because you don’t want to lose the visitor. The most likely situation is that someone clicked on an old link and landed at that location, so you send them on to the appropriate place on your current website. Simple, right?
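To make that concrete, here is a minimal sketch of a legitimate redirect using Python’s standard http.server module; the old and new paths are made-up examples.

```python
# Minimal sketch of a legitimate redirect: any request for a retired URL
# gets an HTTP 301 pointing the visitor at the page that replaced it.
# The paths here are made-up examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

MOVED = {"/old-product-page.html": "/products/new-page.html"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in MOVED:
            self.send_response(301)                  # "moved permanently"
            self.send_header("Location", MOVED[self.path])
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()
```

A malicious redirect works exactly the same way; only the Location target changes, which is what makes the technique so easy to abuse.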

The bad guys have co-opted this technique: instead of being helpful, they use the redirect code to point you to a page that contains malware, in the hope that you will not notice the new page is a trap before your computer is infected. Surprise! Sadly, this happens more and more often.

In a post on Sucuri’s blog, researchers describe several ways the malicious redirect can happen. One way is by leveraging configuration files such as .htaccess or .ini files. These are files associated with web servers that control all sorts of behavior and are usually hidden from ordinary browsing. Usually, your website security prevents folks from messing with these files, but if you made setup errors or aren’t paying attention, the configuration files can be exposed to attackers. Another way is for an attacker to mess with your DNS settings so that visitors to your site end up going somewhere else. How does an attacker gain access to your DNS servers? Typically, it is through a compromised administrative account password. Do you really know who in your organization has access to this information? Probably more people than you realize. When was the last time you changed this password, anyway?
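One cheap defensive habit is to periodically check that your domains still resolve to the addresses you expect. Here is a rough sketch of such a check in Python; the hostname and address are placeholders you would replace with your own, and it obviously won’t catch every form of DNS tampering.

```python
# Quick-and-dirty check that your domains still resolve to the servers you
# expect. The domain and address below are placeholders; run this from cron
# and alert on any mismatch.
import socket

EXPECTED = {
    "www.example.com": {"93.184.216.34"},   # replace with your real host/IPs
}

def check_dns(expected: dict[str, set[str]]) -> bool:
    ok = True
    for host, good_ips in expected.items():
        try:
            _, _, addresses = socket.gethostbyname_ex(host)
        except socket.gaierror as err:
            print(f"{host}: lookup failed ({err})")
            ok = False
            continue
        unexpected = set(addresses) - good_ips
        if unexpected:
            print(f"{host}: resolves to unexpected address(es) {unexpected}")
            ok = False
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if check_dns(EXPECTED) else 1)
```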

My office is in a condo complex that has several doors to a public alley. Each of the doors has a combination lock and all of the doors have the same combination. A year or so ago, the board was discussing that it might be time to change the combination because many people – by design – know what this combination is. This is just good security practice. Now the analogy isn’t quite sound – by design a lot of people have to know this number, otherwise no one can get out to the alley to take their trash out – but still, it was a good idea to regularly change the access code.

Neither of these exploit methods is new: they have been happening almost since the web became popular, sadly. So if you run websites and don’t want your reputation ruined, or some criminal using your site to spread malware, it is important that you at least understand what can happen and make sure that you are protected.

But there is another way redirects can happen: by an attacker grabbing an expired domain name and leveraging its associated WordPress plug-in. Since a lot of you run WordPress sites, I want to take a moment to describe this attack method.

  • The attacker finds a dormant plug-in in the WordPress catalog. Given the thousands of plug-ins available, there are plenty that haven’t been updated in several years.
  • They check the underlying domain name used by the plug-in. If it isn’t active, they purchase and register the name.
  • They set up a website for this domain that contains malicious Javascript code for the redirect (see the defensive sketch after this list).
  • They change the code served from that domain so the plug-in delivers the malware whenever anyone uses it.
  • They sit back and hope no one notices as the Internet spreads their nasty business far and wide.
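Here is the defensive sketch referenced above: a rough Python script that walks a WordPress plug-in directory, pulls out any script sources loaded from external domains, and flags domains that no longer resolve (and are therefore available for someone else to register). The plug-in path is the WordPress default; adjust it for your own install, and treat this as a starting point rather than a complete scanner.

```python
# Rough sketch of a defensive check: scan a WordPress plug-in directory for
# scripts loaded from external domains, and flag any domain that no longer
# resolves (a candidate for the expired-domain takeover described above).
# The wp-content path is the WordPress default; adjust for your install.
import os
import re
import socket

PLUGIN_DIR = "/var/www/html/wp-content/plugins"
SRC_PATTERN = re.compile(r'src=["\']https?://([^/"\']+)', re.IGNORECASE)

def external_script_domains(plugin_dir: str) -> set[str]:
    domains = set()
    for root, _dirs, files in os.walk(plugin_dir):
        for name in files:
            if not name.endswith((".php", ".js", ".html")):
                continue
            with open(os.path.join(root, name), errors="ignore") as fh:
                domains.update(SRC_PATTERN.findall(fh.read()))
    return domains

if __name__ == "__main__":
    for domain in sorted(external_script_domains(PLUGIN_DIR)):
        try:
            socket.gethostbyname(domain)
            status = "resolves"
        except socket.gaierror:
            status = "DOES NOT RESOLVE - investigate this plug-in"
        print(f"{domain}: {status}")
```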

Moral of the story: Don’t use outdated plug-ins, and limit the potential for attacks by deleting unused plug-ins from your WordPress servers. Make use of a tool such as Wordfence to protect your blogs. And while you are at it, update your blog to the latest version of WordPress and the latest plug-in versions too.

When I first started using WordPress more than a decade ago, I went plug-in crazy and loaded up more than a dozen different ones for all sorts of enhancements to my blog’s appearance and functions. Now I am more careful, and only run the ones that I absolutely need. Situations such as this malicious redirect are a good reason why you should follow a similar strategy.

iBoss blog: The Dark Side of SSL Certificates

The world of SSL certificates is changing, as the certs become easier to obtain and more frequently used. In general, having an HTTPS website is a good thing: the secure part of the protocol means it is more difficult to eavesdrop on any conversation between your browser and the web server. But despite their popularity, there is a dark side to certificates as well. I take a closer look in my iBoss blog post this week.

Learning from a great public speaker, Reuben Paul

I got a chance to witness a top-rated speaker ply his trade at a conference that I attended this week here in St. Louis. The conference, called DoDIIS, was a gathering of several hundred people who work in IT for our intelligence agencies. When I signed up for press credentials, I didn’t know he was going to be speaking, but I am glad I could see him in action. As someone who speaks professionally to similar groups, I like to learn from the best, and he was certainly in that category.

The odd thing about this person is that he is still a kid, an 11-year-old to be exact. His name is Reuben Paul and he lives in Austin. Reuben has already spoken at numerous infosec conferences around the world, and he “owned the room,” as one of the generals who runs one of the security services put it in a subsequent speech. What made Reuben (I can’t quite bring myself to use his last name as common style dictates, sorry) so potent a speaker is that he was funny and self-deprecating as well as informative: both entertaining and instructive. He did his signature story, as we in the speaking biz like to call it, a routine where he hacks into a plush toy teddy bear (shown here sitting next to him on the couch along with Janice Glover-Jones, the CIO of the Defense Intelligence Agency) using a Raspberry Pi connected to his Mac.

The bear makes use of a Bluetooth connection, along with a microphone to pick up ambient sound. In a matter of minutes, Reuben showed the audience how he could record a snippet of audio and play it back through the bear’s speaker, using some common network discovery tools and Python commands. Yes, the kid knows Python: something that impressed several of the parade of military generals who spoke afterwards. The generals were semi-seriously vying to get the kid to work for their intelligence agencies once he is no longer subject to child labor restrictions.
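He didn’t hand out his scripts, but the discovery step he demonstrated is the kind of thing a few lines of Python can do. Here is a generic Bluetooth device scan using the third-party PyBluez library; it is an illustration of the idea, not a reconstruction of his demo.

```python
# Generic Bluetooth device discovery, the first step in the kind of demo
# described above. Requires the third-party PyBluez library and a working
# Bluetooth adapter; this is an illustration, not Reuben's actual code.
import bluetooth  # pip install pybluez

nearby = bluetooth.discover_devices(duration=8, lookup_names=True)

print(f"Found {len(nearby)} device(s) broadcasting nearby:")
for addr, name in nearby:
    print(f"  {addr}  {name or '(no name)'}")
```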

The kid is also current with the security issues of the Internet of Things, and can show you how an innocent toy can become the leverage point for hackers to enter your home and take control without your knowledge. This has become very topical, given recent attacks such as WannaCry and Petya, as well as others that specifically target these connected objects.

Reuben also managed to shame the IT professionals attending the conference. As the video monitors on stage showed him scrolling down the list of network addresses from phones that were broadcasting their Bluetooth signals, he told us, “if you see your phone listed here, you might remember next time to turn off your Bluetooth for your own protection.” That got a laugh from the audience. Yes, this kid was shaming us and no one got upset! We were in the presence of a truly gifted speaker. I had made a similar point about Bluetooth vulnerability in a speech just a couple of weeks ago, and much less adroitly.

Reuben isn’t just a one-trick pony (or bear), either. The kid has set up several businesses already, which is impressive enough even without considering his public speaking prowess. One of them is this one that helps teach kids basic cybersecurity concepts. Clearly, he knows his audience, which is another tenet of a good speaker. If you ever get a chance to see him in person, do make the effort.

iBoss blog: What Is the CVE and Why It Is Important

The Common Vulnerabilities and Exposures (CVE) program was launched in 1999 by MITRE to identify and catalog vulnerabilities in software and firmware and create a free lexicon to help organizations improve their security. Since its creation, the program has been very successful and is now used to link together different vulnerabilities and to facilitate the comparison of security tools and services. You see evidence of its work in the unique CVE number that accompanies a vulnerability announcement by a security researcher.
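If you want to pull CVE details programmatically, NIST’s National Vulnerability Database exposes the catalog through a public JSON API. Here is a small Python sketch that looks up a single ID; the endpoint and field names follow the NVD 2.0 API as I understand it, so check the current NVD documentation before relying on them.

```python
# Look up a single CVE record from NIST's National Vulnerability Database.
# The endpoint and response fields follow the NVD JSON 2.0 API as documented
# at the time of writing; verify against the current NVD docs before using.
import sys

import requests  # pip install requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def describe_cve(cve_id: str) -> None:
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        descriptions = cve.get("descriptions", [])
        english = next((d["value"] for d in descriptions if d.get("lang") == "en"),
                       "(no description)")
        print(cve.get("id", cve_id))
        print(" ", english)

if __name__ == "__main__":
    # CVE-2017-0144 is the SMB flaw exploited by WannaCry, used here as a default.
    describe_cve(sys.argv[1] if len(sys.argv) > 1 else "CVE-2017-0144")
```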

In my latest blog post for iBoss, I look at how the CVE got started, where it is used, and the important role it plays in sharing threat information.

When anonymous web data isn’t anymore

One of my favorite NY Times technology stories (other than, ahem, my own articles) is one that ran more than ten years ago. It was about a supposedly anonymous AOL user who was picked out of a huge database of search queries by reporters. They were able to correlate her searches and track down Thelma, a 62-year-old widow living in Georgia. The database was originally posted online by AOL as an academic research tool, but after the Times story broke it was removed. The data “underscore how much people unintentionally reveal about themselves when they use search engines,” said the Times story.

In the years since that story, tracking technology has gotten better and Internet privacy has all but disappeared. At the DEF CON conference a few weeks ago in Vegas, researchers presented a paper on how easy it can be to track down people based on their digital breadcrumbs. The researchers set up a phony marketing consulting firm and requested anonymous clickstream data to analyze. They were able to tie real users to the data through a series of well-known tricks, described in this report in Naked Security. They found that if they could correlate personal information across ten different domains, they could figure out who was the common user visiting those sites, as shown in this diagram published in the article.
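The underlying trick is easy to illustrate with a toy example: group a pseudonymous clickstream by user ID and see how few IDs share any particular combination of visited domains. The data below is made up and this is not the researchers’ actual method, just a sketch of why a handful of domains can act as a fingerprint.

```python
# Toy illustration of why a handful of visited domains acts as a fingerprint:
# group a pseudonymous clickstream by user ID and see how many IDs share a
# given combination of domains. The data here is made up; the researchers'
# real analysis used purchased clickstream data.
from collections import defaultdict

# (pseudonymous_id, domain_visited)
clickstream = [
    ("u1", "smallcollege.edu"), ("u1", "localnewspaper.com"), ("u1", "rare-hobby-forum.org"),
    ("u2", "smallcollege.edu"), ("u2", "bigretailer.com"),
    ("u3", "localnewspaper.com"), ("u3", "bigretailer.com"),
]

domains_by_user = defaultdict(set)
for user, domain in clickstream:
    domains_by_user[user].add(domain)

# Who visited this particular combination of sites? With enough domains
# (the researchers needed about ten), usually exactly one person.
target = {"smallcollege.edu", "localnewspaper.com", "rare-hobby-forum.org"}
matches = [u for u, d in domains_by_user.items() if target <= d]
print("Pseudonymous IDs matching the combination:", matches)   # -> ['u1']
```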

The culprits are browser plug-ins and embedded scripts on web pages, which I have written about before here. “Five percent of the data in the clickstream they purchased was generated by just ten different popular web plugins,” according to the DEFCON researchers.

So is this just some artifact of gung-ho security researchers, or does this have any real-world implications? Sadly, it is very much a reality. Last week Disney was served legal papers over secretly collecting kids’ usage data from its mobile apps; the suit claims the apps (which don’t ask parents’ permission for the kids to use them, which is illegal) can track the kids across multiple games. All in the interest of serving up targeted ads. The full list of 43 apps that have this tracking data can be found here, including the one shown at right.

So what can you do? First, review your plug-ins and delete the ones that you really don’t need. In my article linked above, I try out Privacy Badger, and I have continued to use it. It can be entertaining or terrifying, depending on your POV. You could also regularly delete your cookies and always run private browsing sessions, although you give up some usability by doing so.

Privacy just isn’t what it used to be. And it is a lot of hard work to become more private these days, for sure.

Is iOS more secure than Android?

I was giving a speech last week, talking about mobile device security, and one member of my audience asked me this question. I gave the typical IT answer, “it depends,” and then realized I needed a little bit more of an explanation. Hence this post.

Yes, in general, Android is less secure than All The iThings, but there are circumstances where Apple has its issues too. A recent article in ITworld lays out the specifics. There are six major points to evaluate:

  1. How old is your device’s OS? The problem in both worlds is that owners stick with older OS versions and don’t upgrade. As vulnerabilities are discovered, Google and Apple come out with updates and patches; the trick is in actually installing them. Look at the behavior of users in the two worlds: the most up-to-date Android version, Nougat, has less than 1% market share, while more than 90% of iOS users have moved to iOS v10. Now, maybe in your household or corporation you have different profiles. But as long as you use the most recent OS and keep it updated, right now both are pretty solid.
  2. Who are the hackers targeting with their malware? Security researchers have seen a notable increase in malware targeting all mobile devices lately (see the timeline above), but there seem to be more Android-based exploits. It is hard to say for sure, because there isn’t any consistent way to count. And the newer trend of “whale” phishing attacks aimed at CEOs, or of targeting specific companies for infection, doesn’t really help: if a criminal is trying to worm their way into your company, all the statistics and trends in the universe don’t really matter. I’ve seen reports of infections that “only” resulted in a few dozen devices being compromised, yet because they were all from one enterprise, the business impact was huge.
  3. Where do the infected apps come from? Historically, Google Play certainly has seen more infected apps than the iTunes Store. Some of these Android apps (such as Judy and FalseGuide) have infected millions of devices. Apple has had its share of troubled apps, but typically they are more quickly discovered and removed from circulation.
  4. Doesn’t Apple do a better job of screening its apps? That used to be the case, but isn’t any longer; the two companies are roughly at parity now. Google, for example, has its Play Protect service that automatically scans your device to detect malware. Still, all it takes is one bad app and your network security is toast.
  5. Who else uses your phone? If you share your phone with your kids and they download their own apps, well, you know where I am going here. The best strategy is not to let your kids download anything to your corporate devices. Or even your personal ones.
  6. What about my MDM? Shouldn’t that protect me from malicious apps? Well, having a corporate mobile device management solution is better than not having one. These kinds of tools can implement app whitelisting and segregate work and personal apps and data. But an MDM won’t handle all security issues, such as preventing someone from using your phone to escalate privileges, detecting data exfiltration, or spotting a botnet running from inside your corporate network. Again, a single phished email and your phone can become compromised.

Is Android or iOS inherently more secure? As you can see, it really depends. Yes, you can construct corner cases where one or the other poses more of a threat. Just remember, security is a journey, not a destination.