The many ironies of the post-email era

It has been 20 years since Marshall Rose and I wrote our book about Internet email. Since then, the term has become almost redundant: how could you have email without using the Internet? For that matter, how can you have a business without any email to the outside world? It seems unthinkable today.

But for something so essential to modern life, Internet email also comes with its share of ironies. I will get to them in a moment.

To do some research for this essay, I re-read a column that I wrote ten years ago about the evolution of email from 1998 to 2008. Today I want to talk about the last ten years and what we all have been doing during this period. I would call this decade the post-email era, because email has become the enabling technology for an entire class of applications that either weren’t possible or weren’t as easy ten years ago: things like Slack, MFA logins, universal SMS, and the thousands of apps that notify us of their issues using email. Ironically, all of this has almost eclipsed the actual use of Internet email itself. While many of these technologies existed ten years ago, they are now in much more general use. And by post-email I don’t mean that we have stopped using it; quite the contrary. It is now so embedded in our operations that most of us don’t even think much about it and take it for granted, like the air we breathe. That’s its second irony.

When a new business is formed, the decision about its email provider usually comes down to hosting on Google’s or Microsoft’s servers. That is a big change from ten years ago, when cloud-based email was still being debated and (in some cases) feared. I have been hosting my email on Google’s servers for more than ten years, and many of you have done the same.

Another change is pricing, which has made email a commodity and is pretty reasonable: Google charges $5 per month for 30 GB of storage, or $10 per month for either 1 TB or unlimited storage. If you want to go with Microsoft, it has similarly priced plans for 1 TB of storage. That is an immense amount of storage. Remember when the first cloud emailers offered 1 GB? That seems so quaint — and so limiting — now. For all the talk back then about “inbox zero” (meaning culling messages from your inbox as much as possible), we have enabled email hoarding. That’s another irony.

Apart from all this room to keep our stuff, another major reason for using the cloud is that it frees up the decision about which email client to run (and to support) for each user. A third reason is that the cloud frees users to run multiple email clients, depending on the device they are using and the particular post-email task they want to accomplish. Both of these concepts were pretty radical 20 years ago, and even five years ago they were not as well accepted as they are now. Today many of us spend more of our email time on our phones than on our desktops, use multiple programs for our email, and don’t give it a second thought.

Why would anyone want to host their own email server anymore? Here is another irony: one reason is privacy. The biggest thing to happen to email in the past ten years was a growing awareness of how exposed one’s email communications could be. Between Snowden’s revelations and Hillary’s server, it is now crystal clear to the world at large that your email can be read by your government.

When Marshall developed the early email protocols, he didn’t hide this aspect of their operation. It just took the rest of the world many years to catch on. As a result, we now have companies that are deliberately locating in data havens to prevent governments from gaining access to their data streams. Witness ProtonMail and Kolabnow, both doing business from Switzerland, and Mailfence, operating out of Belgium. These companies have picked their locations because they don’t want your email finding its way into the NSA’s Utah data centers, or anywhere else for that matter. And we have articles such as this one in Ars that discuss the issues around Swiss privacy laws. Today a business knows enough to ask where its messages will be stored, whether they will be encrypted, and who has control over the encryption keys. That certainly wasn’t part of many conversations — or even decisions about selecting an email provider — ten years ago.

One way to take back control over your email is literally to host your own email server, so that your message traffic is completely under your control. That has been a difficult proposition even for tech-savvy businesses — until now. This is what Helm is trying to do, and they have put together a sexy little server (about the size of a small gingerbread house, to keep things festive and seasonal) which can sit on any Internet network anywhere in the world and deliver messages to your inbox. It doesn’t take a lot of technical skill to set up (you use a smartphone app), and it will encrypt all your messages end-to-end. Helm doesn’t touch them and can’t decrypt them either. Because of this, the one caveat is that you can’t use a webmail client. That is a big tradeoff for many of us who have grown to like webmail over the past decade. Brian Krebs blogged this week that users can pick two items out of security, privacy and convenience. That is the rub. With Helm, you get privacy and security, but not convenience (if you are a webmail user). Irony again: webmail has become pervasive, but you need to go back to running your own server and desktop email clients if you want ironclad security.

Speaking of email encryption, one thing that hasn’t changed in the past ten years is how rarely it is used. One of the curiosities of the Snowden revelations was how hard he had to work to find a reporter who was adept enough at using PGP to exchange encrypted messages. Encryption is still hard. And while ProtonMail and Tutanota and others (mentioned in this article) have come into play, they remain more curiosities than tools in widespread use.
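To make the “still hard” point concrete, here is a minimal sketch of what a programmatic PGP exchange involves, using the python-gnupg wrapper around a local GnuPG installation. The recipient address is a placeholder, and the sketch assumes the recipient’s public key has already been generated, exchanged and imported into your keyring, which is exactly the setup work that trips most people up.

```python
# A minimal sketch of encrypting a message with PGP via python-gnupg.
# Assumes GnuPG is installed locally and the recipient's public key
# (a placeholder address here) is already in the default keyring.
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

message = "Meet at the usual place at noon."
encrypted = gpg.encrypt(message, "reporter@example.com")

if encrypted.ok:
    print(str(encrypted))   # ASCII-armored ciphertext, safe to paste into email
else:
    # Typically fails because the recipient's key was never imported or trusted
    print("Encryption failed:", encrypted.status)
```

Even this “simple” path requires installing GnuPG, generating a key pair, verifying and importing the other party’s key, and managing passphrases, which goes a long way toward explaining why most people never bother.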

Another trend over the past ten years is how spam and phishing have become bigger problems, even as our endpoints get better at filtering malware out of our inboxes. This is one reason to use hosted Exchange or Gmail: both Microsoft and Google are very good at stopping spam and malicious messages.

It is somewhat of an ironic contradiction: you would hope that better spam processing would make us safer, not more at risk. The risk is easy to explain but hard to prevent. All it takes is one user on your network clicking on one wrong attachment, and a hacker can gain control over that desktop and eventually your entire network. That scenario is now a common one in TV and movie episodes, even in shows that aren’t sci-fi themed. For example, this summer we had Rihanna as a hacker in Ocean’s 8. Not very realistic, but certainly fun to watch.

So welcome to the many ironies of the post-email era. Share your thoughts about how your own email usage has evolved over the past decade if you feel so inclined.

FIR B2B podcast #110: David Lloyd on how personas are for marketers too

The concept of user personas was originally developed for user interface design, but it’s a powerful tool for marketers, too. David Lloyd, lead strategist and senior data analyst at Brilliant Noise in London, joins us to discuss his post from this past summer, The dream of data-driven personas.

Personas, particularly ones that are deeply rooted in data, can help shape marketing campaigns. We talk about the differences between user experience and marketing personas, and the typical data types used to shape useful ones. His blog post covers common mistakes that marketers make in creating personas and describes what a typical persona looks like, down to assigning them names to make them more real.

He also addresses why you don’t want to go too broad or too specific in creating your personas: the ideal number for a marketer to work with is about three. The cloud has also made it far easier to create and collect a great deal of online data that can be useful in building personas. Lloyd tells how marketers can make personas more actionable as part of their marketing plans.

CSOonline: How to beef up your Slack security

When it comes to protecting your Slack messages, many companies are still flying blind. Slack has become the de facto corporate messaging app, with millions of users and a variety of third-party add-on bots and other apps that extend its use. It has made inroads into replacing email, which makes sense because it is as immediate as other messaging apps. But it is precisely that flexibility and ubiquity that make protecting its communications so important.


In this post for CSOonline, I take a closer look at what is involved in securing your Slack installation and some of the questions you’ll want to ask before picking the right vendor’s product. You can also see some of the tools I examined in the chart above.

Book review: The End of Trust

Last week the Electronic Frontier Foundation published an interesting book called The End of Trust. It was published in conjunction with the writing quarterly McSweeneys, to which I have long subscribed and whose more usual short-story fiction collections I enjoy. This issue is its first all-nonfiction effort, and it is worthy of your time.

There are more than a dozen interviews and essays from major players in the security, privacy, surveillance and digital rights communities. The book tackles several basic issues: first, the fact that privacy is a team sport, as Cory Doctorow opines — meaning we have to work together to ensure it. Second, there are numerous essays about the role of the state in a society that has accepted surveillance, and the equity issues surrounding these efforts. Third, the outcomes and implications of outsourcing digital trust. Finally, various authors explore the difference between privacy and anonymity and what this means for our future.

While you might be drawn to articles from notable security pundits, such as an interview where Edward Snowden explains the basics behind blockchain and where Bruce Schneier discusses the gap between what is right and what is moral, I found myself reading other, less famous authors who had a lot to say on these topics.

Let’s start off by saying there should be no “I” in privacy, and we have to look beyond ourselves to truly understand its evolution in the digital age. The best article in the book is an interview with Julia Angwin, who wrote an interesting book several years ago called Dragnet Nation. She says “the word formerly known as privacy is not about individual harm, it is about collective harm. Google and Facebook are usually thought of as monopolies in terms of their advertising revenue, but I tend to think about them in terms of acquiring training data for their algorithms. That’s the thing that makes them impossible to compete with.” In the same article, Trevor Paglen says, “we usually think about Facebook and Google as essentially advertising platforms. That’s not the long-term trajectory of them, and I think about them as extracting-money-out-of-your-life platforms.”

Role of the state

Many authors spoke about the role that law enforcement and other state actors have in our new always-surveilled society. Author Sara Wachter-Boettcher said, “I don’t just feel seen anymore. I feel surveilled.” Thenmozhi Soundararajan writes that “mass surveillance is an equity issue and it cuts across the landscape of race, class and gender.” This is supported by Alvaro Bedoya, the director of a Georgetown Law school think tank, who took issue with the statement that everyone is being watched, because some are watched an awful lot more than others. With new technologies, it is becoming harder to hide in a crowd, and thus we have to be more careful about crafting new laws that allow the state access to this data, because we could lose our anonymity in those crowds. “For certain communities (such as LGBTQ), privacy is what lets its members survive. Privacy is what lets them do what is right when what’s right is illegal. Digital tracking of people’s associations requires the same sort of careful First Amendment analysis that collecting NAACP membership lists in Alabama in the 1960s did. Privacy can be a shield for the vulnerable and is what lets those first ‘dangerous’ conversations happen.”

Scattered throughout the book are descriptions of various law enforcement tools, such as drones, facial recognition systems, license plate readers and cell-phone simulators. While I knew about most of these technologies, seeing them collected together in this fashion makes them seem all the more insidious.

Outsourcing our digital trust

Angwin disagrees with the title and assumed premise of the book, saying the point is more about the outsourcing of trust than its complete end. That outsourcing has led to where we prefer to trust data over human interactions. As one example, consider the website Predictim, which scans a potential babysitter or dog walker to determine if they are trustworthy and reliable using various facial recognition and AI algorithms. Back in the pre-digital era, we asked for personal references and interviewed our neighbors and colleagues for this information. Now we have the Internet to vet an applicant.

When eBay was just getting started, it had to build its own trust proxy so that buyers would feel comfortable with their purchases. It came up with early reputation algorithms, which today have evolved into the Uber/Lyft star ratings for drivers and passengers. Some of the writers in this book mention how blockchain-based systems could become the latest incarnation of outsourced trust.

Privacy vs. anonymity

The artist Trevor Paglen says, “we are more interested not so much in privacy as a concept but more about anonymity, especially the political aspects.” In her essay, McGill ethics professor Gabriella Coleman says, “Anonymity tends to nullify accountability, and thus responsibility. Transparency and anonymity rarely follow a binary moral formula, with the former being good and the latter being bad.”

Some authors take up the concept of privacy nihilism, or disconnecting completely from one’s social networks. Ethan Zuckerman wrote in his essay: “When we think about a data breach, companies tend to think about their data like a precious asset like oil, so breaches are more like oil spills or toxic waste. Even when companies work to protect our data and use it ethically, trusting a platform gives that institution control over your speech. The companies we trust most can quickly become near-monopolies whom we are then forced to trust because they have eliminated their most effective competitors. Facebook may not deserve our trust, but to respond with privacy nihilism is to exit the playing field and cede the game to those who exploit mistrust.” I agree, and while I am not happy about what Facebook has done, I am sticking with them for the time being.

This notion of the relative morality of our digital tools is also taken up in a recent NY Times op-ed by NYU philosopher Matthew Liao entitled, Do you have a moral duty to leave Facebook? He says that the social media company has come close to crossing a “red line,” but for now he is staying with them.

The book has a section of practical IT-related suggestions to improve your trust and privacy footprint, many of which will be familiar to my readers (MFA, encryption, and so forth). But another article, by Douglas Rushkoff, goes deeper. He talks about the rise of fake news in our social media feeds and says that it doesn’t matter which side of an issue people are on for them to be infected by a fake news item, because the item is designed to provoke a response and replicate. A good example is the individual mentioned in this WaPost piece, who has created his own fake news business out of a parody website, here.

Rushkoff recommends three strategies for fighting back: attacking bad memes with good ones, insulating people from dangerous memes via digital filters and the equivalent of AV software, and better education about the underlying issues. None of these are simple.

This morning the news was about how LinkedIn harvested 18 million email addresses to target ads recruiting people to join its social network. What is chilling is that all of these addresses came from non-members and were collected, of course, without their permission.

You can go to the EFF link above to download a PDF copy, or go to McSweeneys and buy the hardcover book. Definitely worth reading.

FIR B2B podcast #109: Transparency, Truth and the Rebirth of Long-Form Content

Three items in the news caught our attention this week. The first was a piece by Agility PR about a tale of two PR crisis responses, and why only one of them worked. The crises in question are the firing of Megyn Kelly by NBC and Andy Rubin’s departure from Google with a $90 million severance package. The two situations were handled differently by the organizations’ leaders, and they produced very different results in terms of public and employee perception. The contrasting cases are useful for shaping your own crisis response and for understanding how you have to get ahead of the news, strike just the right tone, and take actions that speak louder than platitudes.

The second piece we discuss provides evidence that marketing guru Gary Vaynerchuk is wrong about an awful lot of things, largely because he appears to base his observations and predictions more on instinct than on facts. We respect Vaynerchuk for what he’s accomplished, but think that in an environment in which the value of facts is being called into question, it’s incumbent upon thought leaders to use them. This is the big data age, after all.

Finally, we have often debated the optimal length of podcasts and videos for content marketing purposes, but maybe the old assumption that recorded content should be kept as short as possible is out of date. Welcome to the Age of the Hour-Long YouTube Video makes the case that long-form content is making a comeback. For the same reason that podcasts have become popular, people are now able to put their idle time to work. This may have implications for marketing videos in the future, and for whether you want to go after quality or quantity when it comes to collecting readership. We are both devotees of podcasts that frequently run 90 minutes or more. That’s because the content is great, the hosts do their research and the subjects are interesting. Which would you rather have, eyeballs or fans?

Happy holidays to all, we’ll return next week with fresh insights. You can listen to our podcast here:

Book review: You’ll see this message when it is too late

A new book from Professor Josephine Wolff of the Rochester Institute of Technology, called You’ll see this message when it is too late, is worth reading. While there are plenty of other infosec books on the market, to my knowledge this is the first systematic analysis of different data breaches over the past decade.

She reviews a total of nine major data breaches of the recent past and classifies them into three categories based on the hackers’ motivations: those carried out for financial gain (TJ Maxx, the South Carolina Department of Revenue and various ransomware attacks), for cyberespionage (DigiNotar and the US OPM), and for online humiliation (Sony and Ashley Madison). She takes us behind the scenes of how the breaches were discovered, what mistakes were made and what could have been done to mitigate the situation.

A lot has already been written about these breaches, but what sets Wolff’s book apart is that she isn’t trying to assign blame; rather, she dives into their root causes and links together the various IT and corporate policy failures that led to each breach.

There is also a lot of discussion about how management is often wrong about these root causes, or about the path toward mitigation after a breach is discovered. For example, then-South Carolina governor Nikki Haley insisted that if only the IRS had told them to encrypt their stolen tax data, they would have been safe. Wolff then describes what the FBI had to do to fight the Zeus botnet, whose authors registered thousands of domain names in advance of each campaign, generating new ones for each attack. The FBI ended up working with security researchers to figure out the botnet’s domain generation algorithms so that it could shut down the domains before the attackers could use them. This was back in 2012, when such partnerships between government and private enterprise were rare. A similar collaboration happened in 2014 when Sony was hacked.
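For readers who haven’t run across the technique, here is a minimal sketch of how a date-seeded domain generation algorithm works in principle. The seed, hash function and domain count here are invented for illustration and have nothing to do with Zeus’s actual algorithm; the point is that infected machines and their operators can independently compute the same daily list of rendezvous domains, and defenders who recover the algorithm can precompute and sinkhole or block those domains ahead of time.

```python
# Illustrative domain generation algorithm (DGA) sketch -- not Zeus's real one.
# Bots and their operators derive the same rendezvous domains from a shared
# seed and the current date, so no fixed command-and-control address is needed.
import hashlib
from datetime import date

def daily_domains(seed, day, count=10):
    domains = []
    for i in range(count):
        material = "{}-{}-{}".format(seed, day.isoformat(), i)
        digest = hashlib.sha256(material.encode()).hexdigest()
        domains.append(digest[:16] + ".com")  # first 16 hex chars as the label
    return domains

if __name__ == "__main__":
    # A defender who knows the seed and algorithm can generate tomorrow's list
    # today and register or block the domains before the botnet uses them.
    for name in daily_domains("example-campaign", date.today()):
        print(name)
```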

Another example of management security hubris can be found with the Ashley Madison breach, where its managers touted how secure its data was and how your profiles could be deleted with confidence — both promises were far from the truth as we all later found out.

The significance of some of these attacks wasn’t appreciated until much later. For example, the attack on the Dutch certificate authority DigiNotar’s certificate management systems eventually led to its bankruptcy. But more importantly, it demonstrated that a small security flaw could have global implications, undermining overall trust in the Internet and compromising hundreds of thousands of Iranian email accounts. To this day, most Internet users still don’t understand the significance of how these certificates are created and vetted.
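To make that a little more concrete, here is a small sketch using Python’s standard library to fetch and print the certificate a TLS server presents; the hostname is just a placeholder. Every HTTPS connection quietly performs this kind of validation against a chain of trust anchored in certificate authorities, and when a CA like DigiNotar is compromised, fraudulently issued certificates can pass the very same check.

```python
# Illustrative only: connect to a TLS server and print details of the leaf
# certificate it presents. The hostname below is a placeholder.
import socket
import ssl

def show_leaf_certificate(hostname, port=443):
    context = ssl.create_default_context()  # trusts the system's CA roots
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed fields of the validated certificate
            print("Subject:", cert.get("subject"))
            print("Issuer: ", cert.get("issuer"))
            print("Expires:", cert.get("notAfter"))

if __name__ == "__main__":
    show_leaf_certificate("www.example.com")
```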

Wolff mentions that “finding a way into a computer system to steal data is comparatively easy. Finding a way to monetize that data can be much harder.” Yes, mistakes were made by the breached parties she covers in this book. “But there were also potential defenders who could have stepped in to detect or stop certain stages of these breaches.” This makes the blame game more complex, and shows that we must consider the entire ecosystem and understand where the weak points lie.

Yes, TJ Maxx could have provided stronger encryption for its wireless networks; South Carolina DoR could have used MFA; DigiNotar could have segmented its network more effectively and set up better intrusion prevention policies; Sony could have been tracking data exported from its network; OPM could have encrypted its personnel files; Ashley Madison could have done a better job of protecting its database security and login credentials. Nonetheless, it is still difficult to say who was really responsible for these various breaches.

For corporate security and IT managers, this book should be required reading.

CSOonline: How to set up a successful digital forensics program

IT and security managers have found themselves increasingly needing to better understand the world of digital forensics. This world has become more important as the probability of being breached continues to approach near-certainty, and as organizations need to better prepare themselves for legal actions and other post-breach consequences.

In this post for CSOonline, I describe the basics behind digital forensics, the kinds of specialized tools that are required, links to appropriate resources to learn more, and a checklist of decisions you will need to consider if you are going to become more involved in this field. It is not just about understanding the legal consequences of a breach, but also about being properly prepared before one occurs. And something you need to get your head around: lawyers can be your friends in these circumstances.

CSOonline: Top application security tools for 2019

The 2018 Verizon Data Breach Investigations Report says most hacks still happen through breaches of web applications. For this reason, testing and securing applications (the subject of my CSOonline article last month) has become a priority for many organizations. That job is made easier by a growing selection of application security tools. I put together a list of 13 of the best ones available, with descriptions of the situations where they can be most effective, highlighting both commercial and free products. Commercial vendors rarely publish list prices, and their products are often bundled with other tools with volume or longer-term licensing discounts. Some of the free tools, such as Burp Suite, also have fee-based versions that offer more features.

This article has been replaced by a more recent piece written by John Breeden in 2022.


Fear of Facebook: becoming social, but only behind our keyboards

As many of you know, for the last several years I have been doing a regular podcast with Paul Gillin on B2B marketing trends. Gillin has been in tech journalism for more than 30 years, having run Computerworld and TechTarget and written numerous books. It is a fun gig, and we offer a lot of insight, and you should subscribe if you are interested in the overall topic.

In our latest episode, we return to talking with Dan Newman, who is a remarkably insightful guy for having been born around the time both Paul and I started in IT. Newman said one thing that I want to expand upon here. We were talking about the rise of customer self-service portals and methods, including using chatbots as a tool to provide quick answers. He thinks this is an indication that “We have become more social, but only behind our keyboards.” That is an interesting phenomenon.

I would amend that position to say that not only have we become more social, but we have also become critical to a fault, thanks to our consumption of online social networks. You could lay the blame on Reddit, as this recent book does (We are the Nerds, reviewed in the NY Times here). A better title, reviewer David Streitfeld suggests, would be “We are the Trolls,” with the tag line “Two inexperienced young guys created something they didn’t understand and couldn’t control.” He writes that “the lack of adult oversight; the suck-up press; the growth-at-any-cost mentality; the loyal employees, by turns abused and abusive” all contributed to its offensive snark. In the end, it didn’t matter. Forget about connecting the world, or doing good, or bringing a voice to the disenfranchised. Reddit is a $2B media property, and in the Valley, money is what eventually matters.

That is a common theme for many tech companies, and it seems we are seeing the same effect down Highway 101 at the Facebook campus (shown here). You should watch the two-part Frontline series this week about Facebook. During the program, you will see how, in the process of connecting the world’s populace, the company has inflamed their worst passions and stoked their fears. It interviews several current and former employees. While the latter might have axes to grind, it is worth hearing their points of view. You’ll hear how Trump’s digital media manager spent $100M on Facebook ads before the 2016 election. If you haven’t thought about this before, it is worth viewing both episodes to see how much influence the company has had, and how poor Zuck’s leadership has been. The program also highlights the rise of “fake news” across Facebook, such as these companion posts about the Pope endorsing either Trump or Clinton.

Think fake news is easy to spot? Take this quick quiz developed by the Newseum education staff. My wife and I tried it, and while we did reasonably well, we still got a couple of items wrong. Granted, we had a timed deadline to complete the quiz, and we just guessed at some of the answers. But we saw that it is harder than we both thought, even when you have been told to be on the lookout. Imagine how much harder the task is in our normal lives, consuming online media posts.

The Frontline program interviewed Facebook’s chief of security, Alex Stamos, who says, “Russia [through its advertising and fake accounts] wants to find fault lines in US society and amplify them, and to make Americans not trust each other.” The Russians orchestrated two concurrent and co-located protest rallies in Houston, seeding participants on both sides. There is no question that Facebook is being used as an amplifier to promote hatred of all kinds. Just look at your own news feeds.

Farhad Manjoo’s column in the NY Times this week makes the case that Zuck is “too big to fail,” playing off the phrase used during the 2008 mortgage banking crisis. He mentions reports that tie Facebook posts to the Myanmar genocide, discriminatory advertising and multiple federal legal inquiries. He concludes by saying that either Zuck fixes Facebook or no one does, like it or not.

But here’s the thing: as we become more social behind our keyboards, we can’t be as discriminating as we are when we meet people face-to-face. In embracing the self-service world, we are also doing ourselves a tremendous disservice. Lies become truth, and democracy is turned inside out. It is time Facebook took responsibility for its power and its role in this process.

FIR B2B podcast #108: Dan Newman’s 2019 tech trends for CMOs

Paul Gillin and I speak this week with Daniel Newman, author, speaker, millennial CEO and founding partner at Futurum Research. We were interested in a column he wrote for Forbes entitled, How Will The 10 Top Digital Transformation Trends For 2019 Impact The CMO. 

Dan highlighted a couple of the tech trends that will be essential items for CMOs to get their arms around in the coming year, including the transformation of data from machine learning to AI. “Analytics should be the CMO’s best friend,” he told us. “AI will allow for data-driven campaigns that will be guaranteed to work every time.”

Newman said data should play a pivotal role in marketing in the future, and don’t worry too much about over-personalizing the message. Nobody ever complains when a brand provides too much value and can help drive purchases that customers want at any given moment. The trick is to find the right moment and to target customers accurately.

The European Union’s new General Data Protection Regulation will force changes in the way brands market in the coming year, he said. They will have to become more creative, not just about getting customers to opt in but about keeping them engaged. This means that companies are going to have to change the way they do lead development. They’ll need to know customers better in order to personalize content, because they’ll have less data to work with.

We had a particularly interesting discussion about chatbots as a mechanism for driving personal interactions. Newman sees us moving away from face-to-face moments, and the phenomenon isn’t limited to teens or Gen Xers. The rise of customer self-service is an indication that “We have become more social, but only behind our keyboards,” he said.

Another of his provocative predictions is that consumers will be able to use blockchain to, in effect, sell information about themselves to marketers. While Newman sees this technology as still immature, he believes its long-term potential is explosive.

Finally, as the average tenure of CMOs continues to decline, they will have to do a better job of managing expectations and of developing tighter relationships with their CEOs. You can listen to our 24 min. podcast here: