Last week the Electronic Frontier Foundation published an interesting book called The End of Trust. It was produced in conjunction with the writing quarterly McSweeney’s, to which I have long subscribed and whose more typical short-fiction collections I enjoy. This issue is its first all-non-fiction effort, and it is worth your time.
There are more than a dozen interviews and essays from major players in the security, privacy, surveillance and digital rights communities. The book tackles several basic issues. First, privacy is a team sport, as Cory Doctorow puts it: we have to work together to ensure it. Second, numerous essays examine the role of the state in a society that has accepted surveillance, and the equity issues surrounding these efforts. Third, what are the implications of outsourcing our digital trust? Finally, various authors explore the difference between privacy and anonymity and what that distinction means for our future.
While you might be drawn to pieces from the notable security pundits, such as the interview in which Edward Snowden explains the basics behind blockchain and the one in which Bruce Schneier discusses the gap between what is right and what is moral, I found myself reading other, less famous authors who had a lot to say on these topics.
Let’s start by saying there should be no “I” in privacy: we have to look beyond ourselves to truly understand its evolution in the digital age. The best article in the book is an interview with Julia Angwin, who wrote an interesting book several years ago called Dragnet Nation. She says, “the word formerly known as privacy is not about individual harm, it is about collective harm. Google and Facebook are usually thought of as monopolies in terms of their advertising revenue, but I tend to think about them in terms of acquiring training data for their algorithms. That’s the thing that makes them impossible to compete with.” In the same interview, Trevor Paglen says, “we usually think about Facebook and Google as essentially advertising platforms. That’s not the long-term trajectory of them, and I think about them as extracting-money-out-of-your-life platforms.”
Role of the state
Many authors spoke about the role that law enforcement and other state actors play in our new, always-surveilled society. Author Sara Wachter-Boettcher said, “I don’t just feel seen anymore. I feel surveilled.” Thenmozhi Soundararajan writes that “mass surveillance is an equity issue and it cuts across the landscape of race, class and gender.” This is supported by Alvaro Bedoya, the director of a Georgetown Law School think tank, who took issue with the statement that everyone is being watched, because some are watched an awful lot more than others. With new technologies it is becoming harder to hide in a crowd, so we have to be careful about crafting laws that give the state access to this data, because we could lose that anonymity entirely. “For certain communities (such as LGBTQ), privacy is what lets its members survive. Privacy is what lets them do what is right when what’s right is illegal. Digital tracking of people’s associations requires the same sort of careful First Amendment analysis that collecting NAACP membership lists in Alabama in the 1960s did. Privacy can be a shield for the vulnerable and is what lets those first ‘dangerous’ conversations happen.”
Scattered throughout the book are descriptions of various law enforcement tools, such as drones, facial recognition systems, license plate readers and cell-site simulators. While I knew about most of these technologies, seeing them collected together in this fashion makes them seem all the more insidious.
Outsourcing our digital trust
Angwin disagrees with the title and assumed premise of the book, saying the point is more about the outsourcing of trust than its complete end. That outsourcing has led us to a point where we prefer to trust data over human interactions. As one example, consider the website Predictim, which uses facial recognition and other AI algorithms to scan a potential babysitter or dog walker and rate how trustworthy and reliable they are. In the pre-digital era, we asked for personal references and interviewed our neighbors and colleagues for this information. Now we have the Internet to vet an applicant.
When eBay was just getting started, it had to build its own trust proxy so that buyers would feel comfortable with their purchases. It came up with early reputation algorithms, which have since evolved into the Uber and Lyft star ratings for drivers and passengers. Some of the writers in this book suggest that blockchain-based systems could become the next incarnation of outsourced trust.
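To make the idea of a reputation algorithm as a trust proxy a bit more concrete, here is a minimal sketch of my own devising (not eBay’s, Uber’s or Lyft’s actual method, and the weights are arbitrary): a Bayesian-style average that pulls sparse ratings toward a neutral prior, so that a seller with one glowing review is not scored as more trustworthy than one with a long, consistent track record.

```python
# Hypothetical illustration only: a naive marketplace reputation score,
# not the actual algorithm of eBay, Uber or any other platform.

def reputation_score(ratings, prior_mean=3.0, prior_weight=5):
    """Blend observed star ratings with a neutral prior so that a handful
    of ratings carries less weight than many consistent ones."""
    if not ratings:
        return prior_mean
    total = sum(ratings) + prior_mean * prior_weight
    count = len(ratings) + prior_weight
    return total / count

new_seller = [5]                    # a single five-star review
veteran = [5, 4, 5, 5, 4, 5] * 30   # 180 mostly positive reviews

print(round(reputation_score(new_seller), 2))  # 3.33 -- not much evidence yet
print(round(reputation_score(veteran), 2))     # 4.62 -- trust earned over time
```

The particulars don’t matter; the point is that scores like these have become a stand-in for the personal vetting we used to do ourselves.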
Privacy vs. anonymity
The artist Trevor Paglen says, “we are more interested not so much in privacy as a concept but more about anonymity, especially the political aspects.” In her essay, McGill ethics professor Gabriella Coleman says, “Anonymity tends to nullify accountability, and thus responsibility. Transparency and anonymity rarely follow a binary moral formula, with the former being good and the latter being bad.”
Some authors take up the concept of privacy nihilism, or disconnecting completely from one’s social networks. Ethan Zuckerman addresses it in his essay: “When we think about a data breach, companies tend to think about their data like a precious asset like oil, so breaches are more like oil spills or toxic waste. Even when companies work to protect our data and use it ethically, trusting a platform gives that institution control over your speech. The companies we trust most can quickly become near-monopolies whom we are then forced to trust because they have eliminated their most effective competitors. Facebook may not deserve our trust, but to respond with privacy nihilism is to exit the playing field and cede the game to those who exploit mistrust.” I agree: while I am not happy about what Facebook has done, I am sticking with it for the time being.
This notion of the relative morality of our digital tools is also taken up in a recent New York Times op-ed by NYU philosopher Matthew Liao entitled “Do You Have a Moral Duty to Leave Facebook?” He says that the social media company has come close to crossing a “red line,” but for now he is staying with it.
The book has a section of practical IT-related suggestions for improving your trust and privacy footprint, many of which will be familiar to my readers (MFA, encryption, and so forth). But another article, by Douglas Rushkoff, goes deeper. He talks about the rise of fake news in our social media feeds and says that it doesn’t matter which side of an issue people are on for them to be infected by a fake news item, because the item is designed to provoke a response and replicate. A good example is the individual recently profiled in a Washington Post piece who has created his own fake news business out of a parody website.
Rushkoff recommends three strategies for fighting back: attacking bad memes with good ones, insulating people from dangerous memes via digital filters and the equivalent of AV software, and better education about the underlying issues. None of these are simple.
This morning brought news of how LinkedIn harvested 18 million email addresses and used them to target ads recruiting people to join its social network. What is chilling is that all of these addresses belonged to non-members and were collected, of course, without their permission.
You can go to the EFF link above to download a PDF copy, or go to McSweeney’s and buy the hardcover book. Definitely worth reading.