A reader’s guide to Twitter’s supposed demise

I asked last November if we were witnessing the end of Twitter, and pointed out that the company has become more town dump than town square. Let’s review what has happened at Twitter and what we have learned about its internal operations since then. The short answer: things are worse, but not necessarily in the ways that were anticipated when Elon took the company private.

Yes, there have been some notable service outages, which is to be expected given that most of its engineering staff has quit or been fired over the past several months and one of its major data centers was shuttered. But for the most part, the service is still running. That’s great, and we could credit Elon for perhaps picking the right people to keep the lights on. (This is why I use the word “supposed” in the title of this piece.)

There is this behind-the-scenes story in New York Magazine about what has happened post-Elon at Twitter, based on interviews with former employees, and it is well worth reading. In summary, it was complete chaos. There is also a Washington Post piece that summarizes three primary source documents. The first is the Jan. 6 committee’s “Purple team report” draft that was never adopted by the full committee (and that the Post has published here). The other two are transcripts of testimony from two former Twitter staffers, taken by the committee last fall: one from “J Johnson” (a pseudonym), an engineer on a safety policy review team, and one from Anika Navaroli, a senior safety policy domain specialist with a legal and free speech background. I will return to these documents in a moment.

One of Elon’s major rallying cries has been to attempt to neutralize the bots. This isn’t a new problem: I first wrote about bots and their abuses of Twitter more than 10 years ago. I saw my own follower count plummet right after his takeover; whether that was from people deleting their own accounts or from some bot cleansing, I can’t really say. Clearly, this was never much of a priority at Twitter beforehand.

Another Elon focus was to reinstate previously banned users, most notably our former president, who had nearly 88M followers when he was kicked off on January 8, 2021. One consequence of the reinstatement is that you can now review all of his tweets, although he has not posted anything new since. (There is also this archive of his entire tweet corpus, including deleted tweets, for your own reference.)

Before I dive into the Jan. 6 documents, I should mention one other historical note. Last summer, after the revelations about Mudge’s tenure at the company, I wrote about some of its major infosec operational failures. Ironically, Mudge was fired in January 2022 for poor performance and ineffective leadership, a rationale that seems to be the new normal at post-Elon Twitter.

The Mudge report provides context for social media’s broader failure to moderate its most dangerous and hateful content, a failure the Jan. 6 committee’s Purple team report documents as it relates to that fateful day at the Capitol. The document was supposed to be included as an appendix to the full committee report but never made it past the draft stage. It covers more than a dozen different social media properties and how they wrestled with their content moderation policies, “terrified of the backlash they would get if they followed their own rules and applied them to Trump,” as Johnson testified. “My safety policy team colleagues were still very unclear about what we should be doing. Twitter leadership were aware of the risks we raised, but they didn’t do anything to help address those risks and concerns. They were reluctant to intervene and block these tweets.” Instead, the social networks helped amplify these messages. The Purple team draft report shows just how hard it is to turn this around: the tools are blunt-force instruments at best.

Language such as “locked and loaded,” “Be there, it will be wild,” and the debate comment “stand back and stand by” concerned the moderation teams, who consistently raised alarms about how these words were being amplified across their networks. Johnson testified: “There was never, to my knowledge, leadership convening a meeting and saying, Violence has broken out. You have the green light to take it all down. That never happened.”

Navaroli testified: “I do not remember ever seeing any threat model or threat analysis leading up to the election.” Del Harvey was the executive in charge of Twitter’s content moderation and security teams. Navaroli said Harvey didn’t understand the need for policies to limit Trump’s speech, or the urgency to put them in place prior to the November 2020 election, or that there was a gap in coverage of existing Twitter policies. Navaroli called it magical thinking and said Harvey refused to take any potential threats seriously. This continued into 2021, when she eventually left the company.

Her testimony highlights the lack of any content analysis tools at Twitter: she used the same public search function on Twitter’s website that any of us can use. “All we had were hammers, and we needed scalpels, something more nuanced.” She also mentions that “Trump was a unique user who sat above and beyond the rules of Twitter. His tweets weren’t deleted, which is what happened with other world leaders” (think Maduro of Venezuela or Bolsonaro of Brazil). She concludes that Trump and Twitter had a symbiotic if not parasitic relationship, and that Twitter bears responsibility for the way Trump’s incitement to violence was posted and amplified. “I believe that January 6th was planned, orchestrated, and carried out on the Twitter platform within and right in front of our eyes using plain language and hashtags. And Twitter, in my eyes, bears the responsibility for hosting and promoting incitement to violence that led to the loss of life on January 6th.”

What does this mean for the future of Twitter? Here are a few of my thoughts:

  • Content moderation will continue to be hard, especially at the intersection of on and offline activities.
  • The legal environment is in a state of flux, with new cases before the Supreme Court, as I wrote about last fall on Avast’s blog.
  • The social media landscape is complex, and the interactions among the players are not well documented. Users banned from one network quickly move to others, where they can ply their hate and incite violence. Coordination across platforms doesn’t exist.
  • There is little operational transparency among the social network operators. The Jan. 6 committee staffers got a lot of information as part of their work, some of which can be seen by the public, but most of it hasn’t yet been published. The Purple team draft raises lots of issues and makes numerous recommendations. Whether any will ever be implemented is anyone’s guess, and the chances that most of them will be are slim.

4 thoughts on “A reader’s guide to Twitter’s supposed demise”

  1. As far as I’m concerned, #twitterisdead – like the brontosaurus bleeding to death at the tail, the head continues to eat the fronds with no awareness.
    My question is: When will the board at Tesla vote to kick Elon off? It’s in their purview.

  2. David, thanks for this. I used to be a free speech absolutist but am no longer, when the guardrails of acceptable behavior – even for the President of the US – seem to be mostly gone. I don’t know what the answer is. Something has to be done about Section 230, but I don’t know what that “something” is, nor am I optimistic for any positive changes.
