In the past week I have seen a number of reports that range from unsettling to depressing. Together they document three interlocking problems at the darkest corners of the online world: disinformation, cyberterrorism, and the difficulty of crafting better legal approaches to stop the first two.
Let’s start with the disinformation. A consortium of journalists from around the world wrote about a team of Israeli contractors (called “Team Jorge”) who claim to have covertly influenced more than 30 elections and planted stories to burnish the online reputations of numerous private business clients around the world. They did this using hacking, sabotage, and automated disinformation tools. Call it disinformation-mercenaries-for-hire. If this sounds familiar, it is another news product from the France-based Forbidden Stories group that broke the series of Pegasus-related stories back in the summer of 2021, which I have written about for Avast here. The group labels this effort “Story Killers” and you can read the various pieces here.
What is depressing is how adept this industry has become: by comparison, the Russian Internet Research Agency’s meddling in our 2016 election looks crude, mere child’s play. The reporters uncovered a wide-ranging collection of automated tools that quickly create hundreds of fake social media accounts and generate all kinds of fake posts, which are then amplified by social networks and search engines. “We must be able to recount the life of the characters, their past, their personality,” said one mercenary. “When it’s a small agency, it’s done in a rather sloppy way. If it’s well done, it’s the Israelis.”
The Israeli company behind these operations has a wide array of services, including digital surveillance, hack-and-leak smear campaigns, influence operations, and election interference and suppression. They claim to have operated for a decade.
One of the consortium partners is The Guardian, which documents one of these automated systems used to manage a collection of social media avatars. Called AIMS, it can create and manage more than 30,000 seemingly real accounts for nonexistent people. These can then be deployed either as a swarm – similar to a network of bots – or as single agents. Other tools are described in this piece by Haaretz.
The disinformation mercenaries sold access to their software to various national intelligence agencies, political parties and corporate clients interested in trying to resolve business disputes. Accounts span Twitter, LinkedIn, Facebook, Telegram, Airbnb, Gmail, Instagram and YouTube. Some of the identities even have Amazon accounts with credit cards and bitcoin wallets. All of this was leveraged to stage real-world events in order to provide ammunition for social media campaigns to provoke outrage.
Let’s move on to the cyberterrorism effort. Speaking of the Russians, also released this week were two reports from the Atlantic Council, a DC-based think tank that has studied the disinformation war Russia has waged against Ukraine. (To be clear, this is completely independent of the Story Killers effort.) It is also depressing news, because you realize that unlike an actual shooting war, there is never a point when you can claim victory. The totality, scope, and power of this vast collection of fake news stories, phony government documents, deepfake videos, and other digital effluvia are staggering, and the Russians are using it to convince both their own citizens and the rest of the world of Putin’s agenda.
And something else to worry about with the war comes from one final report, this one from the Dutch intelligence services, which was covered here. The report says, “Before and during the war, Russian intelligence and security services engaged in widespread digital espionage, sabotage and influencing against Ukraine and NATO allies. The sustained and very high pressure that Russia exerts with this requires constant vigilance from Ukrainian and Western defenders.”
Taken together, you can see that disinformation has become weaponized in both the public and private sectors. So what can be done? Cue up part three: trying to craft better laws to control these actions. Coincidentally, the US Supreme Court heard two cases that have been moving through our judicial system, Gonzalez v. Google and Twitter v. Taamneh. Both cases involve ISIS attacks. The former involves the 2015 murder in Paris of the 23-year-old American student Nohemi Gonzalez, which I wrote about in a blog for Avast last fall. The latter involves the 2017 death of Nawras Alassaf in Istanbul. The first case directly involves the Section 230 statute; the latter, various sections of the anti-terrorism act. Both laws were passed in the mid-1990s, when the internet was young and, by comparison, innocent.
You can read the transcript of the court’s oral arguments in Gonzalez here; the transcript for Twitter is here. I have taken the time to read them, and if you are interested in my further thoughts, email me directly or post your questions here. Making effective changes to both laws won’t be easy without drastic consequences for how online companies run their businesses, and for how we legitimately use them. And that is the lesson from reading all these reports: as long as the bad guys can figure out ways to exploit these technologies, we will have to deal with some dire consequences.