Last week I wrote about the looming AI bias problem in the HR field. Here is another report about the potential threats of AI in another arena. But first, do you know what the states of California, Georgia, Nevada, Oregon, and Washington have in common? Sadly, all of their election offices received suspicious letters in the mail last year. This year is already ramping up, and many election workers have received death threats just for trying to do their (usually volunteer) jobs. Many have quit after logging decades of service.
I have been following election misinformation campaigns for several years, including writing for Avast's blog here about whether the 2020 election was rigged. By now you should know that it wasn't. But this latest round of physical threats, many of which have been criminally prosecuted, is especially toxic when fueled by AI misinformation campaigns. The stakes are certainly higher, especially given the number of national races; CISA has released this set of guidelines in response.
And the election threats aren't just a domestic problem. This year will see more than 70 elections in 50 countries, many of them where people are voting for their heads of state, including India, Taiwan, Indonesia and others. Taken together, 2024 will see a third of the world's population enter the voting booth. Some countries have seen huge increases in internet access: India's last national election was in 2019, and since then it has added 250 million internet users, thanks to cheap smartphones and mobile data plans. That could spell difficulties for voters who are newly online.
All this comes at a time when social media trust and safety teams have all but disappeared from the landscape; indeed, the very name for these groups will become a curiosity a few years from now. Instead, hate mongers and fear mongers celebrate their attention and unblocked access to the network. (To be fair, Facebook/Meta announced a new effort to fight deepfakes on WhatsApp just after I posted this.)
While the social networks were busily disinvesting in any quality control, more and better AI-laced misinformation campaigns have sprouted, thanks to new tools that combine cloned voices and images with clickbait headlines designed to draw attention. That is not a good combination. Many of the leading AI tech firms, such as OpenAI and Anthropic, are trying to fill the gap. But it is a lopsided battle.
While it is nice that someone has taken up the cause for truthiness (to use a phrase from that bygone era), I am not sure that giving AI firms this responsibility is going to really work.
An early example happened in the New Hampshire presidential primary, where voters reported receiving deepfake robocalls with President Biden's voice. As a result, the account used for this activity was subsequently banned. Expect things to get worse. Deepfakes such as this have become as easy to craft as a phishing attack (and the two are often combined), and thanks to AI they are getting more realistic. It is only a matter of time before these attacks spill over into influencing the vote.
But deepfakes aren't the sole problem. Garden-variety hacking is a lot easier. Cloudflare reported that from November 2022 to August 2023, it mitigated more than 60,000 daily threats to the US elections groups it surveyed, including numerous denial-of-service attacks. That stresses the security defenses of organizations that were never on the forefront of technology, something that CISA and others have tried to help with through various tools and documents, such as the one mentioned at the top of this post. And now we have certain elements of Congress that want to defund CISA just in time for the fall elections. Bad idea.
Contributing to the mess is that the media can't be trusted to provide a safe harbor for election results. Look what happened to the Fox News decision team after it correctly called Arizona for Biden back in 2020: many of its staff were fired for doing a solid job. And while it is great that Jon Stewart is back leading Comedy Central's Monday night coverage, I don't think you are going to see much serious reporting there (although his debut show last week was hysterical and made me wish he were back five days a week).
Of course, it could be worse: we could be voting in Russia, where no one doubts what the outcome will be. The only open question is whether its czar-for-life will get more than 100% of the vote.
Great article – as one comes to expect from you, David! It’s astounding how an invention that grew out of DARPA some 50 years ago has been turned into both a necessity and convenience while also being a tool to divide us into ideological bubbles that never interact. One wonders what the Framers of the Constitution, believers in the power of the marketplace of ideas, would have made of a world where ideas were freely shared yet never interacted.
Like many of your readers, I've fooled around a bit with ChatGPT and some of the new photo-creating engines. They are both awe inspiring and frightening. Where will truth hide when falsehood and deception are so easily created?
Yipes!
During the 1950s we had the accent on the group, because everyone was sick & tired of bashing each other during WWII. From the 1960s until relatively recently it was back to bashing each other, though on a limited scale due to the atomic bomb. Now we're into me-generation bashing, because the likelihood of getting caught or suffering a long jail term is minimal to nonexistent. It also occurred to me that the WWII generation has largely died off, and the sons & daughters of the WWII or greatest generation are next in line; and as the next, next generation knows nothing about Hiroshima or Nagasaki, they want to put an atomic or hydrogen bomb into near orbit around the earth. To top it off we have A.I. etc., which will affect the university educated much as the industrial era affected the farmer, when the industrial revolution based on the steam engine centralized everyone.