Fighting online disinformation and hate

The past month has seen some interesting developments in the fight against online disinformation and hate speech. First was the K-Pop fan campaign that diluted the impact of white nationalists by flooding social media channels with fan videos posted under the nationalists' own hashtags. The K-Pop fans were also initially credited with buying up tickets to the Trump rally in Tulsa. While we know only about six thousand people actually attended the rally, it is hard to state with any certainty who really reserved those tickets in the end.

This is an effective way to blunt the impact of hate groups, because you are using the crowd to counter-program their content. What hasn't worked so far is pressuring the various social media platforms to ban these groups entirely: a ban only shifts the haters' efforts to another platform, where they can regroup. As a result, many new social platforms are popping up that are decentralized and unmoderated.

Megan Squire, a computer science professor to whom I am distantly related, has studied these hate groups and documents how their members know how to push the limits of social media. For example, one group uses YouTube for its live streams and real-time comments, then deletes the recorded video file at the end of each presentation and uploads the content to other sites that are less vigilant about moderating hate speech.

Part of the problem is politics: tech companies are viewed as supporting mostly liberal ideologies and targeting conservative voices. This has resulted in a number of legal proposals. Squire says that these proposals are "naive and focused on solving yesterday's problems. They don't acknowledge the way the social media platforms are actually being gamed today nor how they will be abused tomorrow."

Another issue is how content is recommended by these platforms. "The issue of content moderation should focus not on content removal but on the underlying algorithms that determine what is relevant and what we see, read, and hear online. It is these algorithms that are at the core of the misinformation amplification," says Hany Farid, a computer science professor, in his Congressional testimony this past week on the propagation of disinformation. He suggests that the platforms need to tune their algorithms to value trusted, respectful, and universally accepted information over the alternatives to produce a healthier information ecosystem.
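To make Farid's suggestion a little more concrete, here is a minimal sketch in Python of what re-weighting a feed-ranking algorithm toward source trust might look like. Everything in it is hypothetical: the item names, the trust scores, and the `trust_weight` knob are my own illustrations, not any platform's actual ranking code.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float    # predicted clicks/shares, normalized to 0..1
    source_trust: float  # hypothetical source-trust score, normalized to 0..1

def rank_feed(items, trust_weight=0.7):
    """Order a feed so source trust can outweigh raw engagement.

    trust_weight is a hypothetical tuning knob: at 0.0 this reduces to a
    pure engagement ranking; at 1.0 only source trust matters.
    """
    def score(item):
        return (1 - trust_weight) * item.engagement + trust_weight * item.source_trust
    return sorted(items, key=score, reverse=True)

# Made-up example: an inflammatory post with high engagement but low trust
# drops below a well-sourced story once trust is weighted in.
feed = [
    Item("Outrage-bait conspiracy post", engagement=0.95, source_trust=0.05),
    Item("Wire-service news report", engagement=0.40, source_trust=0.90),
    Item("Fan video compilation", engagement=0.60, source_trust=0.50),
]

for item in rank_feed(feed):
    print(item.title)
```

With a pure engagement ranking (trust_weight of zero) the conspiracy post would sit at the top of this toy feed; with trust weighted in, the wire-service report does. That trade-off is essentially what Farid is asking the platforms to make.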

But there is another way to influence the major tech platforms: through their pocketbooks. In the past month, more than 100 advertisers have pulled their ads from Facebook and other social sites. CNN is keeping track of this trend here. Led by civil rights organizations such as the NAACP and the ADL, the effort is called Stop Hate for Profit. They have posted a ten-point plan to improve things on Facebook's various properties. It has been called a boycott, although that is not completely accurate: many advertisers have said they will return to Facebook in a few weeks. One problem is that the majority of Facebook's ad revenue comes from smaller businesses, so even a walkout by big-name advertisers may not do much financial damage. Still, it is noteworthy how quickly this has happened.

Perhaps this effort will move the needle with Facebook and others. It is too soon to tell, although Facebook has announced some very small steps that will probably prove to be ineffective, if history is any predictor.

3 thoughts on "Fighting online disinformation and hate"

  1. This baffles: Another issue is how content is recommended by these platforms. "The issue of content moderation should focus not on content removal but on the underlying algorithms that determine what is relevant and what we see, read, and hear online."

    …admittedly, I'm missing out on the most pervasive recommendation algorithms by not being on Facebook. But I don't understand people surrendering their content choices to that or any website. For example, I like Netflix because I find almost all the films I want to see. On the other hand, someone just mentioned to me that he tried it briefly and abandoned it because he couldn't find anything to watch. When I asked how that could be, given its vast holdings, he replied that he'd just scrolled through its offerings without searching. Since he'd never searched or watched anything, it had no idea what he'd want or like. I guess he expected it to read his mind, or thought that the most common recommendations would suit him. Same for Facebook or anything else: why allow it to filter and focus for you instead of curating your own content? Aside from laziness, I mean.

  2. Pingback: FIR B2B podcast episode #139: Faulting and fixing Facebook's hate speech problem | Web Informant

  3. David
    This book is relevant and authoritative:
    The Wires of War: Technology and the Global Struggle for Power
    Jacob Helberg
    Simon & Schuster, Oct. 12, 2021, 384 pages

    https://books.google.ca/books/about/The_Wires_of_War.html
    From the former news policy lead at Google, an urgent and groundbreaking account of the high-stakes global cyberwar brewing between Western democracies and the autocracies of China and Russia that could potentially crush democracy.

    From 2016 to 2020, Jacob Helberg led Google’s global internal product policy efforts to combat disinformation and foreign interference. During this time, he found himself in the midst of what can only be described as a quickly..
