The news earlier this month about Mitt Romney’s fake “Pierre Delecto” Twitter account once again brought fakery to the forefront. We discuss various aspects of fake news and what brands need to know to stay on point, honest and true to themselves. We start with a study by North Carolina State researchers, which found that the less people trust Facebook, the more skeptical they become of the news they see there. One lesson from the study is that brands should choose carefully how they rebut fake news.
Facebook is trying to figure out the best response to fake political ads, although it is still far from doing an adequate job. A BuzzFeed piece found that the social network has been inconsistent in applying its own corporate standards when deciding which ads to run. Those standards say nothing about whether an ad is factual; they are more concerned with profanity and major user interface failures, such as misleading or non-clickable action buttons. More work is needed.
Finally, we discuss two MIT studies, mentioned in Axios, on why machine learning can’t easily flag fake news. We have mentioned before how easy it now is for machines to create news stories without much human oversight. But one weakness of ML models is that they need precise, unbiased training data. When the training data contains bias, the machine simply amplifies it, as Amazon discovered last year. Building truly impartial training data sets requires special skills, and it’s never easy. (The image here, by the way, is from Orson Welles’ wonderful film “F for Fake.”)
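To see why biased training data is such a problem, here is a minimal sketch in plain Python using entirely hypothetical data (not from any of the studies mentioned above). Historical decisions approve group “A” 70% of the time and group “B” only 30% of the time; a simple majority-vote model trained on those labels ends up approving A 100% of the time and B 0% of the time. The skew isn’t just reproduced, it’s amplified:

```python
from collections import Counter

# Hypothetical training data: (group, approved) pairs with a 70/30 skew.
train = [("A", True)] * 7 + [("A", False)] * 3 \
      + [("B", True)] * 3 + [("B", False)] * 7

def fit_majority(data):
    """Learn the majority label per group -- a stand-in for any model
    that latches onto a biased proxy feature."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority(train)

# Training data approved A at 0.7 and B at 0.3 ...
rate = lambda g: sum(lbl for grp, lbl in train if grp == g) / 10
print("training approval rates:", rate("A"), rate("B"))  # 0.7 0.3

# ... but the model always approves A and always rejects B.
print("model decisions:", model["A"], model["B"])  # True False
```

Real systems are far more complex than this toy, but the dynamic is the same: a model optimizing for accuracy on skewed labels will happily turn a statistical tilt into a hard rule.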
Listen to the latest episode of our podcast here.