Further misadventures in fake news

The term fake news is used by many but widely misunderstood. It has gained notoriety as a term of derision hurled by political figures at mainstream media outlets. But when you look closer, you can see there are many other forms that are much more subtle and far more dangerous. The public relations firm Ogilvy wrote about five different types of fake news, including satire, misinformation, sloppy reporting and purposely deceptive stories.

But that really doesn’t help matters, especially in the modern era of state-sponsored fake news. We used to call this propaganda back when I was growing up. To better understand this modern context, I suggest you examine two new reports that present a more deliberate analysis and discussion:

  • The first is by Renee DiResta and Shelby Grossman for Stanford University’s Internet Observatory project, called Potemkin Pages and Personas, Assessing GRU Online Operations. It documents two methods used by Russia’s military intelligence agency, commonly called the GRU: narrative laundering and hacking-and-leaking false data. I’ll get into these methods in a moment. For those of you who don’t know the reference, a Potemkin village was a fake village built in the late 1700s to impress a Russian monarch who would pass through a region and be fooled into thinking there were actual people living there. It was a stage set with facades and actors dressed as inhabitants.
  • The second report is titled Simulated media assets: local news, from Vlad Shevtsov, a Russian researcher who has investigated several seemingly legit local news sites in Albany, New York and Edmonton, Alberta. These sites constructed their news pages out of evergreen articles and other service pieces that have attracted millions of page views, according to their analytics. Yet they have curious characteristics, such as being viewed almost entirely from mobile sources outside their local geographic area.

Taken together, these reports show a more subtle trend in how “news” can be manipulated and shaped by government spies and criminals. Last month I wrote about Facebook and disinformation-based political campaigns. Since then, Twitter has announced it is ending all political advertising. But the focus on fake news in the political sphere is a distraction. What we should understand is that the entire notion of how news is created and consumed is undergoing a major transition. It means we have to be a lot more skeptical of what news items are being shared in our social feeds and how we obtain facts. Move over Snopes.com: we need a completely new set of tools to vet the truth.

Let’s first look at the Shevtsov report on the criminally run news sites, for that is really the only way to think about them. These are just digital Potemkin villages: they look like real local news sites, but are just containers used by bots to generate clicks and ad revenue. Buzzfeed’s Craig Silverman provides a larger context in his analysis here. These sites gather traffic quickly, stick around for a year or so, and then fade away after generating millions of dollars in ad revenue. They take advantage of legitimate ad-serving operations, including Google’s AdSense, and of quirks in the organic search algorithms that feed them traffic.

This is a more insidious problem than seeing a couple of misleading articles in your social news feed for one reason: the operators of these sites aren’t trying to make some political statement. They just want to make money. They aren’t trying to fool real readers: indeed, these sites probably have few actual carbon life forms that are sitting at keyboards.

The second report, from Stanford, is also chilling. It documents the efforts of the GRU to misinform and mislead, using two methods.

— narrative laundering. This turns a claim into an accepted fact by repeating it through legit-sounding news sources that are themselves constructs of GRU operatives. The technique has gotten more sophisticated since another Russian effort, led by the Internet Research Agency (IRA), was uncovered during the Mueller investigation. That entity (which was also state-sponsored) specialized in launching social media sock puppets and creating avatars and fake accounts. The methods used by the GRU involved creating Facebook pages that look like think tanks and other media outlets. These “provided a home for original content on conflicts and politics around the world and a primary affiliation for sock puppet personas.” In essence, what the GRU is doing is “laundering” its puppets through six affiliated media front pages. The researchers identified Inside Syria Media Center, Crna Gora News Agency, Nbenegroup.com, The Informer, World News Observer, and Victory for Peace as being run by the GRU; posts from these fronts would subsequently be picked up by lazy or uncritical news sites.

What is interesting, though, is that the GRU wasn’t very thorough about creating these pages. Most of the original Facebook posts had no engagement whatsoever. “The GRU appears not to have done even the bare minimum to achieve peer-to-peer virality, with the exception of some Twitter networking, despite its sustained presence on Facebook. However, the campaigns were successful at placing stories from multiple fake personas throughout the alternative media ecosystem.” A good example of how the researchers figured all this out is how they tracked down who was really behind the Jelena Rakocevic/Jelena Rakcevic persona. “She” is a fake operative who purports to be a journalist with bylines on various digital news sites. In real life, she is a biology professor in Montenegro with a listed phone number that belongs to a Mercedes dealership.

— hack-and-leak capabilities. We are now sadly familiar with the various leak sites that have become popular across the interwebs. These benefitted from some narrative laundering as well. The GRU got Wikileaks and various mainstream US media outlets to pick up on their stories, making their operations more effective. What is interesting about the GRU methods is that they differed from those attributed to the IRA: “They used a more modern form of memetic propaganda—concise messaging, visuals with high virality potential, and provocative, edgy humor—rather than the narrative propaganda (long-form persuasive essays and geopolitical analysis) that is most prevalent in the GRU material.”

So what are you gonna do to become more critical? Librarians have been on the front lines of vetting fake news for years. Lyena Chavez of Merrimack College has four easy “tells” that she often sees:

  • The facts aren’t verifiable from the alleged sources quoted.
  • The story isn’t published in other credible news sources, although we have seen how the GRU can launder the story and make it more credible.
  • The author doesn’t have appropriate credentials or experience.
  • The story appeals to emotion rather than logic.

One document that is useful (and probably a lot more work than you signed up for) is this collection from her Merrimack colleague, Professor Melissa Zimdars. It has tips and various open source methods and sites that can help you in your own news vetting. If you want more, take a look at an entire curriculum that the Stony Brook J-school has assembled.

Finally, here are some tools from Buzzfeed reporter Jane Lytvynenko, who has collected them to vet her own stories.

 

RSA blog: Giving thanks and some thoughts on 2020

Thanksgiving is nearly upon us. And as we think about giving thanks, I remember a speech I put together 11 years ago that, somewhat tongue-in-cheek, gave thanks to Bill Gates (and by extension Microsoft) for creating the entire IT support industry. This was around the time that he retired from corporate life at Microsoft.

My speech took the tack that if it weren’t for leaky Windows operating systems and their APIs, many of us would be out of a job because everything would just work better. Well, obviously there are many vendors who share some of the blame besides Microsoft. And truthfully, Windows gets more than its share of attention because it is found on so many desktops and runs so many servers of our collective infrastructure.

Let’s extend things into the present and talk about what we in the modern-day IT world have to give thanks for. Certainly, things have evolved in the past decade, and mostly for the better: endpoints have a lot better protection and are a lot less leaky than your average OS of yesteryear.

You can read my latest blog post for RSA here about what else we have to be thankful for.

HPE blog: CISO faces breach on first day on the job

Most IT managers are familiar with the notion of a zero-day exploit or finding a new piece of malware or threat. But what is worse is not knowing for several months that your company has been hacked. That was the situation facing Jaya Baloo when she left her job as the chief information security officer (CISO) for Dutch mobile operator KPN and moved to Prague-based Avast. She literally walked into her first day on the job having to deal with a breach that had been active for months.

She has learned many things from her years as a security manager: place people above systems, don’t depend on prayer as a strategy, create a solid infrastructure plan, ignore compliance porn, and find the best ways to fight the bad guys. You can read my interview with her on HPE’s Enterprise.Nxt blog here.

Bob Metcalfe on credit, gratitude, and loyalty

For Bob Metcalfe, many things come in triples. His most successful company, 3Com, is one example. I met up with him recently and he told me, “You will be happier if you give and enjoy but not expect credit, gratitude, or loyalty.” Before I unpack that, let me tell you the story of how Bob and I first met.

This was in 1990, and I was about to launch Network Computing magazine for CMP. I was its first editor-in-chief, and it was a breakout job for me in many respects: I was fortunate to be able to set the overall editorial direction of the publication and hire a solid editorial and production team; it was the first magazine that CMP ever published using desktop technology; and it was the first time that I had built a test lab into the DNA of a B2B IT publication. Can you tell that I am still very proud of the pub? Yeah, there is that. Bob was one of our early columnists, and he was at the point in his career where he wanted to tell some stories about the development of his invention of Ethernet. We had a lot of fun getting these stories into print, and Bob told me that for many years those first columns of his had a place of honor in his home. Bob went on to write many more columns for other IT pubs and eventually became publisher of InfoWorld.

In addition to being a very clever inventor, Bob is also a master storyteller. One of his many sayings has since been enshrined as “Metcalfe’s law,” which says a network’s value is proportional to the square of its number of users or nodes. He is also infamous for wrongly predicting the collapse of the Internet in an InfoWorld column he wrote in December 1995. He called it a “gigalapse” and said it would happen the next year. When of course it didn’t come to pass, he ate a printed copy of his column.
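For those who want to see the arithmetic behind the law, here is a rough sketch of the usual formulation; the shorthand is mine, not Bob’s exact wording, and the constant of proportionality is whatever you assume a single connection is worth. A network of n nodes has n(n − 1)/2 possible pairwise connections, so its value grows roughly as the square of n:

value ∝ n(n − 1)/2 ≈ n²/2

Double the nodes from 10 to 20, for example, and the possible connections jump from 45 to 190, better than a fourfold increase. That squared growth is the whole point.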

Oh well, you can’t always be right, but he is usually very pithy and droll.

Let’s talk about his latest statement, about credit, gratitude and loyalty. Notice how he differentiates the give and take of the three elements: with Bob, it is always critical to understand the relationship of inputs and outputs.

Credit means being acknowledged for your achievements. “The trick is to get credit without claiming it,” says Metcalfe. Credit comes in many forms: validation from your peers, recognition by your profession, or even a short “attaboy” from your boss for a job well done. I can think of times in my career when I got credit for something that I wrote: a reader telling me I had explained something technical particularly well, or that I had spotted a trend that few had yet seen. But what Bob is telling us is to put the shoe on the other foot and give credit where and when it is due — output, rather than input. It is great to be acknowledged, but greater still if we cite those who deserve credit for their achievements. Going back to Network Computing, many of the people that I hired have gone on to do great things in the IT industry, and I continue to give them props for doing such wonderful work and for their contributions to our industry.

Gratitude is positive feedback: thanking someone for their efforts. Too often we forget to say thanks. I can think of many jobs that I have held over the years where my boss didn’t give out many thank-yous. But it is always better to give thanks to others than to expect it. Credit and gratitude are a tight bundle, to be sure.

Finally, there is loyalty. The dictionary defines this in a variety of ways, but one that I liked was “faithful to a cause, ideal, custom, institution, or product.” Too often we are expected to be faithful to something that starts out well but ends up poorly. Many times I have left jobs because the product team made some bad decisions, or because people whom I respected left out of frustration. If you are the boss, you can’t really demand loyalty, especially if you don’t show any gratitude or acknowledge credit for your staff’s achievements. “Loyalty is what you expect of your customers when your products are no longer competitive,” says Metcalfe.

I would be interested in your own reactions to what Bob said, and if you have examples from your own work life that you would like to share with others.

Red Hat blog: containers last mere moments, on average

You probably already knew that most of the containers created by developers are disposable, but did you realize that half of them are only around for less than five minutes, and a fifth of them last less than ten seconds? That and other fascinating details are available in the latest annual container report from Sysdig, a container security and orchestration vendor.

I mention that fun fact, along with other interesting trends in my latest blog post for Red Hat’s Developer site.

Adaptive access and step-up authentication with Thales SafeNet Trusted Access

SafeNet Trusted Access from Thales is an access management and authentication service. It allows organizations to migrate to the cloud simply and securely while helping to prevent data breaches and comply with regulations.

 

The MobilePass+ authenticator app is available on iPhones, Android smartphones and Windows desktops. More information here.

Pricing starts at $3/user/month for all tokens and services.

FIR B2B podcast #130: Don’t be fake!

The news earlier this month about Mitt Romney’s fake “Pierre Delecto” Twitter account once again brought fakery to the forefront. We discuss various aspects of fake news and what brands need to know to remain on point, honest and genuine to themselves. We first point out a study undertaken by North Carolina State researchers that found that the less people trust Facebook, the more skeptical they become of the news they see there. One lesson from the study is that brands should carefully choose how they rebut fake news.

Facebook is trying to figure out the best response to fake political ads, although it’s still far from doing an adequate job. A piece in BuzzFeed found that the social network has been inconsistent in applying its own corporate standards to decisions about what ads to run. These standards have nothing to do with whether the ads are factual and more to do with profanity or major user interface failures, such as misleading or non-clickable action buttons. More work is needed.

Finally, we discuss two MIT studies mentioned in Axios about how machine learning can’t easily flag fake news. We have mentioned before how easy it is for machines to now create news stories without much human oversight. But one weakness of ML recipes is that precise and unbiased training data need to be used. When training data contains bias, machines simply amplify it, as Amazon discovered last year. Building truly impartial training data sets requires special skills, and it’s never easy. (The image here, btw, is from “F for Fake,” a wonderful movie starring Orson Welles.)

Listen to the latest episode of our podcast here.

Red Hat Developer website editorial support

For the past several months, I have been working with the editorial team that manages the Red Hat Developers website. My role is to work with the product managers, the open source experts and the editors to rewrite product descriptions and place the dozens of Red Hat products into a more modern, developer-friendly and appropriate context. It has been fun to collaborate with a very smart and dedicated group. This work has been unbylined, but you can get an example of what I have done with this page on ODO and another page on CodeReady Containers.

Here is an example of a bylined article I wrote about container security for their blog.

An update on Facebook, disinformation and political censorship

Facebook CEO Mark Zuckerberg speaks at Georgetown University, Thursday, Oct. 17, 2019, in Washington. (AP Photo/Nick Wass)

Merriam-Webster defines sanctimonious as “hypocritically pious or devout.” Last week Mark Zuckerberg gave a speech at Georgetown University about Internet political advertising, the role of private tech companies in regulating free speech, and other topics. I found the speech quite fitting of that definition. There has been a lot of coverage elsewhere, so let me just hit the highlights. I would urge you all to watch his talk all the way through and draw your own conclusions.

Let’s first talk about censoring political ads. Many of you have heard that CNN removed a Trump ad last week: that was pretty unusual and doesn’t happen very often in TVland. Most TV stations are required by the FCC to run any political ad, as long as the ad discloses who paid for the spot. Zuck spoke about how Facebook wants to run all political ads and keep them around so we can examine the archive later. But this doesn’t mean that it allows every political ad to run. Facebook has its corporate equivalent of the TV stations’ “standards and practices” departments, and will pull ads that use profanity, include non-working buttons, or commit other such UI fails. Well, it isn’t quite so tidy, it appears.

One media site took them up on their policy. According to research done by BuzzFeed, Facebook removed more than 160 political ads posted in the first two weeks of October. More than 100 ads from Biden were removed, and 21 ads from Trump. BuzzFeed found that Facebook applied its ad removal policies unequally. Clearly, the company has some room to improve here, and should at least be consistent in its “standards.”

One problem is that unlike online ads, TV political ads are passive: you sit and watch them. Another is that online ads can be powerful demotivators and convince folks not to vote, which is what happened in the 2016 elections. One similarity, though, is the amount of money that advertisers spend. According to Politico, Facebook has already pocketed more than $50 million from 2020 candidates running ads on its platform. For a company that rakes in billions in overall ad revenue, this is a small number, but it is still important.

One final note about political ads. Facebook posted a story this week that showed new disinformation campaigns by Iranian and Russian state-sponsored groups. It announced new changes to its policy, to try to prevent foreign-led efforts to manipulate public debate in another country. Whether they will be successful remains to be seen. Part of the problem is how you define state-sponsored groups. For example, which is state-sponsored? Al Jazeera, France 24, RT, NPR and others all take government funding. Facebook will start labeling these outlets’ pages and providing information on whether their content is partially under government control.

Much was said about the First Amendment and freedom of speech. I heard many comments about Zuck’s talk that at least pointed out that this amendment applies only to the government’s regulation of speech, not to regulation by private companies. Another issue was mentioned by The Verge: “Zuckerberg presents Facebook’s platform as a neutral conduit for the dissemination of speech. But it’s not. We know that historically it has tended to favor the angry and the outrageous over the level-headed and inspiring.” Politico said that “On Facebook, the answer to harmful speech shouldn’t be more speech, as Zuckerberg’s formulation suggests; it should be to unplug the microphone and stop broadcasting it.” It also had a detailed play-by-play analysis of some of the points he made during his talk that is well worth reading.

“Disinformation makes struggles for justice harder,” said Slate’s April Glaser, who has been following the company’s numerous content and speech moderation missteps. “It often strands leaders of marginalized groups in the trap of constantly having to correct the record about details that have little to do with the issues they actually are trying to address.” Her post linked to several situations where Facebook posts harmed specific people, such as Rohingya Muslims in Myanmar.

After his speech, a group of 40 civil rights organizations called upon Facebook to “protect civil rights as a fundamental obligation as serious as any other goal of the company.” They claim that the company is reckless when it comes to its civil rights record and posted their letter here, which cites a number of other historical abuses, along with their recommended solutions.

Finally, Zuck spoke about how effective Facebook has been at eliminating fake accounts, which number in the billions, and pointed to this report from earlier this year. Too bad the report is very misleading. For example, “priority is given to detecting users and accounts that seek to cause harm,” yet only financial harm is mentioned. That observation comes from Megan Squire, a professor of computer science at Elon University who studies online radicalization and various related technical issues. “I would like to see numbers on how they deal with fake accounts used to amplify non-financial propaganda, such as hate speech and extremist content in Pages and Groups, both of which are rife with harmful content and non-authentic users. Facebook has gutted the ability for researchers to systematically study the platform via its own API.” Squire would like to see ways that outside researchers “could find and report additional campaigns, similarly to how security researchers find zero days, but Facebook is not interested in this approach.”

Zuck has a long history of apologia tours. Tomorrow he testifies before Congress yet again, this time with respect to housing and lending discrimination. Perhaps he will be a little more genuine this time around.

FIR B2B podcast #129: We’re Pleased and Excited to Tell You What People Don’t Know About Social Media

My podcast with Paul Gillin examines three articles that touch on various B2B marketing aspects. The first one, from Digiday, documents what the BBC went through to establish its fifth content vertical, called Future. The channel deals with health, wellness and sustainability, and it took a lot more effort than you might think to create. Branded content is driving a lot of page views at the Beeb, as the Brits lovingly refer to it, and the reason is all the work that the media company puts into its creation, working with ad partners, marketing teams and editors. An article on whether eating eggs is healthy brought in a million page views and had an average dwell time of five minutes, which is content gold.

The second piece is from Chris Penn, who does excellent marketing research. He came up with analytics that show several “happy words” — such as “pleased,” “excited,” “proud” and “thrilled” — litter the press release landscape, offering nothing in the way of real information. Does anyone really care if your CEO is having a good day because you just announced version 3.45 of some product? It might be time to eliminate these words entirely from your marketing lingo, have the language reflect reality more closely and perhaps get more reporters’ attention too.

Finally, we found this Pew Research survey that shows exactly how little the average adult knows about the digital marketing world. Pew gave more than 4,000 adults a 10-question quiz that asked things like what the “s” in “https” stands for, who owns Instagram and whether ads are a significant source of social media revenue. A huge chunk of respondents either answered incorrectly or didn’t know the answer.  Listen to our podcast here.