How the Red Cross provides social media leadership

I have been volunteering for the past several months with the American Red Cross, and I came across a series of documents, policies, and training materials about how they use social media that I thought I would share with you. I actually have two very different volunteer jobs with them. First, I work for our local chapter, driving blood to various hospital blood banks. Second, I work for the national office in DC, helping to produce a monthly webinar that is attended by hundreds of volunteers and employees involved in their disaster relief efforts. Note that these thoughts are my own, and not necessarily those of the Red Cross.

One thing that continually impresses me about the Red Cross is how well it partitions and structures the workflows of its volunteers. Even if you volunteer for a relatively low-level position, such as a front desk receptionist, there are manuals that guide what you do and when you do it. This isn’t surprising, given how many of us volunteers there are and how many volunteers hold key leadership roles directing its critical operations. Think about that for a moment: many non-profits give their volunteers the scut work (file these papers that have been lying around here for months). The Red Cross does the opposite, and it is often hard to distinguish between volunteers and staffers when you first meet someone.

A good case in point is my wife, who volunteered in their Santa Monica office years ago after the Katrina floods. Within weeks she was attending staff meetings and eventually she was hired as the chapter’s development director.

But let’s talk about social media, and my first point is the Red Cross’ social media guidelines, which take up all of a single page but contain lots of good advice. I thought I would share some of them with you as an example of what you should create for your own business. During my last webinar, Megan Weiler, the senior director of social media at their DC headquarters, gave a presentation on these guidelines and pointed out their six core principles of being a good social citizen:

  • Be human, meaning “be your friendly self and use good manners” – too often we tend to post from frustration or to try to right a wrong.
  • Be engaging, find others of similar interests and encourage thoughtful discussions.
  • Be accurate, make sure news items are verified and give credit for the content you got from someone or somewhere else.
  • Be honest, meaning if you mess up, fess up and do so quickly.
  • Be considerate, don’t start flame wars. If you have to disagree with someone, do it politely. Also, stay focused on the topic at hand.
  • Be safe. Protect your privacy and “be mindful of what you share online.”

These are all great things to keep in mind when you create your own social media posts for your company. What I like about this list is that it gives you the responsibility and boundaries to be successful at delivering messages using social media. Having written and spoken about these topics for more than a decade, I found it a very refreshing take. Too often corporations are heavy-handed about directing their employees’ use of social media. That heavy hand results in social media misfires or sock puppetry that doesn’t serve anyone well. (Take the Twitter account of a certain former White House staffer earlier this month as a case in point.)

Some corporations, like Dell, were early advocates of social media; Dell subsequently put together a central social media command center at its corporate offices outside of Austin. That may work well for them (I wrote this analysis of Dell’s effort back in 2011), and indeed the Red Cross has its own digital operations nerve center to help with its disaster relief efforts. But that is just one aspect of what the Red Cross does, and managing its gigantic global volunteer staff brings other circumstances and wider implications. The organization understands that social media engagement is a critical component of its operational DNA and that sharing a volunteer’s personal story is part of its mission.

You might wonder why I am driving blood around town. My reason was simple: it was an extension of the many years that I donated blood, and I liked being more involved and getting to understand the infrastructure that brings blood units to those who need them. It isn’t intellectually challenging – other than keeping track of where in each hospital the blood labs are located – but it deepens my involvement. (Did you notice how I just shared my personal story here?) BTW, for those of you who donate blood, thanks for helping out!

Finally, the Red Cross has a half-hour online training course on social media basics that is only available to volunteers. The class walks you through what social listening is all about and how to get more engaged in participating in social media as a Red Crosser. The class also draws a distinction between a volunteer implying they run an official Red Cross social media account and making it clear they represent only themselves. That is an important distinction.

The class goes into further details:

  • If you post anything about the Red Cross, make sure you disclose your role and use your real name. Disclose any vested self-interest and write about your own expertise.
  • Respect others’ dignity, privacy and confidences. Be sensitive to the community you are serving, and be cautious about sharing information before it is vetted.
  • “Remember if you are online, you are on the record.” This is probably the most important aspect of social media that many of us tend to forget.
  • Understand that your personal social media accounts are your identity. You should certainly include your corporate affiliation in your online bios but shouldn’t construct your Twitter handle around it. For example, create a handle such as @dstrom, rather than @redCrossStrom. Maintain the balance of what is personal and what is professional. Some companies want you to operate their social media accounts – while that could work in certain circumstances, the Red Cross wants you to be you.

How to prevent a data breach, lessons learned from the infosec vendors themselves

This fall there have been data breaches at the internal networks of several major security vendors. I had two initial thoughts when I first started hearing about these breaches: First, if the infosec vendors can’t keep their houses in order, how can ordinary individuals or non-tech companies stand a chance? And then I thought it would be useful to examine these breaches as powerful lessons for the rest of us. You see, the actual mechanics of what happens during the average breach usually aren’t well documented. Even the most transparent businesses with their breach notifications don’t really get down into the weeds. I studied these breaches and have come away with some recommendations for your own infosec practices.

The breaches happened at four vendors: NordVPN, Avast, Trend Micro, and ZoneAlarm.

You will notice a few common trends across these breaches. First is the delay in identifying the breach, and then notifying customers. It took NordVPN five weeks before they were notified by their datacenter provider, and they found out the attack was part of a larger campaign against the provider’s other VPN vendor customers. “The datacenter deleted the user accounts that the intruder had exploited rather than notify us.” It took Avast months to identify their breach: initially, IT staffers dismissed the unauthorized access as a false positive and ignored the logged entry; months later it was re-examined and determined to be malicious. It took Trend Micro two months to track down exactly what happened before the responsible employee was identified and then terminated.

Finally, about 4,000 users of a support forum were notified by ZoneAlarm about a data breach. The compromised data includes names, email addresses, hashed passwords, and birthdates. The issue was outdated forum software that hadn’t been patched to current versions. The breach happened at least several weeks before being noticed, and emails were sent out to affected users within 24 hours of the company figuring out the situation.

These delays are an issue for anyone. Remember, the EU, through GDPR, gives companies 72 hours to notify regulators, and regulators have issued some pretty big fines for companies that miss this deadline, such as British Airways.

Second is a question of relative transparency. Most of the vendors were very transparent about what happened and when. You’ll notice that for three out of the four situations, I have linked to the actual vendor’s blog posts that describe the breach and what they have done about it. The sole exception is ZoneAlarm, which has not posted any details publicly. The company is owned by Check Point, and while they have been very forthcoming with emails to reporters, that is still not the same as posting something online for the world to see.

Third is the issue that insider threats are real. Employees will always be the weakest link in any security strategy. With Trend, customer data (including telephone numbers but no payment data) was divulged by a rogue employee who sold the data from 68,000 customers in a support database to a criminal third party. This can happen to anyone, but you should contemplate how to make a leak such as this more difficult.

Finally, recovery, remediation and repair aren’t easy, even for the tech vendors that know what they are doing (at least most of the time). Part of the problem is first figuring out what actual harm was done, what the intruders did, and what gear has to be replaced. Avast’s blog post is the most instructive of the three and worth reading carefully. They have embarked on a major infrastructure replacement, as their CISO told me in a separate interview here. For example, they found that some of their TLS keys were obtained but not used. Avast then revoked and reissued various encryption certificates and pushed out updates of its various software products to ensure that they weren’t polluted or compromised by the attackers. Both Avast and NordVPN also launched massive internal audits to track what happened and to ensure that no other parts of their computing infrastructure were affected.
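If you want to check for yourself that a vendor really did roll its certificates after an incident, looking at the certificate a server currently presents is a quick sanity test. Here is a minimal Python sketch of that check (my own illustration, not anything Avast or NordVPN published); the host name is just a placeholder:

```python
# A minimal sketch: inspect the TLS certificate a server currently presents,
# so you can confirm a reissued certificate actually made it into production.
import hashlib
import socket
import ssl
from datetime import datetime, timezone

def cert_summary(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes
            info = tls.getpeercert()                 # parsed fields
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(info["notAfter"]), tz=timezone.utc
    )
    return {
        "sha256_fingerprint": hashlib.sha256(der).hexdigest(),
        "expires": expires.isoformat(),
        "issuer": info.get("issuer"),
    }

if __name__ == "__main__":
    # Replace with the site you actually want to verify.
    print(cert_summary("www.example.com"))
```

Run it before and after a vendor says it has reissued its certificates and the fingerprint should change.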

But part of the problem is that our computing infrastructures have become extremely complex. Even our own personal computer applications are impossible to navigate (just try setting up your Facebook privacy options in a single sitting). How many apps does the average person use these days? Can you honestly tell me there isn’t some cloud login you haven’t touched since 2010 whose password has since shown up in a breach? Now expand that to your average small business that allows its employees to bring their personal phones to work and their company laptops home and you have a nightmare waiting to happen: all it takes is one of your kids clicking on some dodgy link on your laptop, or you downloading an app to your phone, and it is game over. And as a friend of mine who uses a Mac found out recently, a short session on an open Wi-Fi network can infect your computer. (Macs aren’t immune, despite popular folklore.)
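Speaking of stale logins with breached passwords, you can check a password against the public Have I Been Pwned “Pwned Passwords” database without ever sending the password itself, thanks to its k-anonymity range API. Here is a rough Python sketch of how that works (my own example, not tied to any vendor mentioned above):

```python
# Rough sketch: check a password against the public Pwned Passwords range
# API. Only the first five characters of the SHA-1 hash leave your machine.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times this password shows up in known breach dumps."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line looks like "<hash-suffix>:<count>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = breach_count("password123")
    print(f"Seen in {n} breaches" if n else "Not found in known breaches")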

So I will leave you with a few words of hope. Study these breaches and use them as lessons to improve your own infosec, both corporate and personal. Treat all third-party sources of technology as if they are your own and ask these vendors and suppliers the hard questions about their security posture. Make sure your business has a solid notification plan in place and test it regularly as part of your normal disaster recovery processes. Trust nothing at face value, and if your tech suppliers don’t measure up find new ones that will. And as you have heard me say before, tighten up all your own logins with smartphone-based authentication apps and password managers, and use a VPN when you are on a public network.

FIR B2B podcast #131: How to Run Webcasts and Video Calls

Both Paul Gillin and I have run and participated in various webinars and online meetings over the years. For this podcast episode, we share some of our best practices. There are several things you can do to have great meetings. First is preparing your speakers and planning the presentation. Do you have the right kind of slide deck? With our in-person speaking gigs, we try to minimize the text on our slides and instead provide more of an experience and set the mood. For a webinar where you don’t necessarily see your audience, your slides serve more as your speaking notes, so your audience can take away your thoughts and remember your major points.

I produce a monthly webinar for the Red Cross that has a dozen speakers and an audience of several hundred. To pull this off with minimal technical issues, my team has put together a lengthy document that recommends how speakers connect (watch for poor Wi-Fi and don’t use speakerphones) and describes the various roles that different people play during the conference call (master of ceremonies, moderator, time keeper, slide wrangler, presenter recruiter, chat and notes helpers). Paul and I both suggest using a common slide deck for all speakers, which means getting the slides in order prior to the meeting. Also, with more than a couple of presenters you should test your speakers’ audio connections too; both of us have had more problems with wonky audio than video. And settle on a protocol for whether or not to show your face when the meeting starts (and check to see if you are appropriately dressed).

Both of us feel you should always start your meetings promptly: you don’t want to waste time waiting for stragglers. Neither of us particularly likes Skype for Business, although “regular” Skype is fine (most of the time), and we have also used GoToMeeting and Zoom.

Here is an example of a recent speech I gave to an audience of local government IT managers. I also have lots of other tips on how to do more than meetings and improve team collaboration here.

If you would like to listen to our 16 minute podcast, click below:

Good luck with running your own online meetings, and please share your own tips and best practices as comments. And enjoy the video below.

Further misadventures in fake news

The term fake news is used by many but widely misunderstood. It has gained notoriety as a term of derision from political figures about mainstream media outlets. But when you look closer, you can see there are many other forms that are much more subtle and far more dangerous. The public relations firm Ogilvy wrote about several different types of fake news (satire, misinformation, sloppy reporting, and purposely deceptive content).

But that really doesn’t help matters, especially in the modern era of state-sponsored fake news. We used to call this propaganda back when I was growing up. To better understand this modern context, I suggest you examine two new reports that present a more deliberate analysis and discussion:

  • The first is by Renee Diresta and Shelby Grossman for Stanford University’s Internet Observatory project, called Potemkin Pages and Personas, Assessing GRU Online Operations. It documents two methods of Russia’s military intelligence agency, commonly called the GRU: narrative laundering and hacking-and-leaking false data. I’ll get into these methods in a moment. For those of you who don’t know the reference, a Potemkin village was a fake village built in the late 1700s to impress a Russian monarch, who would pass by a region and be fooled into thinking there were actual people living there. It was a stage set with facades and actors dressed as inhabitants.
  • The second report, titled Simulated media assets: local news, is from Vlad Shevtsov, a Russian researcher who has investigated several seemingly legitimate local news sites in Albany, New York (shown below) and Edmonton, Alberta. These sites constructed their news pages out of evergreen articles and other service pieces that have attracted millions of page views, according to their analytics. Yet they have curious characteristics, such as being viewed almost completely from mobile sources outside their local geographic areas.

Taken together, this shows a more subtle trend towards how “news” can be manipulated and shaped by government spies and criminals. Last month I wrote about Facebook and disinformation-based political campaigns. Since then Twitter announced they were ending all political advertising. But the focus on fake news in the political sphere is a distraction. What we should understand is that the entire notion of how news is being created and consumed is undergoing a major transition. It means we have to be a lot more skeptical of what news items are being shared in our social feeds and how we obtain facts. Move over Snopes.com, we need a completely new set of tools to vet the truth.

Let’s first look at the Shevtsov report on the criminal-based news sites, for that is really the only way to think about them. These are just digital Potemkin villages: they look like real local news sites, but are just containers to be used by bots to generate clicks and ad revenue. Buzzfeed’s Craig Silverman provides a larger context in his analysis here. These sites gather traffic quickly, stick around for a year or so, and then fade away, after generating millions of dollars in ad revenues. They take advantage of legitimate ad serving operations, including Google’s AdSense, and quirks in the organic search algorithms that feed them traffic.

This is a more insidious problem than seeing a couple of misleading articles in your social news feed for one reason: the operators of these sites aren’t trying to make some political statement. They just want to make money. They aren’t trying to fool real readers: indeed, these sites probably have few actual carbon-based life forms sitting at keyboards.

The second report, from Stanford, is also chilling. It documents the efforts of the GRU to misinform and mislead, using two methods.

— narrative laundering. This turns something into a “fact” by repetition through legit-sounding news sources that are themselves constructs of GRU operatives. This has gotten more sophisticated since another Russian effort led by the Internet Research Agency (IRA) was uncovered during the Mueller investigation. That entity (which was also state-sponsored) specialized in launching social media sock puppets and creating avatars and fake accounts. The methods used by the GRU involved creating Facebook pages that look like think tanks and other media outlets. These “provided a home for original content on conflicts and politics around the world and a primary affiliation for sock puppet personas.” In essence, what the GRU is doing is “laundering” their puppets through six affiliated media front pages. The researchers identified Inside Syria Media Center, Crna Gora News Agency, Nbenegroup.com, The Informer, World News Observer, and Victory for Peace as being run by the GRU, where their posts would be subsequently picked up by lazy or uncritical news sites.

What is interesting though is that the GRU wasn’t very thorough about creating these pages. Most of the original Facebook posts had no engagements whatsoever. “The GRU appears not to have done even the bare minimum to achieve peer-to-peer virality, with the exception of some Twitter networking, despite its sustained presence on Facebook. However, the campaigns were successful at placing stories from multiple fake personas throughout the alternative media ecosystem.” A good example of how the researchers figured all this out was how they tracked down who really was behind the Jelena Rakocevic/Jelena Rakcevic persona. “She” is really a fake operative that purports to be a journalist with bylines on various digital news sites. In real life, she is a biology professor in Montenegro with a listed phone number for a Mercedes dealership.

— hack-and-leak capabilities. We are now sadly familiar with the various leak sites that have become popular across the interwebs. These benefitted from some narrative laundering as well. The GRU got Wikileaks and various mainstream US media outlets to pick up on their stories, making their operations more effective. What is interesting about the GRU methods is that they differed from those attributed to the IRA: “They used a more modern form of memetic propaganda—concise messaging, visuals with high virality potential, and provocative, edgy humor—rather than the narrative propaganda (long-form persuasive essays and geopolitical analysis) that is most prevalent in the GRU material.”

So what are you gonna do to become more critical? Librarians have been on the front lines of vetting fake news for years. Lyena Chavez of Merrimack College has four easy “tells” that she often sees:

  • The facts aren’t verifiable from the alleged sources quoted.
  • The story isn’t published in other credible news sources, although we have seen how the GRU can launder the story and make it more credible.
  • The author doesn’t have appropriate credentials or experience.
  • The story has an emotional appeal, rather than logic.

One document that is useful (and probably a lot more work than you signed up for) is this collection from her Merrimack colleague, Professor Melissa Zimdars. She has tips and various open source methods and sites that can help you in your own news vetting. If you want more, take a look at an entire curriculum that the Stony Brook J-school has assembled.

Finally, here are some tools from Buzzfeed reporter Jane Lytvynenko, who has collected them to vet her own stories.


RSA blog: Giving thanks and some thoughts on 2020

Thanksgiving is nearly upon us. And as we think about giving thanks, I remember when, 11 years ago, I put together a speech that somewhat tongue-in-cheek gave thanks to Bill Gates (and by extension Microsoft) for creating the entire IT support industry. This was around the time that he retired from corporate life at Microsoft.

My speech took the tack that if it wasn’t for leaky Windows OSes and their APIs, many of us would be out of a job because everything would just work better. Well, obviously there are many vendors who share some of the blame besides Microsoft. And truthfully, Windows gets more than its share of attention because it is found on so many desktops and runs so many of the servers in our collective infrastructure.

Let’s extend things into the present and talk about what we in the modern-day IT world have to give thanks for. Certainly, things have evolved in the past decade, and mostly for the better: endpoints have a lot better protection and are a lot less leaky than your average OS of yesteryear.

You can read my latest blog post for RSA here about what else we have to be thankful for.

HPE blog: CISO faces breach on first day on the job

Most IT managers are familiar with the notion of a zero-day exploit or finding a new piece of malware or threat. But what is worse is not knowing that your company has been hacked for several months. That was the situation facing Jaya Baloo when she left her job as the chief information security officer (CISO) for Dutch mobile operator KPN and moved to Prague-based Avast. She literally walked into her first day on the job having to deal with a breach that had been active for months.

She has learned many things from her years as a security manager, including how to place people above systems, not to depend on prayer as a strategy, how to create a solid infrastructure plan, why to ignore compliance porn, and the best ways to fight the bad guys. You can read my interview with her on HPE’s Enterprise.Nxt blog here.

Bob Metcalfe on credit, gratitude, and loyalty

For Bob Metcalfe, many things come in triples. His most successful company, 3Com, is one example. I met up with him recently and he told me, “You will be happier if you give and enjoy but not expect credit, gratitude, or loyalty.” Before I unpack that, let me tell you the story of how Bob and I first met.

This was in 1990 and I was about to launch Network Computing magazine for CMP. I was its first editor-in-chief and it was a breakout job for me in many respects: I was fortunate to be able to set the overall editorial direction of the publication and hire a solid editorial and production team; it was the first magazine that CMP ever published using desktop technology; and it was the first time that I had built a test lab into the DNA of a B2B IT publication. Can you tell that I am still very proud of the pub? Yeah, there is that. Bob was one of our early columnists, and he was at the point in his career where he wanted to tell some stories about the development of his invention of Ethernet. We had a lot of fun getting these stories into print and Bob told me that for many years those first columns of his had a place of honor in his home. Bob went on to write many more columns for other IT pubs and eventually became publisher of Infoworld.

In addition to being a very clever inventor, Bob is also a master storyteller. One of his many sayings has since been enshrined as “Metcalfe’s law,” which says the value of a network is proportional to the square of the number of its users or nodes. He is also infamous for wrongly predicting the collapse of the Internet in an Infoworld column he wrote in December 1995. He called it a “gigalapse” and said it would happen the next year. When of course it didn’t come to pass, he ate the printed copy of his column.

Oh well, you can’t always be right, but he is usually very pithy and droll.

Let’s talk about his latest statement, about credit, gratitude and loyalty. Notice how he differentiates the give and take of the three elements: with Bob, it is always critical to understand the relationship of inputs and outputs.

Credit means being acknowledged for your achievements. “The trick is to get credit without claiming it,” says Metcalfe. Credit comes in many forms: validation from your peers, recognition by your profession, or even a short “attaboy” from your boss for a job well done. I can think of the times in my career when I got credit for something that I wrote about: a fine explanation of something technical by one of my readers, or spotting a trend that few had yet seen. But what Bob is telling us is to put the shoe on the other foot, and give credit where and when it is due — output, rather than input. It is great to be acknowledged, but greater still if we cite those that deserve credit for their achievements. Going back to Network Computing, many of the people that I hired have gone on to do great things in the IT industry, and I continue to give them props for doing such wonderful work and for their contributions to our industry.

Gratitude is positive feedback: thanking someone for their efforts. Too often we forget to say thanks. I can think of many jobs that I have held over the years where my boss didn’t give out many thank yous. But it is always better to give thanks to others than to expect it. Credit and gratitude are a tight bundle to be sure.

Finally, there is loyalty. The dictionary defines this in a variety of ways, but one that I liked was “faithful to a cause, ideal, custom, institution, or product.” Too often we are expected to be faithful to something that starts out well but ends up poorly. Many times I have left jobs because the product team made some bad decisions, or because people whom I respected left out of frustration. If you are the boss, you can’t really demand loyalty, especially if you don’t show any gratitude or acknowledge credit for your staff’s achievements. “Loyalty is what you expect of your customers when your products are no longer competitive,” says Metcalfe.

I would be interested in your own reactions to what Bob said, and if you have examples from your own work life that you would like to share with others.

Red Hat blog: containers last mere moments, on average

You probably already knew that most of the containers created by developers are disposable, but did you realize that half of them are only around for less than five minutes, and a fifth of them last less than ten seconds? That and other fascinating details are available in the latest annual container report from Sysdig, a container security and orchestration vendor.
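Out of curiosity, you can approximate that statistic on your own machine. The sketch below is my own quick illustration (not how Sysdig collects its data); it assumes a local Docker daemon and the docker CLI on your PATH, and computes how long each exited container lived:

```python
# Rough sketch: measure lifetimes of your own exited Docker containers.
import json
import subprocess
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # Docker emits RFC 3339 timestamps with nanoseconds; seconds are enough here.
    return datetime.strptime(ts[:19], "%Y-%m-%dT%H:%M:%S")

def exited_container_lifetimes():
    ids = subprocess.run(
        ["docker", "ps", "-aq", "--filter", "status=exited"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    lifetimes = []
    for cid in ids:
        inspect = subprocess.run(
            ["docker", "inspect", cid],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(inspect)[0]["State"]
        delta = parse_ts(state["FinishedAt"]) - parse_ts(state["StartedAt"])
        lifetimes.append(delta.total_seconds())
    return lifetimes

if __name__ == "__main__":
    spans = sorted(exited_container_lifetimes())
    if spans:
        print(f"{len(spans)} exited containers, median lifetime {spans[len(spans)//2]:.0f}s")
```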

I mention that fun fact, along with other interesting trends in my latest blog post for Red Hat’s Developer site.

Adaptive access and step-up authentication with Thales SafeNet Trusted Access

SafeNet Trusted Access from Thales is an access management and authentication service. It allows organizations to migrate to the cloud simply and securely while helping to prevent data breaches and comply with regulations.
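To make the idea of adaptive access and step-up authentication concrete, here is a purely illustrative Python sketch of the kind of decision such a service makes: score the risk of a login from its context, then either let it through, demand a second factor, or block it. This is my own simplification, not Thales’s actual policy engine or API; the names and thresholds are made up.

```python
# Illustrative only: the general shape of an adaptive access decision.
from dataclasses import dataclass

@dataclass
class LoginContext:
    user: str
    known_device: bool       # has this device been seen before?
    corporate_network: bool  # is the request coming from a trusted network?
    country: str             # where the login originates
    home_country: str        # where the user normally logs in from

def access_decision(ctx: LoginContext) -> str:
    risk = 0
    if not ctx.known_device:
        risk += 2   # unfamiliar device
    if not ctx.corporate_network:
        risk += 1   # off-network login
    if ctx.country != ctx.home_country:
        risk += 2   # unusual location
    if risk == 0:
        return "allow"     # low risk: single sign-on proceeds silently
    if risk <= 3:
        return "step-up"   # medium risk: prompt for an OTP or push approval
    return "deny"          # high risk: block the attempt and alert

if __name__ == "__main__":
    ctx = LoginContext("dstrom", known_device=False, corporate_network=False,
                       country="RO", home_country="US")
    print(access_decision(ctx))  # prints "deny" for this risky combination
```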


MobilePass+ is available on iPhones and Android smartphones and Windows desktops. More information here. 

Pricing starts at $3/user/month for all tokens and services.

FIR B2B podcast #130: Don’t be fake!

The news earlier this month about Mitt Romney’s fake “Pierre Delecto” Twitter account once again brought fakery to the forefront. We discuss various aspects of fake news and what brands need to know to remain on point, honest and genuine to themselves. We first point out a study undertaken by North Carolina State researchers that found that the less people trust Facebook, the more skeptical they become of the news they see there. One lesson from the study is that brands should carefully choose how they rebut fake news.

Facebook is trying to figure out the best response to fake political ads, although it’s still far from doing an adequate job. A piece in BuzzFeed found that the social network has been inconsistent in applying its own corporate standards to decisions about what ads to run. These standards say nothing about whether the ads are factual and have more to do with profanity or major user interface failures such as misleading or non-clickable action buttons. More work is needed.

Finally, we discuss two MIT studies mentioned in Axios about how machine learning can’t easily flag fake news. We have mentioned before how easy it is for machines to now create news stories without much human oversight. But one weakness of ML recipes is that precise and unbiased training data need to be used. When training data contains bias, machines simply amplify it, as Amazon discovered last year. Building truly impartial training data sets requires special skills, and it’s never easy. (The image here, by the way, is from the wonderful Orson Welles movie “F for Fake.”)

Listen to the latest episode of our podcast here.