How to prevent a data breach: lessons learned from the infosec vendors themselves

This fall there have been data breaches at the internal networks of several major security vendors. I had two initial thoughts when I first started hearing about these breaches. First, if the infosec vendors can’t keep their own houses in order, how can ordinary individuals or non-tech companies stand a chance? Second, these breaches could serve as powerful lessons for the rest of us. The actual mechanics of what happens during the average breach usually aren’t well documented; even the most transparent businesses don’t really get down into the weeds in their breach notifications. I studied these breaches and have come away with some recommendations for your own infosec practices.

The breaches are:

You will notice a few common trends across these breaches. First is the delay in identifying the breach, and then notifying customers. It took five weeks before NordVPN was notified by its datacenter provider, and the company then learned the intrusion was part of a broader attack on the provider’s other VPN customers. “The datacenter deleted the user accounts that the intruder had exploited rather than notify us.” It took Avast months to identify its breach: initially, IT staffers dismissed the unauthorized access as a false positive and ignored the logged entry, and only months later was it re-examined and determined to be malicious. It took Trend Micro two months to track down exactly what happened before the rogue employee was identified and terminated.

Finally, about 4,000 users of a support forum were notified by ZoneAlarm about a data breach. The compromised data includes names, email addresses, hashed passwords, and birthdates. The cause was outdated forum software that hadn’t been patched to the current version. The breach happened at least several weeks before it was noticed; emails were sent to affected users within 24 hours of discovery.
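The mention of “hashed passwords” matters more than it might seem: how well leaked hashes resist cracking depends entirely on the hashing scheme. ZoneAlarm hasn’t said what its forum software used, so purely as an illustration, here is a minimal sketch of salted, deliberately slow password hashing using Python’s standard library (the function names are my own):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted, deliberately slow hash (PBKDF2-HMAC-SHA256).

    A unique per-user salt defeats precomputed "rainbow table" attacks;
    the high iteration count slows down offline brute-force guessing."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, stored)
```

Forum packages that store fast, unsalted hashes (plain MD5 or SHA-1) leave users exposed the moment the database leaks, which is one more reason to keep that software patched.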

These delays are an issue for anyone. Remember, the EU, through GDPR, gives companies 72 hours to notify regulators. Regulators have issued some pretty big fines for companies that miss this deadline, such as British Airways.
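To make that deadline concrete, here is a trivial sketch of the 72-hour clock; GDPR Article 33 starts it when the company becomes aware of the breach (the timestamps below are made up):

```python
from datetime import datetime, timedelta, timezone

GDPR_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at):
    """GDPR Art. 33: notify the supervisory authority within 72 hours
    of becoming aware of a personal-data breach."""
    return aware_at + GDPR_WINDOW

aware = datetime(2019, 11, 1, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2019-11-04 09:30:00+00:00
```

Compare that 72-hour window with the weeks and months of delay described above.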

Second is a question of relative transparency. Most of the vendors were very transparent about what happened and when. You’ll notice that for three out of the four situations I have linked to the actual vendor’s blog posts that describe the breach and what they have done about it. The sole exception is ZoneAlarm, which has not posted any details publicly. The company is owned by Check Point, and while they have been very forthcoming in emails to reporters, that is still not the same as posting something online for the world to see.

Third is the issue that insider threats are real. Employees will always be the weakest link in any security strategy. With Trend, customer data (including telephone numbers but no payment data) was divulged by a rogue employee who sold the data from 68,000 customers in a support database to a criminal third party. This can happen to anyone, but you should contemplate how to make a leak such as this more difficult.

Finally, recovery, remediation and repair aren’t easy, even for the tech vendors that know what they are doing (at least most of the time). Part of the problem is first figuring out what actual harm was done, what the intruders did and what gear has to be replaced. Avast’s blog post is the most instructive of the three and worth reading carefully. They have embarked on a major infrastructure replacement, as their CISO told me in a separate interview here. For example, they found that some of their TLS keys were obtained but not used. Avast then revoked and reissued various encryption certificates and pushed out updates of its various software products to ensure that they weren’t polluted or compromised by the attackers. Both Avast and NordVPN also launched massive internal audits to track what happened and to ensure that no other parts of their computing infrastructure were affected.

But part of the problem is that our computing infrastructures have become extremely complex. Even our own personal computer applications are impossible to navigate (just try setting up your Facebook privacy options in a single sitting). How many apps does the average person use these days? Can you honestly tell me that there is some cloud login that you haven’t used since 2010 that doesn’t have a breached password? Now expand that to your average small business that allows its employees to bring their personal phones to work and their company laptops home and you have a nightmare waiting to happen: all it takes is one of your kids clicking on some dodgy link on your laptop, or you downloading an app to your phone, and it is game over. And as a friend of mine who uses a Mac found out recently, a short session on an open Wi-Fi network can infect your computer. (Macs aren’t immune, despite popular folklore.)

So I will leave you with a few words of hope. Study these breaches and use them as lessons to improve your own infosec, both corporate and personal. Treat all third-party sources of technology as if they are your own and ask these vendors and suppliers the hard questions about their security posture. Make sure your business has a solid notification plan in place and test it regularly as part of your normal disaster recovery processes. Trust nothing at face value, and if your tech suppliers don’t measure up find new ones that will. And as you have heard me say before, tighten up all your own logins with smartphone-based authentication apps and password managers, and use a VPN when you are on a public network.

FIR B2B podcast #131: How to Run Webcasts and Video Calls

Both Paul Gillin and I have run and participated in various webinars and online meetings over the years. For this podcast episode, we share some of our best practices. There are several things you can do to have great meetings. First is preparing your speakers and planning the presentation. Do you have the right kind of slide deck? For our in-person speaking gigs, we try to minimize the text on our slides, aiming to set a mood and provide more of an experience. For a webinar, where you don’t necessarily see your audience, your slides serve more as speaking notes, so your audience can take away your thoughts and remember your major points.

I produce a monthly webinar for the Red Cross that has a dozen speakers and an audience of several hundred. To pull this off with minimal technical issues, my team has put together a lengthy document that recommends how speakers connect (watch for poor Wi-Fi and don’t use speakerphones) and describes the various roles that different people play during the conference call (master of ceremonies, moderator, time keeper, slide wrangler, presenter recruiter, chat and notes helpers). Paul and I both suggest using a common slide deck for all speakers, which means getting the slides in order prior to the meeting. Also, with more than a couple of presenters you should test your speakers’ audio connections too; both of us have had more problems with wonky audio than video. And settle on a protocol for whether or not to show your face when the meeting starts (and check to see if you are appropriately dressed).

Both of us feel you should always start your meetings promptly: you don’t want to waste time waiting for stragglers. Neither of us particularly likes Skype for Business, although “regular” Skype is fine (most of the time), and we have also used GoToMeeting and Zoom.

Here is an example of a recent speech I gave to an audience of local government IT managers. I also have lots of other tips on how to do more than meetings and improve team collaboration here.

If you would like to listen to our 16-minute podcast, click below:

Good luck with running your own online meetings, and please share your own tips and best practices as comments. And enjoy the video below.

Further misadventures in fake news

The term fake news is used by many but misunderstood. It has gained notoriety as a term of derision from political figures about mainstream media outlets. But when you look closer, you can see there are many other forms that are much more subtle and far more dangerous. The public relations firm Ogilvy wrote about several different types of fake news (satire, misinformation, sloppy reporting and purposely deceptive).

But that really doesn’t help matters, especially in the modern era of state-sponsored fake news. We used to call this propaganda back when I was growing up. To better understand this modern context, I suggest you examine two new reports that present a more deliberate analysis and discussion:

  • The first is by Renee Diresta and Shelby Grossman for Stanford University’s Internet Observatory project, called Potemkin Pages and Personas: Assessing GRU Online Operations. It documents two methods used by Russia’s military intelligence agency, commonly called the GRU: narrative laundering and hacking-and-leaking false data. I’ll get into these methods in a moment. For those of you who don’t know the reference, a Potemkin village was a fake village built in the late 1700s to impress a Russian monarch, who would pass by a region and be fooled into thinking actual people lived there. It was a stage set with facades and actors dressed as inhabitants.
  • The second report is titled Simulated media assets: local news, from Vlad Shevtsov, a Russian researcher who has investigated several seemingly legitimate local news sites in Albany, New York (shown below) and Edmonton, Alberta. These sites constructed their news pages out of evergreen articles and other service pieces that have attracted millions of page views, according to analytics. Yet they have curious characteristics, such as being viewed almost entirely from mobile sources outside their local geographic areas.

Taken together, this shows a more subtle trend towards how “news” can be manipulated and shaped by government spies and criminals. Last month I wrote about Facebook and disinformation-based political campaigns. Since then Twitter announced they were ending all political advertising. But the focus on fake news in the political sphere is a distraction. What we should understand is that the entire notion of how news is being created and consumed is undergoing a major transition. It means we have to be a lot more skeptical of what news items are being shared in our social feeds and how we obtain facts. Move over Snopes.com, we need a completely new set of tools to vet the truth.

Let’s first look at the Shevtsov report on the criminal-based news sites, for that is really the only way to think about them. These are just digital Potemkin villages: they look like real local news sites, but are just containers to be used by bots to generate clicks and ad revenue. Buzzfeed’s Craig Silverman provides a larger context in his analysis here. These sites gather traffic quickly, stick around for a year or so, and then fade away, after generating millions of dollars in ad revenues. They take advantage of legitimate ad serving operations, including Google’s AdSense, and quirks in the organic search algorithms that feed them traffic.

This is a more insidious problem than seeing a couple of misleading articles in your social news feed for one reason: the operators of these sites aren’t trying to make some political statement. They just want to make money. They aren’t trying to fool real readers: indeed, these sites probably have few actual carbon life forms that are sitting at keyboards.

The second report, from Stanford, is also chilling. It documents the efforts of the GRU to misinform and mislead, using two methods.

— narrative laundering. This makes something into a fact by repetition through legit-sounding news sources that are themselves constructs of GRU operatives. This has gotten more sophisticated since another Russian effort, led by the Internet Research Agency (IRA), was uncovered during the Mueller investigation. That entity (which was also state-sponsored) specialized in launching social media sock puppets and creating avatars and fake accounts. The methods used by the GRU involved creating Facebook pages that look like think tanks and other media outlets. These “provided a home for original content on conflicts and politics around the world and a primary affiliation for sock puppet personas.” In essence, what the GRU is doing is “laundering” their puppets through six affiliated media front pages. The researchers identified Inside Syria Media Center, Crna Gora News Agency, Nbenegroup.com, The Informer, World News Observer, and Victory for Peace as being run by the GRU; their posts would subsequently be picked up by lazy or uncritical news sites.

What is interesting, though, is that the GRU wasn’t very thorough about creating these pages. Most of the original Facebook posts had no engagements whatsoever. “The GRU appears not to have done even the bare minimum to achieve peer-to-peer virality, with the exception of some Twitter networking, despite its sustained presence on Facebook. However, the campaigns were successful at placing stories from multiple fake personas throughout the alternative media ecosystem.” A good example of how the researchers figured all this out is how they tracked down who was really behind the Jelena Rakocevic/Jelena Rakcevic persona. “She” is really a fake operative that purports to be a journalist with bylines on various digital news sites. In real life, she is a biology professor in Montenegro with a listed phone number for a Mercedes dealership.

— hack-and-leak capabilities. We are now sadly familiar with the various leak sites that have become popular across the interwebs. These benefitted from some narrative laundering as well: the GRU got Wikileaks and various mainstream US media outlets to pick up on their stories, making their operations more effective. What is interesting about the GRU methods is that they differed from those attributed to the IRA: “They used a more modern form of memetic propaganda—concise messaging, visuals with high virality potential, and provocative, edgy humor—rather than the narrative propaganda (long-form persuasive essays and geopolitical analysis) that is most prevalent in the GRU material.”

So what can you do to become more critical? Librarians have been on the front lines of vetting fake news for years. Lyena Chavez of Merrimack College has four easy “tells” that she often sees:

  • The facts aren’t verifiable from the alleged sources quoted.
  • The story isn’t published in other credible news sources, although we have seen how the GRU can launder the story and make it more credible.
  • The author doesn’t have appropriate credentials or experience.
  • The story makes an emotional appeal rather than a logical one.

One document that is useful (and probably a lot more work than you signed up for) is this collection from her Merrimack colleague, Professor Melissa Zimdars. She has tips and various open source methods and sites that can help you in your own news vetting. If you want more, take a look at an entire curriculum that the Stony Brook J-school has assembled.

Finally, here are some tools from Buzzfeed reporter Jane Lytvynenko, who has collected them to vet her own stories.

 

RSA blog: Giving thanks and some thoughts on 2020

Thanksgiving is nearly upon us. As we think about giving thanks, I remember that 11 years ago I put together a somewhat tongue-in-cheek speech giving thanks to Bill Gates (and by extension, Microsoft) for creating the entire IT support industry. This was around the time he retired from corporate life at Microsoft.

My speech took the tack that if it weren’t for leaky Windows OSes and their APIs, many of us would be out of a job because everything would just work better. Obviously, many other vendors share some of the blame besides Microsoft. And truthfully, Windows gets more than its share of attention because it is found on so many desktops and runs so many servers of our collective infrastructure.

Let’s extend things into the present and talk about what we in the modern-day IT world have to give thanks for. Certainly, things have evolved in the past decade, and mostly for the better: endpoints have much better protection and are far less leaky than your average Windows XP desktop of yesteryear. We have more secure productivity tools, and most can operate from the cloud with a variety of desktop, laptop and mobile devices. We have better security automation, detection and remediation methods too. We also can be more mobile and obtain an Internet or Wi-Fi signal in more remote places, making our jobs easier as we move around the planet. All of these are things to be thankful for, and many of us (myself included) often take them for granted.

What about looking forward? If I look at the predictions that I made a year ago, most of them have withstood the test of time.

Let’s start off with my biggest fail from 2018: I totally blew the call on cryptomining attacks trending upwards. At least I wasn’t alone; other December 2018 prediction lists mentioned the same trend. However, the exact opposite actually happened, and numerous reports showed a decline in cryptomining during 2019. One reasonable cause was the shuttering of the Coinhive operation in March. I am glad that this happened, and the lower rate of these attacks is another thing to be thankful for!

As I predicted, a number of good things have been happening on the authentication front in the past year. As I touched on in my post last month, a number of the single sign-on vendors’ multi-factor authentication (MFA) products have seen significant improvement. This includes better FIDO integration and better smartphone authentication tools. For example, RSA has its SecurID Access product that combines MFA and risk-based authentication methods. All of these items are things we can be thankful for, and hopefully more security managers will implement MFA in the coming months across their networks and applications.

Ransomware continues to be a threat, as I mentioned in my blog post last December and as concluded in the latest RSA fraud report here. Sadly, criminals continue to latch on to ransoms as a very profitable source of funds. This year we saw new ransomware vectors into the software supply chain, with the Sodinokibi malware hitting more than 20 different local Texas government IT operations thanks to a vulnerability in a managed endpoint service. By tracking specific Bitcoin deposits made to the criminals, the latest report shows this malware has brought in more than $4.5M in ill-gotten gains.

Clearly we have made some significant progress in the past year, and even in the past decade. But with all these innovations come new risks too. Criminals aren’t standing still; they are figuring out new ways to breach our defenses. And there are still thousands of unfilled infosec jobs, as skilled security analysts remain in demand. Hopefully, that is something we can do something about in the coming year.

HPE blog: CISO faces breach on first day on the job

Avast CISO Jaya Baloo has learned many lessons from her years as a security manager, including how to place people above systems, how to create a solid infrastructure plan, and the best ways to fight the bad guys.

Most IT managers are familiar with the notion of a zero-day exploit or finding a new piece of malware or threat. But what is worse is not knowing that your company has been hacked for several months. That was the situation facing Jaya Baloo when she left her job as chief information security officer (CISO) for the Dutch mobile operator KPN and moved to Prague-based Avast. She literally walked into her first day on the job having to deal with a breach that had gone undetected for months.

Baloo had several reasons for wanting to work at Avast, which makes a variety of anti-malware and VPN tools and has been in business for more than three decades. “When I interviewed with their senior management, I thought that we were very compatible, and I thought that I totally fit in with their culture.” She liked that Avast had a global customer reach and that she would be working for a security company.

But after she accepted her job offer, the IT staff found evidence in late September that their environment had been penetrated since May. The evidence pointed to a compromised credential for their internal VPN. Baloo’s first day at Avast was October 1, and in the first three weeks she had numerous fires to put out. She never thought making the move to Avast was going to be a challenge. “Before I got there, I thought the biggest downside was that it was going to get boring. I thought this job was going to be a piece of cake.”

Fat chance of that. During those first weeks she quickly realized that she had to solve several problems. First was to figure out what happened with the intrusion and what damage was done. As part of this investigation, she had to go back in time six months and examine every product update that was sent out to ensure that Avast’s customers weren’t infected. This also led to understanding which parts of their software supply chain were compromised. These things weren’t easy and took time to track down. The investigators were hampered by logs that were incomplete or misleading, and some evidence had been inadvertently deleted.

Second was to build up trust in her staff. During her interviews, Baloo was very hopeful. “I felt that I didn’t have to sell them on the need for security, since that was their focus and their main business. I thought that they would be a source of security excellence.” To her surprise, she found out that they were a typical software company, “with silos and tribes and different loyalties just like everyone else.” As she began working there, she also had to climb a big learning curve. “I didn’t know who to believe and who had the right information or who was just being a strong communicator,” she told me. The problem was not that Avast staffers were deliberately lying to her, but that it took time to get perspective on the breach details and to understand the ground truth of what happened during and after the breach. Some stories were harder to elicit because staffers weren’t used to her methods.

Finally, she had to develop a game plan to restore order and confidence, and to ensure that the breach was fully contained. She made several decisions to revoke and re-issue certificates, to send out new product updates and to begin the process to completely overhaul the company’s network and protective measures. Twenty days into the job, she posted a public update that described these steps.

In my conversations with Baloo, I realized that she had developed a series of tenets from her previous jobs as a security manager. I call them Jaya’s CISO Gems.

  1. You have to continuously doubt yourself. First and foremost, avoid complacency and be paranoid about your own capabilities. “You need to have a plan for widening your own field of view, security knowledge and perspective. You have to include more potential threats and need to challenge yourself daily. If you don’t, everything is going to look normal.” Baloo told me that many security staffers have a tendency to pay more attention to their systems, and if a system isn’t complaining or issuing alerts, then the staff thinks all is well. This complacency can be dangerous, because “you tend to hunt for things that you expect and that means you are only going to find things you are looking for.” Part of the issue is that you have to be on the lookout for the unexpected and push the envelope and have a plan for improving your own security knowledge and skills.
  2. Trust people before systems. “We have a lot of faith invested in our systems, but not necessarily in our people. That is the reverse of what it should be. We tend to focus in our comfort zone, and our zone is in tech and metrics.” But a CISO needs to listen to her team. “I like a team that can tell you when you are wrong, because that is how you learn and grow in the job and have a culture that you promote too. And above all to do it with a sense of humor.”
  3. Build a functional SOC, not just a stage set. “A SOC should support your people, not have ten thousand screens that are pretty to look at but that really say nothing. The utility of a SOC is to be able to provide the subtle clues that something is wrong with your infrastructure. As an example, you may still have firewall rules that allow for malware to enter your network.” Whether you have your own SOC or outsource it, its capabilities should match what is going on across your network.
  4. Everything in your infrastructure is suspect. Trust nothing and scan everything. She suggests starting with monitoring your oldest gear first, which is what Avast did after they found the breach. “Stop making excuses for this older equipment and make sure you don’t take away the possibility that you need to fix something old. You can’t be afraid of scanning something because this aging system might go down. Do pen testing for real.” Part of a good monitoring program is to do it periodically by default, and make sure that all staff know what the IT department is monitoring. “The goal isn’t big brother style monitoring but to find oddball user behavior and to make it visible. With cybersecurity, prayer is not an option.”
  5. Do your own phishing awareness training and do it often. While there are any number of awareness vendors that can help set up a solid program, the best situation is to craft your own. “You know your own environment best and it isn’t hard to create believable emails that can be used as a learning moment with those users who end up clicking on the bait. Phishing awareness training is really a people problem and very hard to get significant improvement, because all it takes is one person to click on something malicious. We were always successful at getting people to click. For example, we sent out one email that said we were changing the corporate policy on free coffee and tea and had users enter their credentials for a survey.” Part of rolling your own awareness program is being up on the latest email authentication protocols such as DMARC, DKIM and SPF so you can have confidence in your controls.
  6. Make sure you set the appropriate level of security awareness for every specific job role. “You don’t want your entire company knowing everything about your complete security policy, just what is needed for them to do their jobs,” she said. “And we should tell them how to do their jobs properly and not focus on what they are doing wrong, too.” As an example, she cites that the customer care department should understand the best practices on how to handle customer data.
  7. CISOs should be as technical as possible. “I see a lot of CISOs that come from a higher-level risk management background and don’t take the time or have the skills to understand the details of how their security technology works. You shouldn’t be afraid to dive deeper.” She also sees CISOs that come from a regulatory background. Some of the biggest attacks, such as Target, were compliant with regulations at the time. Compliance (such as satisfying GDPR) has turned into a paper exercise rather than checking firewall rules or doing more technical checks. Instead, you get caught up in producing “compliance porn that gets sent to the board and then you get pwned. Stuff gets lost in translation to management, and you need this technical background.”
  8. Prioritize your risk intelligence. You have to know what to act on first; it is all about triage. “You fix someone with a heart attack before fixing a broken bone,” she says. This means matching risk with relevance, as I mention in my blog post for RSA here. Part of this is doing a level of sanity checking with other organizations to see what they have included in their risk profiles. Don’t do the easy stuff first just because it is easy.
  9. Don’t panic and destroy evidence. As Baloo found out during her response to their own attack, you need to understand that an infected PC can be useful in understanding your response. “Every member of the enterprise needs to be part of your response,” she says. Part of this is being trained in how to preserve evidence properly.
  10. Start with open source security tools first. “I am not a fan of building custom security software unless nothing like it exists on the market and it is absolutely necessary. And if you write your own tools, go the open source route and embrace it entirely: build it, make it available with peer review and let someone else kick it. I have seen too many custom systems that never get updated.”
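A footnote on Baloo’s fifth tip: the email authentication protocols she mentions (SPF, DKIM and DMARC) are published as DNS TXT records, and checking your own domain’s policy is a sensible first step before running phishing exercises. Performing the actual DNS lookup requires a DNS library, but the record format itself is simple. Here is a minimal, illustrative parser for a DMARC record (the domain and report address below are made up):

```python
def parse_dmarc(record):
    """Split a DMARC TXT record such as 'v=DMARC1; p=reject; rua=mailto:...'
    into a tag->value dict for policy checks."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=quarantine; rua=mailto:reports@example.com; pct=100")
# The "p" tag tells receivers what to do with mail that fails DMARC checks:
# none (monitor only), quarantine, or reject.
```

A domain with no DMARC record, or one stuck at p=none, gives phishers an easier time impersonating your brand in exactly the kind of emails her training exercises simulate.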

Bob Metcalfe on credit, gratitude, and loyalty

For Bob Metcalfe, many things come in threes. His most successful company, 3Com, is one example. I met up with him recently and he told me, “You will be happier if you give and enjoy but not expect credit, gratitude, or loyalty.” Before I unpack that, let me tell you the story of how Bob and I first met.

This was in 1990, when I was about to launch Network Computing magazine for CMP. I was its first editor-in-chief and it was a breakout job for me in many respects: I was fortunate to be able to set the overall editorial direction of the publication and hire a solid editorial and production team; it was the first magazine that CMP ever published using desktop technology; and it was the first time that I had built a test lab into the DNA of a B2B IT publication. Can you tell that I am still very proud of the pub? Yeah, there is that. Bob was one of our early columnists, and he was at the point in his career where he wanted to tell some stories about the development of his invention of Ethernet. We had a lot of fun getting these stories into print, and Bob told me that for many years those first columns of his had a place of honor in his home. Bob went on to write many more columns for other IT pubs and eventually became publisher of Infoworld.

In addition to being a very clever inventor, Bob is also a master storyteller. One of his many sayings has since been enshrined as “Metcalfe’s law,” which says a network’s value is proportional to the square of its number of users or nodes. He is also infamous for wrongly predicting the end of the Internet in an Infoworld column he wrote in December 1995. He called it a “gigalapse” and said it would happen the next year. When of course it didn’t come to pass, he ate a printed copy of his column.
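Metcalfe’s law falls out of simple counting: a network of n nodes has n(n-1)/2 possible pairwise connections, which grows roughly as n squared. A quick sketch:

```python
def possible_connections(nodes):
    """Pairwise links in a fully connected network of n nodes: n*(n-1)/2,
    the quadratic growth behind Metcalfe's law."""
    return nodes * (nodes - 1) // 2

# Doubling the user count roughly quadruples the potential connections:
for n in (10, 20, 40):
    print(n, possible_connections(n))
# prints: 10 45, then 20 190, then 40 780
```

That quadratic payoff is why Ethernet (and every network since) became more valuable the more nodes it connected.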

Oh well, you can’t always be right, but he is usually very pithy and droll.

Let’s talk about his latest statement, about credit, gratitude and loyalty. Notice how he differentiates the give and take of the three elements: with Bob, it is always critical to understand the relationship of inputs and outputs.

Credit means being acknowledged for your achievements. “The trick is to get credit without claiming it,” says Metcalfe. Credit comes in many forms: validation from your peers, recognition by your profession, or even a short “attaboy” from your boss for a job well done. I can think of the times in my career when I got credit for something that I wrote: a fine explanation of something technical, praised by one of my readers, or spotting a trend that few had yet seen. But what Bob is telling us is to put the shoe on the other foot and give credit where and when it is due — output, rather than input. It is great to be acknowledged, but greater still if we cite those who deserve credit for their achievements. Going back to Network Computing, many of the people I hired have gone on to do great things in the IT industry, and I continue to give them props for their wonderful work and their contributions to our industry.

Gratitude is giving positive feedback, thanking someone for their efforts. Too often we forget to say thanks. I can think of many jobs that I have held over the years where my boss didn’t give out many thank yous. But it is always better to give thanks to others than to expect it. Credit and gratitude are a tight bundle, to be sure.

Finally, there is loyalty. The dictionary defines this in a variety of ways, but one that I liked was “faithful to a cause, ideal, custom, institution, or product.” Too often we are expected to be faithful to something that starts out well but ends up poorly. Many times I have left jobs because the product team made some bad decisions, or because people whom I respected left out of frustration. If you are the boss, you can’t really demand loyalty, especially if you don’t show any gratitude or give credit for your staff’s achievements. “Loyalty is what you expect of your customers when your products are no longer competitive,” says Metcalfe.

I would be interested in your own reactions to what Bob said, and if you have examples from your own work life that you would like to share with others.

Red Hat blog: containers last mere moments, on average

You probably already knew that most of the containers created by developers are disposable, but did you realize that half of them live for less than five minutes, and a fifth of them last less than ten seconds? That and other fascinating details are available in the latest annual container report from Sysdig, a container security and orchestration vendor.

I mention that fun fact, along with other interesting trends in my latest blog post for Red Hat’s Developer site.

Adaptive access and step-up authentication with Thales SafeNet Trusted Access

SafeNet Trusted Access from Thales is an access management and authentication service. It allows organizations to migrate to the cloud simply and securely while helping them prevent data breaches and comply with regulations.


MobilePass+ is available for iPhone and Android smartphones and for Windows desktops. More information here.

Pricing starts at $3/user/month for all tokens and services.

FIR B2B podcast #130: Don’t be fake!

The news earlier this month about Mitt Romney’s fake “Pierre Delecto” Twitter account once again brought fakery to the forefront. We discuss various aspects of fake news and what brands need to know to remain on point, honest, and true to themselves. We first point out a study by North Carolina State researchers that found that the less people trust Facebook, the more skeptical they become of the news they see there. One lesson from the study is that brands should carefully choose how they rebut fake news.

Facebook is trying to figure out the best response to fake political ads, although it’s still far from doing an adequate job. A piece in BuzzFeed found that the social network has been inconsistent in applying its own corporate standards to decisions about what ads to run. These standards say nothing about whether the ads are factual and have more to do with profanity or major user interface failures such as misleading or non-clickable action buttons. More work is needed.

Finally, we discuss two MIT studies mentioned in Axios about how machine learning can’t easily flag fake news. We have mentioned before how easy it is for machines to now create news stories without much human oversight. But one weakness of ML recipes is that they require precise and unbiased training data. When training data contains bias, machines simply amplify it, as Amazon discovered last year. Building truly impartial training data sets requires special skills, and it’s never easy. (The image here, by the way, is from a wonderful Orson Welles movie, “F for Fake.”)

Listen to the latest episode of our podcast here.

Red Hat Developer website editorial support

For the past several months, I have been working with the editorial team that manages the Red Hat Developers website. My role is to work with the product managers, the open source experts, and the editors to rewrite product descriptions and place the dozens of Red Hat products into a more modern, developer-friendly context. It has been fun to collaborate with such a smart and dedicated group. This work has been unbylined, but you can see examples of what I have done on this page on ODO and another page on Code Ready Containers.

Here is an example of a bylined article I wrote about container security for their blog.