Web Informant

David Strom's musings on technology

FIR B2B podcast #130: Don’t be fake!

The news earlier this month about Mitt Romney’s fake “Pierre Delecto” Twitter account once again brought fakery to the forefront. We discuss various aspects of fake news and what brands need to know to stay on point, honest and true to themselves. We first point to a study by North Carolina State researchers that found that the less people trust Facebook, the more skeptical they become of the news they see there. One lesson from the study is that brands should choose carefully how they rebut fake news.

Facebook is trying to figure out the best response to fake political ads, although it’s still far from doing an adequate job. A piece in BuzzFeed found that the social network has been inconsistent in applying its own corporate standards to decisions about which ads to run. These standards say nothing about whether the ads are factual and have more to do with profanity or major user interface failures, such as misleading or non-clickable action buttons. More work is needed.

Finally, we discuss two MIT studies mentioned in Axios about how machine learning can’t easily flag fake news. We have mentioned before how easy it is for machines to now create news stories without much human oversight. But one weakness of ML recipes is that they need precise and unbiased training data. When training data contains bias, machines simply amplify it, as Amazon discovered last year. Building truly impartial training data sets requires special skills, and it’s never easy. (The image here, by the way, is from “F for Fake,” a wonderful Orson Welles film.)

Listen to the latest episode of our podcast here.

Red Hat Developer website editorial support

For the past several months, I have been working with the editorial team that manages the Red Hat Developers website. My role is to work with the product managers, open source experts and editors to rewrite product descriptions and place the dozens of Red Hat products into a more modern, developer-friendly context. It has been fun to collaborate with a very smart and dedicated group. This work has been unbylined, but you can get an example of what I have done with this page on odo and another page on CodeReady Containers.

Here is an example of a bylined article I wrote about container security for their blog.

An update on Facebook, disinformation and political censorship

Facebook CEO Mark Zuckerberg speaks at Georgetown University, Thursday, Oct. 17, 2019, in Washington. (AP Photo/Nick Wass)

Merriam-Webster defines sanctimonious as “hypocritically pious or devout.” Last week Mark Zuckerberg gave a speech at Georgetown University about Internet political advertising, the role of private tech companies in regulating free speech, and other topics. I found it a fitting example of the definition. There has been a lot of coverage elsewhere, so let me just hit the highlights. I urge you all to watch his talk all the way through and draw your own conclusions.

Let’s first talk about censoring political ads. Many of you have heard that CNN removed a Trump ad last week: that was pretty unusual and doesn’t happen very often in TVland. Most TV stations are required by the FCC to run any political ad, as long as the ad discloses who paid for the spot. Zuck spoke about how Facebook wants to run all political ads and keep them around so we can examine the archive later. But this doesn’t mean it allows every political ad to run. Facebook has its corporate equivalent of the TV stations’ “standards and practices” departments, and will pull ads that use profanity, include non-working buttons, or commit other such UI fails. Well, it’s not quite so tidy, as it turns out.

One media site took Facebook up on its policy. According to research done by BuzzFeed, Facebook removed more than 160 political ads posted in the first two weeks of October: more than 100 ads from Biden and 21 from Trump. BuzzFeed found that Facebook applied its ad removal policies unequally. Clearly, the company has some room to improve here, and should at least be consistent in its “standards.”

One problem is that unlike online ads, TV political ads are passive: you sit and watch them. Another is that online ads can be powerful demotivators and convince folks not to vote, which is what happened in the 2016 elections. One similarity, though, is the amount of money advertisers spend. According to Politico, Facebook has already pocketed more than $50 million from 2020 candidates running ads on its platform. For a company that rakes in billions in overall ad revenue, this is a small number, but it is still significant.

One final note about political ads. Facebook posted a story this week that showed new disinformation campaigns by Iranian and Russian state-sponsored groups. It announced new changes to its policy to try to prevent foreign-led efforts to manipulate public debate in another country. Whether they will be successful remains to be seen. Part of the problem is how you define state-sponsored groups. For example, which of these is state-sponsored? Al Jazeera, France 24, RT and NPR all take government funding. Facebook will start labeling these outlets’ pages and provide information on whether their content is partially under government control.

Much was said about the First Amendment and freedom of speech. Many commenters on Zuck’s talk pointed out that the amendment applies only to the government’s regulation of speech, not to that of private companies. Another issue was raised by The Verge: “Zuckerberg presents Facebook’s platform as a neutral conduit for the dissemination of speech. But it’s not. We know that historically it has tended to favor the angry and the outrageous over the level-headed and inspiring.” Politico said that “On Facebook, the answer to harmful speech shouldn’t be more speech, as Zuckerberg’s formulation suggests; it should be to unplug the microphone and stop broadcasting it.” It also published a detailed play-by-play analysis of some of the points he made during his talk that is well worth reading.

“Disinformation makes struggles for justice harder,” said Slate’s April Glaser, who has been following the company’s numerous content and speech moderation missteps. “It often strands leaders of marginalized groups in the trap of constantly having to correct the record about details that have little to do with the issues they actually are trying to address.” Her post linked to several situations where Facebook posts harmed specific people, such as Rohingya Muslims in Myanmar.

After his speech, a group of 40 civil rights organizations called upon Facebook to “protect civil rights as a fundamental obligation as serious as any other goal of the company.” They claim that the company is reckless when it comes to its civil rights record and posted their letter here, which cites a number of other historical abuses, along with their recommended solutions.

Finally, Zuck spoke about how effective Facebook has been at eliminating fake accounts, which number in the billions, and pointed to this report from earlier this year. Too bad the report is very misleading. For example, “priority is given to detecting users and accounts that seek to cause harm,” but only financial harm is mentioned. This is from Megan Squire, a professor of computer science at Elon University who studies online radicalization, among other topics. “I would like to see numbers on how they deal with fake accounts used to amplify non-financial propaganda, such as hate speech and extremist content in Pages and Groups, both of which are rife with harmful content and non-authentic users. Facebook has gutted the ability for researchers to systematically study the platform via its own API.” Squire would like to see ways that outside researchers “could find and report additional campaigns, similarly to how security researchers find zero days, but Facebook is not interested in this approach.”

Zuck has a long history of apologia tours. Tomorrow he testifies before Congress yet again, this time with respect to housing and lending discrimination. Perhaps he will be a little more genuine this time around.

FIR B2B podcast #129: We’re Pleased and Excited to Tell You What People Don’t Know About Social Media

My podcast with Paul Gillin examines three articles that touch on various aspects of B2B marketing. The first, from Digiday, documents what the BBC went through to establish its fifth content vertical, called Future. The channel deals with health, wellness and sustainability, and it took a lot more effort than you might think to create. Branded content is driving a lot of page views at the Beeb, as the Brits lovingly refer to it, and the reason is all the work the media company puts into creating it, working with ad partners, marketing teams and editors. An article on whether eating eggs is healthy brought in a million page views and had an average dwell time of five minutes, which is content gold.

The second piece is from Chris Penn, who does excellent marketing research. He came up with analytics that show several “happy words” — such as “pleased,” “excited,” “proud” and “thrilled” — litter the press release landscape, offering nothing in the way of real information. Does anyone really care if your CEO is having a good day because you just announced version 3.45 of some product? It might be time to eliminate these words entirely from your marketing lingo, have the language reflect reality more closely and perhaps get more reporters’ attention too.
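Penn’s kind of analysis is easy to reproduce on your own releases. Here is a minimal, illustrative Python sketch; the word list contains only the four words mentioned above, whereas his study used its own, much larger corpus:

```python
import re

# Illustrative list based only on the words mentioned above;
# Penn's actual research used a far larger word list.
HAPPY_WORDS = {"pleased", "excited", "proud", "thrilled"}

def happy_word_count(text: str) -> int:
    """Count occurrences of content-free 'happy words' in a press release."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(1 for w in words if w in HAPPY_WORDS)

release = ("Acme is pleased and excited to announce version 3.45. "
           "Our CEO is thrilled and proud of the team.")
print(happy_word_count(release))  # 4
```

Running something like this over a folder of past releases quickly shows how much of the copy is emotional filler rather than information.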

Finally, we found this Pew Research survey that shows exactly how little the average adult knows about the digital marketing world. Pew gave more than 4,000 adults a 10-question quiz that asked things like what the “s” in “https” stands for, who owns Instagram and whether ads are a significant source of social media revenue. A huge chunk of respondents either answered incorrectly or didn’t know the answer.  Listen to our podcast here.

Picking the right social media posting tool

I have been interested in social media productivity tools for many years. Back in 2013, I wrote a review for Network World of eight different ones. Of the 90+ products that I examined as part of this project, only Hootsuite and SproutSocial are still around. That gives you an idea of the volatility in this market. I decided to take another look at what is available and focused on four different services: Hootsuite, Buffer, Later and Zoho Social. I picked these four because all of them have free plans and transparent pricing so you can get a better idea of what they do before you spend significant time evaluating them. There are certainly at least a dozen others to choose from (including Mailchimp, which now offers Facebook and Instagram posting automation in addition to its mailing list management).

The idea is that as you dive into managing your brand’s social media identity, you want some automated method to help with your posts, to monitor your social feeds, and to analyze the results. Now there are specialized tools for each of these three categories. But you have to start somewhere, so if you have yet to use any of these tools, I would suggest starting with the ones that are oriented around posting new content.

Each of the four supports a different collection of social media networks. All work with Twitter and Facebook (and support different aspects of the Facebook universe, such as groups and business account pages). Some also support Instagram, Pinterest, LinkedIn and WordPress posts. Hootsuite has a number of add-on apps that support other social networks.

The free versions of three of the tools come with various posting limits; Zoho Social doesn’t have any. As I said, there are other tools that focus on analytics of your posts, but each of these four is useful for limited analytics purposes – for example, Later will only provide Instagram analytics. Finally, if you do decide to pay for service, plans vary all over the place in terms of monthly fees and annual payment discounts, ranging from a few dollars a month to several hundred. The chart below has more specifics, along with a link to the pricing page with more details about what each plan offers (and doesn’t offer).

If you are going to use one of these services, examine three aspects carefully. First is their publishing and scheduling features, since that is what you will mostly be doing with them. All four offer various publishing and scheduling features, including their own URL shorteners; Zoho Social will also work with bit.ly, which is nice. Second is how the tool will fit into a team of people doing the posting. Some are easier to use in teams and have access controls, which beats everyone sharing the same email address (a security nightmare, particularly if a team member is fired or leaves). Finally, look at the other integrations, plug-ins and extensions that each vendor offers. If you already use any of the Zoho CRM products, their Social tool ties in nicely; Hootsuite also has several integrations, as I mentioned.
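At their core, the publishing features of all these tools are a time-ordered queue of pending posts. Here is a purely illustrative Python sketch of that idea; none of the vendors’ actual APIs appear here, and the class and field names are invented:

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(order=True)
class ScheduledPost:
    when: datetime                       # posts are ordered by time only
    network: str = field(compare=False)  # e.g. "T" for Twitter
    text: str = field(compare=False)

class PostQueue:
    """Toy in-memory scheduler; a real tool would call each network's API."""
    def __init__(self):
        self._heap = []

    def schedule(self, when, network, text):
        heapq.heappush(self._heap, ScheduledPost(when, network, text))

    def due(self, now):
        """Pop and return every post whose scheduled time has arrived."""
        out = []
        while self._heap and self._heap[0].when <= now:
            out.append(heapq.heappop(self._heap))
        return out

q = PostQueue()
q.schedule(datetime(2019, 11, 1, 9), "T", "Morning tweet")
q.schedule(datetime(2019, 11, 1, 8), "F", "Facebook post")
for post in q.due(datetime(2019, 11, 1, 9)):
    print(post.network, post.text)  # the 8am post comes out before the 9am one
```

The team-access and analytics features are what the commercial tools layer on top of this basic queue.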

| Vendor | Social networks supported | Posting limits | Analytics included | Pricing plans |
|---|---|---|---|---|
| Buffer.com | T (Twitter), F (Facebook), L (LinkedIn), P (Pinterest) | Can’t respond to content, only post new content (retweets allowed) | Posts only | Free (3 platforms, 1 user); $15-99/month |
| Hootsuite.com | T, F, L, WordPress, others | Unlimited for paid accounts | Entire social networks | Free (3 profiles, 1 user, 30 monthly posts); $29-599/month |
| Later.com | T, F, P, I (Instagram) | 30 per month per platform (50 for Twitter) | Instagram only | Free (1 profile, 1 user); $9-49/month |
| Zoho Social | T, F, L, I | No limits | Extensive | Free (1 user); $10-300/month |

 

Thoughts about live tweeting during arts performances

I realize that I come very late to this issue, but I recently discovered that many theatrical venues are actually encouraging live tweeting of their performances, and have done so for many years. As someone who speaks professionally and encourages live tweeting, I feel somewhat conflicted about this. Granted, my speeches are more than just cultural events — or at least I would like to think so — but still, there are plenty of people in my audiences who are using their phones while I am on stage.

The key event was an article in the NY Times this week about the practice. As I said, it has been going on for many years. One of our local opera companies puts on an annual Twitter invitational performance, inviting social media influencers to attend a single performance gratis and tweet away during the show.

This is a growing trend, and theatrical companies in numerous cities such as San Francisco, Palm Beach and Sacramento have established a separate seating section in their auditoriums, called “tweet seats,” where folks are encouraged to use their phones during the performance. Some even have set up monitors in the lobby displaying the tweets during intermission. Again, this mirrors many conferences that I have been to where the collected live tweets are displayed for all to see. Part of my job as a reporter covering a conference is to live tweet the event. I have to admit that I get excited when I see my tweets are trending and liked by other attendees.

I think it is getting harder to make a distinction between live tweeting in certain venues — such as a ball game or a professional conference — and in others, which just makes the issue more complicated.

I asked a friend of mine who runs a New York theater company what he thinks of live tweeting and using devices during his performances. “This is a huge problem. People record our shows on their phones all the time, AND they are now offended that you ask them to turn OFF their phones. I pretty much felt like that was the end of civilization as I knew it.” My friend told me that he “actually has had to crawl down aisles to stop people from texting or recording.”

The Times story notes situations where Broadway actors have taken phones out of the hands of audience members or stopped the show to berate the phone’s owner. My friend echoes this with his own experiences.

There seem to be several issues here:

  • Should cellphones and other devices be banned completely from live performances? It used to be that devices were banned as a distraction for the cast and other audience members, either because of the lit screen or because someone was actually on the phone during the show. But now that most phones have video cameras, it is a larger issue. An artist or theater company has a right to control their recorded performance.
  • Should an artistic company encourage live tweeting? I kind of get it: especially for opera, its audience is aging rapidly, and having live tweeting is a way to show they are hip and relevant and seed interest in a younger crowd that may attend other shows. Of course, at those shows they might be forced to just watch and listen. My friend has further commentary: “To be honest, my only objection is the fact that a huge portion of the artistic process is reflection — that moment to think about what you really feel about something that was presented.  A knee-jerk reaction isn’t enough. You need to pause and really connect to a feeling. As a frequent theatergoer, I’m not sure sometimes how I feel until the next day or several days later after I have seen a performance.” He makes a good point.
  • Is this a problem just for the millennial generation? I think it applies to all ages. Our attention spans have gotten shorter, and our focus is less on living in the moment and more on sharing it with our “audience” and “developing our brand.” Indeed, this is the plot line of a new novel I am reading (Follow Me, out in February).

I welcome your comments and thoughts about this.

HPE blog: Top 10 great security-related TED talks

 

Like many of you, I love watching TED Talks. The conference, which covers technology, entertainment and design, was founded by Richard Saul Wurman back in 1984 and has spawned a cottage industry featuring recorded videos from some of the greatest speakers around the world. I was fortunate to attend one of the TED events in its early days, when it was still an annual event, and got to meet Wurman when he was producing his Access city guides, an interesting mix of travelogue and design.

If you are interested in watching more TED videos, here is my own very idiosyncratic guide to some of them that have more to do with cybersecurity and general IT operations, along with some of the lessons that I learned from the various speakers. If you do get a chance to attend a local event, I would also encourage you to do so: you will meet some interesting people, both in the audience and on stage.

The TED Talks present a unique perspective on the past, and many of them resonate with current events and best practices in the cybersecurity world. Too often, security professionals think they have found something new, when it turns out to have been around for many years. One benefit of watching these talks is that they paint a picture of the past, and sometimes the past is still very relevant to today’s situations.

  1. In this 2015 talk in London, Rodrigo Bijou reviews how governments need hackers to help fight the next series of cyberwars and counter terrorists.

One current trend is phishers’ use of malware-laced images. A recent news article mentioned the technique, labeled “HeatStroke.” This is one method the bad guys use to hide their code from easy detection by defenders and threat hunters. It turns out this technique isn’t new at all: security researchers have seen it for years. Bijou’s talk mentioned that years ago, malware-injected images were part of ad-based clickjacking attacks. HeatStroke is just a new take on an old problem.

Bijou’s talk also references the Arab Spring, which was happening around that time. One consequence of public protests, particularly in countries with totalitarian governments, is that the government can restrict communications by blocking Internet access entirely. This is being done more frequently, and NetBlocks tracks such outages in countries all over the world, including Papua, Algeria, Ethiopia and Yemen. Bijou shows a now-famous photo of the Google public DNS address (8.8.8.8) spray-painted on a wall, in the hope that people would know what it means and use it to get around the blockade.

Since 2015, numerous public DNS services have been established, many of them free or low cost. Corporate IT managers should investigate their DNS supplier both for performance gains and for better security – many of these services filter out bad URL links and phishing lures, for example. Consider switching after using a testing regimen similar to the one in this blog post to find the technology that works best for you.
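If you want to run that kind of DNS bake-off yourself, a simple timing harness is enough to compare candidates. This is a hedged, stdlib-only Python sketch: by default it times lookups through whatever DNS server the operating system is configured to use, so you would point the OS at each candidate service in turn and compare medians. The resolver function is injectable, and the demo at the bottom uses a stub so no network traffic is needed:

```python
import socket
import statistics
import time

def median_lookup_ms(hostnames, resolver=socket.gethostbyname, trials=3):
    """Median DNS lookup latency in milliseconds across hosts and trials.

    `resolver` is injectable so the harness can be exercised offline;
    the default resolves through the OS-configured DNS server.
    """
    samples = []
    for host in hostnames:
        for _ in range(trials):
            start = time.perf_counter()
            resolver(host)
            samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# Demo with a stub resolver (no network needed); real runs would pass
# real hostnames and keep the default resolver.
print(median_lookup_ms(["example.com"], resolver=lambda h: "192.0.2.1"))
```

Note that OS and resolver caches can skew the numbers; a fair comparison uses many distinct hostnames.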

  2. In this 2014 talk in Rio de Janeiro, Andy Yen describes how encryption works. Yen was one of the founders of ProtonMail, one of the leading encrypted email providers.

Email encryption is another technology that has been around a long time. Yen’s is one of the better talks, one I have watched multiple times; it has been viewed 1.7 million times. We still have a love/hate relationship with email encryption: many companies still don’t use it to protect their communications, in spite of numerous product improvements over the years. Encryption technologies continue to improve: in addition to ProtonMail, there are small business encryption solutions, such as the Helm server, that make it easier to deploy.

  3. Bruce Schneier gave his talk in 2004 at Penn State about the difference between the perception and reality of security.

Part of the staying power of his message is that we humans still process threats pretty much the same way we’ve done since we were living in caves: we tend to downplay common risks (such as finding food or driving to the store) and fear more spectacular ones (such as a plane crash or being eaten by a tiger). Schneier has been talking about “security theater” for many years now, such as the process by which we get screened at the airport. Part of understanding your own corporate theatrical enactment is evaluating how we trade off security against money, time and convenience.

  4. Juan Enriquez’s 2013 talk in Long Beach was about the rise of social networks and the hyperconnected world we now live in.

He spoke about the effect of social media posts, calling them “digital tattoos.” The issue – then and now – is that all the information we provide about ourselves is easily assembled, often just by tapping into facial databases, without our even knowing that our picture has been taken by someone nearby with a phone camera. “Warhol got it wrong,” he said; “now you are only anonymous for 15 minutes.” He feels that we are all threatened with immortality, because our digital tattoos follow us around the Internet. It is a good warning about the privacy implications of our posts. Again, this isn’t anything new, but it bears repeating, and it’s a good prompt if your company still doesn’t have formal social media policies and provisions in place.

  5. This 2014 talk by Lorrie Faith Cranor at Carnegie Mellon University (CMU) is all about passwords.

Watching several TED talks makes it clear that passwords are still the bane of our existence, even with various technologies to improve how we use them and harden them against attacks. But you might be surprised to find out that once upon a time, college students only had to type a single digit for their passwords. This was at CMU, a leading computer science school and home to one of the computer emergency response teams. The policy was in effect until 2009, when the school changed the minimum requirements to something a lot more complex. Researchers found that 80% of CMU students reused passwords, and when asked to make them more complex, merely added an “!” or an “@” symbol. Cranor also found that the password “strength meters” provided by websites to help you create stronger passwords don’t measure complexity accurately, tending to be too soft on users as a whole.

A classic password meme is the XKCD cartoon that suggests stringing together four random common words to make a more complex password. The problem is that these passphrases are error-prone and take a long time to type. A better choice, suggested by her research, is a pronounceable string of letters, which is also much harder to crack. The lesson: passwords are still the weak entry point into our networks, and corporations that have deployed password managers or single sign-on tools are a leg up on protecting their data and their users’ logins.
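The arithmetic behind the XKCD-versus-random-string comparison is simple entropy math, which can be sketched in a few lines. The numbers below assume a 2,048-word dictionary for the passphrase (the figure the XKCD cartoon uses) and treat a fully random lowercase string as an upper bound for the pronounceable case; a truly pronounceable string has somewhat less entropy than this:

```python
import math

def entropy_bits_words(num_words, dictionary_size=2048):
    """Entropy of a passphrase of random common words (XKCD style)."""
    return num_words * math.log2(dictionary_size)

def entropy_bits_chars(length, alphabet_size=26):
    """Upper bound: entropy of a fully random lowercase string.

    A pronounceable string constrains letter sequences, so its real
    entropy is somewhat lower than this.
    """
    return length * math.log2(alphabet_size)

# Four words from a 2048-word list vs. a 12-letter random string:
print(round(entropy_bits_words(4)))   # 44
print(round(entropy_bits_chars(12)))  # 56
```

The point is not the exact numbers but that a modest string of letters can match or beat a multi-word passphrase while being faster to type.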

  6. Another frequently viewed talk was given in Long Beach in 2011 by Ralph Langner, a German security consultant. He tells the now-familiar story of how Stuxnet came to be created and how it was deployed against the Iranian nuclear plant at Natanz back in 2010.

What makes this relevant today is the care with which the Stuxnet creators (reportedly a combination of US and Israeli intelligence agencies) designed the malware to work under a very specific set of circumstances. In the years since, we’ve seen less capable malware authors also design custom code for specific purposes, target individual corporations, and leverage multiple zero-day attacks. It is worth reviewing the history of Stuxnet to refresh your knowledge of its origins; the story of how Symantec dissected Stuxnet, which I wrote about in 2011 for ReadWrite, is also worth reading.

  7. Avi Rubin’s 2011 talk in DC reviews how IoT devices can be hacked. He is a professor of computer science.

Back in 2011, some members of the general public still thought you could catch a cold from a computer virus. Rubin mentions that IoT devices were under attack as far back as 2006, worth remembering now that these attacks have become quite common (such as the Mirai attacks that began in 2016). Since then, we have seen connected cars, smart home devices, and other networked gear compromised. One lesson from Rubin’s talk is that attackers may not follow your anticipated threat model and may compromise your endpoints with new and clever methods. Rubin urges defenders to think outside the box to anticipate the next threat.

  8. Del Harvey gave a talk in Vancouver in 2014. She handles security for Twitter, and her talk is all about the huge scale brought about by the Internet and the sorts of problems she has to face daily.

She spoke about how many tweets her company has to screen and examine for potential abuse, spam, or other malicious activity. Part of her problem is that she doesn’t have much context to evaluate what a user is saying in their tweets, and that even if she makes just one mistake per million tweets, at Twitter’s volume that could still happen 500 times a day. This is also a challenge for security defenders, who have to process a great deal of daily network traffic to find the one bad piece of malware buried in the log files. Harvey says it helps to visualize an impending catastrophe, and that contains a clue to how we should approach the scale problem ourselves: better automated visualization tools to track down potential bad actors.
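Her arithmetic is worth making concrete. Assuming the often-cited figure of roughly 500 million tweets a day (an assumption for illustration, not a number from her talk), a one-in-a-million error rate still produces 500 mistakes daily:

```python
def expected_daily_errors(items_per_day, error_rate):
    """Expected number of mistakes per day at a given per-item error rate."""
    return items_per_day * error_rate

# One mistake per million reviews, at ~500 million tweets per day:
print(int(expected_daily_errors(500_000_000, 1 / 1_000_000)))  # 500
```

The same math applies to a SOC triaging alerts: even excellent per-event accuracy yields a steady stream of misses at high volume.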

  9. This 2014 Berlin session by Carey Kolaja is about her experiences working for PayPal.

She was responsible for establishing new markets for the payments vendor that could help the world move money with fewer fees and less effort. Part of her challenge, though, was establishing the right level of trust, so that payments would be processed properly and bad actors would be quickly identified. She tells the story of a US soldier in Iraq who was trying to send a gift to his family back in New York. The transaction was flagged by PayPal’s systems because of the convoluted route the payment took. It was legitimate, which shows that even back then a global payments company needed some form of human evaluation behind the technology to make sure such oddball but valid payments went through. The lesson for today is to examine authentication events that happen across the world and put risk-based security scoring tools in place to flag similarly complex transactions. “Today trust is established in seconds,” she says, which also means that trust can be broken just as quickly.
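To make the risk-scoring idea concrete, here is a toy Python sketch. Every field name, weight and threshold below is invented for illustration; real payment systems weigh far more signals and tune them continuously:

```python
def risk_score(txn, trusted_corridors=frozenset()):
    """Toy risk score for a payment (all weights are illustrative).

    Flags transactions with convoluted routing, unfamiliar country
    corridors, or large amounts for human review.
    """
    score = 0
    if txn["hops"] > 2:  # payment routed through many intermediaries
        score += 40
    if (txn["origin"], txn["destination"]) not in trusted_corridors:
        score += 30
    if txn["amount"] > 1000:  # larger payments carry more risk
        score += 20
    return score

REVIEW_THRESHOLD = 50  # invented cutoff for human review

# A soldier in Iraq sending a modest gift home via a convoluted route:
txn = {"origin": "IQ", "destination": "US", "hops": 4, "amount": 200}
print(risk_score(txn) >= REVIEW_THRESHOLD)  # True: flagged for review
```

The design point is exactly Kolaja’s: automated scoring narrows the stream, and humans adjudicate the legitimate-but-odd cases the rules flag.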

  10. Our final talk, by Guy Goldstein, was given in 2010 in Paris. He talks about how hard it is to get attribution right after a cyber-attack.

Even back then it was difficult to pin down why you were targeted and who was behind an attack, let alone when you were first penetrated by an adversary. “Attribution is hard to get right,” he says, “and the consequences of getting it wrong are severe.” Wise words, and a reason to build red teams that boost your defensive capabilities and anticipate where an attack might come from.

As you can see, there is a lot to be gleaned from various TED talks, even ones that have been given at conferences many years ago. There are still security issues to be solved and many of them are still quite relevant to today’s environment. Happy viewing!

Lessons for leaders: learning from TED Talks

  • Public DNS providers have proliferated and are worth a new look to protect your network from outages in conflict-prone hotspots around the world.
  • Consider privacy implications of your staff’s social media posts and assemble appropriate guidelines for how they consume social media.
  • Improve your password portfolio with a password manager, a single sign-on tool or some other mechanism that makes passwords stronger and easier for your users to create.
  • Think outside the box and visualize where your next threats will appear on your network.
  • Examine whether risk-based authentication security tools can help provide more trustworthy transactions to thwart phishers.
  • Build red teams to help harden your defenses.

Protecting your digital and online privacy

I gave a talk at our local Venture Cafe about this topic and thought I would summarize some of my suggestions in a blog post here. We all know that our devices leak all sorts of personal data: the locations and movements of our phones, the contents of our emails and texts, the people with whom we communicate, and even the smart devices in our homes are all chatty Cathys. There have been numerous articles describing these communications, including one about how an app for the University of Alabama’s football team tracks students who agree to divulge their game attendance in return for rewards points toward college merch (see the screenshot here). Another NY Times story analyzed the trackers loaded when a reporter visited dozens of different websites. The trackers from these sites were able to determine where the reporter lived and worked and could collect all sorts of other personal information, including finding out when women who were using phone apps to track their monthly periods were having sex.

Most of us have some basic understanding of how web tracking cookies work: the technology is decades old. But that era seems quaint now; the problem is that our phones are powerful computers that can track all sorts of other, more invasive things. It also doesn’t help that our phones are with us at almost all times. Reading the two NYT pieces should make anyone more careful about what information to give up to the digital overlords that control our apps. In my talk I present a few tools to fight back and provide more privacy protection. They include:

  • Monitor your Wifi usage and then choose a VPN that offers solid protection. Open Wifi networks can collect everything you do online, so you should find and use the right VPN to at least encrypt these conversations. The problem is that many VPNs are owned by Chinese vendors or collect other information about you. Two studies are worth reviewing: one by Privacy Australia, which has a nice analysis of which VPNs perform fastest, and one by Top10VPN, which goes into detail about who owns each vendor. I use ProtonVPN on both my phone and laptop.
  • Choose passwords carefully and use a password manager. I have made this recommendation before; do take it seriously if you are still a holdout. Reusing passwords is the single biggest mistake you can make toward compromising your privacy. I use LastPass on all my devices.
  • Change your DNS settings to provide additional protection. There are now numerous alternative DNS providers that can help encrypt and hide your web traffic, as well as provide for faster connections. Cloudflare has two tools, including its 1.1.1.1 DNS service and its Warp phone VPN service. Both are free.
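If you are curious what a password manager does when it generates credentials for you, here is a minimal Python sketch using the standard library’s secrets module (the alphabet and length below are my own illustrative choices, not a standard):

```python
import secrets
import string

# Illustrative character set: letters, digits and a handful of symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def make_password(length: int = 20) -> str:
    """Generate a random password using the OS cryptographic RNG (not random.choice)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())
```

The key point is that secrets draws from the operating system’s cryptographic random source, which is what makes machine-generated passwords both strong and unique per site.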

In my talk I also outlined several broader strategies for better privacy protection. These include:

  1. Eliminate very personal data on social media, such as your real birthday and other identifying information. Be careful about future posts and whom you tag on your social media accounts too.
  2. Delete the Facebook Messenger phone app: it scrapes your entire contact list and uploads it to Facebook. Don’t use social media identities as login proxies if you can avoid them.
  3. Audit your phones regularly and eliminate unneeded apps. Know which ones are leaking data and avoid them as well. The app Mighty Signal will report on what is leaked.
  4. Set up your phone for optimum privacy protection. This involves several steps, including updating to the latest iOS or Android version and enabling the latest privacy features, such as stripping photo location metadata and blocking unknown callers. A good next step is the Jumbo Privacy app, which will recommend the most private settings for you, useful given how complex the average phone app is these days and how hard it is to configure each one appropriately.
  5. If you are truly concerned, move to a different browser and search tool, such as Brave and DuckDuckGo, which offer more privacy protection. Yes, you will give up some functionality, so you have to weigh the tradeoff between utility and protection.

This seems like a lot of work, and I won’t deny that. Take things one step at a time: change one habit and understand its consequences (including any loss of functionality and convenience) before moving on to the next. Too often folks get overwhelmed and retreat to old habits, nullifying any improvements. When you have a choice, pick technologies that are easier to manage and implement.

Do let me know what your own experiences have been along this journey too by posting a comment here if you’d like.


RSA blog: Are you really cyber aware?

For many IT managers, being cyber aware is a hard thing to pin down. Does it mean that you (really) understand the various threat modes that can put your organization at risk? That you have some form of regularly scheduled cybersecurity awareness training? Or that you have multiple threat detection and response tools in operation to protect your endpoints? If you have been reading my columns, you know the best answer is some combination of all three.

Let’s put this in context, because it is once again time to note that October is Cybersecurity Awareness Month. Last year I wrote about how security awareness has to be “celebrated” every day, not just in October. Let’s look at some of my recommendations from that post and see how far we have come – or not.

My post mentioned four major themes to improve security awareness:

  • More comprehensive adoption of multi-factor authentication (MFA) tools and methods,
  • Ensuring better backups to thwart ransomware and other attacks,
  • Paying more attention to cloud data server configuration, and
  • Doing continuous security awareness training.

Sadly, all four of these suggestions are still needed, and many of the past year’s breaches happened because one or more of them were neglected. There are some bright spots: MFA projects seem to be happening with greater frequency. Single sign-on tools are improving their MFA support, documentation and overall integration, making it easier for corporate security developers to add these methods to their own apps. And security awareness training seems to be on the rise as well, with many companies implementing more regular assessments to motivate users to be more careful. This is good, because the bad guys are constantly upping their own game to try to trip us up and force their way into our networks.

But there are also problem areas that have arisen in the past year that bear mention. While ransomware continues to plague many companies, the way attackers are delivering their ransom attacks is troubling. The news over the past year has shown increased targeting by bad actors. This happens in several ways, including:

 

In these cases, a single exploit caused multiple attacks because of the common software used by the suppliers’ customers. This means that better backups aren’t enough anymore: you also must secure your software supply chain and treat any external software supplier as a potential threat source.

This means you need to think about whether your existing security tools can catch such exploits, and if not, what protective measures you can put in place that can. For example, do you use subresource integrity (SRI) checks to verify that the third-party code your pages load hasn’t been tampered with? Or do you have a policy of hosting as many of your third-party scripts as possible on your own servers rather than on your suppliers’ servers? Both are worth investigating.
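To make the subresource integrity idea concrete, here is a small Python sketch (using only the standard library; the sample script content is hypothetical) that computes the kind of hash value an SRI check compares against:

```python
import base64
import hashlib

def sri_hash(data: bytes, algo: str = "sha384") -> str:
    """Compute a subresource-integrity string ("sha384-<base64 digest>") for a file."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode('ascii')}"

# Hypothetical third-party script content you want to pin:
script = b"console.log('hello');"
print(sri_hash(script))
```

The resulting value goes into the script tag’s integrity attribute; if a supplier’s server later serves modified code, the browser’s recomputed hash won’t match and the script is refused.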

Part of the problem is that attackers are getting more determined: we’ve seen evidence (such as what happened this past year at British Airways) that they will try multiple entry points and adjust their methods to find a way inside a targeted network. But a big part of why attackers succeed is that we have very complex technologies in place with multiple failure points. Some of these points are known and protected, but many aren’t. This is why security awareness is a constant battle. Standing still is admitting defeat. So the title of this post isn’t as rhetorical as you might think. Chances are you aren’t as aware as you think you should be, and hopefully I have given you a few ideas to improve.

The worldwide spread of government-sponsored social media misinformation

For the past three years, researchers at Oxford University have been tracking the rise of government and political party operatives who use social media tools as propaganda devices. Their goal is to shape public opinion, undermine trust and automate the suppression of dissent. This year’s report is chilling, and I urge you to read it yourself and see what you think. It shows how social media has infected the world’s democracies on an unprecedented scale.

The researchers combed news reports and found evidence of what they call “cyber propaganda troops” in 70 different countries, with the most activity happening in Russia, the US, Venezuela, Brazil, Germany and the UK. This is a big increase over the number of places where they found these activities a year ago. In 44 countries, they found evidence of a government agency or members of political parties using automated tools to shape public attitudes on social media. “Social media has become co-opted by many authoritarian regimes. In 26 countries computational propaganda is being used as a tool of information control.” Azerbaijan, Israel, Russia, Tajikistan and Uzbekistan have taken things a step further: there, student groups are hired by government agencies to use digital propaganda to promote the state’s ideology.

You would expect these techniques to be employed in dictatorships and in countries with less-than-stellar records on press freedom and democracy. But what is interesting about the study is the few places that we would consider democracies where the researchers didn’t find any evidence of systematic social media tampering, such as Canada, France and Norway. The authors don’t say why this is the case, whether from a lack of research resources or because those places haven’t yet gotten on the state-controlled social media bandwagon.

“Until recently, we found that China rarely used social media to manipulate public opinion in other countries,” they state in their report. Prior to this year, China focused on manipulating its homegrown social media platforms, such as WeChat and QQ. That has changed: Chinese state-sponsored agencies are now branching out and can be seen operating in other parts of the world, using Facebook and Twitter. “China is turning to these technologies as a tool of geopolitical power and influence.”

One thing the Oxford researchers didn’t examine is how widespread the practice of using fake followers of major political figures has become. That analysis was done by SparkToro. As you can see in the above graphic, half or more of Donald Trump’s and Jerry Brown’s Twitter followers are bots and other automated accounts. Other political figures elsewhere have high fake proportions too.

It is sadly ironic that the very tools created to improve communications and bring us closer together have been so successfully subverted for the opposite purpose by various governments, and that these tools have become mainstream elements in so many places around the world.