Network Solutions blog: How to Secure Mobile Devices from Common Vulnerabilities

The biggest cyber threat isn’t sitting on your desk: it is in your pocket or purse. We mean, of course, your smartphone. Our phones have become the prime hacking target, thanks to a combination of circumstances, some under our control and some not. These mobile malware efforts aren’t new: Sophos has been tracking them for more than a decade (see this timeline from 2016). There are numerous examples of attacks, including fake anti-virus, botnets, and hidden or misleading mobile apps. If you want the quick version, my blog post for Network Solutions includes several practical suggestions on how you can improve your mobile device security.

You can also download my ebook that goes into more specific details about these various approaches to mobile device security.

How to minimize your cyber risk with Sixgill

In this white paper sponsored by the security vendor Sixgill, I explain why the dark web is such a critical part of the cybercrime landscape, and how Sixgill’s product can provide cybersecurity teams with clear visibility into their company’s threat landscape along with contextual and actionable recommendations for remediation. I cover the following topics:

  • How the dark web has evolved into a sophisticated environment well suited to the needs of cybercriminals.
  • What steps these criminals take in the hopes of staying hidden from cybersecurity teams.
  • How Sixgill uses information from the underground to generate critical threat intelligence – without inadvertently tipping cybercriminals off to the fact that an investigation is underway.
  • Why Sixgill’s rich data lake, composed of the broadest collection of exclusive deep and dark web sources, enables us to detect indicators of compromise (IOCs) before conventional, telemetry-based cyberthreat intelligence solutions can do so.
  • Which factors businesses and organizations need to consider when choosing a cyber threat intelligence solution.

You can download my white paper here.

Avast blog: Your guide to safe and secure online dating

Recently, five different dating sites leaked millions of their users’ private data. The sites cover users from the USA, Korea and Japan. On top of this, a variety of other niche dating apps (such as CougarD and 3Somes) had data breaches of their own in May that exposed hundreds of thousands of users’ profiles, including photos and audio recordings. This latter event occurred thanks to a misconfigured and open Amazon S3 storage bucket. Thankfully, the owner of the account quickly moved to secure it properly upon hearing from security researchers. We haven’t heard much about dating site breaches since private data from some 30M Ashley Madison users was posted online in 2015.
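Open S3 buckets like the one behind that latter breach are a well-understood failure mode, and AWS ships a dedicated control for it. Here is a minimal sketch of the "Block Public Access" settings that prevent this class of exposure; the bucket name is hypothetical, and the actual boto3 call (which requires AWS credentials) is shown in comments:

```python
def block_public_access_config() -> dict:
    """The four S3 'Block Public Access' switches, all enabled.

    Applying this configuration to a bucket prevents the kind of
    open-bucket exposure described above.
    """
    return {
        "BlockPublicAcls": True,        # reject requests that set public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject bucket policies granting public access
        "RestrictPublicBuckets": True,  # restrict access even if a public policy exists
    }

# With boto3 installed and credentials configured, it would be applied as:
#   import boto3
#   boto3.client("s3").put_public_access_block(
#       Bucket="example-bucket",  # hypothetical name
#       PublicAccessBlockConfiguration=block_public_access_config(),
#   )
```

AWS now enables these settings by default on new buckets, but older buckets (and buckets deliberately opened for a website) still need to be audited.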

In this time of the pandemic when more of us are doing everything we can online, dating remains a security sinkhole. This is because by its very nature, online dating means we eventually have to reveal a lot of personal information to our potential dating partners. How we do this is critical for maintaining both information security and personal safety. In this post for Avast’s blog I offer a number of pointers on how to do this properly, along with my own recommendations.

Tales of IT bottlenecks in these Covid times

Having worked in IT for several decades, I find it always interesting how past tech choices come back to thwart us, exposing weaknesses in our infrastructure. It is no wonder the word legacy is often used pejoratively in our field. Consider the lowly fax machine, which many of us have not thought about in years.

In the early 1990s if my memory serves me, we had plug-in modem cards for PCs that also supported sending and receiving faxes. These were eventually replaced with technologies that could be used to transmit faxes across the Internet. (That link is woefully outdated and many of those vendors have gone away. Sorry! But at least you have some historical record to understand the context.) Why am I talking about faxes?

The NY Times recently posted this story about how the fax machines located in many public health offices are the latest bottleneck in our response to the pandemic. The post includes a photo of a pile of faxes, taller than I am, produced by one of these machines in a Houston office. This shows how we can have all the latest and greatest digital technology we want, but then things break with something that we have since forgotten about, like the fax machine. Humans will have to review all these faxes and try to sort things out, often re-entering the data and searching for missing elements, such as details on the actual patient who was tested.

As someone who has had my own health challenges (although not Covid-related, at least not yet and hopefully not ever) over the past few months, I have come across a few digital bottlenecks myself. At my last hospital visit, I had to wait around for more than an hour for my appointment for a very frustrating reason: my appointment wasn’t entered correctly “into the system” and the only way I could be seen and treated was for the staff to get hold of someone at Epic support to clear my appointment and then have it re-entered. No one at the hospital IT department could do this, apparently. Epic is the electronic medical record (EMR) provider of my hospital and for reference their motto is “the patient at the heart.” Yes indeed.

Let me tell you another digital bottleneck that I experienced. I was very careful to pick my treatment with a doctor that had experience with the particular surgery that I required and that I could communicate with readily using the Epic messaging portal, which they brand as MyChart. Often he answered my inquiries within minutes after I posted them to the portal. As a result, I have gotten very familiar with the MyChart portal and have used it frequently during my treatment over the past several months.

I have learned over the years that doctors who are digital natives, or at least comfortable with the technologies that I use (email and the web), are those doctors that I want to treat me. But when I had complications from surgery that required other doctors to get involved in my treatment, I was really at their mercy. Often all I had was a phone number that would page someone on call, if I had a problem that needed help in off-hours. I wasn’t prepared for that at all. It was frustrating because I went from a position where I was quite comfortable with the level of communication with my primary surgeon to going back to pre-Internet, 1960s-era tools for my care. It was almost as if we were faxing each other.

These problems and bottlenecks have a simple root cause: years ago, we as a country made some bad decisions about how patient data is stored, protected, and disseminated. While it is true that few of us could have foreseen the pandemic, these past decisions have cast a long shadow. In our rush to spread blame about what is happening with the virus now, we should remember that some of these decisions could have been made differently to lessen the impact today.

When fax tech was going out of style in the late 2000s, I wrote this post for Baseline Magazine about some of the lessons learned from the fax machine. There are four important ones that bear repeating:

  • Interoperability matters.
  • Simplicity matters.
  • Real-time communication matters.
  • Privacy matters.

If we examine the fax breakdown during the pandemic, we can see these four lessons are still very much relevant. I ended my column by saying, “So the next time you have to build a new application, consider the lowly fax machine and what it does right. Take these lessons to heart, and you will have a leg up on building better and more useful applications.” Maybe we can finally learn these lessons to be prepared for the next pandemic.

The Facebook civil rights audit is a mixed bag

For more than two years, a team of civil rights activists has been examining Facebook’s actions under a microscope. They have issued various interim reports; this week they produced their final report, which evaluates how well Facebook has done in implementing their extensive recommendations. The short answer: not very well.

The report covers a wide scope of activities, including eliminating hate speech, policing posts that threaten democratic elections and the collection of US Census data, changes in advertising policies and algorithmic bias, inciting violence, and policies promoting diversity and inclusion. It would be a tall order for many tech companies to resolve all of these issues, but for a business the size and scope of Facebook, I would expect to see more coherent and definitive progress.

At first glance, Facebook seems to be trying — maybe. “Facebook is in a different place than it was two years ago,” as the report mentions. The company has begun several initiatives towards making amends on some of their most reprehensible actions, including:

  • Setting up better screening of posts that encourage hate speech or promote misinformation or harassment. The auditors mention that while there have been improvements during the study period, specific recommendations haven’t been implemented.
  • Prohibiting ads that mention negative perceptions of immigrants, asylum seekers or refugees.
  • Creating new policies prohibiting threats of violence relating to voting and elections outcomes.
  • Expanding diversity and inclusion efforts, although in interviews with Facebook staff the auditors found there is still plenty of room for improvement and that the company could do a lot more.
  • Eliminating explicit bias in targeting housing, employment and credit application ads by age, gender or Zip code.
  • Making changes to its Ad Library to make it easier and more transparent for researchers to search for bias and to determine if Facebook is making progress in implementation of these policies.

But when you read the entire 90-page report, you get to see that while the company has moved (and is continuing to move) towards a more equitable and appropriate treatment, they have just begun to move the needle. “It is taking Facebook too long to get it right,” they state.

Megan Squire, a CS professor at Elon University, wrote to me with her reaction. “The report highlights the same kinds of inconsistencies and persistent failures to act that I have experienced as a researcher studying the hate groups. These groups still routinely use Facebook’s platform to recruit, train, organize, and plan violence. Onboarding civil rights expertise is something they have yet to do in the white supremacist and domestic terror space, but I hope they strongly consider something like this in the future.” Squire refers to hiring civil rights specialists to round out various teams. The final report mentions this hiring in several contexts, but doesn’t touch on it when it comes to the sections on fighting hate speech and improving Facebook’s content moderation.

One thing that occurred to me as I was reading the report is how many of the issues mentioned have to do with the actions of our President and his campaign staff. Many of his statements, on Twitter and Facebook and in his campaign advertising, violate the auditors’ recommended actions. The auditors mention a trio of Trump posts in May which contained false claims on mail-in voting and an attempt at voter suppression. The posts were removed by Twitter but left online by Facebook. “These political speech exemptions [justifying keeping them online] constitute significant steps backward that undermine the company’s progress and call into question the company’s priorities,” the auditors say. “For many users who view false statements from politicians or viral voting misinformation on Facebook, the damage is already done without knowing that the information they’ve seen is false.” The auditors mention civil rights advocates’ claims that Trump’s content is “troubling because it reflects a seeming impassivity towards racial violence.”

The auditors specifically address this, saying “powerful politicians do not have to abide by the same rules that everyone else does, so a hierarchy of speech is created that privileges certain voices over less powerful voices.” They mention how Facebook has reined in anti-vax proponents but ironically has been “far too reluctant to adopt strong rules to limit misinformation about voting.” They go on to state, “If politicians are free to mislead people about official voting methods (by labeling ballots illegal) and are allowed to use not-so-subtle dog whistles with impunity to incite violence against groups advocating for racial justice, this does not bode well for the hostile voting environment that can be facilitated by Facebook in the United States.”

Facebook has tried to blunt the auditors’ criticism, saying that from January to March 2020, they removed 4.7M pieces of hate speech-related content, which is more than twice what was removed in the prior three months. That’s progress, but just the tip of the hate-speech iceberg. Earlier this week, Zuck once again promised to address the auditors’ issues. And last week, the company announced they are still trying to lock down API access to private data, after yet another revealing breach of private user data was discovered. Clearly, they could do a better job. “Facebook has a long road ahead on its civil rights journey,” the auditors conclude. I agree. It is time we see progress over promises.

FIR B2B podcast episode #139: Faulting and fixing Facebook’s hate speech problem

This week we discuss the Facebook ad boycott. Well, it really isn’t a total boycott but more like a brief pause by hundreds of major consumer brands in their advertising programs with Facebook and all of its social media platforms. CNN is keeping track of who is pulling their ads this month. However, the protests aren’t expected to hurt Facebook very much since most of its $70 billion in annual ad revenue comes from smaller businesses, something that Andrew Yang discusses on his podcast with cybersecurity pro John Redgrave and is worth listening to (after you listen to ours).

The effort was created by a group of anti-hate speech advocates such as NAACP and ADL under the banner of Stop Hate for Profit. That website lists their demands for changes to Facebook’s operations. We wonder why more B2B companies haven’t stepped up to this effort. I wrote a blog post with my point of view last month here. Shortly after we recorded this episode, the results of an internal audit were released, finding that Facebook’s “approach to civil rights remains too reactive and piecemeal.” Clearly the company still has a long way to go, particularly since top executives appear to be in denial that anything is wrong in the first place. I will post more about the audit results soon.

Facebook has also been criticized for some sloppy programming with its API, allowing discontinued mobile apps to still access private data. The company has made a lame and half-hearted response.

Speaking of other worthwhile podcasts, the NY Times tech columnist Kevin Roose has been producing a series called Rabbit Hole about how social networks in general, and YouTube in particular, suck people into echo chambers through their recommendation engines. It’s an unsettling series and well worth a listen if you want to know how Gen Z and younger generations use social media.

You can listen to our 17-minute podcast here.

Apple’s App Store: monopoly or digital mall?

Another salvo in the legal battle between Apple and its developers was fired last month. The EU Commission is following up on a complaint from Spotify that says Apple’s practices are anti-competitive and designed to block the popular music streaming service. Apple has two policies: one that prevents app creators from linking to payment methods outside the app (other than subscriptions), and another that limits users from making payments other than through in-app purchases. These two policies mean developers must pay Apple commissions on these payment streams, which amount to nearly a third in the first year and 15% in subsequent years.
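To put those commission rates in concrete terms, here is a back-of-the-envelope sketch. The $9.99 monthly subscription price is an arbitrary example for illustration, not any particular service’s actual pricing:

```python
def apple_commission(gross: float, subscription_year: int) -> float:
    """Commission on subscription revenue: 30% in year one, 15% thereafter."""
    rate = 0.30 if subscription_year == 1 else 0.15
    return gross * rate

# A hypothetical $9.99/month subscription grosses $119.88 per subscriber per year.
annual_gross = 9.99 * 12
year1_fee = apple_commission(annual_gross, 1)  # about $35.96
year2_fee = apple_commission(annual_gross, 2)  # about $17.98
```

Over two years, that subscriber generates roughly $240 in gross revenue, of which about $54 goes to Apple, which is the arithmetic behind developers’ complaints about the fee structure.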

This follows the US Supreme Court ruling that iPhone customers could sue Apple for allegedly operating the App Store as a monopoly that overcharges people for software. So far no action has happened as a result of this case and legal experts say it will probably take several years to wind its way through the courts. There was another lawsuit filed in US District Court in San Jose by two app developers who also accuse Apple of being a monopolist.

Andy Yen of ProtonMail posted this blog entry last month, saying “We have come to believe Apple has created a dangerous new normal allowing it to abuse its monopoly power through punitive fees and censorship that stifles technological progress, creative freedom, and human rights. Even worse, it has created a precedent that encourages other tech monopolies to engage in the same abuses.” He states further that “It is hard to stay competitive if you are forced to pay your competitor 30% of all of your earnings.” 

Of course, Apple disputes all of these charges, saying that it is just a digital mall where the tenants (the developers) are just paying rent. Nevertheless, it is the only mall when it comes to providing iOS apps. Apple claims it needs some compensation to screen out malware and badly coded apps and claims that the vast majority of apps in its store are free with no payments collected from developers. “We only collect a commission from developers when a digital good or service is delivered through an app.” The company explained its practices in this post in May, and cited a number of instances where third-party app developers compete with its own apps such as iCloud storage, the Camera, Maps and Mail apps.

Tim Cook thinks nobody “reasonable is going to come to the conclusion that Apple’s a monopoly. Our share is much more modest. We don’t have a dominant position in any market.” I disagree. From where I sit, this seems very similar to what Microsoft went through back in the 1990s. You might remember that the US government successfully argued in court that Microsoft’s Windows business practices were anti-competitive and that the company held a monopoly.

There are some differences between Microsoft then and Apple now: Apple doesn’t have a dominant share in mobile OS outside of the US (Google’s Android has 75% of the market), whereas Microsoft had 90% of the PC OS market. But still, the Apple App Store represents a high barrier for app developers to enter, and consumers do suffer as a result.

Fighting online disinformation and hate

The past month has seen some interesting developments in the fight against online disinformation and hate speech. First was the K-Pop campaign that diluted the impact of white nationalists by filling the various social media channels with fan videos using their hashtags. The K-Pop fans were also initially credited for buying up tickets to the Trump Tulsa rally. While we know only about six thousand people attended the rally, it is hard to state with any certainty who really got those tickets in the end.

This is an effective way to blunt the impact of hate groups, because you are using the crowd to counter-program their content. What hasn’t worked until now is forcing different social media platforms to ban these groups entirely. This is because a ban will only shift the haters’ efforts to another platform, where they can regroup. As a result many new social platforms are popping up that are decentralized and unmoderated.

Megan Squire, a computer science professor to whom I am distantly related, has studied these hate groups and documents how their members know how to push the limits of social media. For example, one group uses YouTube for its live streaming and real-time comments, then deletes the recorded video file at the end of their presentation and uploads the content to other sites that are less vigilant about their hate speech moderation.

Part of the problem is politics: tech companies are viewed as supporting mostly liberal ideologies and targeting conservative voices. This has resulted in a number of legal proposals. Squire says that these proposals are “naive and focused on solving yesterday’s problems. They don’t acknowledge the way the social media platforms are actually being gamed today nor how they will be abused tomorrow.”

Another issue is how content is recommended by these platforms. “The issue of content moderation should focus not on content removal but on the underlying algorithms that determine what is relevant and what we see, read, and hear online. It is these algorithms that are at the core of the misinformation amplification,” says Hany Farid, a computer science professor, in his Congressional testimony this past week about the propagation of disinformation. He suggests that the platforms need to tune their algorithms to value trusted, respectful and universally accepted information over the alternatives to produce a healthier ecosystem.

But there is another way to influence the major tech platforms: through their pocketbooks. In the past month, more than 100 advertisers have pulled their ads from Facebook and other social sites. CNN is keeping track of this trend here. Led by civil rights organizations such as the NAACP and the ADL, the effort is called Stop Hate for Profit. They have posted a ten-point plan to improve things on Facebook’s various properties. It has been called a boycott, although that is not completely accurate: many advertisers have said they will return to Facebook in a few weeks. One problem is that the majority of Facebook’s business comes from smaller businesses. Still, it is noteworthy how quickly this has happened.

Perhaps this effort will move the needle with Facebook and others. It is too soon to tell, although Facebook has announced some very small steps that will probably prove to be ineffective, if history is any predictor.

Avast blog: Understanding BlueLeaks

Earlier this month, a group of hackers published a massive dataset stolen from various local law enforcement agencies. The data has been labeled BlueLeaks and contains more than 269 GB of data: thousands of police reports going back at least two decades, from hundreds of agencies around the US. The reports list private data including names, email addresses, phone numbers and bank accounts. The source is a group called Distributed Denial of Secrets or DDoSecrets, which, like WikiLeaks, has been publishing various leaked datasets for many years.

The data can be easily searched as shown in the screenshot below.

What BlueLeaks shows is that third-party IT providers need to be properly vetted for their internal security methods. While having an easy-to-update website is great, it needs to be secure and all accounts should use multi-factor authentication and other tools to ensure that only authorized users have access. You can read more about the leak and its relevance here in my post for Avast’s blog.
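Multi-factor authentication is worth demystifying: the six-digit codes from an authenticator app are just RFC 6238 TOTP, an HMAC computed over the current 30-second time interval. Here is a minimal stdlib-only sketch to show how little magic is involved; for production, use a maintained library, and note the sample secret below is the published RFC test value, not one anybody should reuse:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP using HMAC-SHA1 (the common authenticator-app default)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 seconds, 8 digits -> "94287082"
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
```

Because the server and the phone share only that secret and the current time, no SMS or network round-trip is involved, which is why TOTP-based MFA resists the credential-stuffing attacks that plague password-only accounts.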

RSA blog: Making the Next Digital Transition Will Require Extensive Security Planning

We are all in a forced march towards a more accelerated digital transition because of the virus. McKinsey is one of many consulting firms that have proposed a 90-day guide to moving into this brave new era. And while I don’t want to pick on them specifically, their plan, like others of its ilk, is somewhat flawed. It will take more than Zoom and Slack meetings and a corporate subscription to G Docs or O365 to remake our organizations.

“Every remote worker is now a separate risk to the company,” Canadian cybersecurity consultant Andrew Brewer shared with CIM Magazine. “Each home environment is different, and with so many of them and [the health crisis] happening so suddenly it’s like a perfect storm for companies.”

To make this move successful, we all will have a lot more work to do in planning for this transition. Here are a few ways to begin to frame your thinking:

First, take a security-by-design approach to becoming more digital and supporting remote working long-term. We have to stop giving lip service to InfoSec. Instead, we should be thinking about security first and foremost. This isn’t something to wait on until the end of a project, when the security team will be tasked with another “cleanup on Aisle 6” operation and asked to add security in after the environment is built. This means involving the entire C-suite at the beginning of the process to lay a solid foundation for a new network infrastructure, a new communications plan and the right kinds of gear for your remote workers.

Second, develop a better understanding of the sea changes that will need to happen in DevSecOps to support 100% WFH. In a different report, McKinsey says that rapid IT changes “may have created new risks and exposures.” Planning for these risks and modernizing the tech stack may take more than a 90-day project timeline.

Finally, there is the parallel effort to understand the omnichannel approach that will be introduced with a digital-centric business model. The move towards 100% WFH will introduce even more digital channels, which means more opportunity for fraud. Over the years, I have spoken to Daniel Cohen, the Head of Anti-Fraud Products and Strategy for RSA. In his opinion, the way to combat this is to start investing in omnichannel fraud prevention. A more digital operation also means that your cybersecurity attack surface area will increase, so it will take “information security, risk management and fraud prevention teams to work together,” says Cohen.

As an example, I offer my purchase of some pants from the Gap. I got them online, but they were too small, so I returned them. I still haven’t received a credit for my return, because the returns are sitting in a big pile in some warehouse, waiting for an employee to sort through them to ensure that I did indeed return the appropriate merch. And this is from a company that has a robust online business. As long as the multiple channels intersect with some human-provided function, you will still have non-digital intersections and collaborations that will need careful planning and attention.

There are many risks and challenges associated with digital transformation in response to the current health crisis. I think they can be conquered, but all will require significant planning to ensure that we manage the associated risk appropriately.