Web Informant

David Strom's musings on technology

Avast blog: Primary update: Voting issues in Los Angeles and Iowa

Last week Super Tuesday brought many of us to the polls to vote for our favorite candidate for President. And while voting went smoothly in most places, there was one major tech failure in Los Angeles, which saw the debut of new voting machines. Let’s compare what went wrong in LA with the earlier problems seen during the Iowa caucuses.

In our earlier blog, I brought you up to date with what happened with the Russians hacking our 2016 and 2018 elections. But the problems witnessed in Iowa and LA are strictly our own fault, the result of a perfect storm of different computing errors. For Iowa, the culprit was a poorly implemented mobile vote count smartphone app from the vendor Shadow Inc. For LA, it was a series of both tech and non-tech circumstances.

I go into details about each situation and what we’ve learned in this post for Avast’s blog.

So you wanna buy a used IP address block?

For the past 27 years, I have owned a class C block of IPv4 addresses. I don’t recall what prompted me to apply to Jon Postel for my block: I didn’t really have any way to run a network online, and back then the Internet was just catching on. Postel was in the unique position of personally attending to the care and growth of the Internet.

Earlier this year I got a call from the editor of the Internet Protocol Journal asking me to write about the used address marketplace, and I remembered that I still owned this block. Not only would he pay me to write the article, but I could make some quick cash by selling my block.

It was a good block, perhaps a perfect block: in all the time that I owned it, I had never set up any computers using any of the 256 IP addresses associated with it. In used car terms, it was in mint condition. Virgin cyberspace territory. So began my journey into the used marketplace, just before the start of the new year.

If you want to know more about the historical context of how addresses were assigned in those early days and how it is done today, you’ll have to wait for my article to come out. If you don’t understand the difference between IPv4 and IPv6, you probably just want to skip this column. But for those of you who want to know more, let me give you a couple of pointers, just in case you want to do this yourself or for your company. Beware that it isn’t easy or quick money by any means. It will take a lot of work and a lot of your time.
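To put the terminology in concrete terms: a legacy “class C” block is a /24 in today’s CIDR notation, and Python’s standard ipaddress module makes the IPv4-versus-IPv6 scale difference easy to see (the networks below are reserved documentation ranges, not my actual block):

```python
import ipaddress

# A legacy "class C" allocation is a /24 in CIDR terms: 256 addresses.
legacy_block = ipaddress.ip_network("192.0.2.0/24")  # reserved documentation range
print(legacy_block.num_addresses)   # 256

# For contrast, a single IPv6 /64 subnet holds 2**64 addresses,
# billions of times the size of the entire IPv4 address space.
v6_subnet = ipaddress.ip_network("2001:db8::/64")    # reserved documentation range
print(v6_subnet.num_addresses)      # 18446744073709551616
```

That scale gap is the whole reason a used v4 marketplace exists: IPv4 blocks are scarce enough to have resale value, while IPv6 space is effectively limitless.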

First, you will want to get your ownership documents in order. In my case, I was fortunate that I had old corporate tax returns documenting that I owned the business listed on the ownership records since the 1990s. It also helped that I was the same person who had been communicating with ARIN, the regional Internet registry now responsible for the block. Then I had to transfer the ownership to my current corporation (yes, you have to be a business, and fortunately for me I have had my own sub-S corps to handle this) before I could sell the block to any potential buyer or renter. This was a very cumbersome process, and I get why: ARIN wants to ensure that I am not some address scammer, and that the goods being sold are legitimate. But during the entire process, my existing point of contact on my block, someone who was never part of my business yet had been listed on my record since the 1990s, was never contacted about his legitimacy. I found that curious.

That brings up my next point, which is whether to rent or to sell a block outright. It isn’t like deciding whether to buy or lease a car. In that marketplace, there are some generally accepted guidelines as to which way to go. But in the used IP address marketplace, you are pretty much on your own. If you are a buyer, how long do you need the new block – days, months, or forever? Can you eventually migrate your equipment to use IPv6 addresses (in which case you probably won’t need the used v4 addresses very long), or do you have legacy equipment that has to remain running on IPv4 for the foreseeable future?

If you want to dispose of a block that you own, do you want to make some cash for this year’s balance sheet, or are you looking for a steady income stream for the future? What makes this complicated is trying to have a discussion with your CFO how this will work, and I doubt that many CFOs understand the various subtleties about IP address assignments. So be prepared for a lot of education here.

Part of the choice of whether to rent or buy should be based on the size of the block involved. Some brokers specialize in larger blocks, some won’t sell or lease anything less than a /24 for example. “If you are selling a large block (say a /16 or larger) you would need to use a broker who can be an effective intermediary with the larger buyers,” said Geoff Huston, who has written extensively on the used IP address marketplace.
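For readers rusty on prefix notation: a smaller prefix number means a bigger block, and each /16 contains 256 separate /24s. A quick illustration using Python’s ipaddress module (with a private range purely for demonstration):

```python
import ipaddress

# A /16 holds 65,536 addresses; a /24 holds 256.
big_block = ipaddress.ip_network("10.0.0.0/16")
print(big_block.num_addresses)            # 65536

# Splitting the /16 into /24-sized pieces yields 256 blocks.
small_blocks = list(big_block.subnets(new_prefix=24))
print(len(small_blocks))                  # 256
print(small_blocks[0], small_blocks[-1])  # 10.0.0.0/24 10.0.255.0/24
```

This is why the large-block and small-block markets behave differently: a single /16 sale moves as many addresses as 256 separate /24 transactions.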

Why use a broker? When you think about this, it makes sense. I mean, I have bought and sold many houses — all of which were done with real estate brokers. You want someone that both buyer and seller can trust, that can referee and resolve issues, and (eventually) close the deal. Having this mediator can also help in the escrow of funds while the transfer is completed — like a title company. Also the broker can work with the regional registry staff and help prepare all the supporting ownership documentation. They do charge a commission, which can vary from several hundred to several thousand dollars, depending on the size of the block and other circumstances. One big difference between IP address and real estate brokers is that you don’t know what the fees are before you select the broker – which prevents you from shopping based on price.

So now I had to find an address broker. ARIN has this list of brokers who have registered with them. They show 29 different brokers, along with contact names, phone numbers and the date each broker registered with ARIN. Note that this listing is not an endorsement of any of these businesses: there is no vetting of whether a broker is still in business, or whether it is conducting itself in any honorable fashion. As the old saying goes, on the Internet, no one knows you’re a dog.

Vetting a broker could easily be the subject of another column (and indeed, I take some effort in my upcoming article for IPJ to go into these details). The problem is that there are no rules, no overall supervision and no general agreement on what constitutes block quality or condition. IPv4MarketGroup has a list of questions to ask a potential broker, including if they will only represent one side of the transaction (most handle both buyer and seller) and if they have appropriate legal and insurance coverage. I found that a useful starting point.

I picked Hilco’s IPv4.Global brokerage to sell my block. They came recommended and I liked that they listed all their auctions right from their home page, so you could spot pricing trends easily. For example, last month other /24 blocks were selling for $20-24 per IP address. Rental prices varied from 20 cents to US$1.20 per month per address, which means at best a two-year payback when rentals are compared to sales and at worst a ten-year payback. I decided to sell my block at $23 per address: I wanted the cash and didn’t like the idea of being a landlord of my block any more than I liked being a physical landlord of an apartment that I once owned. It took several weeks to sell my block and about ten weeks overall from when I first began the process to when I finally got the funds wired to my bank account from the sale.
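The rent-versus-sell payback figures above are easy to verify. A back-of-the-envelope sketch using the prices quoted (my $23-per-address sale against rentals of 20 cents to $1.20 per address per month):

```python
SALE_PRICE = 23.00                 # USD per address
RENT_LOW, RENT_HIGH = 0.20, 1.20   # USD per address per month
BLOCK_SIZE = 256                   # addresses in a /24

# Gross proceeds from selling the whole block outright.
gross_sale = SALE_PRICE * BLOCK_SIZE
print(f"Sale proceeds: ${gross_sale:,.0f}")

# Months of rental income needed to match the one-time sale price.
best_case = SALE_PRICE / RENT_HIGH    # just over 19 months, under two years
worst_case = SALE_PRICE / RENT_LOW    # about 115 months, nearly ten years
print(f"Rental payback: {best_case:.0f} to {worst_case:.0f} months")
```

In other words, renting at the top of the market beats a sale after roughly two years, but at the bottom of the market you would be a landlord for nearly a decade before breaking even.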

If all that seems like a lot of work to you, then perhaps you just want to steer clear of the used marketplace for now. But if you like the challenge of doing the research, you could be a hero at your company for taking this task on.

RSA Blog: The Tried and True Past Cybersecurity Practices Still Relevant Today

Too often we focus on the new and latest infosec darling. But many times, the tried and true is still relevant.

I was thinking about this when a friend recently sent me a copy of Bruce Schneier’s book, which was published in 2003. Schneier has been around the infosec community for decades: he has written more than a dozen books and has his own blog that publishes interesting links to security-related events, strategies and failures.

His 2003 book contains a surprisingly cogent and relevant series of suggestions that still resonate today. I spent some time re-reading it, and want to share with you what we can learn from the past and how many infosec tropes are still valid after more than 15 years.

At the core of Schneier’s book is a five-point assessment tool used to analyze and evaluate any security initiative – from bank robbers to international terrorism to protecting digital data. You need to answer these five questions:

  1. What assets are you trying to protect?
  2. What are the risks to those assets?
  3. How well will the proposed security solution mitigate these risks?
  4. What other problems will this solution create?
  5. What are the costs and trade-offs imposed?

You’ll notice that this set of questions bears a remarkable resemblance to the IDEA framework that RSA CTO Dr. Zulfikar Ramzan presented during a keynote he gave several years ago. IDEA stands for creating innovative, distinctive end-to-end systems with successful assumptions. Well, actually Ramzan had a lot more to say about his IDEA but you get the point: you have to zoom back a bit, get some perspective, and see how your security initiative fits into your existing infrastructure and whether or not it will help or hurt the overall integrity and security.

Part of the problem, as Schneier says, is that “security is a binary system, either it works or it doesn’t. But it doesn’t necessarily fail in its entirety or all at once.” Solving these hard failures is at the core of designing a better security solution.

We often hear that the biggest weakness of any security system is the users themselves. But Schneier makes a related point: “More important than any security claims are the credentials of the people making those claims. No single person can comprehensively evaluate the effectiveness of a security countermeasure.” We tend to forget about this when proposing some new security tech, and it is worth the reminder because often these new measures are too complex. Schneier tells us: “No security countermeasure is perfect, unlimited in its capabilities and completely impervious to attack. Security has to be an ongoing process.” That means you need to periodically audit and re-evaluate your solutions to ensure that they are as effective as you originally proposed.

This brings up another human-related issue. “Knowledge, experience and familiarity all matter. When a security event occurs, it is important that those who have to respond to the attack know what they have to do because they’ve done it again and again, not because they read it in a manual five years ago.” This highlights the importance of training, and disaster and penetration planning exercises. Today we call this resiliency and apply strategies broadly across the enterprise, as well as specifically to cybersecurity practices. Managing these trusted relationships, as I wrote about in an earlier RSA blog, can be difficult.

Often, we tend to forget what happens when security systems fail. As Schneier says early on: “Good security systems are designed in anticipation of possible failure.” He uses the example of road signs that have special break-away poles in case someone hits the sign, or where modern cars have crumple zones that will absorb impacts upon collision and protect passengers. He also presents the counterexample of the German Enigma coding machine: it was thought to be unbreakable, “so the Germans never believed the British were reading their encrypted messages.” We all know how that worked out.

The ideal security solution needs to have elements of prevention, detection and response. These three systems need to work together because they complement each other. “An ounce of prevention may be worth a pound of cure, but only if you are absolutely sure beforehand where that ounce of prevention should be applied.”

One of the things he points out is that “forensics and recovery are almost always in opposition. After a crime, you can either clean up the mess and get back to normal, or you can preserve the crime scene for collecting the evidence. You can’t do both.” This is a problem for computer attacks because system admins can destroy the evidence of the attack in their rush to bring everything back online. It is even more true today, especially as we have more of our systems online and Internet-accessible.

Finally, he mentions that “secrets are hard to keep and hard to generate, transfer and destroy safely.” He points out the king who builds a secret escape tunnel from his castle. There always will be someone who knows about the tunnel’s existence. If you are a CEO and not a king, you can’t rely on killing everyone who knows the secret to solve your security problems. RSA often talks about ways to manage digital risk, such as this report that came out last September. One thing is clear: there is no time like the present when you should be thinking about how you protect your corporate secrets and what happens when the personnel who are involved in this protection leave your company.

Medium One-Zero: How to Totally Secure Your Smartphone

The more we use our smartphones, the more we open ourselves up to the possibility that the data stored on them will be hacked. The bad guys are getting better and better at finding ways into our phones through a combination of subtle malware and exploits. I review some of the more recent news stories about cell phone security, which should be enough to worry even the least paranoid among us. Then I describe the loss of privacy and how hackers can gain access to our accounts through these exploits. Finally, I provide a few practical suggestions on how you can be more vigilant and improve your infosec posture. You can read the article on Medium’s OneZero site.

RSA blog: Trust has become a non-renewable resource: why you need a chief trust officer

Lately it seems like trust is in short supply with tech-oriented businesses. It certainly doesn’t help that there have been a recent series of major breaches among security tech vendors. And the discussions about various social networks accepting political advertising haven’t exactly helped matters either. We could be witnessing a crisis of confidence in our industry, and CISOs may be forced to join the front lines of this fight.

One way to get ahead of the issue might be to anoint a Chief Trust Officer. The genesis of the title is to recognize that the role of the CISO is evolving. Corporations need a manager who focuses less on talking about technical threats and more on engendering trust in the business’ systems. The CTrO, as it is abbreviated, should assure stakeholders that they have the right set of tools and systems in place.

This isn’t exactly a new idea: Tom Patterson and Bob West were appointed to that position at Unisys and CipherCloud respectively more than five years ago, and Bill Burns has held his position at Informatica for more than three years. Burns was originally their CISO and was given the job to increase transparency and improve overall security and communications. Still, the title hasn’t exactly caught on: contemporary searches on job boards such as Glassdoor and Indeed find few open positions advertised. Perhaps finding a CTrO is more of an internal promotion than hiring from outside the organization. It is interesting that all the instances cited above are from the tech universe. Does that say we in IT are quicker to recognize the problem, or just that we have given it lip service?

Tom Patterson echoes a phrase that was often used by Ronald Reagan: “trust but verify.” It is a good maxim for any CTrO to keep in mind.

I spoke to Drummond Reed, who has been for three years now an actual CTrO for the security startup Evernym. “We choose that title very consciously because many companies already have Chief Security Officers, Chief Identity Officers and Chief Privacy Officers.” But at the core of all three titles is “to build and support trust. For a company like ours, which is in the business of helping businesses and individuals achieve trust through self-sovereign identity and verifiable digital credentials, it made sense to consolidate them all into a Chief Trust Officer.”

Speaking to my comment about paying lip service, Reed makes an important point: the title can’t be just an empty promise, but needs to carry some actual authority, and must be at a level that can rise above just another technology manager. The CTrO needs to understand the nature of the business and legal rules and policies that a company will follow to achieve trust with its customers, partners, employees, and other stakeholders. It is more about “elevating the importance of identity, security, and privacy within the context of an enterprise whose business really depends on trust.”

Trust is something that RSA’s President Rohit Ghai speaks often about. Corporations should “enable trust; not eradicate threats. Enable digital wellness; not eradicate digital illness.” I think this is also a good thing for CTrO’s to keep in mind as they go about their daily work lives. Ghai talks about trust as the inverse of risk: “we can enhance trust by delivering value and reducing risk,” and by that he means not just managing new digital risks, but all kinds of risks.

In addition to hiring a CTrO, perhaps it is time we also focus more on enabling and promoting trust. For that I have a suggestion: let’s start treating digital trust as a non-renewable resource. Just as energy conservationists promote moving to renewable energy sources, we have to promote better trust-maintaining technologies. These include better authentication, better red team defensive strategies, and better network governance. You have seen me write about these topics in other columns over the past couple of years, but perhaps they are more compelling in this context.

RSA blog: Giving thanks and some thoughts on 2020

Thanksgiving is nearly upon us. And as we think about giving thanks, I remember when, 11 years ago, I put together a somewhat tongue-in-cheek speech giving thanks to Bill Gates (and by extension Microsoft) for creating the entire IT support industry. This was around the time that he retired from corporate life at Microsoft.

My speech took the tack that if it weren’t for leaky Windows OSes and their APIs, many of us would be out of a job because everything would just work better. Well, obviously there are many vendors who share some of the blame besides Microsoft. And truthfully, Windows gets more than its share of attention because it is found on so many desktops and runs so many servers of our collective infrastructure.

Let’s extend things into the present and talk about what we in the modern-day IT world have to give thanks for. Certainly, things have evolved in the past decade, and mostly for the better: endpoints have a lot better protection and are a lot less leaky than your average Windows XP desktop of yesteryear. We have more secure productivity tools, and most can operate from the cloud with a variety of desktop, laptop and mobile devices. We have better security automation, detection and remediation methods too. We also can be more mobile and obtain an Internet or Wifi signal in more remote places, making our jobs easier as we move around the planet. All of these are things to be thankful for, and many of us (myself included) often take these for granted.

What about looking forward? If I look at the predictions that I made a year ago, most of them have withstood the test of time.

Let’s start off with my biggest fail from 2018. I totally blew the call for cryptomining attacks trending upwards. At least I wasn’t alone, and other December 2018 predictions also had this trend mentioned in their lists. However, the exact opposite actually happened, and numerous reports showed a decline in cryptomining during 2019. One reasonable cause was the shuttering of the Coinhive operation in March. I am glad that this happened, and the lower rate of these attacks is another thing to be thankful for!

As I predicted, a number of good things have been happening on the authentication front in the past year. As I touched on in my post last month, a number of the single sign-on vendors’ multi-factor authentication (MFA) products have seen significant improvement. This includes better FIDO integration and better smartphone authentication tools. For example, RSA has its SecurID Access product that combines MFA and risk-based authentication methods. All of these items are things we can be thankful for, and hopefully more security managers will implement MFA in the coming months across their networks and applications.

Ransomware continues to be a threat, as I mentioned in my blog post last December and as concluded in the latest RSA fraud report here. Sadly, criminals continue to latch on to ransoms as a very profitable source of funds. This year we saw the development of new ransomware vectors into the software supply chain, with the Sodinokibi malware hitting more than 20 different local Texas government IT operations thanks to a vulnerability in a managed endpoint service. The latest report, which tracked the criminals’ Bitcoin deposits, shows this malware has netted more than $4.5M in ill-gotten gains.

Clearly we have made some significant progress in the past year, and even in the past decade. But with all these innovations come new risks too. Criminals aren’t standing still; they keep figuring out new ways to breach our defenses. And there are still thousands of infosec jobs that go unfilled, as skilled security analysts remain in demand. Hopefully, that is something we can do something about in the coming year.

HPE blog: CISO faces breach on first day on the job

Avast CISO Jaya Baloo has learned many lessons from her years as a security manager, including how to place people above systems, create a solid infrastructure plan, and fight the bad guys.

Most IT managers are familiar with the notion of a zero-day exploit or finding a new piece of malware or threat. But what is worse is not knowing that your company has been hacked for several months. That was the situation facing Jaya Baloo when she left her job as the chief information security officer (CISO) for the Dutch mobile operator KPN and moved to Prague-based Avast. She literally walked into her first day on the job having to deal with a breach that had gone undetected for months.

Baloo had several reasons for wanting to work at Avast, which makes a variety of anti-malware and VPN tools and has been in business for more than three decades. “When I interviewed with their senior management, I thought that we were very compatible, and I thought that I totally fit in with their culture.” She liked that Avast had a global customer reach and that she would be working for a security company.

But after she accepted her job offer, the IT staff found evidence in late September that their environment had been penetrated since May. The evidence pointed to a compromised credential for their internal VPN. Baloo’s first day at Avast was October 1, and in the first three weeks she had numerous fires to put out. She never thought making the move to Avast was going to be a challenge. “Before I got there, I thought the biggest downside was that it was going to get boring. I thought this job was going to be a piece of cake.”

Fat chance of that. During those first weeks she quickly realized that she had to solve several problems. First was to figure out what happened with the intrusion and what damage was done. As part of this investigation, she had to go back in time six months and examine every product update that was sent out to ensure that Avast’s customers weren’t infected. This also meant understanding which parts of their software supply chain were compromised. These things weren’t easy and took time to track down. The investigation was hampered by logs that were incomplete or misleading, and some evidence had been inadvertently deleted.

Second was to build up trust in her staff. During her interviews, Baloo was very hopeful. “I felt that I didn’t have to sell them on the need for security, since that was their focus and their main business. I thought that they would be a source of security excellence.” To her surprise, she found out that they were a typical software company, “with silos and tribes and different loyalties just like everyone else.” As she began working there, she also had to climb a big learning curve. “I didn’t know who to believe and who had the right information or who was just being a strong communicator,” she told me. The problem was not that Avast staffers were deliberately lying to her, but that it took time to get perspective on the breach details and to understand the ground truth of what happened during and after the breach. Some stories were harder to elicit because staffers weren’t used to her methods.

Finally, she had to develop a game plan to restore order and confidence, and to ensure that the breach was fully contained. She made several decisions to revoke and re-issue certificates, to send out new product updates and to begin the process to completely overhaul the company’s network and protective measures. Twenty days into the job, she posted a public update that described these steps.

In my conversations with Baloo, I realized that she had developed a series of tenets from her previous jobs as a security manager. I call them Jaya’s CISO Gems.

  1. You have to continuously doubt yourself. First and foremost, avoid complacency and be paranoid about your own capabilities. “You need to have a plan for widening your own field of view, security knowledge and perspective. You have to include more potential threats and need to challenge yourself daily. If you don’t, everything is going to look normal.” Baloo told me that many security staffers have a tendency to pay more attention to their systems, and if a system isn’t complaining or issuing alerts, then the staff thinks all is well. This complacency can be dangerous, because “you tend to hunt for things that you expect and that means you are only going to find things you are looking for.” Part of the issue is that you have to be on the lookout for the unexpected and push the envelope and have a plan for improving your own security knowledge and skills.
  2. Trust people before systems. “We have a lot of faith invested in our systems, but not necessarily in our people. That is the reverse of what it should be. We tend to focus in our comfort zone, and our zone is in tech and metrics.” But a CISO needs to listen to her team. “I like a team that can tell you when you are wrong, because that is how you learn and grow in the job and have a culture that you promote too. And above all to do it with a sense of humor.”
  3. Build a functional SOC, not just a stage set. “A SOC should support your people, not have ten thousand screens that are pretty to look at but that really say nothing. The utility of a SOC is to be able to provide the subtle clues that something is wrong with your infrastructure. As an example, you may still have firewall rules that allow for malware to enter your network.” Whether you have your own SOC or outsource it, its capabilities should match what is going on across your network.
  4. Everything in your infrastructure is suspect. Trust nothing and scan everything. She suggests starting with monitoring your oldest gear first, which is what Avast did after they found the breach. “Stop making excuses for this older equipment and make sure you don’t take away the possibility that you need to fix something old. You can’t be afraid of scanning something because this aging system might go down. Do pen testing for real.” Part of a good monitoring program is to do it periodically by default, and make sure that all staff know what the IT department is monitoring. “The goal isn’t big brother style monitoring but to find oddball user behavior and to make it visible. With cybersecurity, prayer is not an option.”
  5. Do your own phishing awareness training and do it often. While there are any number of awareness vendors that can help set up a solid program, the best situation is to craft your own. “You know your own environment best and it isn’t hard to create believable emails that can be used as a learning moment with those users who end up clicking on the bait. Phishing awareness training is really a people problem and very hard to get significant improvement, because all it takes is one person to click on something malicious. We were always successful at getting people to click. For example, we sent out one email that said we were changing the corporate policy on free coffee and tea and had users enter their credentials for a survey.” Part of rolling your own awareness program is being up on the latest email authentication protocols such as DMARC, DKIM and SPF so you can have confidence in your controls.
  6. Make sure you set the appropriate level of security awareness for every specific job role. “You don’t want your entire company knowing everything about your complete security policy, just what is needed for them to do their jobs,” she said. “And we should tell them how to do their jobs properly and not focus on what they are doing wrong, too.” As an example, she cites that the customer care department should understand the best practices on how to handle customer data.
  7. CISOs should be as technical as possible. “I see a lot of CISOs that come from a higher-level risk management background and don’t take the time or have the skills to understand the details of how their security technology works. You shouldn’t be afraid to dive deeper.” She also sees CISOs that come from a regulatory background. Some of the biggest attacks, such as Target, were compliant with regulations at the time. Compliance (such as with satisfying GDPR) has turned into a paper exercise rather than checking firewall rules or doing more technical checks. Instead, you get caught up in producing “compliance porn that gets sent to the board and then you get pwned. Stuff gets lost in translation to management, and you need this technical background.”
  8. Prioritize your risk intelligence. You have to know what to act on first, it is all about triage. “You fix someone with a heart attack before fixing a broken bone,” she says. This means matching risk with relevance, as I mention in my blog post for RSA here. Part of this is doing a level of sanity checking with other organizations to see what they have included in their risk profiles. Don’t do the easy stuff first just because it is easy.
  9. Don’t panic and destroy evidence. As Baloo found out during her response to their own attack, you need to understand that an infected PC can be useful in understanding your response. “Every member of the enterprise needs to be part of your response,” she says. Part of this is being trained in how to preserve evidence properly.
  10. Start with open source security tools first. “I am not a fan of building custom security software unless nothing like it exists on the market and it is absolutely necessary. And if you write your own tools, go the open source route and embrace it entirely: build it, make it available with peer review and let someone else kick it. I have seen too many custom systems that never get updated.”
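On the phishing-awareness point about DMARC, DKIM and SPF: these policies are ultimately just DNS TXT records, so auditing what your domain publishes is straightforward. Here is a minimal sketch of parsing a DMARC record’s tag=value pairs; the record shown is a hypothetical example, not any real domain’s policy:

```python
def parse_dmarc(txt: str) -> dict:
    """Split a DMARC TXT record ('tag=value' pairs separated by ';') into a dict."""
    return dict(
        pair.strip().split("=", 1)
        for pair in txt.split(";")
        if "=" in pair
    )

# Hypothetical record; real ones are published in DNS at _dmarc.<yourdomain>
# and can be fetched with any TXT lookup tool.
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])     # quarantine: what receivers should do with failing mail
print(policy["rua"])   # where aggregate reports get sent
```

A real audit would fetch the live record from DNS and confirm that the policy (p=) and reporting addresses match what you intended to publish.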

Red Hat Developer website editorial support

For the past several months, I have been working with the editorial team that manages the Red Hat Developers website. My role is to work with the product managers, the open source experts and the editors to rewrite product descriptions and place the dozens of Red Hat products into a more modern, developer-friendly context. It has been fun to collaborate with a very smart and dedicated group. This work has been unbylined, but you can get an example of what I have done with this page on ODO and another page on Code Ready Containers.

Here is an example of a bylined article I wrote about container security for their blog.

An update on Facebook, disinformation and political censorship

Facebook CEO Mark Zuckerberg speaks at Georgetown University, Thursday, Oct. 17, 2019, in Washington. (AP Photo/Nick Wass)

Merriam-Webster defines sanctimonious as “hypocritically pious or devout.” Last week Mark Zuckerberg gave a speech at Georgetown University about Internet political advertising, the role of private tech companies in regulating free speech, and other topics. I found it a fine example of that definition. There has been a lot of coverage elsewhere, so let me just hit the highlights. I would urge you all to watch his talk all the way through and draw your own conclusions.

Let’s first talk about censoring political ads. Many of you have heard that CNN removed a Trump ad last week: that was pretty unusual and doesn’t happen very often in TVland. Most TV stations are required by the FCC to run any political ad, as long as the ad discloses who paid for the spot. Zuck spoke about how they want to run all political ads and keep them around so we can examine the archive later. But this doesn’t mean that they allow every political ad to run. Facebook has their corporate equivalent of the TV stations’ “standards and practices” departments, and will pull ads that use profanity, or include non-working buttons, or other such UI fails. Well, not quite so tidy, it appears.

One media site took them up on their policy. According to research done by BuzzFeed, Facebook has removed more than 160 political ads posted in the first two weeks of October. More than 100 ads from Biden were removed, and 21 ads from Trump. BuzzFeed found that Facebook applied its ad removal policies unequally. Clearly, they have some room to improve here, and at least be consistent in their “standards.”

One problem is that unlike online ads, TV political ads are passive: you sit and watch them. Another is that online ads can be powerful demotivators and convince folks not to vote, which is what happened in the 2016 elections. One similarity, though, is the amount of money that advertisers spend. According to Politico, Facebook has already pocketed more than $50 million from 2020 candidates running ads on its platform. For a company that rakes in billions in overall ad revenue, this is a small number, but it is still important.

One final note about political ads. Facebook posted a story this week that showed new efforts at disinformation campaigns by Iranian and Russian state-sponsored groups. It announced new changes to its policy, to try to prevent foreign-led efforts to manipulate public debate in another country. Whether they will be successful remains to be seen. Part of the problem is how you define state-sponsored groups. For example, which of these is state-sponsored? Al Jazeera, France 24, RT, NPR and others all take government funding. Facebook will start labeling these outlets’ pages and provide information on whether their content is partially under government control.

Much was said about the first amendment and freedom of speech. I heard many comments about Zuck’s talk pointing out that this amendment applies only to the government’s regulation of speech, not to regulation by private companies. Another issue was mentioned by The Verge: “Zuckerberg presents Facebook’s platform as a neutral conduit for the dissemination of speech. But it’s not. We know that historically it has tended to favor the angry and the outrageous over the level-headed and inspiring.” Politico said that “On Facebook, the answer to harmful speech shouldn’t be more speech, as Zuckerberg’s formulation suggests; it should be to unplug the microphone and stop broadcasting it.” It had a detailed play-by-play analysis of some of the points he made during his talk that are well worth reading.

“Disinformation makes struggles for justice harder,” said Slate’s April Glaser, who has been following the company’s numerous content and speech moderation missteps. “It often strands leaders of marginalized groups in the trap of constantly having to correct the record about details that have little to do with the issues they actually are trying to address.” Her post linked to several situations where Facebook posts harmed specific people, such as Rohingya Muslims in Myanmar.

After his speech, a group of 40 civil rights organizations called upon Facebook to “protect civil rights as a fundamental obligation as serious as any other goal of the company.” They claim that the company is reckless when it comes to its civil rights record and posted their letter here, which cites a number of other historical abuses, along with their recommended solutions.

Finally, Zuck spoke about how effective they have been at eliminating fake accounts, which number in the billions, and pointed to this report earlier this year. Too bad the report is very misleading. For example, “priority is given to detecting users and accounts that seek to cause harm” – but only financial harm is mentioned. This observation comes from Megan Squire, who is a professor of Computer Science at Elon University. She studies online radicalization and various other technical aspects. “I would like to see numbers on how they deal with fake accounts used to amplify non-financial propaganda, such as hate speech and extremist content in Pages and Groups, both of which are rife with harmful content and non-authentic users. Facebook has gutted the ability for researchers to systematically study the platform via its own API.” Squire would like to see ways that outside researchers “could find and report additional campaigns, similarly to how security researchers find zero days, but Facebook is not interested in this approach.”

Zuck has a long history of apologia tours. Tomorrow he testifies before Congress yet again, this time with respect to housing and lending discrimination. Perhaps he will be a little more genuine this time around.

HPE blog: Top 10 great security-related TED talks


Like many of you, I love watching TED Talks. The conference, which covers technology, entertainment and design, was founded by Richard Saul Wurman back in 1984 and has spawned a cottage industry featuring recorded videos from some of the greatest speakers around the world. I was fortunate to attend one of the TED events back when it was still an annual gathering in its early days, and got to meet Wurman when he was producing his Access city guides, an interesting mix of travelogue and design.

If you are interested in watching more TED videos, here is my own very idiosyncratic guide to some of them that have more to do with cybersecurity and general IT operations, along with some of the lessons that I learned from the various speakers. If you do get a chance to attend a local event, I would also encourage you to do so: you will meet some interesting people, both in the audience and on stage.

The TED Talks present a unique perspective on the past, and many of them resonate with current events and best practices in the cybersecurity world. Security professionals often think they have found something new, only to discover it has been around for many years. One of the benefits of watching these talks is that they paint a picture of that past, and sometimes the past is still very relevant to today’s situations.

  1. In this 2015 talk in London, Rodrigo Bijou reviews how governments need hackers to help fight the next series of cyberwars and fight terrorists.

One of the more current trends is phishers’ use of malware-laced images. A recent news article mentioned the technique and labeled it “HeatStroke.” This is one method the bad guys use to hide their code from easy detection by defenders and threat hunters. It turns out this technique isn’t so new and has been seen for years by security researchers. Bijou’s talk mentioned that years ago malware-injected images were part of ad-based clickjacking attacks. HeatStroke is just a new take on an old problem.

Bijou’s talk also references the Arab Spring that was happening around that time. One consequence of public protests, particularly in countries with totalitarian governments, is that the government can restrict communications by blocking Internet access entirely. This is being done more frequently, and NetBlocks tracks such outages in countries all over the world, including Papua, Algeria, Ethiopia and Yemen. Bijou shows a now-famous photo of the Google public DNS address (8.8.8.8) spray-painted on a wall, in the hope that people would know what it means and use it to get around the blockade.

Since 2015, numerous public DNS services have been established, many of them free or low cost. Corporate IT managers should investigate their DNS supplier for both performance gains and better security – many of these services filter out bad URLs and phishing lures, for example. You should consider switching after using a testing regimen similar to the one used in this blog post to find the technology that works best for you.
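If you want a rough sense of what such a comparison involves, here is a minimal sketch that times a single A-record lookup against a few well-known public resolvers. The `build_query` and `time_resolver` helpers are my own hand-rolled illustration of DNS over UDP, not part of any testing tool mentioned above; a real evaluation would repeat queries across many domains and also check each service’s filtering behavior.

```python
import socket
import struct
import time

def build_query(hostname, query_id=0x1234):
    """Build a minimal DNS A-record query packet (RFC 1035 wire format)."""
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    # QTYPE=1 (A record), QCLASS=1 (IN)
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def time_resolver(server, hostname, timeout=1.0):
    """Return round-trip time in ms for one lookup, or None on failure/timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.perf_counter()
        sock.sendto(build_query(hostname), (server, 53))
        sock.recvfrom(512)
        return (time.perf_counter() - start) * 1000
    except OSError:  # includes socket.timeout and unreachable-network errors
        return None
    finally:
        sock.close()

if __name__ == "__main__":
    for name, server in [("Google", "8.8.8.8"), ("Cloudflare", "1.1.1.1"), ("Quad9", "9.9.9.9")]:
        rtt = time_resolver(server, "example.com")
        print(f"{name} ({server}): {'no response' if rtt is None else f'{rtt:.1f} ms'}")
```

Averaging several runs per resolver, from the networks your users actually sit on, gives a fairer picture than a single probe.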

  2. In this 2014 talk in Rio de Janeiro, Andy Yen describes how encryption works. Yen was one of the founders of ProtonMail, one of the leading encrypted email providers.

Email encryption is another technology that has been around a long time. Yen’s is one of the better talks, one that I have watched multiple times; it has been viewed 1.7 million times. We still have a love/hate relationship with email encryption: many companies still don’t use it to protect their communications, in spite of numerous improvements to products over the years. Encryption technologies continue to improve: in addition to ProtonMail there are small business encryption solutions such as the Helm server that make it easier to deploy.

  3. Bruce Schneier gave his talk in 2004 at Penn State about the difference between the perception and reality of security.

Part of the staying power of his message is that we humans still process threats pretty much the same way we’ve done since we were living in caves: we tend to downplay common risks (such as finding food or driving to the store) and fear more spectacular ones (such as a plane crash or being eaten by a tiger). Schneier has been talking about “security theater” for many years now, such as the process by which we get screened at the airport. Part of understanding your own corporate theatrical enactment is in evaluating how we tend to trade off security against money, time and convenience.

  4. Juan Enriquez’s talk in Long Beach in 2013 was about the rise of social networks and the hyperconnected world that we now live in.

He spoke about the effect of social media posts, calling them “digital tattoos.” The issue – then and now – is that all the information we provide about ourselves is easily assembled, often by just tapping into facial databases, without our even knowing that our picture has been taken by someone nearby with a phone camera. “Warhol got it wrong,” he said, “now you are only anonymous for 15 minutes.” He feels that we are all threatened with immortality, because our digital tattoos follow us around the Internet. It is a good warning about how we need to consider the privacy implications of our posts. Again, this isn’t anything new, but it bears repeating, and it is a good prompt if your company still doesn’t have formal policies in place for social media.

  5. This 2014 talk by Lorrie Faith Cranor at Carnegie Mellon University (CMU) is all about passwords.

Watching several TED talks makes it clear that passwords are still the bane of our existence, even with various technologies to improve how we use them and how to harden them against attacks. But you might be surprised to find out that once upon a time, college students only had to type a single digit for their passwords. This was at CMU, a leading computer science school and the location of one of the computer emergency response teams. The CMU policy was in effect up until 2009, when the school changed the minimum requirements to something a lot more complex. Researchers found that 80% of the CMU students reused passwords, and when asked to make them more complex merely added an “!” or an “@” symbol to them. Cranor also found that the password “strength meters” that are provided by websites to help you create stronger ones don’t really measure complexity accurately, with the meters being too soft on users as a whole.

A classic password meme is the XKCD cartoon that suggests stringing together four random common words to make more complex passwords. The problem though is that these passwords are error-prone and take a long time to type in. A better choice, suggested by her research, is to use a collection of letters which can be pronounced. This is also much harder to crack. The lesson learned: passwords still are the weak entry point into our networks, and corporations who have deployed password managers or single sign-on tools are a leg up on protecting their data and their users’ logins.
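The trade-off Cranor describes can be made concrete with a little arithmetic. This back-of-the-envelope sketch (my own illustration, not taken from her study) compares the brute-force search space of a four-word XKCD-style passphrase against random character strings:

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for `length` independent random choices from a pool."""
    return length * math.log2(pool_size)

# Four random words drawn from a 2048-word list (XKCD-style passphrase)
xkcd = entropy_bits(2048, 4)      # exactly 44.0 bits

# Twelve random lowercase letters (a pronounceable-length string)
rand12 = entropy_bits(26, 12)     # about 56.4 bits

# Eight characters from the full 95-symbol printable ASCII set
ascii8 = entropy_bits(95, 8)      # about 52.6 bits

print(f"4-word passphrase: {xkcd:.1f} bits")
print(f"12 random letters: {rand12:.1f} bits")
print(f"8 printable ASCII: {ascii8:.1f} bits")
```

The point is not the exact numbers but the comparison: length and genuine randomness matter far more than appending an “!” or “@”, which adds almost nothing to the search space.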

  6. Another frequently viewed talk was given in Long Beach in 2011 by Ralph Langner, a German security consultant. He tells the now-familiar story of how Stuxnet came to be created and how it was deployed against the Iranian nuclear plant at Natanz back in 2010.

What makes this relevant today is the care with which the Stuxnet creators (supposedly a combination of US and Israeli intelligence agencies) designed the malware to work only under a very specific set of circumstances. In the years since Stuxnet’s creation, we’ve seen less capable malware authors also design custom code for specific purposes, target individual corporations, and leverage multiple zero-day attacks. It is worth reviewing the history of Stuxnet to refresh your knowledge of its origins. The story of how Symantec dissected Stuxnet is something that I wrote about in 2011 for ReadWrite that is also worth reading.

  7. Avi Rubin’s 2011 talk in DC reviews how IoT devices can be hacked. He is a professor of computer science.

Back in 2011, some members of the general public still thought you could catch a cold from a computer virus. Rubin mentions that IoT devices were under attack as far back as 2006, something worth remembering now that these attacks have become quite common (such as the Mirai attacks, which began in 2016). Since then, we have seen connected cars, smart home devices, and other networked devices become compromised. One lesson learned from watching Rubin’s talk is that attackers may not always follow your anticipated threat model, and may compromise your endpoints with new and clever methods. Rubin urges defenders to think outside the box to anticipate the next threat.

  8. Del Harvey gave a talk in Vancouver in 2014. She handles security for Twitter and her talk is all about the huge scale brought about by the Internet and the sorts of problems she has to face daily.

She spoke about how many Tweets her company has to screen and examine for potential abuse, spam, or other malicious circumstances. Part of her problem is that she doesn’t have a lot of context to evaluate what a user is saying in their Tweets, and also that even if she makes just one mistake per million Tweets, at Twitter’s scale that mistake could happen 500 times a day. This is also a challenge for security defenders who have to process a great deal of daily network traffic to find that one bad piece of malware buried in our log files. Harvey says it helps to visualize an impending catastrophe, and this contains a clue for how we have to approach the scale problem ourselves: through better automated visualization tools to track down potential bad actors.

  9. This 2014 Berlin session by Carey Kolaja is about her experiences working for PayPal.

She was responsible for establishing new markets for the payments vendor that could help the world move money with fewer fees and less effort. Part of her challenge, though, was establishing the right level of trust so that payments would be processed properly and bad actors would be quickly identified. She tells the story of a US soldier in Iraq who was trying to send a gift to his family back in New York. The transaction was flagged by PayPal’s systems because of the convoluted route the payment took. While this was a legitimate transaction, it shows that even back then the company had to operate at global reach, with some form of human evaluation behind all the technology to make sure these oddball but legitimate payments still went through. The lesson for today is to examine authentication events that happen across the world and put in place risk-based security scoring tools to flag similarly complex transactions. “Today trust is established in seconds,” she says – which also means that trust can be broken just as quickly.
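At its simplest, a risk-based scoring tool of the kind described above is just a weighted rule set. This toy sketch (my own illustration; the rules, weights and threshold are all made up for the example) flags transactions whose combined risk factors cross a review threshold:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    origin_country: str
    dest_country: str
    hops: int            # number of intermediaries the payment traversed
    amount_usd: float
    account_age_days: int

def risk_score(tx):
    """Toy rule-based score in [0, 100]; higher means riskier. Weights are illustrative."""
    score = 0
    if tx.origin_country != tx.dest_country:
        score += 20                    # cross-border payments carry more risk
    score += min(tx.hops, 5) * 10      # a convoluted route is a red flag
    if tx.amount_usd > 1000:
        score += 15                    # large amounts warrant more scrutiny
    if tx.account_age_days < 30:
        score += 25                    # brand-new accounts are less trusted
    return min(score, 100)

def needs_review(tx, threshold=50):
    """Route the transaction to human evaluation when the score crosses the threshold."""
    return risk_score(tx) >= threshold
```

In this scheme, the soldier’s Iraq-to-New-York gift would score high enough to be routed to a human reviewer rather than being silently blocked, which is the behavior Kolaja describes.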

  10. Our final talk is by Guy Goldstein and was given in 2010 in Paris. He talks about how hard it is to get attribution right after a cyber-attack.

Even back then it was difficult to pin down why you were targeted and who was behind the attack, let alone when you were first penetrated by an adversary. “Attribution is hard to get right,” he says, “and the consequences of getting it wrong are severe.” Wise words, and why you need to have red teams to boost your defensive capabilities to anticipate where an attack might come from.

As you can see, there is a lot to be gleaned from various TED talks, even ones that have been given at conferences many years ago. There are still security issues to be solved and many of them are still quite relevant to today’s environment. Happy viewing!

Lessons for leaders: learning from TED Talks

  • Public DNS providers have proliferated and are worth a new look to protect your network from outages in conflict-prone hotspots around the world.
  • Consider privacy implications of your staff’s social media posts and assemble appropriate guidelines for how they consume social media.
  • Improve your password portfolio by using a password manager, a single sign-on tool or some other mechanism that makes passwords stronger and less onerous for your users to create.
  • Think outside the box and visualize where your next threats will appear on your network.
  • Examine whether risk-based authentication security tools can help provide more trustworthy transactions to thwart phishers.
  • Build red teams to help harden your defenses.