Behind the scenes at a regional CCDC competition

Every year hundreds of college students compete in the National Collegiate Cyber Defense Competition. Teams from around the country begin with regional competitions, and the winners go on to compete for bragging rights and cash prizes at the national event in Orlando at the end of April. A friend of mine from the Seattle area, Stephen Kangas, was one of the volunteers, all of whom are drawn from the ranks of IT security professionals. I spoke to him this week about some of his experiences.

The event simulates defending a corporate network and is divided into two basic sides: the defenders, who make up the blue teams from the colleges, and the attackers, or red teams. There are other groups as well, such as the judges and the “orange team,” which I will get to in a moment. Judges with body cams to record the state of play are assigned to each blue team, and their recordings are used to tally the final point totals. Points are also awarded based on the number of services that are still online and haven’t been hacked, as well as for systems that were hacked and then recovered. Both sides have to file incident reports, and these are also evaluated as part of the scores.

Stephen has participated in the competition for several years as a mentor and coach for a team from a local high school that competes in the high school division. This year he was on one of the red teams attacking one of the college blue teams. He has his Certified Ethical Hacker credential and is working toward an MS degree in Cybersecurity. He has been involved in various IT roles, both as a vendor and as a consultant, including a focus on information security, for decades. “I wanted to expand my knowledge in this area. Because most of my experience has been on the defensive side, I wanted to get better, and for that you have to know about the strategy, tools, and tactics used by the offensive black hats out there.”

The event takes place over a weekend, and the red team attackers take points away from the defenders by penetrating their corresponding blue team’s network and “pwning” their endpoints, servers, and other devices. “I was surprised at how easy it was to penetrate our target’s network initially. People have no idea how vulnerable they are as individuals, and it is becoming easier every day. We need to be preparing and helping people to develop the knowledge and skills to protect us.” His red team consisted of three others who had complementary specializations, such as email, web, and SQL server penetration, and different operating systems. Each of the 30 red team volunteers brings their own laptop, but they all use the same set of hacking tools (which includes Kali Linux, Cobalt Strike, and Empire, among others), and the teams communicate via various Slack channels during the event.

The event has an overall red team manager who takes notes and shares tips with the different red teams. Each blue team runs an exact VM copy of the scenario, with the same vulnerabilities and misconfigurations; this year it was a fake prison network. “We all start from the same place. We don’t know the network topology, which mimics the real-world situation where networks are poorly documented (if at all).” Just as in the real world, blue teams were given occasional injects, such as deleting a terminated employee or updating the release date of a prisoner; the red teams were likewise given injects, such as finding and pwning the SQL server and changing a release date to the current day.

In addition to the red and blue teams is a group called the orange team, which adds a bit of realism to the competition. These aren’t technical folks but are more akin to drama students or improv actors who call into the help desk with problems (“I can’t get my email!”) and read from scripted suggestions, putting more stress on the blue teams to do a better job of defending their networks. Points are awarded to or taken away from blue teams by the judges depending upon how they handle these help desk phone calls.

Adding further realism, during the event members of each red team call the help desk, pretending to be an employee and trying to social-engineer them for information. “My team broke in and pwned their domain controllers. We held them for ransom after locking them out of their domain controller, which we returned in exchange for keys and IP addresses to some other systems. Another team called and demanded as ransom that the help desk guy sing a pop song. He had to sing well enough to get back their passwords.” His team also discovered several Linux file shares that had employee and payroll PII on them.

His college’s team came in second, so they are not going on to the nationals (University of Washington won first place). But still, all of the college students learned a lot about better defense that they can use when competing next year, and ultimately when they are employed after graduation.  Likewise, the professionals on the red teams learned new tools and techniques from each other that will benefit them in their work. It was an interesting experience and Stephen intends to volunteer for Pacific Rim region CCDC again next year.

RSA blog: Understanding the trust landscape

Earlier this month, RSA president Rohit Ghai opened the RSA Conference in San Francisco with some stirring words about understanding the trust landscape. The talk is both encouraging and depressing, both for what it offers and for how far we have yet to go to realize this vision completely.

Back in the day, we had the now-naïve notion that defending a perimeter was sufficient. If you were “inside” (however defined), you were automatically trusted. Or once you authenticated yourself, you were then trusted. It was a binary decision: in or out. Today, there is nothing completely inside and trusted anymore.

It is all a matter of shades of grey. Cybersecurity now means evaluating who and what is trusted on a continuous basis. Ironically, to appreciate these shades of grey, we have to work a lot harder before we can trust our computers, apps, and devices.

I had an opportunity to spend some time with Rohit at a presentation we both did in London earlier this year and enjoyed exchanging many ideas with him.

Part of the challenge is that the world has become a lot more complicated. How many of us now accept the following as part of our normal routines?

  • I now tell my credit card company when I will be out of the country as part of my pre-trip routine.
  • I question requests to provide my SSN or street address – remember when some of us had them printed on our checks?
  • When signing up for a new website, I no longer automatically provide my “real” birthday. While this is a more secure posture, it is also somewhat annoying when this date rolls around on the calendar and those congratulatory notes come in.
  • I use MFA sign-ons more routinely now. But when I have an account that doesn’t use MFA, it gives me pause as to whether I even want to do business with that company.
  • I now accept the extra steps of using a VPN when roaming around on public Wi-Fi networks as part of my normal connection process.

Like Rohit, I have begun “to obsess about the trust landscape.” I think we all know what he means. He spoke about how to manage various risks, which means assessing the likelihood of particular digital compromises to our networks, our endpoints, and our lives. “It must become our new normal,” he said during his keynote.

But what does this really imply? That we can’t trust anyone or anything anymore? That is where the depression sets in. Some vendors have tried to make lemonade out of these lemons by promoting what they call a “zero trust” model. You might think this is a new term, but you would be wrong. It has been around since 2010, when then-Forrester analyst John Kindervag first developed the notion in a research paper. In that paper, he mentions how when Bugsy Siegel built Vegas, he built the town first, and then the roads. In IT, too often we go for the infrastructure first, before we understand the apps that will be running on it.

Here is a better idea: RSA CTO Zulfikar Ramzan advocates replacing the zero trust model with one that focuses on managing zero risk. That gets IT staffs to examine what is really important: identifying key IT assets, data, and third parties, and focusing their energies on securing those. He mentioned in this video interview that “if digital transformation is the rocket ship, then trust has to be the fuel for that rocket ship.”

Using this zero-risk model changes the conversation from building roads to looking more carefully at the business itself: what apps we will need to deliver business services, how proprietary data will be stored and protected, and who will have access to what, based on business needs. How many of you can certify with complete confidence that every user in your Active Directory is still a legitimate and current employee? I don’t see too many hands raised, which proves my point.

Tom Wolfe wrote in his 1987 novel, The Bonfire of the Vanities, about a concept called “the favor bank”: we all make deposits, as favors, in the hopes of making future withdrawals when we need them. Rohit used a variation in his speech that he called the “reputation bank,” where companies make deposits of trustworthy moments to balance those dark times when they need to make their own withdrawals. I like the concept, because it gets across that trust is a two-way street. I will give up my email address to you if I get some benefit in return. Those vendors that understand how the reputation bank works will earn interest and our trust; those that lie about their privacy policies will overdraw their accounts.

To conclude things, I turn to that great security authority, Billy Joel, who once said it best:

It took a lot for you to not lose your faith in this world
I can’t offer you proof
But you’re going to face a moment of truth …
It only is a matter of trust.

The Huawei telecom ban makes no sense

Color me confused about our 5G technology policy. Today I see this statement: “I want 5G, and even 6G, technology in the United States as soon as possible. It is far more powerful, faster, and smarter than the current standard. American companies must step up their efforts, or get left behind. I want the United States to win through competition, not by blocking out currently more advanced technologies.” That comes from a recent set of tweets from our president, who is nonetheless expected to sign an executive order banning Huawei equipment from domestic cellular carriers before next week. Not to be outdone, Congress is considering HR 4747, which would prevent government agencies from doing business with the company.

Huawei seems to be the latest target in the hunt for badly behaving tech companies, and it has acquired a lot of enemies. Last week our Secretary of State met with several European leaders, urging them not to purchase any equipment from Huawei when building out their 5G cellular networks. He told them that this gear would make it more difficult for American equipment to operate there.

The fear is that the Chinese government will embed spying capabilities in Huawei’s gear and interfere with communications. Chinese hacking attempts have risen dramatically over the past year, according to this new report from CrowdStrike. While the report didn’t identify Huawei as the source, it did find several hacking attempts aimed specifically at telecom vendors and their government customers.

The US isn’t alone in its fear of Huawei spying. Poland, Italy and Germany are all considering banning their gear from their newer cell networks. Last year, both South Korea and Australia enacted such a ban, and the UK began removing their equipment too. Huawei supplies Australian and UK 4G equipment and BT said last month that they will begin removing that stuff.  A recent news story in The Register stated that Huawei won’t be used to run any new British government networks, even though it will continue to be used in British landline infrastructure.

But is the Chinese government really using Huawei equipment to spy on us? Jason Perlow writes in ZDNet that the chances are low: first, there is no concrete proof, and second, it wouldn’t be in China’s best economic interest. Also, given that you can find Chinese semiconductors in just about everything these days, it would be nearly impossible to ban them effectively.

But there is another confounding reason that no one has mentioned, and that has to do with a law called CALEA, the Communications Assistance for Law Enforcement Act. It spells out requirements for telecom suppliers to provide access to their gear for government wiretaps and other law enforcement activities. So technically, not only is Huawei doing this, but all the other telecom vendors have to do so too. If you are with me so far, you see that Huawei is obligated to have this “backdoor” if it wants to do business in the USA, yet we are criticizing the company for having this very same backdoor! How this will play out in these bans is hard to predict.

A Huawei ban makes no sense. But it won’t stop government agencies from piling on at this stage.

CSOonline: How online polls are hacked and what you should do about it

The news in January about Michael Cohen’s indictments covers some interesting ground for IT managers and gives security teams something else to worry about: he allegedly paid a big data firm, Redfinch Solutions, to rig two online polls in then-candidate Donald Trump’s favor. To those of us who have worked with online polls and surveys, this comes as no surprise.

Researchers at RiskIQ found another survey-based scam, a complex series of steps that uses cloned YouTube identities to lure marks into taking surveys to redeem their “free” iPhones. Instead, the respondents get malware installed on their computers or phones. Security managers need to up their game and understand both the financial and reputational risks of rigged polls and the exploits that are delivered through them. Then they can improve their protective tools to keep hackers away from their networks and users. In this story for CSOonline, I talk about some of these issues and explain why businesses should use online polls and how to keep your networks safe from bad ones.

Privacy, transparency, and increasing digital trust

“There is a crisis of trust in American democracy.” So begins a new report from the Knight Commission on Trust, Media and Democracy, organized by the Aspen Institute. It lays blame on our political discourse, racial tensions, and technology that gives us all more access to more commentary and news. “In 2018, unwelcome facts are labeled as fake.”

Part of the problem with trust has to do with the ease with which cyber-criminals can ply their trade. Once relegated to a dark corner of the Internet, many criminals now operate in public view, selling various pieces of technology: ready-made phishing kits to seed infections, carders to collect credit card numbers, botnets and web stressers to deliver DDoS attacks, and other malware construction kits that require little to no technical expertise beyond clicking a few buttons on a web form. A new report from Check Point shows that anyone who is willing to pay can easily obtain all of these tools. We truly have witnessed the growth of the “malware-as-a-service” industry.

This week I was in London participating in a forum for the European press put on by RSA. I got a chance to interview numerous experts who have spent their careers examining cybercrime and understanding how to combat fraud. It was a somewhat sobering picture, to be sure. At the forum, RSA president Rohit Ghai spoke about how the largest facet of risk today is digital risk, and how businesses need to better integrate risk management and cybersecurity methods. “This is a team sport, and security, IT, operations and risk groups all need to work together,” he said. “Our goal is not just about protecting apps or data, but about protecting our trust assets. We trust strangers to share our homes and cars because tech brings us together and drives the sharing economy.” We need to replicate this trust system in the B2B world, as Airbnb and Lyft have done for consumer-based businesses.

Ghai agrees with the conclusions of the Knight report that trust is at an all-time low. We have gotten so distrustful of our digital lives that we now have a new acronym, LDL, for “let’s discuss live.” But we can’t turn back the clock to the analog era: we need trust to fuel our future economic growth. He mentioned that to be trustworthy, “an ethical company should be doing the right thing, even if no one is looking at them at the time.” I liked that idea: too often we hear about corporations that pollute our environment, deny any responsibility, or worse, cover up the details when they get caught.

Part of the challenge is that cybersecurity is really a business problem, not a failure of technology. This is because “breaches and intrusions will occur,” says Ghai. “We have to move beyond the shame of admitting a data intrusion and understand its business impact. Our goal should be maintaining cyber wellness, not trying to totally eradicate threats.” Taking better care of customers’ privacy is also good for business, as numerous reports (such as this one from RSA) have recently concluded. Almost half of the consumers surveyed believe there are ethical ways companies can use their data.

Another issue is that what we say and what we actually do about maintaining our digital privacy are often at odds. In a 2017 MIT privacy experiment, researchers found that student participants would quite readily give up personal data for very small incentives, such as a free pizza. This dichotomy is even seen among IT security pros. A recent survey by Yubico found that more than half of the IT managers who have been phished have still not changed their password behavior. If they don’t change to improve their own security, who will?

The same dichotomy applies to transparency: sadly, few companies are actually as transparent as they claim, either through willfully misleading the public (Facebook is tops in this regard) or by just doing a poor job of keeping their IT assets under appropriate controls (the City of Atlanta and Equifax are two prime case studies here).

Where do we go from here? Security expert Bruce Schneier says that trust is fragile, and transparency is essential to trust. The Knight report carries a series of recommendations for journalists, technology vendor managers, and ordinary citizens, and I hope we can implement many or all of them to make for a better mutual and trusted future. They include practicing radical transparency, having journalists disclose their information sources as a rule, and making social networks step up and take responsibility for protecting their users. All of us need to work together if we want to turn this around and increase trust.

Dealing with CEO Phishing Fraud

When we get emails from our CEO or other corporate officers, many of us don’t closely scrutinize their contents. Phishers count on this for their exploits. The messages often come around quitting time and carry a sense of urgency, so we will act before thinking through the consequences.

Here is an example of a series of emails between “the boss” (in reality, the phisher) and his subordinate that happened in November 2017. You can see the growing sense of urgency to make a funds transfer happen, which is the phisher’s stock in trade. According to FBI statistics, this type of fraud is now a $12 billion scam. And yes, the money was actually sent to this attacker.

KnowBe4, which sells phishing training services, categorizes the scam into two separate actions:

  1. First is the phishing attempt itself. It is usually called spear phishing, meaning that the attacker has studied the corporate organization chart and targeted specific individuals. The attacker has also examined who has fiduciary responsibility to perform the actual funds transfer, because at its heart this scam is all about the money that can be stolen from your business.
  2. Next is the social engineering. The attacker has to be convincing and act like the boss. Often, the targeted employee is tricked into divulging confidential information, such as bank accounts or passwords. Many times the attackers use social media sources to amplify their message and make it seem more legitimate.

The KnowBe4 post mentions several situations that are common with this type of fraud:

  1. Business working with a foreign supplier.
  2. Business receiving or initiating a wire transfer request.
  3. Business contacts receiving fraudulent correspondence.
  4. Executive and attorney impersonations.
  5. Confidential data theft.

A new blog post by Richard DeVere here provides some good suggestions on how to be more vigilant and skeptical with these emails. A short script after this list sketches how a couple of these checks could be automated.

  • Examine the tone and phrasing of the email. One time a very brusque CEO — who was known for this style — supposedly sent a very polite email. The recipient flagged it as a potential phish because of this difference.
  • Have shared authority on money transfers. Two heads are better than one.
  • As Reagan said, trust but verify. Ask your boss (perhaps by calling directly) whether this email really originated from him or her before acting on it, but remember that phone calls and texts can also be spoofed from your boss’ number. This kind of fraud is quite common, so take a moment to process what is being asked of you.
  • Report the scammer to the right authorities inside and outside your company.
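
Some of these checks can be automated before a human even weighs the tone. Below is a minimal Python sketch, assuming you can save a suspect message as a raw .eml file; the script name and the specific checks are my own illustration, not something from the DeVere post. It flags a Reply-To domain that differs from the From domain (a classic tell in CEO fraud) and any SPF, DKIM, or DMARC failures your mail gateway has recorded:

    # check_phish.py -- quick header checks on a suspect message (a sketch).
    import sys
    from email import policy
    from email.parser import BytesParser
    from email.utils import parseaddr

    def domain(header) -> str:
        """Return the lowercased domain of an address header."""
        return parseaddr(str(header))[1].rpartition("@")[2].lower()

    def check(path: str) -> None:
        with open(path, "rb") as f:
            msg = BytesParser(policy=policy.default).parse(f)

        from_dom = domain(msg.get("From", ""))
        reply_dom = domain(msg.get("Reply-To", ""))

        # Replies silently routed to another domain are a classic fraud tell.
        if reply_dom and reply_dom != from_dom:
            print(f"WARNING: From is @{from_dom} but replies go to @{reply_dom}")

        # Most mail gateways record SPF/DKIM/DMARC results in this header.
        auth = str(msg.get("Authentication-Results") or "").lower()
        for result in ("spf=fail", "dkim=fail", "dmarc=fail"):
            if result in auth:
                print(f"WARNING: sender authentication failed: {result}")

    if __name__ == "__main__":
        check(sys.argv[1])

Run it as “python check_phish.py suspect.eml”. It doesn’t replace the human checks above, but it catches the mechanical giveaways instantly.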

The bottom line: be wary and take a breath when you get one of these emails.

CSOonline: Building your forensic analysis toolset

A solid toolset is at the core of any successful digital forensics program, as I noted in an earlier article for CSOonline. Although every toolset is different depending on an organization’s needs, some categories should be in all forensics toolkits. In this roundup for CSOonline, I describe some of the more popular tools, many of which are free to download. I have partitioned them into five categories: overall analysis suites (such as the SANS workstation), disk imagers, live CDs, network analysis tools, and e-discovery and specialized tools for email and mobile analysis.

The dangers of DreamHost and GoDaddy hosting

If you host your website on GoDaddy, DreamHost, Bluehost, HostGator, OVH or iPage, this blog post is for you. Chances are your site could be vulnerable to a potential bug or has been purposely infected with something that you probably didn’t know about. Given that millions of websites are involved, this is a moderately big deal.

It used to be that finding a hosting provider was a matter of price and reliability. Now you have to check to see whether the vendor actually knows what it is doing. In the past couple of days, I have seen a story about GoDaddy’s web hosting, and then another post that covers the other hosting vendors.

Let’s take them one at a time. The GoDaddy issue has to do with its Real User Metrics module, which is used to track traffic to your site. In theory it is a good idea: who doesn’t like more metrics? However, researcher Igor Kromin, who wrote the post, found that the JavaScript module GoDaddy uses is so poorly written that it measurably slowed down his site’s performance. Before he published his findings, all GoDaddy hosting customers had these metrics enabled by default. Now GoDaddy has turned them off by default and is looking at future improvements. Score one for progress.

Why is this a big deal? Supply-chain attacks happen all the time by inserting small snippets of JavaScript code into your pages. It is hard enough to find their origins as it is, without having your hosting provider add additional scripts as part of its services. I wrote about this issue here.
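
If you want to see exactly what is running on your own pages, you can enumerate the script tags yourself. Here is a minimal Python sketch using only the standard library; the URL is a placeholder for your own site, and a thorough audit would also crawl every page and chase scripts that load other scripts:

    # list_scripts.py -- list external <script> sources on a page so you can
    # spot third-party JavaScript you didn't put there yourself (a sketch).
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class ScriptFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.sources = []

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src")
                if src:
                    self.sources.append(src)

    page = "https://example.com/"        # placeholder: use your own site
    page_source = urlopen(page).read().decode("utf-8", errors="replace")

    finder = ScriptFinder()
    finder.feed(page_source)

    my_host = urlparse(page).netloc
    for src in finder.sources:
        host = urlparse(src).netloc or my_host   # a relative src is same-host
        label = "third-party" if host != my_host else "first-party"
        print(f"{label:12} {src}")

Anything labeled third-party that you don’t recognize deserves a closer look, whether it came from your hosting provider or from a compromise.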

If you use GoDaddy hosting, you should go to your cPanel hosting portal, click on the small three dots at the top of the page, click “help us” and ensure you have opted out.

Okay, moving on to the second article, about the other hosting providers’ scripting vulnerabilities. Paulos Yibelo looked at several providers and found multiple issues that differed among them, involving cross-site scripting, cross-site request forgery, man-in-the-middle problems, potential account takeovers, and bypass attack vulnerabilities. The list is depressingly long, and Yibelo’s descriptions show each provider’s problems. “All of them are easily hacked,” he wrote. What was more instructive, though, were the responses he got from each hosting vendor. He also mentions that Bluehost terminated his account, presumably because they saw he was up to no good. “Good job, but a little too late,” he wrote.

Most of the providers were very responsive when reporters contacted them and said these issues have now been fixed. OVH hasn’t yet responded.

So the moral of the story? Don’t assume your provider knows everything, or even anything, about hosting your site, and be on the lookout for similar research. Find a smaller provider that can give you better customer service (I have been using EMWD.com for years and can’t recommend them highly enough). And if you don’t know what some of these scripting attacks are or how they work, head over to OWASP.org and educate yourself about the basics.

CSOonline: How to secure your WordPress site

If you run a WordPress blog, you need to get serious about keeping it as secure as possible. WordPress is a very attractive target for hackers, for several reasons that I’ll get to in a moment. To help you, I have put together my recommendations for the best ways to secure your site, and many of them won’t cost you much beyond your time to configure them properly. My concern for WordPress security isn’t general paranoia; my own website has been attacked on numerous occasions, including a series of DDoS attacks on Christmas day. I describe how to deploy various tools, such as WordFence, and you can read more on CSOonline.
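
One small check you can run yourself before diving into the full toolset: see whether your site is leaking its WordPress version, which makes an attacker’s reconnaissance easier. This is a minimal Python sketch with a placeholder URL, and passing both checks is nowhere near full hardening:

    # wp_check.py -- two quick WordPress information-leak checks (a sketch):
    # the "generator" meta tag and the stock readme.html file, both of which
    # advertise your version to anyone who asks.
    import re
    from urllib.error import HTTPError, URLError
    from urllib.request import urlopen

    site = "https://example.com"         # placeholder: use your own blog

    def fetch(url: str) -> str:
        try:
            return urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except (HTTPError, URLError):
            return ""

    home = fetch(site + "/")
    match = re.search(r'<meta name="generator" content="WordPress ([\d.]+)', home)
    if match:
        print(f"Generator tag leaks WordPress version {match.group(1)}")

    if "wordpress" in fetch(site + "/readme.html").lower():
        print("Stock readme.html is still reachable; consider removing it")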

Both real and fake Facebook privacy news

I hope you all had a nice break for the holidays and are back at work refreshed and ready to go. Certainly, last year wasn’t the best for Facebook and its disregard for its users’ privacy. But a post that I have lately seen come across my social news feed blames the company for something that isn’t even possible. In other words, it is a hoax. The message goes something like this:

Deadline tomorrow! Everything you’ve ever posted becomes public from tomorrow. Even messages that have been deleted or the photos not allowed. Channel 13 News talked about the change in Facebook’s privacy policy….

Snopes describes this phony alert here and says it has been circulating for years. It has gained new life, particularly as the issues surrounding Facebook privacy abuses have increased. So if you see this message from one of your Facebook friends, tell them it is a hoax and nip it in the bud. You’re welcome.

The phony privacy message could have been motivated by the fact that many of you are contemplating leaving, or at least going dark on, your social media accounts. Last month saw the departure of several well-known thought leaders from the social network, such as Walt Mossberg, and I am sure more will follow. When I wrote about this topic last year, I suggested that, at the very minimum, if you are concerned about your privacy you should delete the Facebook Messenger app from your phone and just use the web version.

But even if you leave the premises, it may not be enough to completely cleanse yourself of anything Facebook. That is the conclusion of a new research report from Privacy International (PI) that is sadly very true. The issue has to do with third-party apps that are constructed with Facebook’s Business Tools, and right now it seems only Android apps are at issue.

The problem has to do with the APIs that are part of these tools and how they are used by developers. One of the interfaces specifies a unique user ID value that is assigned to a particular phone or tablet. That ID comes from Google and is used to track what kinds of ads are served up to your phone. This ID is very useful to trackers, because it means that different Android apps using these Facebook tools all reference the same number. What does this mean for you? Unfortunately, it isn’t good news.

The PI report looked at a number of apps, including Dropbox, Shazam, TripAdvisor, and Yelp.

If you run multiple apps that have been developed with these Facebook tools, then with the right amount of scrutiny your habits can be tracked, and it is possible that you could be de-anonymized and identified by the apps you have installed on your phone. That is bad enough, but the PI researchers also found four additional disturbing things that make matters worse:

First, the tracking ID is created whether you have a Facebook account or not. So even if you have gone full Mossberg and deleted everything, you will still be tracked by Facebook’s computers. It is also created whether or not your phone is logged into your Facebook account (or into other Facebook-owned products, such as WhatsApp).

Second, the tracking ID is created regardless of what you have specified in your privacy settings for each of the third-party apps. The researchers found that the default setting by the Facebook developers for these apps was to automatically transfer data to Facebook whenever a phone’s user opens the app. I say “was” because Facebook added a “delay” feature to comply with the EU’s GDPR; an app developer has to rebuild their app with the latest version of the tools to employ this feature, however. The PI researchers found that 61% of the apps they tested automatically send data when they are opened.

Third, some of these third-party apps send a great deal of data to Facebook by design. For example, the Kayak flight search and pricing tool collects a great deal of information about your upcoming travels, because it is helping you search for the cheapest or most convenient flights. This data could be used to reconstruct the details of your movements, should a stalker or a criminal wish to target you.

When you put together the tracking ID with some of this collected data, you can find out a lot about who someone is and what they are doing. The PI researchers, for example, found one user who was running the following apps:

  • “Qibla Connect” (a Muslim prayer app),
  • “Period Tracker Clue,”
  • “Indeed” (a job search app), and
  • “My Talking Tom” (a children’s app).

This means the user could potentially be profiled as a Muslim mother who is looking for a new job. Thinking about this sends a chill up my spine, as it probably does yours. The PI report says, “Our findings also show how routinely and widely users’ Google ad ID is being shared with third parties like Facebook, making it a useful unique identifier that enables third parties to match and link data about an individual’s behavior.” (A short sketch after the last of these findings shows just how simple that matching is.)

Finally, the researchers also found that the opt-out methods don’t do anything: the apps continue to share data with Facebook no matter what you have done in your privacy settings, or even if you have explicitly sent opt-out messages to the app’s creators.
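
To make that matching concrete, here is a toy Python sketch, with invented data, of what a third party can do once every event it receives carries the same advertising ID. It illustrates only the linking concept, not Facebook’s actual pipeline:

    # A toy illustration of the PI report's point: because every app on a
    # phone reports the same Google advertising ID, records from different
    # apps can be trivially joined into one profile. All data is invented.
    from collections import defaultdict

    events = [
        {"ad_id": "38400000-8cf0-11bd", "app": "Qibla Connect",  "event": "app_open"},
        {"ad_id": "38400000-8cf0-11bd", "app": "Period Tracker", "event": "app_open"},
        {"ad_id": "38400000-8cf0-11bd", "app": "Indeed",         "event": "job_search"},
        {"ad_id": "99aa0000-1234-55ff", "app": "Shazam",         "event": "app_open"},
    ]

    profiles = defaultdict(set)
    for e in events:                     # the join key is the shared ad ID
        profiles[e["ad_id"]].add(e["app"])

    for ad_id, apps in profiles.items():
        print(f"{ad_id}: one device running {sorted(apps)}")

Resetting the advertising ID (see the steps below) breaks exactly this join, which is why PI recommends doing it regularly.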

Unfortunately, there are a lot of apps that exhibit this behavior: the researchers found that Facebook’s trackers are the second most widely deployed, after those of Google’s parent company Alphabet, among all free apps on the Google Play Store.

So what should you do if you own an Android device? PI has several suggestions:

  • Reset your advertising ID regularly by going to Settings > Google > Ads > Reset advertising ID.
  • Go to Settings > Google > Ads > Opt out of personalized advertising to limit the ads that leverage your personal data.
  • Update your apps to keep them current.
  • Regularly review the app permissions on your phone and make sure you haven’t granted anything you aren’t comfortable with.

Clearly, the real bad news about Facebook is stranger than fiction.