HPE Enterprise.nxt: How to protect hidden Windows services from attacks

The hijacking of legitimate but obscure Windows services is a tough exploit to detect. Here are two lesser-known Windows services that could be vulnerable to malware attacks. You might think you can tell the difference between benign and malicious Windows services, but some of these services are pretty obscure. Do you know what ASLR and BITS are? Exactly.

You can read my latest article for HPE here.

iBoss blog: A Review of the Notable Vulnerabilities of 2017

This past year has seen its usual collection of exploits, vulnerabilities, attacks and data leaks. But let’s take a look back and see what lessons we can learn. Above all, this year has been a watershed for major malware attacks. From the Locky, Petya, WannaCry and BadRabbit ransomware campaigns to the Mirai botnet, we haven’t had much time between attacks to bounce back, and the attacks are getting bigger, more intrusive and more targeted.

For this and other megatrends, I offer some suggestions for security managers as well. You can read more in my iBoss blog post this week.

Lessons learned from the Minitel era

Technologists tend to forget the lessons of the immediate past, thinking that new tech is always better and more advanced than those dusty modems of yesteryear. That is why a new book from MIT Press on Minitel is so instructive and so current, especially as we devolve from a net-neutral world in the weeks and years to come. Much as I am tempted to discuss net neutrality, let’s leave those issues aside and look at the history of Minitel and what we can learn from its era.

Minitel? That French videotext terminal thing from the 1980s and 1990s? Didn’t that die an ignominious death at the hands of the Internet? Yes, that is all true. But for its day it was ahead of its time, and ahead of today’s Internet in some respects too. You’ll see what I mean when you consider its content and micropayments, its network infrastructure, and its hybrid public/private ownership model. Let’s dive in.

Minitel was the first time anyone figured out how to build a third-party payments system, called Kiosk, that made it easier for content providers to get paid for their work; it laid the foundation for the Apple App Store and others of its ilk. It presaged the rise of net porn well before various Internet newsgroups and websites gained popularity, and what was remarkable was that people paid money for this content too.

It was also the first time a decentralized network could hook up a variety of public clients and servers of different types. Granted, the clients were 1200 bps terminals and the network was X.25, but this was being done before anyone had even thought of the Web.

And it was the first public/private tech partnership of any great size: millions of ordinary citizens had terminals (granted, they got them free of charge) well before AOL sent out its first CD and before the first private dot-coms were registered. The authors call this “private innovation decentralized to the edges of the network.” This is different from what the Internet did beginning in the mid-1990s, which was to privatize the network core. Before then, the Internet was still the province of the US government and had limited private access.

Minitel made possible a whole series of innovations well before their Internet equivalents caught on, sometimes decades earlier. The book describes many of them, including e-government access, ecommerce, online dating, online grocery ordering, emojis and online slang, electronic event ticketing, and electronic banking. When you realize that at its peak Minitel had 25,000 services running, a number the Web wouldn’t reach until 1995, it is a significant accomplishment.

Minitel wasn’t all rainbows and unicorns. Like AOL, it took a “walled garden” approach, though in some respects it was more open than today’s Internet, in ways that I will get to in a moment. It also had the issues that come with being controlled by a nationalized phone company.

Certainly, the all-IP Internet was a big improvement over Minitel. You didn’t have to provision those screwy and expensive X.25 circuits. You could send real graphics, not the cartoonish ones that videotext terminals used, which were more like ASCII art. Minitel was priced by the minute, because that is how the phone company knew how to bill. To be fair, the early days of the Internet had plenty of 1200 bps modem users who had to pay per call, or set up a separate phone line for their modems. At least now, with broadband networks that are thousands of times faster, we don’t have to deal with that.

One side note on network speeds: Minitel actually had two speeds, 1200 and 75 bps. Most of the time, the circuits were set up as 1200/75 down/up. You could send a signal to switch the speeds if you were sending more than you were receiving, but that had to happen under app control.

So what can we learn from Minitel going into the future? While most of us think of Minitel as a quaint historical curio that belongs next to the Instamatic camera and the Watt steam engine, it was far ahead of its time. Minitel was also a cash infusion that enabled France to modernize and digitize its aging phone infrastructure. It was the first nationalized online environment, available to everyone in France. It proved that a state subsidy could foster innovation, as long as that subsidy was applied surgically and with care.  As the authors state, “sometimes complete control of network infrastructure by the private sector stifles rather than supports creativity and innovation.”

When we compare Minitel to today’s online world, we can see that the concept of open systems is a multi-dimensional continuum, and that it is hard to judge whether Minitel or the Internet is more open. As we begin migrating from a neutral Internet infrastructure to one that will be controlled by the content providers, we should keep that in mind. The companies that control the content have different motivations from the users who consume it. I do think we will see a vastly different Internet in 30 years’ time, just as the Internet of 1987 is very different from the one we all use today.

HPE blog: The changing perception of open source in enterprise IT

Once upon a time, when someone in IT wanted to make use of open source software, it was usually an off-the-books project that didn’t require much in the way of management buy-in. Costs were minimal, projects often were smaller with a couple of people in a single department, and it was easy to grasp what a particular open source project provided. Back then, IT primarily used open source to save money and “do more with less,” letting the department forgo the cost of commercial software.

Times have certainly changed. Yes, software costs are still a factor, and while it is generally true that open source can save money, that is no longer the only reason to adopt it. While application deployment costs have risen, the direct software cost is a small part of the overall development budget, often dwarfed by infrastructure, scalability, and reliability measures.

As a result, today’s open source efforts aren’t anything like those in earlier days.

You can read the full story on HPE’s blog here.

Understanding how to become an effective digital change agent

As technologists, we tend to get caught up in the computer side of things when it comes to trying to get stuff done in our organizations. Too often we forget that the real drivers of change are the people behind the screens. In new research that my colleague Brian Solis has just published, he documents exactly how enterprise digital transformation happens, and talks directly to some of those “change agents” that he has known for decades as an analyst covering the IT scene. His manifesto is available now for download, and I strongly suggest reading it. (Other than registering your email address, it is free of charge.)

In most organizations, “these digital transformation efforts often take place in isolated pockets, sometimes with little coordination and collaboration across the enterprise,” he writes. Often it is a solitary individual who drives the change, introducing particular digital technologies and methods at a grassroots level, and then fails to take them further across the enterprise. His manifesto puts together a solid ten-point plan (shown here) for being more effective at bringing this about at your company. It includes embracing yourself as a catalyst, obtaining leadership support, creating a roadmap, and democratizing idea creation. Some of these are obvious, some aren’t.

He says that “digital transformation is more of a people problem than a business problem. Trust is the least measurable but most important factor to build.” Without this trust, your colleagues can sabotage or block your efforts. One of the biggest obstacles to building trust is managing your own ego as a change agent. When you display too much ego, you make the change all about you rather than about the benefit to your company. The same is true of managing your colleagues’ egos.

On the other side of this is managing your own doubts about what you are trying to do. “Although it may seem counterintuitive to manage detractors, change agents ought to listen closely to their feedback. It is better to let them voice their concerns than to let them detract in secret.” Indeed, listening is often overlooked when advocating change. The better listener you are, the more you’ll get done. 

Solis mentions that when a change agent has the full buy-in of the executive suite, real change becomes possible and turns from a suggestion to a corporate mandate.

“Digital Darwinism is increasingly becoming either a threat or an opportunity based on how organizations react to change,” he says in his report. Digital change agents can become the next generation of leaders and be instrumental in helping their companies compete more effectively in this digital economy.

iBoss blog: What is HTTP Strict Transport Security

Earlier this summer, I wrote about how the world of SSL certificates is changing as they become easier to obtain and more frequently used. They are back in the news with Google’s decision to add 45 top-level domains to a special online document called the HTTP Strict Transport Security (HSTS) preload list. The action adds all of Google’s top-level domains, including .google and .eat, so that all hosts using those domain suffixes will be secure by default. Google has led by example in this arena, and today Facebook, Twitter, PayPal and many other web properties support the HSTS effort.

The HSTS preload list consists of hosts for which browsers automatically enforce secure HTTP connections. If a user types in a URL with just HTTP, it is first changed to HTTPS before the request is sent. The idea is to prevent man-in-the-middle, cookie hijacking and scripting attacks that intercept web content, as well as to prevent malformed certificates from gaining access to the web traffic.
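For reference, HSTS itself is just a single response header that a site sends over HTTPS, telling the browser to refuse plain HTTP for a set period; the preload directive signals that the site wants to be included in the browsers’ built-in lists. A typical policy looks like this:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

The max-age value (here one year, expressed in seconds) is up to you, though the preload list requires a suitably long one.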

The preload list mitigates a very narrowly defined attack that could happen if someone were to intercept your traffic at the very first connection to your website, before the browser has ever seen your HSTS header. It isn’t a likely scenario, but that is why the list exists. “Not having HSTS is like putting a nice big padlock on the front door of your website, but accidentally leaving a window unlocked. There’s still a way to get in, you just have to be a little more sophisticated to find it,” says Patrick Nohe of the SSL Store in a recent blog post.

This means that even if you thought you were covered by setting up a permanent 301 redirect from HTTP to HTTPS, you aren’t completely protected: that very first plain-HTTP request can still be intercepted.

The preload site maintains a chart showing which browser versions support HSTS. As you might imagine, some of the older browsers, such as Safari 5.1 and earlier IE versions, don’t support it at all.

So, what should you do to protect your own websites? First, if you already understand SSL certificates, all you might need is a quick lesson in how HSTS is implemented, and OWASP has a nice short “cheat sheet” here. If you haven’t gotten started with any SSL certs, now is the time to dive into that process and obtain a valid EV SSL cert. If you haven’t catalogued all your subdomains, this is also a good time to go off and do that.

Next, start the configuration process on your web servers: locate the specific files (like the .htaccess file for the Apache web server) that you will need to update with the HSTS information; a rough sketch for Apache follows below. If you need more complete instructions, GlobalSign has a nice blog entry with a detailed checklist of items and specific instructions for popular web servers.
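Here is that sketch (assuming Apache with mod_rewrite and mod_headers enabled; the max-age value is a placeholder, so consult the GlobalSign checklist for the specifics of your own server):

    # Redirect any plain-HTTP request to HTTPS (the 301 redirect discussed above)
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    # Send the HSTS header on secure responses only
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" env=HTTPS

Note that the redirect alone provides exactly the partial protection described earlier; it is the header that tells browsers never to try plain HTTP again.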

After you have reviewed these documents, add your sites to the preload site. Finally, if you need a more in-depth discussion, Troy Hunt has a post that goes into plenty of specifics. He also warns you about when to implement the preload feature: when you are absolutely, positively sure that you have rooted out all of the plain HTTP requests across your website and never plan to go back to those innocent days.

IBM blog: The History of Connected Car Research in Israel

Israel is becoming a major center for connected car research. Fueled by government-backed military research, test labs established by automakers and numerous connected car startups, the country has attracted top talent from around the world and provided innovative technologies in automotive cybersecurity.

In my post for IBM’s SecurityIntelligence blog, I talk about the rise of this research after meeting some of the principals at a conference in Israel earlier this month.

Why you should be afraid of phishing attacks

I have known Dave Piscitello for several decades; he and I served together with some of the original inventors of the Internet, and he has worked at ICANN for many years. So it is interesting that he and I are both looking at spam these days with a careful eye.

He recently posted a column saying “It sounds trivial but spam is one of the most important threats to manage these days.” He calls spam the security threat you easily forget, and I would agree with him. Why? Because spam brings all sorts of pain with it, mostly in the form of phishing attacks and other network compromises. Think of it as the gateway drug for criminals to infect your company with malware. A report last December from PhishMe found that 91% of cyberattacks start with a phish. The FBI says these scams have resulted in $5.3 billion in financial losses since October 2013.

We tend to forget about spam these days because Google and Microsoft have done a decent job hiding it from immediate view in our inboxes. And while that is generally a good thing, all it takes is a single email that you mistakenly click on, and you have brought an attack inside your organization. It is easy to see why we make these mistakes: phishers spend a lot of time trying to fool us, using the same fonts and page layouts to mimic the real sites (such as your bank’s) so that you will log in to their page and hand your password over to them.

Phishing has gotten more sophisticated, just like other malware attacks. There are now whaling attacks that look like messages coming from the CFO or HR managers, trying to convince you to move money, and spear phishing, where a criminal targets a specific person or corporation to trick the recipient into acting on the message. Attackers try to harvest a user’s credentials and use them for further exploits, attach phony SSL certificates to their domains to make them seem more legitimate, use smishing-based social engineering methods to compromise your cell phone, and create phony domains that are typographically similar to a real business. And there are automated phishing construction kits that can be used by anyone with minimal knowledge to create a brand-new exploit. All of these methods show that phishing is certainly on the rise, and becoming more of an issue for everyone.
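To illustrate the point about typographically similar domains, here is a toy sketch of how a defender might flag look-alikes with a simple edit-distance check. The watch list and the threshold are made up for the example; real tools use much richer signals:

    # Sketch: flag domains suspiciously similar to brands we care about.
    # The watch list and the 0.8 threshold are illustrative, not tuned.
    from difflib import SequenceMatcher

    WATCH_LIST = ["paypal.com", "microsoft.com", "mybank.com"]  # hypothetical

    def looks_like_typosquat(domain: str, threshold: float = 0.8) -> bool:
        """Return True if domain nearly matches a watched brand without being it."""
        for brand in WATCH_LIST:
            similarity = SequenceMatcher(None, domain.lower(), brand).ratio()
            if domain.lower() != brand and similarity >= threshold:
                return True
        return False

    # A few typographically similar fakes, of the kind phishers register:
    for candidate in ["paypa1.com", "rnicrosoft.com", "mybank.com", "example.org"]:
        print(candidate, "->", "suspicious" if looks_like_typosquat(candidate) else "ok")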

Yes, organizations can try to prevent phishing attacks through a series of defenses, including filtering their email, training their users to spot bogus messages, and using more up-to-date browsers that have better detection mechanisms, among other tools. But these aren’t as effective as they could be if users had more information about each message as they go through their inboxes.

There is a new product that does exactly that, called Inky Phish Fence. They asked me to evaluate it and write about it. I think it is worth your time. It displays warning messages as you scroll through your emails, as shown here.

There are both free and paid versions of Phish Fence. The free versions work with Outlook.com, Hotmail and Gmail accounts and have add-ins available from both the Chrome Web Store and Microsoft AppSource. These versions require the user to launch the add-in proactively to analyze each message, by clicking on the Inky icon above the active message area. Once they do, Phish Fence instantly analyzes the email and displays the results in a pane within the message. The majority of the analysis happens directly in Outlook or Gmail, so Inky’s servers don’t need to see the raw email, which preserves the user’s privacy.

The paid versions analyze every incoming mail automatically via a server process. Inky Phish Fence can be configured to quarantine malicious mail and put warnings directly in the bodies of suspicious mail. This means users don’t have to take any action to get the warnings. In this configuration, Outlook users can get some additional info by using the add-in, but all the essential information is just indicated inline with each email message.

I produced a short video screencast that shows the differences between the two versions and how Phish Fence works. You can also download a white paper that I wrote for Inky about the history and dangers of phishing and where their solution fits in. Check out Phish Fence and see if it helps you become more vigilant about your emails.

Why Your Survey Won’t See the Light of the Media Day

I wrote this piece with Greg Matusky, the head of the Gregory FCA agency.

As a marketer at a security firm, you know that surveys can serve as high-impact marketing tools, whether shared with clients, used to power top-of-the-funnel lead-gen campaigns, punched up into sales literature, incorporated into white papers, or turned into great content for any number of channels.

But when it comes to gaining media attention for your survey, well, that can be a struggle. The media are inundated with corporate-funded surveys and often turn a jaundiced eye toward them precisely because of their inbred biases.

Gaining exposure in the media, or having the results “go viral” on social media, requires you to create surveys that deliver results that withstand media scrutiny. These surveys also must meet the definition of what is new, what is newsworthy, and what is interesting to an audience eager to better understand the changing world of cybersecurity. Above all, you need to put away your marketer’s hat and assume a reporter’s perspective in order to create results that are welcomed, not ignored, by the media.

If you would rather listen than read, check out this podcast episode that Paul Gillin and I did about surveys, from our FIR B2B series.

Here’s what you need to know.

Man Bites Dog. Findings should be unexpected, counter-intuitive, unusual, or all three.

Having a survey that repeats common wisdom is a sure way to get reporters to instantly hit the delete key.

This Barracuda survey found that 74 percent of respondents stated that security concerns restrict their organization’s migration to the public cloud and have prevented widespread cloud adoption. So tell me something new! The results might have been news back in 2000, but not now.  A great survey breaks new ground. It adds to the common knowledge and doesn’t just repeat it. Push your organization to formulate questions that produce the unexpected, counter-intuitive findings that media love.

Bigger is Better!

Sample sizes need to be big enough to impress – and to be meaningful. A sample of a few hundred participants based on some non-random selection, such as people filling out a SurveyMonkey form, isn’t going to cut it. You can’t fool the media. They want statistical validity and the credibility that comes from large sample sizes.
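To put rough numbers on that intuition, the worst-case margin of error for a simple random sample at 95% confidence is about 1.96 * sqrt(p(1-p)/n). Here is a quick back-of-the-envelope calculation (it assumes truly random sampling, which self-selected web panels don’t deliver, so treat these as best-case figures):

    # Back-of-the-envelope margin of error at 95% confidence for a proportion.
    # Assumes simple random sampling, which vendor panels rarely achieve.
    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """Worst-case (p = 0.5) margin of error for a sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (200, 500, 1050, 5000):
        print(f"n={n:>5}: +/- {margin_of_error(n) * 100:.1f} points")

A few hundred respondents leaves a swing of roughly seven points either way; the 5,000-company sample mentioned below gets that down under two.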

Want a prime example? Consider the Kaspersky Lab and B2B International release of a survey that drew on 5,000 companies of all sizes from 30 countries. Now that carries heft, and indeed the results were cited in several places, including the finding that the average cost of a data breach for enterprise businesses in North America is $1.3M. Another survey, from Bitdefender, interviewed 1,050 IT professionals in several countries to find out about their cloud security purchase decisions. Both of these surveys are keepers.

Compare those surveys to a BeyondTrust study that polled nearly 500 IT professionals and concluded there are “5 Deadly Sins” within organizations that ultimately increase the risk of a data breach. Yes, that will be conclusive – not. You are cherry-picking the results here for sure.

But sample size isn’t enough. Take, for instance, a recent survey conducted by One Identity. It asked 900 IT security professionals for their thoughts, which seems like a promising sample size. But the results talk about inadequate IT processes around user access by disgruntled former employees and other nefarious actors, providing a widespread opportunity to steal usernames and passwords and risking the infiltration of an entire IT network. That brings us to our next point.

Blind them with science!

Make sure you ask the right evidence-based questions. Many surveys focus on “soft” assessments, such as “Do you believe your cybersecurity is better/worse this year when compared to last year?” Can anyone really answer that question with hard facts? Probably not. To win media coverage, show the reporters the evidence behind the questions, or ask for specific information that can be based on more than just a “feeling.” As an example of what not to do: “Most organizations are worried that the technical skills gap will leave them exposed to security vulnerabilities,” which is from a Tripwire survey.

Here is another result from that same Tripwire survey that doesn’t really have any solid data behind it: “Seventy-nine percent believe the need for technical skills among security staff has increased over the past two years.” Where did they get their beliefs from?

And then there is this survey from ABI Research, which finds that 40% of respondents believe that data security is the leading barrier to adopting innovative technologies. Again, how did the participants rank their beliefs and come up with this conclusion? This survey says nothing.

Consider the source of the discontent.

Sometimes having surveys come from surprising places, such as academic researchers, is a sexy way to interest the media. Third parties make the findings more newsworthy and citable. Here is a report, done for the US Naval Academy, about the relative security of swiping patterns versus a six-digit PIN code. The researchers surveyed more than a thousand people to find out that “shoulder surfers” (busybodies who look over our shoulders in crowded places) can remember swipe patterns better than numeric PINs. It provides an unexpected result, too. Could your organization team up with a similarly credible third party to tell its story?

The best surveys use data that isn’t easily available.

Data such as server logs or actual threat data that shows particular trends is useful and notable. Many security vendors now report on data from their own networks, using monitoring tools that track what is actually being observed “out in the wild.” There is no belief system required: this is cold, hard data. The king of these kinds of surveys is the Verizon Data Breach Investigations Report, which has been coming out for the past decade. This report examines actual attacks and doesn’t ask for anyone’s opinions or feelings. It is encyclopedic, comprehensive, thoughtful, and analytical. Because it has been around for so long, the analysts can pull together trends from its historical records. And, at least until Verizon was itself breached, the data came from a solid brand too.

As you can see, there are some surveys that are worthwhile. The best ones take time and cost money to pull off properly. But they are worth it in terms of great media coverage.

SecurityIntelligence blog: The history of ATM-based malware

I haven’t used a bank ATM for years, thanks to the fact that I usually don’t carry cash (and when I need it, my lovely wife normally has some handy). I still remember one time in Canada when I stuck my card into one of the cash machines and was amazed that Canadian money came out. I marveled at how the machine “knew” what I needed, until I realized that it was only loaded with that currency.

Well, duh. Many of you might not realize that underneath that banking apparatus is a computer with the normal assortment of peripherals and devices that can be found on your desktop. The criminals certainly have figured this out, and have gotten better at targeting ATMs with all sorts of techniques.

As recently as three years ago, most ATM attacks were on the physical equipment itself: either placing skimming devices over the card-reading slot to capture your debit card data, or forcing entry into the innards of the ATM and planting special devices inside the box. Those days are just a fond memory now, as the bad guys have gotten better at defeating various security mechanisms.

For many years, almost all of the world’s ATMs ran on Windows XP. Banks have been upgrading, but there are still a lot of XP machines out there and you can bet that the criminals know exactly which ones are where.

But there is a lot happening in new ATM exploits, and my post for IBM’s Security Intelligence blog on the history of ATM malware talks about these developments. In fact, ATM malware is now just as sophisticated and sneaky as the kind that infects your average Windows PC, and its authors are getting better at emptying the machines’ cash drawers. For example, malware authors use various methods to hide their code, making it harder to find with defensive software tools. Or they take a page from the “fileless” malware playbook, whereby the malware uses legitimate OS code so that it looks benign.

There is also a rise in network-based attacks, which exploit lax banking network topologies (segmentation seems to be a new concept for many banks) or rely on insiders who either were willing participants or had compromised accounts. Some of these network-based attacks are quite clever: a hacker can command a specific ATM unit to reboot, thereby gain control of the machine, and have it spit out cash to an accomplice waiting at that machine.

Sadly, there are no signs of this changing anytime soon and ATM malware has certainly become mainstream.