Interest in multi-factor authentication (MFA) has risen in the past few years, spurred by the increasing frequency and severity of data breaches and destructive attacks. The Covid-19 pandemic brought a further proliferation of ransomware actors. Recently, MFA has received a boost from various supporters, including Google, the US federal government, GitHub and Microsoft. When evaluating the various MFA products and technologies on the market today, it's important to understand the tradeoffs in security, scalability and usability inherent in each option. It can also be helpful to understand your available choices in the context of how MFA has developed over time.
In this ebook, which I co-authored with Evan Krueger, the engineering manager of Token, we track the evolution of MFA and the work of the FIDO Alliance to bring the industry together around new authentication standards. We also offer suggestions on how to choose the right MFA technology: one that you carry with you, that understands your biometrics, and that can be tied to your identity without any operator intervention. Ransomware and data theft are only increasing in severity. It's time for the defenders to up their game as well.
Domain names lie at the heart of a business' online presence. They control how a company's web and other resources are identified to the world and reinforce the numerous brands and trademarks of a business. Domains represent a combination of virtual storefronts and billboards that promote the brand and identify a source of trusted information about the business. The right domain name makes it easier for online customers to find and purchase a business' products and services; domains also help protect a company's intellectual property and complement its offline efforts.
Companies typically register their internet domain names to support new brands, product launches, marketing campaigns, corporate acquisitions and restructurings. The challenge for many corporations is managing all of these domains. And while the attention is focused on some of the world's largest corporations, such as Coca-Cola and Unilever, which are reported to own thousands of domains, even smaller businesses can have large domain name portfolios.
The biggest cyber threat isn't sitting on your desk: it is in your pocket or purse. We mean, of course, your smartphone. Our phones have become the prime hacking target, due to a combination of circumstances, some under our control and some not. These mobile malware efforts aren't new: Sophos has been tracking them for more than a decade (see this timeline from 2016). There are numerous examples of attacks, including fake anti-virus, botnets, and hidden or misleading mobile apps. If you want the quick version, there is this blog post I wrote for Network Solutions, which includes several practical suggestions on how you can improve your mobile device security.
You can also download my ebook that goes into more specific details about these various approaches to mobile device security.
In this white paper sponsored by the security vendor Sixgill, I explain why the dark web is such a critical part of the cybercrime landscape, and how Sixgill's product can provide cybersecurity teams with clear visibility into their company's threat landscape, along with contextual and actionable recommendations for remediation. I cover the following topics:
How the dark web has evolved into a sophisticated environment well suited to the needs of cybercriminals.
What steps these criminals take in the hopes of staying hidden from cybersecurity teams.
How Sixgill uses information from the underground to generate critical threat intelligence – without inadvertently tipping cybercriminals off to the fact that an investigation is underway.
Why Sixgill’s rich data lake, composed of the broadest collection of exclusive deep and dark web sources, enables us to detect indicators of compromise (IOCs) before conventional, telemetry-based cyberthreat intelligence solutions can do so.
Which factors businesses and organizations need to consider when choosing a cyber threat intelligence solution.
The nature of anti-virus software has radically changed since the first pieces of malware invaded the PC world back in the 1980s. As the world has become more connected and more mobile, the criminals behind malware have become more sophisticated and gotten better at targeting their victims with various ploys. This guide will take you through this historical context before setting out the reasons why it is time to replace AV with newer security controls that offer stronger protection delivered at a lower cost and with less of a demand for skilled security operations staff to manage and deploy. In this white paper I co-wrote for Endgame Inc., I'll show you what is happening with malware development, why you should switch to a more modern endpoint protection platform (EPP) to protect your network, and how to make that switch successfully.
If you are looking for a comprehensive identity and access management (IAM) tool that can cover just about any authentication situation and provide ironclad security for your enterprise, you should consider HID Global’s ActivID product line.
Even if you are an IAM specialist, it will take days and probably weeks of effort to get the full constellation of features set up properly and tested for your particular circumstances. There is good news, though: you would be hard pressed to find an authentication situation that it doesn't handle. It has a wide range of tools that can lock down your network, covers a variety of multifactor authentication methods and token form factors, and provides single sign-on (SSO) application protection.
If you are rolling out MFA protection as part of a larger effort to secure your users and logins, then the case for using HID's product becomes very compelling.
I was hired to take a closer look at their product earlier this year, and came away impressed with its thoroughness and comprehensive protective features. You can download my report here and learn more about this tool and what it can do.
If you run the IT security for your organization, you probably are feeling two things these days. First, you might be familiar with the term “box fatigue,” meaning that you have become tired of purchasing separate products for detecting intrusions, running firewalls, and screening endpoints for malware infections. Secondly, you are probably more paranoid too, as the number of data breaches continues unabated, despite all these disparate tools to try to keep attackers at bay.
I spent some time last month with the folks behind the Tachyon endpoint management product. The vendor is 1E, which isn’t a name that you often see in the press. They are based in London with a NYC office, and have several large American corporations as customers. While they paid me to consult with them, I came away from my contact with their product genuinely impressed with their approach, which I will try to describe here.
A lot of infosec products try to push the metaphor of searching for a needle (such as malware) in a haystack (your network). That notion is somewhat outdated, especially as malware authors are getting better at hiding their infections in plain sight, reusing common code that is part of the Windows OS or chaining together what seems like innocuous routines into a very destructive package. These blended threats, as they are known, are very hard to detect, and often live inside your network for days or even months, eluding most security scanners. This is one of the reasons why the number of breaches continues to make news.
Tachyon doesn't try to find that needle; instead, it recognizes that first you need to look for anything that doesn't appear to be a piece of hay. That is an important distinction. In the memorable words of Donald Rumsfeld, there are unknown unknowns that you can't necessarily anticipate. He was talking about the fog of war, which is a good analogy to tracking down malware.
The idea behind Tachyon is to help you discover all sorts of ad hoc and serendipitous things across your collection of computers and networks that you may not even have known required fixing. Often, issues that start out as security problems turn out to be general IT operations matters by the time they need to be fixed. Tachyon can help bridge that gap.
Today’s enterprise has an increasingly more complex infrastructure. As companies move to more virtual and cloud-based servers and more agile development, there are more moving parts that can be very brittle. Some cloud-based businesses have hundreds of thousands of servers running: if just a small fraction of a percent of that gear has a bug, it becomes almost impossible to ferret out and fix. This post on LinkedIn’s engineering blog is a good case in point. “Any service that is live 24/7 is in a state of change 24/7, and with change comes failures, escalations, and maybe even sleepless nights spent firefighting.” And that is just dealing with production systems, rather than any deliberate infections.
Unlike more narrowly focused endpoint security products, Tachyon operates in a wider arena, responding to many different events across the entire spectrum of IT operations, not just those related to your security posture. Does it matter whether you have been infected with malware or have a problem because of an honest mistake someone made setting up their machine? Not really: your environment isn't up to par in either situation.
So how does Tachyon do this? It is actually quite simple to explain, and let me show you their home screen:
Does that query box at the top remind you of something? Think about Tachyon as what Google was trying to do back in the late 1990s. Back then, few people understood search engines. But we quickly figured out that the simple query interface was more than an affectation once we got some real utility out of those queries. That is where we are today with Tachyon: think of it as the search tool for finding out the health of your network. You can ask it a question, and it will tell you what is happening.
Many security products require specialized operators who need training to navigate their numerous menus and interpret their results. Tachyon instead offers a question-and-answer rubric that almost anyone, even a line manager, can use to figure out what is ailing your network.
But having a plain-Jane home page is just one element of the product. The second important difference with Tachyon is how it automates finding and updating that peculiar piece of hay in the stack. I won't get into the details here, and Tachyon isn't the only tool in the box that has automation. But while there are many products that claim to be able to automate routine IT functions, they still require a lot of manual intervention. Tachyon takes its automation seriously, and puts in place the appropriate infrastructure so it can automate the non-routine as well, making it easier for IT staff to do more with fewer resources. Given the reduced headcounts in IT, this couldn't come at a better time.
If you would like to learn more about Tachyon and read the full review that I wrote about the product, download the PDF here and you’ll see why I think highly of it. And here is a short video about my thoughts on the product.
Now I realize that having 1E as a client could bias my thinking. But I think they are on to something worthwhile here. If you are looking for a way to respond to and resolve network and endpoint problems at scale, they deserve a closer look.
When it comes to building online applications, you can build them with old tools and attitudes or with new methods that are purpose-built for solving today’s problems and infrastructures. Back in the days when mainframes still walked the earth, setting up a series of online applications used some very primitive tools. And while we have more integrated development environments that embrace SaaS apps running in the cloud, it is more of a half-hearted acceptance. Few tools really have what it takes for handling and automating online apps.
Today's IT environments are in a constant state of flux and moving at an unprecedented velocity. The tools used to manage these environments weren't designed for this level of complexity, nor for such rapid changes in resources. The modern data center requires juggling numerous open source repositories, handling multiple cloud providers, rapidly scaling resources up and down, orchestrating changes, and populating builds across multiple servers and services.
And matters are only going to become more complex. More non-digital businesses are moving into the cloud, creating new applications that make use of mobile devices that tie them closer to their customers, suppliers and partners. Digital-first vendors are adding features and integrating their websites with a variety of third parties that both increase their security risks and complicate their applications flow and logic. The old days of manual labor for handling these situations are looking more than ever like the days when we last made buggy whips.
Typical use cases
Salt was initially created to handle remote execution across complex application development environments, allowing its users to execute commands across thousands of servers concurrently and automatically. But today we need more from our toolsets than just the ability to run code remotely. Since it began in 2012, Salt has expanded its role to thrive in a mixed open and closed source environment that spans cloud and on-premises infrastructures. Here are some typical scenarios for its use:
A developer needs to schedule tasks that run in a particular sequence, waiting for a dependent server to reach a particular state before it can be launched. While this can be done manually, it can be tedious and error-prone and begs for more automated methods.
An IT manager needs to install a particular set of updates and patches across their environment. However, these must be done in a certain order, and only when one is successfully installed can the next step be initiated. To add to this complexity, the IT department manages a mixed collection of Windows, Mac and Linux machines that carry particular pieces of their applications infrastructure. Again, this could be done manually, but not in a reasonable time when these patches have to be applied to a thousand different servers (see the sketch after this list).
An application development manager needs to deliver the latest build of their software stack to their production environment, while ensuring that the code is secure and solid. Manual methods are inadequate to handle the velocity of coding changes and applications provisioning in any timely fashion.
An infrastructure engineer needs to set up a multi-tiered web and database application that will require a combination of servers, networks, storage and security devices. The complete collection spans multiple VMs, Docker containers and physical servers, all of which have separate and complex configurations where one misstep could mean a large amount of downtime and debugging.
A new security exploit is discovered that has massive implications across a variety of OS’s and system configurations. Security researchers recommend wholesale updates to be done as quickly as possible, to avoid any potential intrusions by hackers. Using “sneaker-net” or running from server stack to stack will take weeks to accomplish, not counting the time needed to verify the changes are made correctly.
An engineer wants to automatically enable the auto-scaling features of their cloud provider to match the resources needed as demand rises and falls. While the major cloud vendors offer the ability to spin up and down VMs as needed, more coordination is needed to install the right series of application servers on the new VMs and to balance the overall loads appropriately. This is nearly impossible to accomplish manually.
An enterprise wants to migrate its entire cloud infrastructure from AWS to Azure, which involves moving hundreds of virtual servers in a particular order and under certain specifications for each VM. Doing this manually would involve weeks of work; automation is needed to help with the migration.
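To give a flavor of the patch-rollout scenario above, here is a minimal sketch using Salt's Python client API (salt.client.LocalClient), run from the Salt master. It assumes minions are already connected and identifiable by OS grains; the target patterns and package name are hypothetical:

```python
# Minimal sketch: concurrent remote execution across a fleet with Salt's
# LocalClient API. Assumes a running Salt master with connected minions;
# the grain targets and package name below are hypothetical.
import salt.client

local = salt.client.LocalClient()

# First, which minions are alive right now?
alive = local.cmd('*', 'test.ping')
print(f'{len(alive)} minions responded')

# Fan the same patch out to every Ubuntu box in a single call; Salt
# delivers the command to all matching minions concurrently.
results = local.cmd('os:Ubuntu', 'pkg.install', ['openssl'],
                    tgt_type='grain')
failed = [minion for minion, ret in results.items() if not ret]
print(f'{len(results) - len(failed)} patched, {len(failed)} failed')
```

A single cmd() call fanning out to every matching minion is what makes a thousand-server patch job tractable in hours rather than weeks.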
Salt’s key features
In each of these cases, the old-school manual methods are inadequate for reasons of time, accuracy, security, or just the sheer effort involved in coordinating expensive and highly skilled IT staffers. That is where Salt comes into play. Here are some of its key features.
Salt's event-driven automation tools make it much easier to perform these tasks programmatically, without a lot of manual operator intervention.
Salt also understands orchestration and how the sequencing of various steps has to occur. It can handle the necessary conditional logic that controls the various configuration and installation steps.
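As a sketch of how that conditional logic might look programmatically, here is a hedged example, again using the LocalClient API with hypothetical targets, service name and state name, that holds back a web-tier deployment until the database tier reports ready. (In practice you would more likely express this with Salt's orchestrate runner, but the sequencing logic is the same.)

```python
# Sketch: conditional sequencing with Salt's LocalClient API.
# The targets ('db*', 'web*'), service name and state name are hypothetical.
import salt.client

local = salt.client.LocalClient()

# Gate: is PostgreSQL running on every database minion?
db_up = local.cmd('db*', 'service.status', ['postgresql'])

if db_up and all(db_up.values()):
    # Only now apply the (hypothetical) 'webserver' state to the web tier.
    print(local.cmd('web*', 'state.apply', ['webserver']))
else:
    print('Database tier not ready; deferring web tier deployment')
```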
It also contains cloud controls that can manage public, private and hybrid clouds. It can abstract the infrastructure layer and spin up VMs under certain conditions and with certain configurations. This makes moving from one cloud provider to another easier and less error-prone.
Salt comes with sensors that react under certain conditions, such as the presence or absence of a particular application or detection of a particular OS version level.
As we said earlier, Salt was originally created for remote execution tasks. It deploys both push and pull architectures, which differs from many other configuration management tools that use one method or the other. Salt has the flexibility to mix both kinds, making scheduling and message-driven events simple. In addition, it can handle both agent and agentless options, giving its automation processes the maximum level of flexibility and support for the widest collection of endpoint devices, servers and services.
Finally, to support all these automated methods, Salt has solid configuration management features that can detect and manage a wide variety of circumstances. All of its scripts are written in Python, making them more accessible to the wide collection of developers who have learned this language. Other tools have their own proprietary scripting languages with steeper learning curves.
Salt is used by a wide variety of digital businesses, including LinkedIn and eBay, to manage tens of thousands of VMs and physical servers. At LinkedIn, it is used to serve up massive amounts of data at very low latencies to improve usability. Salt "enables us to quickly and dynamically provision caching layers for many of the services that make up our site," according to that LinkedIn engineering blog post. You should take a closer look at what they offer and how it can be deployed in your organization.
The typical banking IT attack surface has greatly expanded over the past several years. Thanks to more capable mobile devices, social networks, cloud computing, and unofficial or shadow IT operations, authentication now has to be portable, persistent, and flexible enough to handle these new kinds of situations. Banks have also realized that they aren't just defending themselves against external threats, and that authentication challenges have become more complex as IoT has expanded the potential sources of attacks.
That is why banks have moved toward adopting more adaptive authentication methods, using a combination of multi-factor authentication (MFA), passive biometrics and other continuous monitoring efforts that can more accurately spot fraudulent use. It used to be that adaptive authentication forced a trade-off between usability and security, but that is no longer the case. Nowadays, adaptive authentication can improve the overall customer experience and help with regulatory compliance, as well as simplify a patchwork of numerous legacy banking technologies.
In this white paper I wrote for VASCO (now OneSpan), I describe the current state of authentication and its evolution of adaptive processes. I also talk about the migration from a simple binary login/logout situation to more nuanced states that can be deployed by banks, and why MFA needs to be better integrated into a bank’s functional processes.
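To illustrate what those more nuanced states can look like, here is a small hypothetical sketch (my own illustration, not drawn from the paper): a login attempt is scored on a few passive signals, and the score selects between allowing the session, stepping up to MFA, or denying outright. All names, weights and thresholds are invented.

```python
# Hypothetical sketch of adaptive (risk-based) authentication: score a
# login on a few signals, then pick an outcome that is more nuanced than
# a binary allow/deny. All weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool      # has this device been seen before?
    usual_country: bool     # does the geolocation match past behavior?
    failed_attempts: int    # recent failed logins on this account

def risk_score(ctx: LoginContext) -> int:
    score = 0
    if not ctx.known_device:
        score += 40         # unfamiliar device: strongest signal here
    if not ctx.usual_country:
        score += 30
    score += 10 * min(ctx.failed_attempts, 3)
    return score

def decide(ctx: LoginContext) -> str:
    score = risk_score(ctx)
    if score < 30:
        return 'allow'        # low risk: passive signals suffice
    if score < 70:
        return 'step-up MFA'  # medium risk: require a second factor
    return 'deny'             # high risk: block and alert

# An unknown device plus one recent failure lands in the step-up band.
print(decide(LoginContext(known_device=False, usual_country=True,
                          failed_attempts=1)))  # -> 'step-up MFA'
```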
I wrote a series of papers for TechTarget, sponsored by Veeam, mainly about ransomware. Here are links to download each paper (reg. req.):
Understanding different types of phishing attacks. As we all know by now, all it takes is one phishing message slipping by our defenses to ruin our day. Just one click, and an attacker can be inside our network, connecting to that single endpoint and trying to leverage that access to plant additional malware, take control of our critical servers, and find something that can be used to harm our business and steal data and money from our bank accounts. In this paper, I talk about the many different varieties of phishing attacks and their increasing sophistication.
How the role of backups has changed in the era of ransomware. (See this pdf) The role of backups has changed in the modern era, and this paper describes that evolution. As attackers get smarter, more focused and more adept at penetrating networks, IT managers have to change with the times: backups have to become more sophisticated and cover a multitude of circumstances, threat models and conditions. And as we change the way we work, the way we consume data, and the way we build business computing systems that depend on more complex online services, we need to change the way we make backups too.
Tips on defending your network against ransomware. (See this pdf) Defending your network and preventing your users from getting infected with ransomware means more than just implementing various firewalls and network intrusion systems. It is about creating a culture of resilience, and about developing a concerted backup and recovery process that covers your systems and data assets, so they are protected when an attack happens and your business can return to an operational state as quickly and as inexpensively as possible. In this paper, I share some tips for making your systems more resilient.
Fighting ransomware with tape and cloud: a backup field guide. (See this pdf) The old standby of data protection, tape backup, is still alive and well in many IT shops. Ironically, it is making a resurgence because of ransomware and other malware attacks. We don't know what tomorrow's threats will look like, and given today's threats there is a lot of risk in keeping backups online and connected to a network. While tape has had a long history as a backup medium, the cloud can complement tape backups too, as I describe in this paper.
Steps to an effective phishing defense program. (See this pdf) When it comes to defending your network, many enterprise IT managers tend to forget that it is the people behind the keyboards that can make or break their security posture, and sometimes the people matter more than the machines. Phishing is happening all the time, to every organization. The trick is understanding this dynamic. I describe four different steps you can take to improve your defenses.
The story of how the city of Atlanta reacted against a ransomware attack at the end of March 2018 is instructive both in terms of what not to do and how expensive such an attack can become. The city actually experienced two separate attacks, one that began March 22 and another on April 5. My paper describes the series of events and how the city got attacked.