SiliconANGLE: Is it time to deploy passkeys across the enterprise? Here’s what you need to know

It’s a great time to think more about passkeys, and not just because this Thursday is another World Password Day. Let’s look at where those 2022 passkey plans stand and what companies will have to do to deploy them across their enterprises. Interest in the technology, also referred to as passwordless (a bit of a misnomer), has been growing since Google announced its support last fall, following similar endorsements from Apple and Microsoft the previous summer.

This post for SiliconANGLE discusses the progress made on these technologies, covers some of the remaining deployment issues, and reviews two sessions at the recent RSA Conference that can be useful for enterprise security managers.

The realities of ChatGPT as cyber threats (webcast)

I had an opportunity to be interviewed by Tony Bryant of CyberUP, a cybersecurity nonprofit training center, about the rise of ChatGPT and its relevance to cyber threats. This complemented a blog that I wrote earlier in the year on the topic, and certainly things are moving quickly with LLM-based AIs. The news this week is that IBM plans to replace 7,800 staffers with various AI tools, which makes thinking about the future of upskilling for GPT-related jobs all the more important. At the RSAC show last week, there were lots of booths focused on the topic, and more than 20 different conference sessions that ranged from danger ahead to how we can learn to love ChatGPT for various mundane security tasks, such as pen testing and vulnerability assessment. And of course there was news about how ChatGPT writes lots of insecure code, according to French infosec researchers, along with a new infostealer malware circulating in a file named ChatGPT For Windows Setup 1.0.0.exe. Don’t download that one!

There are still important questions you need to ask if you are thinking about deploying any chatbot app across your network, including how your vendor is using AI, which algorithms and training data make up the model, how to build resilience and SDLC processes into the code, and what problem you are really trying to solve.

Reporting for SiliconANGLE at the RSA Conference

Last week I was in San Francisco for the annual RSA Conference. The last time I was there in person was in 2019 (although I have watched many a streamed session over the past several years). I was there on behalf of SiliconANGLE, my new home, as their cybersecurity reporter. Today I wrote my first couple of posts:

  • A recap of Bruce Schneier’s keynote and a summary of his storied career in cybersecurity. He has a new book out called A Hacker’s Mind that is quite entertaining, and the talk takes off from the book, describing how he wants to reinvent democracy by using technology for good. I know, there are plenty of counter-examples, but you should really listen to the talk (you have to be registered for the conference, don’t get me started about that). Anyway, I am a big fan of his, as you can tell if you read this post.
  • A recap of two panels on incident and threat response, both from the POV of specific incident tactics as well as some suggestions for improving overall skills and incident analyst capabilities.
  • Plus this vlog with Dave Vellante that I did the week before the conference, where we talk about a bunch of different topics including the 3CX double-supply chain attack that happened last month.

RSAC has definitely changed over the past four years. One of the panels above had all women on it, including the moderator. That is a welcome change for the better, and what made it even more worthwhile was that these were all powerhouse experts whom I often turn to when I need an authoritative source.

Also while I was there I got to spend a short bit of time with another one of my go-to sources, Tanya Janca. (I reviewed her courseware and book on application security here.)

This was one of only two in-person events that I have attended in the past year, and it was nice to rub elbows with Tanya, Bruce and other luminaries, as well as break bread with some of my Bay Area friends whom I have known for many years but haven’t seen since the pandemic began. All this personal contact almost made up for the pain and suffering of being on crowded flights and having to deal with an overcrowded convention center.

If you have cyber news that you want to share, you know where you can find me.

Wikibon Breaking Analysis podcast: the state of infosec today

One of my first outings for SiliconANGLE is this pod with co-founder Dave Vellante this week. We cover a wide range of topics, including a new report from Unit42, the “double supply chain” attack on 3CX’s network (and how inadequate their response has been, at least according to their own admissions), where passwordless stands for enterprise IT, and other infosec matters. You can read my best bits at the transcript link, or watch the entire pod!

Improving devops security in the auto software supply chain

The automotive industry has long been the target of numerous cyberthreats across its software supply chain. These include car hacking exploits demonstrated by security researchers that have motivated massive vehicle recalls, such as the work that necessitated the 2015 recall of 1.4 million Fiat Chrysler vehicles.

Studying this rich history is important for computer professionals in other industries for several reasons. First, the methods of compromise aren’t necessarily car-centric and have general cybersecurity implications. These cyberthreats are relevant in a wide variety of circumstances, whether you work for another manufacturing-based business, a bank or a hospital. The threat of compromised software supply chain security sadly now reaches far and wide. Second, cars have become complex digital environments. The average vehicle being made today has dozens of electronic control units, which means a car could be running 100 million lines of code, according to this source. That is twice the amount of code that makes up Windows itself, and more than is used in Apple’s MacOS. Electronics wiring alone is estimated to add 45 to 65 pounds to each vehicle.

Car hacking is therefore a target of opportunity, and more importantly, car-based cyberthreats can be easily understood even by non-technical managers who might be reluctant to invest in better endpoint security. Finally, the automotive breaches are also good illustrations of common devops security and network security failures, such as unprotected cloud data assets, inadequate API data security and poor password hygiene that can be found across numerous Internet of Things (IoT) situations.

Let’s look at some of the more notable recent car-related developments. Earlier this year, security researcher Sam Curry posted a series of car hacking exploits that could have implications for more than a million vehicles from 16 different major brands. He was able to remotely lock and unlock cars and start and stop their engines, as well as remotely manage other car functions. These hacks included SSO account takeovers, remote code execution and privilege escalation, all common exploits in IT operations.

In terms of careless data handling, a software supplier to Nissan was breached in an incident that occurred in June 2022. An unsecured cloud database was exposed, and a hacker collected almost 18,000 customer records, including names, birth dates and other private data. This was the second time the company’s data was exposed; another incident in January 2021 leaked 20 GB of data from an unprotected Git server. The issue is that supply chain security must be applied across a myriad of software suppliers and interconnected applications, all of which have their own potential API data security vulnerabilities.

Curry’s exploits may be the most recent and have the widest impact, but there are antecedents for both car hacking and careless data handling. As for the former, back in 2019, hackers gained access to thousands of vehicles that were running two different GPS tracking apps and were able to remotely turn off running engines. It helped matters immensely that the tracking apps had easily guessed default passwords that their owners never changed. And even further back, in 2015, security researchers Chris Valasek and Charlie Miller were able to compromise a single vehicle via an API vulnerability in its infotainment system.

But wait, there is more: the automotive industry has also been the target of numerous ransomware events.

Here are some suggestions to improve automotive software supply chain security and move towards better devops security practice. And some things to think about, even if you aren’t in this particular market segment.

  • Secure your various manufacturing processes, including better network segmentation, monitoring network traffic to detect malware intrusions and compromised accounts, and improvements in overall network security.
  • Secure connected cars, including better threat detection and network segmentation across in-car systems. As cars make use of the internet for communications, reporting traffic and driving conditions and delivering streaming services, these connections bring greater risk of cyberthreats.
  • Secure the software supply chain, especially telematics and other in-car software controls. This includes better API security and devops security, including protecting application secret keys, encrypting communication channels (such as employing TLS between applications) and not using default passwords that are easily guessed. As cited above, thanks to unprotected software supply chains, a single piece of software could eventually harm the entire vehicle or expose private data. A minimal sketch of two of these practices appears after this list.
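To make the last two points concrete, here is a minimal Python sketch of a client calling a hypothetical telematics API. The endpoint, environment variable and function names are illustrative assumptions, not any vendor’s actual interface; the point is simply to keep secret keys out of source code and leave TLS certificate validation enabled.

```python
import os
import requests

# Hypothetical endpoint, for illustration only.
TELEMATICS_API = "https://telematics.example.com/v1/vehicle/status"

def fetch_vehicle_status(vin: str) -> dict:
    # Pull the API key from the environment (or a secrets manager)
    # rather than hardcoding it in source, where it could leak
    # anywhere along the software supply chain.
    api_key = os.environ["TELEMATICS_API_KEY"]

    resp = requests.get(
        TELEMATICS_API,
        params={"vin": vin},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
        verify=True,  # enforce TLS certificate validation (the default; never disable it)
    )
    resp.raise_for_status()
    return resp.json()
```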

What a security manager needs to know about chatbots

When I last wrote about chatbots in December, they were a sideshow. Since then, they have taken center stage. In this New Yorker piece, ChatGPT is described as a blurry JPEG of the internet. Since I wrote that post, Google, Microsoft and OpenAI/ChatGPT have released new versions of their machine learning conversation bots. This means it is time to get more serious about this market, understand the security implications for enterprises, and learn more about what these bots can and can’t do.

TechCrunch writes that early adopters include Stripe, which is using GPT-4 to scan business websites and deliver a summary to customer support staff; Duolingo, which built GPT-4 into a new language learning subscription tier; and Morgan Stanley, which is creating a GPT-4-powered system that will retrieve info from company documents and serve it up to financial analysts. These are all great examples of how the technology can be helpful.

But there is a dark side as well. “ChatGPT can answer very specific questions and use its knowledge to impersonate both security and non-security experts,” says Ron Reiter, Co-Founder and CTO of Israeli data security firm Sentra. “ChatGPT can also translate text into any style of text or proofread text at a very high level, which means that it is much easier for people to pretend to be someone else.” That is a problem because chatbots can be used to refine phishing lures.  

While predictions of Skynet taking over the world are perhaps a bit of an overreach, the chatbots continue to get better. If you are new to the world of large language models, you should read what the UK’s National Cyber Security Centre wrote about them and see how these models relate to the bots’ data collection and operation.

One of ChatGPT’s limitations is that its training data is stale and doesn’t include anything after 2021. But it is quickly learning, thanks to the millions of folks who are willingly uploading more recent bits. That is a big risk for IT managers, who are already fearful that corporate proprietary information is leaking from their networks. We had one such leak this week, when a bug in ChatGPT exposed the titles of other users’ chat histories. This piece in CSOonline goes into further detail about how this sharing works.

My first recommendation is that a cybersecurity manager should “know thy enemy”: get a paid account and learn more about OpenAI’s API. This is where the bot interacts with other software, such as interpreting and creating pictures, generating code, or playing therapist to diagnose human behavior. One of my therapist friends likes this innovation and thinks it could help people who need to “speak” to someone urgently. These API connections are potentially the biggest threat vectors for data sharing.
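If you haven’t used the API before, here is a minimal sketch of a chat call using the openai Python client as it existed when this was written; the model choice and prompts are illustrative assumptions.

```python
import os
import openai  # pip install openai

# Keep the key in the environment, not in source code.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a security analyst's assistant."},
        {"role": "user", "content": "Summarize the data-sharing risks of third-party chatbot APIs."},
    ],
)
print(response.choices[0].message.content)
```

Everything you send through a call like this leaves your network and may be retained by the vendor, which is exactly why the policy controls discussed next matter.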

Gartner has suggested a few specific things, such as favoring Azure’s version for your own experimentation and putting the right policies in place to prevent confidential data from being uploaded to the bots. Check Point posted an analysis last December that talks about how the bots can easily create malware, with further, more recent analysis here.
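To give a flavor of what such a policy guardrail might look like, here is a toy Python sketch of a pre-submission screen that blocks prompts containing obvious sensitive patterns before they reach a chatbot API. The patterns and function name are my own illustrative assumptions; a real deployment would rely on proper DLP tooling or a cloud access security broker, as discussed below.

```python
import re

# Illustrative patterns only; tune these to your own data classification scheme.
BLOCK_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # document classification markers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security number shape
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # rough credit card number shape
]

def safe_to_submit(prompt: str) -> bool:
    """Screen a prompt before it leaves the network for a chatbot API."""
    return not any(p.search(prompt) for p in BLOCK_PATTERNS)

print(safe_to_submit("Summarize our CONFIDENTIAL merger memo"))  # False
print(safe_to_submit("Explain what a CASB does"))                # True
```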

Ironscales has a very illuminating video on how this can be done. Also, to my earlier point about phishing, IT managers need to think about having better and more targeted awareness and training programs.

Infosys has a five-point plan that includes using the bots to help bolster your defensive posture. They also recommend that you learn more about polymorphic malware threats (CyberArk described such a threat back in January, and Morphisec has specialized tools for fighting these that you might want to consider) and review your zero trust policies.

Finally, if you haven’t yet thought about cloud access security brokers, you should read my review in CSOonline about these products and think about using one across your enterprise to protect your data envelope.

How is that right to be forgotten going?

The right to be forgotten isn’t part of the US Constitution, or for that matter any other country’s founding documents. But it is part of more recent privacy regulations, which define how personal data is collected, how it is processed and, most importantly, how and when it is erased. The phrase refers to the ability of individuals to ask to have their personal data removed from various digital repositories under certain circumstances.

It is not a new term. Indeed, the EU got going on this almost ten years ago, eventually enshrining the rules in its General Data Protection Regulation (GDPR), which has now been around for almost five years. This motivated a few (and I emphasize very few; so far that number is five) US states to enact their own privacy laws, including California’s Consumer Privacy Act (CCPA) and others that mention “forgotten” rights. Here is a handy comparison chart of what the five states have passed so far.

Security blogger David Froud also wrote about the issue more than four years ago. He pointed out then that the term forgotten doesn’t necessarily mean total erasure of your data, citing the hypothetical case of a convicted criminal applying for a job. But then, should the stain of that conviction follow someone for the rest of their life? Hard to say. And this is the problem with this right: the subtleties are significant, hard to define, and harder still to encode in a solid legal framework.

What got me thinking about this issue is a recent Surfshark survey of the actual progress of forgotten-data requests across European countries. They found that residents of France alone accounted for a quarter of the requests recorded by both Google’s and Microsoft’s search portals, with residents of England and Germany together accounting for another quarter. These requests have been on the rise since the onset of Covid, and both Cyprus and Portugal have seen a 300% increase in requests since 2020. Interestingly, Estonia (which is a leader in implementing all sorts of other digital tech across the board) had the largest proportion of cases, with 53 per 10,000 residents. Compare that to Bulgaria, which had 5.6 requests per 10,000 residents. At the bottom of the page linked above, you can see references to the various search portals’ removal request forms, and yes, you have to submit separate requests to each vendor (here is Google’s link). The EU “suggests” that the process from request to fulfillment should take about a month, but the way it is worded means there is no legal response time encoded in the GDPR. According to the Surfshark report, millions of requests have been filed since the law went into effect.

As the authors of the survey say, “Time will only tell which countries will join the fight for online privacy and to what ends our data is private online. Is the right to be forgotten a universal truth or a way to hide the past indefinitely?” I don’t honestly know.

Temper the Surfshark report with the results of a Spanish university research study that looked at the 500 most-visited websites in that country. It found a huge collection of tracking technologies operating without any user consent; fewer than nine percent of the sites actually obtained it.

But tech doesn’t stand still, and the right to be forgotten has taken on new meaning with the rise of AI chatbots such as ChatGPT, which seek out your personal data to train their machine learning models. As my colleague Emma McGowen mentions in her Avast blog from last month, there is no simple mechanism to request removal of your data once the AI has found it online. You don’t know where your data is online, and even if you did, there isn’t any simple form that you can fill out to request deletion.

Note: OpenAI released this opt-out form after I wrote this essay.

If you have ever tried to put a credit freeze on your accounts at the four major credit bureaus, you have some idea of the chore involved here. At least there are only four places that process your credit data. There are hundreds if not thousands of potential data collections that you would have to seek out and try to get any action from. Chances are your data is out there somewhere, and not just in Google’s clutches but on some hard drive running in some darker corner. Good luck tracking it all down.

So where does that leave this right to privacy? It is a good sign that more countries and some US states are taking it seriously. But each state has a slightly different take on what the right means and what consumers can do to remove their data. And for those of you happily chatting up your AI bots, be careful about what private info you have them go searching for, lest you unwittingly add more data that you don’t want others to find about you.

Disinformation mercenaries for hire

In the past week I have seen a number of reports that range from unsettling to depressing. The reports document a three-pronged foundation of the darkest parts of the online world: disinformation, cyberterrorism, and the difficulty of trying to craft better legal approaches to stop them both.

Let’s start with the disinformation. A consortium of journalists from around the world wrote about a team of Israeli contractors (called “Team Jorge”) who claim to have covertly influenced more than 30 elections and placed stories to help improve the online reputations of numerous private business clients around the world. They did this by using hacking, sabotage and automated disinformation tools. Call it disinformation-mercenaries-for-hire. If this sounds familiar, it is another news product from the French-based ForbiddenStories group that broke the series of Pegasus-related stories back in the summer of 2021, which I wrote about for Avast here. The group labels this effort “Story Killers” and you can read the various pieces here.

What is depressing is how adept this industry has become: by comparison, the Russian Internet Research Agency’s antics in meddling with our 2016 election look crude, mere child’s play. The reporters uncovered a wide-ranging collection of automated tools to quickly create hundreds of fake social media accounts and generate all kinds of fake posts that are then amplified by the social networks and search engines. “We must be able to recount the life of the characters, their past, their personality,” said one mercenary. “When it’s a small agency, it’s done in a rather sloppy way. If it’s well done, it’s the Israelis.”

The Israeli company behind these operations has a wide array of services, including digital surveillance, hack-and-leak smear campaigns, influence operations, and election interference and suppression. They claim to have operated for a decade.

One of the consortium partners is The Guardian, which documents one of these automated systems used to manage a collection of social media avatars. Called AIMS, it allows some 30,000 seemingly real accounts to be created and managed for nonexistent people. These can then be deployed either as a swarm, similar to a network of bots, or as single agents. Other tools are described in this piece by Haaretz.

The disinformation mercenaries sold access to their software to various national intelligence agencies, political parties and corporate clients interested in trying to resolve business disputes. Accounts span Twitter, LinkedIn, Facebook, Telegram, Airbnb, Gmail, Instagram and YouTube. Some of the identities even have Amazon accounts with credit cards and bitcoin wallets. All of this was leveraged to stage real-world events in order to provide ammunition for social media campaigns to provoke outrage.

Let’s move on to the cyberterrorism effort. Speaking of the Russians, also released this week are two reports from the Atlantic Council, a DC-based think tank that has studied the disinformation war the Russians have waged against Ukraine. (To be clear, this is completely independent of the Story Killers effort.) It is also depressing news, because you realize that unlike in an actual shooting war, there is never a time when you can claim victory. The totality, scope and power of this vast collection of fake news stories, phony government documents, deepfake videos and other digital effluvia is staggering, and it is being used by the Russians to convince both their own citizens and the rest of the world of Putin’s agenda.

And something else to worry about with the war comes from one final report, this one from Dutch intelligence forces that was covered here. The report says, “Before and during the war, Russian intelligence and security services engaged in widespread digital espionage, sabotage and influencing against Ukraine and NATO allies. The sustained and very high pressure that Russia exerts with this requires constant vigilance from Ukrainian and Western defenders.”

Taken together, you can see that disinformation has become weaponized in both the public and private sectors. So what can be done? Cue up part three, which is trying to craft better laws to control these actions. Coincidentally, the US Supreme Court heard two cases that have been moving through our judicial system, Gonzalez v. Google and Twitter v. Taamneh. Both cases involve ISIS attacks. The former involves the 2015 murder in Paris of the 23-year-old American student Nohemi Gonzalez, which I wrote about in a blog for Avast last fall. The latter involves the 2017 death of Nawras Alassaf in Istanbul. The first case directly involves the Section 230 statutes, the latter various sections of the anti-terrorism act. Both laws were passed in the mid-1990s, when the internet was young and by comparison innocent.

You can read the transcripts of the court’s oral arguments for Gonzalez here and for Twitter here. I have taken the time to read them, and if you are interested in my further thoughts, email me directly or post your questions here. Making effective changes to both laws won’t be easy without drastic consequences for how online companies run their businesses and how we legitimately use them. And that is the lesson from reading all these reports: as long as the bad guys can figure out ways to exploit these technologies, we will have to deal with some dire consequences.

CSOonline: What is the Traffic Light Protocol and how it works to share threat data

Traffic Light Protocol (TLP) was created to facilitate greater sharing of potentially sensitive threat information within an organization or business and to enable more effective collaboration among security defenders, system administrators, security managers and researchers. In this piece for CSOonline, I explain the origins of the protocol, how it is used by defenders, and what IT and security managers should do to make use of it in their daily operations.
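For readers who want a concrete picture of the labels, here is a toy Python sketch of the TLP 2.0 designations published by FIRST, with a simplistic sharing gate built on top. The audience categories and the function are my own illustrative simplification, not part of the standard.

```python
from enum import Enum

class TLP(Enum):
    # TLP 2.0 labels, from most to least restrictive.
    RED = "red"                    # named recipients only
    AMBER_STRICT = "amber+strict"  # recipients' own organization only
    AMBER = "amber"                # recipients' organization and its clients
    GREEN = "green"                # the wider community, but not public channels
    CLEAR = "clear"                # no restrictions on disclosure

# Cumulative audiences each label permits (an illustrative simplification).
ALLOWED_AUDIENCES = {
    TLP.RED: {"named_recipients"},
    TLP.AMBER_STRICT: {"named_recipients", "own_org"},
    TLP.AMBER: {"named_recipients", "own_org", "clients"},
    TLP.GREEN: {"named_recipients", "own_org", "clients", "community"},
    TLP.CLEAR: {"named_recipients", "own_org", "clients", "community", "public"},
}

def may_share(label: TLP, audience: str) -> bool:
    """Rough check of whether a report with this label may go to this audience."""
    return audience in ALLOWED_AUDIENCES[label]

print(may_share(TLP.AMBER, "clients"))  # True
print(may_share(TLP.AMBER, "public"))   # False
```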

Book review: The exploits of Space Rogue (Cris Thomas)

The hacker Cris Thomas, known by his handle Space Rogue, has a new book out, Space Rogue: How the Hackers Known as L0pht Changed the World, that chronicles his rise in infosec. I interviewed him about his exploits when I was writing for IBM’s Security Intelligence blog. IBM’s X-Force has been his employer for many years now; there he works with numerous corporate clients, plying the tools and techniques he refined as one of the founding members of the hacking collective L0pht.

My story covered his return visit to testify to Congress in 2018. Thomas and his colleagues originally testified there back in 1998. The book’s cover art shows this pivotal moment, with their hacker handles displayed as nameplates. The story of how this meeting came to pass is one of the book’s more interesting chapters, and the transcript of their testimony is included in an appendix.

I also wrote this post about another member of L0pht, named Mudge, during his time as a security consultant for Twitter. L0pht is infamous for developing a series of hacking tools, such as its Windows NT password cracker (Thomas goes into enormous detail about the evolution and enhancement of this tool), as well as the website Hacker News Network. Thomas describes those formative years with plenty of wit and charm in his new book, which also serves as a reminder of how computer and network security has evolved — or not, as the case may be.

That cracking tool carried L0pht over the course of some twenty-plus years. It began as “a small little piece of proof of concept code, hurriedly produced within a few weeks, and went from an exercise to prove a point, security weaknesses in a major operating system, to shareware, to a commercial success,” he writes.

One of his stories is about L0pht’s first major penetration test, of the Cambridge Technology Partners network. The company would go on to acquire numerous other tech firms and eventually merge with Novell. The hackers managed to get all sorts of access to the CTP network, including being able to listen to voicemails about the proposed merger. CTP considered acquiring L0pht, but the two couldn’t come to terms, and the hackers had left a backdoor in the CTP network; it was never used, but it stayed in place because by then their testing agreement had expired. Fun times.

The early days of L0pht were wild by today’s standards: the members would often prowl the streets of Boston and dumpster dive in search of used computer parts. They would then clean them up and sell them at the monthly MIT electronics flea market. Dead hard drives were one of their specialties — “guaranteed to be dead or your money back if you could get them working.” None of their customers took them up on this offer, however.

One point about those early hacking days: Thomas writes that the “naïveté of hackers in the late ’90s and early 2000s didn’t last long. Hackers no longer explore networks and computer systems from their parents’ basements (if they ever did); now it is often about purposeful destruction at the behest of government agencies.”

He recounts the story of when L0pht members brought federal cyber czar Richard Clarke to their offices in the 1990s. Clarke was sufficiently impressed and told Thomas, “We have always assumed that for a group or organization to develop the capabilities that you just showed us would take the resources only available to a state-sponsored actor. We are going to have to rethink all of our threat models.” Exactly.

There are other chapters about the purchase of L0pht by @stake and Thomas’ eventual firing from that company, his taking eight years to earn a college degree at age 40, the temporary rebirth of the Hacker News Network, and his going to work for Tenable and now IBM.

Thomas ends his book with some words of wisdom: “Hackers are not the bad guys. Most of the great inventors of our time, such as Alexander Graham Bell, Mildred Kenner, and Nikola Tesla, could easily be considered hackers. Criminal gangs who are running ransomware campaigns or are stealing credit cards are just that, criminals. They just happen to use a computer instead of a crowbar. They are not hackers, not to me anyway. L0pht’s message of bringing security issues to light and getting them fixed still echoes throughout the industry and is more important today than ever.” If you are at all interested in reading about the early days of the infosec industry, I highly recommend this book.