What a security manager needs to know about chatbots

When I last wrote about chatbots in December, they were a sideshow. Since then, they have taken center stage. In this New Yorker piece, ChatGPT is likened to a blurry JPEG of the internet. Since I wrote that post, Google, Microsoft and OpenAI/ChatGPT have released new versions of their machine learning conversation bots. This means it is time to get more serious about this market, understand the security implications for enterprises, and learn more about what these bots can and can't do.

TechCrunch writes that early adopters include Stripe, which is using GPT-4 to scan business websites and deliver a summary to customer support staff; Duolingo, which built GPT-4 into a new language-learning subscription tier; and Morgan Stanley, which is creating a GPT-4-powered system that will retrieve info from company documents and serve it up to financial analysts. These are all great examples of how the technology can be helpful.

But there is a dark side as well. “ChatGPT can answer very specific questions and use its knowledge to impersonate both security and non-security experts,” says Ron Reiter, Co-Founder and CTO of Israeli data security firm Sentra. “ChatGPT can also translate text into any style of text or proofread text at a very high level, which means that it is much easier for people to pretend to be someone else.” That is a problem because chatbots can be used to refine phishing lures.  

While predictions of Skynet taking over the world are perhaps a bit of an overreach, the chatbots continue to get better. If you are new to the world of large language models, you should read what the UK's National Cyber Security Centre wrote about them and see how these models relate to the bots' data collection and operation.

One of ChatGPT's limitations is that its training data is stale and doesn't include anything after 2021. But it is quickly learning, thanks to the millions of folks who are willingly uploading more recent bits. That is a big risk for IT managers, who are already fearful that corporate proprietary information is leaking from their networks. We had one such leak this week, when a bug in ChatGPT made the titles of users' chat histories public. This piece in CSOonline goes into further detail about how this sharing works.

My first recommendation is that a cybersecurity manager should "know thy enemy": get a paid account and learn more about OpenAI's API. This is where the bot interacts with other software, such as interpreting and creating pictures, generating code, or diagnosing human behavior as a therapist. One of my therapist friends likes this innovation, saying it could help people who need to "speak" to someone urgently. These API connections are potentially the biggest threat vectors for data sharing.
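To make the API concrete, here is a minimal sketch of the JSON payload a chat-completion request carries. The endpoint and model name reflect OpenAI's public documentation at the time of writing and may change; nothing is actually sent over the network here, and the prompt text is purely illustrative.

```python
import json

# Chat-completions endpoint per OpenAI's docs at the time of writing.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_payload(user_prompt, model="gpt-3.5-turbo"):
    """Assemble the request body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,  # lower values make output more deterministic
    }

# This JSON string is what would be POSTed to API_URL with an API key header.
body = json.dumps(build_chat_payload("Summarize this policy document."))
```

Every field in that body travels to a third party's servers, which is exactly why the API is the place to focus your data-sharing review.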

Gartner has suggested a few specific things, such as favoring Azure's version for your own experimentation and putting the right policies in place to prevent confidential data from being uploaded to the bots. Check Point posted this analysis last December that talks about how easily the bots can create malware.
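What might such an upload-prevention policy look like in practice? Here is a minimal sketch of a pre-submission filter; the patterns and the blocking logic are entirely hypothetical and illustrative, and a real DLP gateway would be far more thorough.

```python
import re

# Hypothetical patterns for data that should never reach a public chatbot.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # US SSN-like number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),            # payment-card-like number
    re.compile(r"(?i)\b(confidential|internal only)\b"),  # document markings
]

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt appears to contain confidential data."""
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)
```

A filter like this could sit in a browser plugin or proxy; the point is that "the right policies" ultimately need an enforcement mechanism, not just a memo.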

Ironscales has this very illuminating video shown above on how this can be done. Also, to my earlier point about phishing, IT managers need to think about having better and more targeted awareness and training programs.

Infosys has this five-point plan that includes using the bots to help bolster your defensive posture. They also recommend you learn more about polymorphic malware threats (CyberArk has described such a threat back in January and Morphisec has specialized tools for fighting these that you might want to consider), and review your zero trust policies.

Finally, if you haven’t yet thought about cloud access security brokers, you should read my review in CSOonline about these products and think about using one across your enterprise to protect your data envelope.

How is that right to be forgotten going?

The right to be forgotten isn't part of the US Constitution, or for that matter of any other country's founding documents. But it is part of more recent regulations, which define how this data is collected, how it is processed, and most importantly, how and when it is erased. The phrase refers to the ability of individuals to ask to have their personal data removed from various digital repositories under certain circumstances.

It is not a new term. Indeed, the EU got going on this almost ten years ago, eventually enshrining rules in its General Data Protection Regulation (GDPR), which have been around now for almost five years. This motivated a few (and I emphasize very few — so far that number is five) states here in the US to enact their own privacy laws, including California’s Consumer Privacy Act (CCPA) and others that mention the “forgotten” rights. Here is a handy comparison chart of what the five states have passed so far.

Security blogger David Froud also wrote about the issue more than four years ago. He pointed out then that the term forgotten doesn't necessarily mean total erasure of your data, such as the hypothetical case of a convicted criminal applying for a job. But then, should the stain of that conviction follow someone for the rest of their life? Hard to say. And this is the problem with this right: the subtleties are significant, hard to define, and harder still to fit into a solid legal framework.

What got me thinking about this issue is a recent survey by Surfshark of the actual progress of the forgotten actions across European countries. They found that residents of France alone accounted for a quarter of the actions recorded by both Google and Microsoft's search portals, with England and Germany residents together accounting for another quarter of cases. These requests have been on the rise since the onset of Covid, and both Cyprus and Portugal have seen a 300% increase in requests since 2020. Interestingly, Estonia (which is a leader in implementing all sorts of other digital tech across the board) had the largest proportion of cases with 53 per 10,000 residents. Compare that to Bulgaria, which had 5.6 requests per 10,000 residents. At the bottom of the page linked above, you can see references to the various search portals' request removal forms, and yes, you have to submit separate requests for each vendor (here is Google's link). The EU "suggests" that the process from request to its fulfillment should take about a month, but the way they word it means there is no legal response time encoded in the GDPR. According to the Surfshark report, millions of requests have been filed since the law went into effect.

As the authors of the survey say, “Time will only tell which countries will join the fight for online privacy and to what ends our data is private online. Is the right to be forgotten a universal truth or a way to hide the past indefinitely?” I don’t honestly know.

Temper the Surfshark report with the results of a Spanish university research study that looked at the 500 most-visited websites in that country. They found a huge collection of hidden tracking technologies, with less than nine percent of the sites actually obtaining any user consent.

But tech doesn't stand still, and the right to be forgotten has taken on new meaning with the rise of AI chatbots such as ChatGPT, which seek out and find your personal data as a way to train their machine learning models. As my colleague Emma McGowen mentions in her Avast blog from last month, there is no simple mechanism to request removal of your data once the AI has found it online. You don't know where your data is online, and even if you do, there isn't any simple form that you can fill out to request deletion.

Note: OpenAI released this opt-out form after I wrote this essay.

If you have ever tried to put a credit freeze on your accounts at the four major credit bureaus, you have some idea of the chore involved here. At least there are only four places that process your credit data. There are hundreds if not thousands of potential data collections that you would have to seek out and try to get any action from. Chances are your data is out there somewhere, and not just in Google's clutches but on some hard drive running in some darker corner. Good luck tracking this down.

So where does that leave this right to privacy? It is a good sign that more countries and some US states are taking this seriously. But each state has a slightly different take on what the right means and what consumers can do to remove their data. And for those of you happily chatting up your AI bots, be careful about what private info you have them go searching for, lest you unwittingly add more data that you don't want others to find about you.

Disinformation mercenaries for hire

In the past week I have seen a number of reports that range from unsettling to depressing. The reports document a three-pronged foundation of the darkest parts of the online world: disinformation, cyber-terrorism, and the difficulty in trying to craft better legal approaches to stop both.

Let’s start with the disinformation. A consortium of journalists from around the world wrote about a team of Israeli contractors (called “Team Jorge”) who claim to have covertly influenced more than 30 elections and placed stories to help improve the online reputations of numerous private business clients around the world. They did this by using hacking, sabotage and automated disinformation tools. Call it disinformation-mercenaries-for-hire. If this sounds familiar, it is another news product from the French-based ForbiddenStories group that broke the series of Pegasus-related stories back in the summer of 2021 that I have written about for Avast here. The group labels this effort “Story Killers” and you can read the various pieces here.

What is depressing is how adept this industry has become: by comparison, the Russian Internet Research Agency's antics in meddling with our 2016 election look crude, mere child's play. The reporters uncovered a wide-ranging collection of automated tools to quickly create hundreds of fake social media accounts and generate all kinds of fake posts that are then amplified by the social networks and search engines. "We must be able to recount the life of the characters, their past, their personality," said one mercenary. "When it's a small agency, it's done in a rather sloppy way. If it's well done, it's the Israelis."

The Israeli company behind these operations has a wide array of services, including digital surveillance, hack-and-leak smear campaigns, influence operations, and election interference and suppression. They claim to have operated for a decade.

One of the consortium partners is The Guardian and they document one of these automated systems that is used to manage a collection of social media avatars. Called AIMS, it allows for managing 30,000 seemingly real accounts to be created for nonexistent people. These can then be deployed either as a swarm – similar to a network of bots – or as single agents. Other tools are described in this piece by Haaretz.

The disinformation mercenaries sold access to their software to various national intelligence agencies, political parties and corporate clients interested in trying to resolve business disputes. Accounts span Twitter, LinkedIn, Facebook, Telegram, Airbnb, Gmail, Instagram and YouTube. Some of the identities even have Amazon accounts with credit cards and bitcoin wallets. All of this was leveraged to stage real-world events in order to provide ammunition for social media campaigns to provoke outrage.

Let's move on to the cyberterrorism effort. Speaking of the Russians, also released this week were two reports from the Atlantic Council, a DC-based think tank that has studied the disinformation war the Russians have waged against Ukraine. (To be clear, this is completely independent of the Story Killers effort.) It is also depressing news because you realize that unlike an actual shooting war, there is never any time when you can claim victory. The totality, scope and power of this vast collection of fake news stories, phony government documents, deep fake videos and other digital effluvia is staggering and is being used by the Russians to convince both their own citizens and the rest of the world of Putin's agenda.

And something else to worry about with the war comes from one final report, this one from Dutch intelligence forces that was covered here. The report says, “Before and during the war, Russian intelligence and security services engaged in widespread digital espionage, sabotage and influencing against Ukraine and NATO allies. The sustained and very high pressure that Russia exerts with this requires constant vigilance from Ukrainian and Western defenders.”

Taken together, you can see that disinformation has become weaponized in both the public and private sector. So what can be done? Cue up part three, which is trying to craft better laws to control these actions. Coincidentally, the US Supreme Court heard two cases that have been moving through our judicial system, Gonzalez v. Google and Twitter v. Taamneh. Both cases involve ISIS attacks. The former involves the 2015 murder in Paris of the 23-year-old American student Nohemi Gonzalez, which I wrote about in a blog for Avast last fall. The latter involves the 2017 death of Nawras Alassaf in Istanbul. The first case directly involves the Section 230 statutes, the latter the various sections of the anti-terrorism act. Both laws were passed in the mid-1990s, when the internet was young and by comparison innocent.

You can read the transcript of the court's oral arguments for Gonzalez here. The oral arguments transcript for Twitter is found here. I have taken the time to read them, and if you are interested in my further thoughts, email me directly or post your questions here. Making effective changes to both laws won't be easy without drastic consequences for how online companies run their businesses, and how we legitimately use them. And that is the lesson from reading all these reports: as long as the bad guys can figure out ways to exploit these technologies, we will have to deal with some dire consequences.

CSOonline: What is the Traffic Light Protocol and how it works to share threat data

Traffic Light Protocol (TLP) was created to facilitate greater sharing of potentially sensitive threat information within an organization or business and to enable more effective collaboration among security defenders, system administrators, security managers and researchers. In this piece for CSOonline, I explain the origins of the protocol, how it is used by defenders, and what IT and security managers should do to make use of it in their daily operations.
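As a quick orientation, the sketch below summarizes the TLP 2.0 labels standardized by FIRST. The one-line descriptions are my paraphrases, not the official wording, so consult FIRST's definitions before relying on them in a sharing agreement.

```python
# Paraphrased summaries of the FIRST TLP 2.0 sharing labels.
TLP_SCOPES = {
    "TLP:RED": "named recipients only; do not share further",
    "TLP:AMBER+STRICT": "recipient's own organization only",
    "TLP:AMBER": "recipient's organization and its clients, on a need-to-know basis",
    "TLP:GREEN": "peers and the wider defender community, but not publicly",
    "TLP:CLEAR": "no restriction; may be shared publicly",
}

def may_share_publicly(label: str) -> bool:
    """Only TLP:CLEAR information may be published without restriction."""
    return label.upper() == "TLP:CLEAR"
```

The labels are advisory rather than enforced by any technology, which is why the operational guidance in the article matters: everyone handling the data has to honor the marking.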

Book review: The exploits of Space Rogue (Cris Thomas)

The hacker Cris Thomas, known by his handle Space Rogue, has a new book out, Space Rogue: How the Hackers Known as L0pht Changed the World, which chronicles his rise in infosec. I interviewed him about his exploits when I was writing for IBM's Security Intelligence blog. IBM's X-Force has been his employer for many years now, where he works for numerous corporate clients, plying the tools and techniques he refined when he was one of the founding members of the hacking collective L0pht.

My story covered his return visit to testify to Congress in 2018. Thomas and his colleagues originally testified there back in 1998. The book’s cover art shows this pivotal moment, along with the hacker handles shown as nameplates. The story of how this meeting came to pass is one of the book’s more interesting chapters, and the transcript of their testimony is included in an appendix too.

I also wrote this post about another member of L0pht named Mudge, during his time as a security consultant for Twitter. L0pht is infamous for developing a series of hacking tools, such as a Windows NT password cracker (Thomas goes into enormous detail about the tool's evolution and enhancement) and a website called Hacker News Network. Thomas describes those formative years with plenty of wit and charm in his new book, which also serves as a reminder of how computer and network security has evolved — or not, as the case may be.

That cracking tool carried L0pht over the course of some twenty plus years. It began as “a small little piece of proof of concept code, hurriedly produced within a few weeks, and went from an exercise to prove a point, security weaknesses in a major operating system, to shareware, to a commercial success,” he writes.

One of his stories is about how L0pht conducted its first major penetration test, against the Cambridge Technology Partners network. The company would go on to eventually purchase Novell and numerous other tech firms. The hackers managed to get all sorts of access to the CTP network, including being able to listen to voicemails about the proposed merger. The two companies were considering the acquisition of L0pht but couldn't come to terms, and the hackers had left a backdoor in the CTP network that was never used but left in place because by then their testing agreement had expired. Fun times.

The early days of L0pht were wild by today’s standards: the members would often prowl the streets of Boston and dumpster dive in search of used computer parts. They would then clean them up and sell them at the monthly MIT electronics flea market. Dead hard drives were one of their specialties — “guaranteed to be dead or your money back if you could get them working.” None of their customers took them up on this offer, however.

One point about those early hacking days — Thomas writes that the "naïveté of hackers in the late '90s and early 2000s didn't last long. Hackers no longer explore networks and computer systems from their parents' basements (if they ever did); now it is often about purposeful destruction at the behest of government agencies."

He recounts the story of when L0pht members brought federal CyberCzar Richard Clarke to their offices in the 1990s. Clarke was sufficiently impressed and told Thomas, “we have always assumed that for a group or organization to develop the capabilities that you just showed us would take the resources only available to a state-sponsored actor. We are going to have to rethink all of our threat models.” Exactly.

There are other chapters about the purchase of L0pht by @stake and Thomas’ eventual firing from the company, then taking eight years to get a college degree at age 40, along with the temporary rebirth of the Hacker News Network and going to work for Tenable and now at IBM.

Thomas ends his book with some words of wisdom. "Hackers are not the bad guys. Most of the great inventors of our time, such as Alexander Graham Bell, Mildred Kenner, and Nikola Tesla, could easily be considered hackers. Criminal gangs who are running ransomware campaigns or are stealing credit cards are just that, criminals. They just happen to use a computer instead of a crowbar. They are not hackers, not to me anyway. L0pht's message of bringing security issues to light and getting them fixed still echoes throughout the industry and is more important today than ever." If you are at all interested in reading about the early days of the infosec industry, I highly recommend this book.

A10 Networks blog: How to Defeat Emotet Malware

One of the longest-running and more lethal malware strains has once again returned to the scene. Called Emotet, it started out as a simple banking Trojan when it was created in 2014 by a hacking group that goes by various names, including TA542, Mealybug and MummySpider. Emotet is back in the headlines and continues to be one of the most significant threats facing companies today. In this piece for A10 Networks, I describe what it is, how it works, and how to defend against it using a combination of network and security tools.


Avast blog: A Bruce Schneier reader

Bruce Schneier’s work has withstood the test of time and is still relevant today.

If you’re looking for recommendations for infosec books to give to a colleague – or even to catch up on some holiday reading of your own – here’s a suggestion: Take a closer look at the oeuvre of Bruce Schneier, a cryptographer and privacy specialist who has been writing about the topic for more than 30 years and has his own blog that publishes interesting links to security-related events, strategies and failures that you should follow. In my blog post for Avast today, I review some of his books.

Avast blog: An update on international data privacy protection

The 38 member countries of the Organization for Economic Cooperation and Development (OECD) have recently adopted a new international agreement regulating government access to its citizens' private data. The OECD draws its membership from countries on several continents, including the US, Israel, Japan, Chile, the Czech Republic, and the UK. The document was released with the rather ungainly title of the "Declaration on Government Access to Personal Data Held by Private Sector Entities."

There are seven common principles that were adopted, all in the interest of serving the free flow of data across country borders and promoting trust between citizens and their governments.

You can read more on my post for Avast’s blog today.

Avast blog: DoD supply chain lessons learned

A July 2022 survey of 300 U.S. Department of Defense (DoD) IT contractors shows a woeful lack of information security in the majority of situations. These contractors are part of the DoD’s supply chain that, in typical government speak, is labeled the Defense Industrial Base (DIB). The report should be a warning even for those technology contractors that don’t do any DoD work, as I explain in my latest blog for Avast.


Avast blog: International police operation takes down iSpoof

Last week, an international group of law enforcement agencies took down one of the biggest criminal operators of a spoofing-as-a-service enterprise. Called iSpoof, it collected more than $120M from victims across Europe, Australia, Ukraine, Canada, and the United States. During the 16 months of the site's operation, the group took in more than $3.8M in fees from its users. In my blog for Avast, I summarize what happened, why this gang was so significant, and how spoofing has gotten more advanced over the years since those early days when Paris Hilton spoofed her friend's cellphone.