When I last wrote about chatbots in December, they were a sideshow. Since then, they have taken center stage. In this New Yorker piece, ChatGPT is described as a blurry JPEG of the web. Since I wrote that post, Google, Microsoft, and OpenAI/ChatGPT have all released new versions of their machine learning conversation bots. This means it is time to get more serious about this market, understand the security implications for enterprises, and learn more about what these bots can and can't do.
TechCrunch writes that early adopters include Stripe, which is using GPT-4 to scan business websites and deliver summaries to customer support staff; Duolingo, which has built GPT-4 into a new language-learning subscription tier; and Morgan Stanley, which is creating a GPT-4-powered system that retrieves information from company documents and serves it up to financial analysts. These are all great examples of how the technology is being put to productive use.
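None of these companies has published its implementation, but the underlying mechanics are not mysterious. Here is a rough Python sketch of what a Stripe-style "scan a business website and summarize it for support staff" call against OpenAI's chat completions endpoint might look like; the model choice, prompt, and function name are my own illustrative assumptions.

```python
# Hypothetical sketch: summarize a business website for a support agent.
# Assumes the public OpenAI chat completions REST endpoint and an API key
# in the OPENAI_API_KEY environment variable; prompt and model are illustrative.
import os
import requests

def summarize_site_for_support(url: str) -> str:
    # Crude truncation so the page text fits within the model's context window
    page_text = requests.get(url, timeout=10).text[:8000]
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4",
            "messages": [
                {"role": "system",
                 "content": "You summarize business websites for customer support staff."},
                {"role": "user",
                 "content": f"Summarize what this business does, in five bullet points:\n{page_text}"},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(summarize_site_for_support("https://example.com"))
```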
But there is a dark side as well. “ChatGPT can answer very specific questions and use its knowledge to impersonate both security and non-security experts,” says Ron Reiter, Co-Founder and CTO of Israeli data security firm Sentra. “ChatGPT can also translate text into any style of text or proofread text at a very high level, which means that it is much easier for people to pretend to be someone else.” That is a problem because chatbots can be used to refine phishing lures.
While predictions of Skynet taking over the world are probably an over-reach, the chatbots continue to get better. If you are new to the world of large language models, you should read what the UK's National Cyber Security Centre wrote about them and see how these models relate to the bots' data collection and operation.
One of ChatGPT's limitations is that its training data is stale and doesn't include anything after 2021. But it is quickly learning, thanks to the millions of folks who are willingly uploading more recent material. That is a big risk for IT managers, who are already fearful that corporate proprietary information is leaking from their networks. We had one such leak this week, when a bug in ChatGPT exposed the titles of other users' chat histories. This piece in CSOonline goes into further detail about how this sharing works.
My first recommendation is that a cybersecurity manager should "know thy enemy": get a paid account and learn more about OpenAI's API. The API is where the bot interacts with other software, whether that is interpreting and creating pictures, generating code, or playing therapist. One of my therapist friends likes this innovation and thinks it could help people who need to "speak" to someone urgently. These API connections are potentially the biggest threat vectors for data sharing.
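A few lines of code are enough to see how those connections work. Here is a minimal sketch that calls OpenAI's image-generation endpoint; the prompt, size, and image count are arbitrary choices of mine, not a recommended configuration.

```python
# Minimal sketch of a call to OpenAI's image-generation endpoint; the prompt,
# size, and count below are illustrative choices only.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"prompt": "a line drawing of a padlock made of punch cards",
          "n": 1, "size": "512x512"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["data"][0]["url"])  # URL of the generated image
```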
Gartner has suggested a few specific things, such as favoring Azure's version for your own experimentation and putting the right policies in place to prevent confidential data from being uploaded to the bots. Check Point posted this analysis last December showing how easily the bots can create malware, along with further, more recent analysis here.
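What might such a policy look like in practice? Here is a hedged sketch of a pre-submission check that blocks prompts containing obviously confidential material before they ever reach a bot; the patterns and blocking rule are illustrative assumptions of mine, not anything Gartner or a vendor has published.

```python
# Illustrative sketch of a pre-submission check that screens prompts for
# obvious confidential material before they are sent to a chatbot.
# The patterns below are examples only; a real policy would be tuned to your data.
import re

BLOCK_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCK_PATTERNS.items() if pattern.search(prompt)]

violations = screen_prompt("Please summarize this CONFIDENTIAL merger memo ...")
if violations:
    print("Blocked before upload:", ", ".join(violations))
else:
    print("Prompt allowed")
```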
Ironscales has this very illuminating video, shown above, on how this can be done. Also, to my earlier point about phishing, IT managers need to think about better and more targeted awareness and training programs.
Infosys has this five-point plan that includes using the bots to help bolster your defensive posture. They also recommend that you learn more about polymorphic malware threats (CyberArk described such a threat back in January, and Morphisec has specialized tools for fighting them that you might want to consider) and review your zero trust policies.
Finally, if you haven’t yet thought about cloud access security brokers, you should read my review in CSOonline about these products and think about using one across your enterprise to protect your data envelope.