The realities of ChatGPT as cyber threats (webcast)

I had an opportunity to be interviewed by Tony Bryant, of CyberUP, a cybersecurity non-profit training center, about the rise of ChatGPT and its relevance to cyber threats. This complemented a blog that I wrote earlier in the year on the topic, and certainly things are moving quickly with LLM-based AIs. The news this week is that IBM is replacing 7,800 staffers with various AI tools, which makes thinking about the future of upskilling for GPT-related jobs all the more important. At the RSAC show last week, there were lots of booths focused on the topic, and more than 20 different conference sessions that ranged from danger ahead to how we can learn to love ChatGPT for various mundane security tasks, such as pen testing and vulnerability assessment. And of course there was news about how ChatGPT writes lots of insecure code, according to French infosec researchers, along with a new malware infostealer circulating as a file named ChatGPT For Windows Setup 1.0.0.exe. Don't download that one!

There are still important questions you need to ask if you are thinking about deploying any chatbot app across your network, including how your vendor is using AI, which algorithms and training data are part of the model, how to build resilience and SDLC processes into the code, and what problem you are really trying to solve.
