A new report on the security of artificial intelligence large language models, including OpenAI LP's ChatGPT, reveals a series of poor application development decisions that create weaknesses in enterprise data privacy and security. The report is just one of many recent examples of mounting evidence of security problems with LLMs, demonstrating how difficult these threats are to mitigate. I take a deeper dive into several different sources and suggest ways to counter the threats posed by these tools in my post for SiliconANGLE here.