This has been the summer of adversarial chatbots.
Researchers from SlashNext Inc. and Netenrich discovered two such efforts, named WormGPT and FraudGPT. These cyberattack tools are almost certainly the first in a long line of products built for nefarious purposes, such as crafting highly targeted phishing emails and new hacking tools. This summer demonstrated that generative artificial intelligence is quickly moving into both offensive and defensive positions, with many security providers touting how they use AI methods to augment their defensive tools. The AI security arms race has begun.
You can read my post in SiliconANGLE here.