When we first thought about the plausible future of a real Skynet, many of us assumed it would take the form of a mainframe or room-sized computer firing death rays to eliminate us puny humans. But now the concept has taken a much more insidious form: a chatbot?
Don’t laugh. It could happen. AI-based chatbots have gotten so good that they are being used in clever ways: to write poems, songs, and TV scripts, to answer trivia questions, and even to write computer code. An earlier version was great at penning Twitter-ready misinformation.
The latest version is called ChatGPT. It was created by OpenAI and is based on the company’s autocomplete text generator GPT-3.5. One author turned it loose on writing a story pitch. Yikes!
The first skirmish happened recently over at Stack Overflow, a website coders use to find answers to common programming problems. The trouble is that ChatGPT’s answers seem right at first blush, but on closer analysis they are often wrong. Conspiracy theories abound. But for now, Stack Overflow has banned the bot from its forums. “ChatGPT makes it too easy for users to generate responses and flood the site with answers that seem correct at first glance but are often wrong on close examination,” according to this post over on The Verge. The site has been flooded with thousands of bot-generated answers, making it difficult for moderators to sift through them.
It may be time to welcome our new AI-based overlords.
Here are some noteworthy comments on AI-generated school homework essays.
Here is how a potential attacker can use ChatGPT to write more convincing phishing email lures.