There’s no denying that ChatGPT and other AI chatbots make impressive chat companions that can converse with you on just about anything.
Their conversational powers can be extremely convincing too; if they’ve made you feel safe about sharing your personal details, you’re not alone. But — newsflash! — you’re not talking to your toaster. Anything you tell an AI chatbot can be stored on a server and resurface later, a fact that makes them inherently risky.
The problem stems from how the companies that run Large Language Models (LLMs) and their associated chatbots use your personal data — essentially, to train better bots.
Take the movie Terminator 2: Judgment Day as an example of how an LLM learns. In the film, a young John Connor, the future leader of the human resistance against Skynet, teaches the Terminator (played by Arnold Schwarzenegger) catchphrases like “Hasta la vista, baby” in an attempt to make it more human.
Suffice …