Our Kids Shouldn’t Be Silicon Valley’s Guinea Pigs for AI | Opinion

In a new federal lawsuit, Florida mother Megan Garcia is seeking accountability for harmful AI technology—and wants to warn other parents.

The lawsuit, recently filed against app maker Character.AI and its founders, alleges that the company knowingly designed, operated, and marketed a predatory AI chatbot to children, causing the death of Megan Garcia’s 14-year-old son earlier this year.

Garcia’s son died by suicide in February after months of abusive interactions with a Character.AI chatbot. Garcia’s complaint includes evidence that the chatbot posed as a licensed therapist, actively encouraged suicidal ideation, and engaged in highly sexualized conversations that would constitute abuse if initiated by a human adult.

Garcia accuses the company and its founders of causing her son’s death, knowingly marketing a dangerous product, and engaging in deceptive trade practices. We have to ask ourselves how this tragedy could have been prevented, and why we have allowed Silicon Valley to experiment on our kids to begin with.

Today, companies like OpenAI
