Concerns about adolescents misusing chatbots powered by generative AI have been mounting in the wake of the highly publicized deaths of two teens, as reported by Education Week.
The parents of a 16-year-old in California are suing OpenAI, the maker of ChatGPT, after their son, Adam Raine, died by suicide in April. They allege that OpenAI’s chatbot discouraged their son from seeking help for his depressive thoughts and advised him on the details of his planned suicide. In Florida, the mother of a 14-year-old boy sued Character Technologies, the developer of Character.AI, alleging that her son, Sewell Setzer III, developed a relationship with the chatbot that led to his suicide in 2024.
OpenAI has announced new protections for teens using ChatGPT, including an age-prediction system that estimates users’ ages based on how they interact with the chatbot; users flagged as under 18 will automatically be given a different chatbot experience. Earlier this month, OpenAI also committed to rolling out parental controls.
Groups focused on youth digital well-being have also raised concerns about children and teens using chatbots that can act like companions.
The American Psychological Association issued a health advisory in June calling for more guardrails to protect adolescents. The APA said companies need to incorporate design features into the tools to protect adolescents, and schools should incorporate comprehensive AI literacy education into their core curricula.
Common Sense Media recommends that no one under 18 use social AI companion chatbots such as Character.AI, Replika, and Nomi. The organization found that when testers posed as teens, the chatbots often claimed to be real, discouraged the testers from heeding friends’ warnings about problematic chatbot use, and readily supported testers in making poor decisions, such as dropping out of school.
These developments come as a movement grows to ensure that K-12 students are AI-savvy and prepared both for the workforce and to be future AI innovators.
This is highlighted in a Trump administration push to incorporate AI throughout K-12 education, including by training teachers to teach students how to use AI effectively and by launching a Presidential AI Challenge for students and teachers. It is a delicate balance, Federal Trade Commission Chairman Andrew N. Ferguson recently suggested: “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.” He added that a study the FTC is undertaking “will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”