According to a July survey by Common Sense Media, reported in K-12 Dive, 72% of teens have used AI companions, also known as chatbots, at least once. Children's media safety and mental health organizations strongly advise youth under 18 to refrain from using popular AI social chatbots, which are programmed with human-like features and designed to build human-AI relationships.
More than half of the teens surveyed said they interacted with these platforms at least a few times a month.
One-third of teens said they’ve used AI companions “for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice,” according to the Common Sense Media survey.
Parents and researchers are alarmed, warning that AI companions pose serious risks to children and teens, including intensifying mental health conditions such as depression, anxiety disorders, ADHD and bipolar disorder.
Megan Garcia recently testified at a Senate hearing focused on the harms of chatbots about the suicide of her 14-year-old son, Sewell Setzer III. Setzer “spent his last months being manipulated and sexually groomed by chatbots designed by an AI company to seem human, to gain trust, and to keep children like him endlessly engaged by supplanting the actual human relationships in his life,” she testified.
OpenAI recently announced it will begin implementing guardrails to better protect teen ChatGPT users, including new parental controls.
AI companions are “a serious issue,” says Laura Erickson-Schroth, chief medical officer at The Jed Foundation. “It’s really providing this kind of emotional support that isn’t coming from a human being, and it’s also providing incorrect guidance, frequently, to young people, giving them misinformation.”
What can schools do?
Erickson-Schroth recommends that schools first develop a districtwide AI strategy in partnership with parents, students and community members. Strategies should address how certain AI tools may help or misinform users in schools, as well as concerns around student data privacy and security, she says.
The use of AI-based mental health tools in schools “should always augment and not replace the caring adults in a young person’s life,” Erickson-Schroth says.
“When you think about young people engaging with emotionally responsive AI by themselves — without any structure around it — that’s when I think it gets really scary, because young people’s brains are still developing,” she says.
Digital literacy programs are key to addressing potential harm, Erickson-Schroth says. These can include classroom lessons in which students act as detectives investigating AI tools.
Teachers should ask students how they use AI and where they think these systems get their information, she says, and have them explore the ways AI is most likely to be wrong.
Schools should also teach students that AI companions are not human, and that if there is something important they need to talk about, they should speak to a trusted adult, Erickson-Schroth says.
K-12 Dive