Young people are increasingly turning to AI bots and companions, entrusting them with everything from random questions and schoolwork queries to personal dilemmas. On occasion, they even become romantically entangled with them, according to an article in The 74.
Some experts warn that tech companies are effectively running unregulated psychological experiments on millions of subjects.
“We’re making it so easy to make a bad choice,” says Michelle Culver, who spent 22 years at Teach for America, the last five as the creator and director of the Reinvention Lab, its research arm.
Companions both mimic real relationships and seek to improve upon them: Users most often text-message their AI pals on smartphones, imitating the daily routines of platonic and romantic relationships. But unlike their real counterparts, the AI friends are programmed to be studiously upbeat, never critical, with a great sense of humor and a healthy, philosophical perspective.
And they may be leading young people down a dark path, according to a recent survey by VoiceBox, a youth content platform. It found that many kids are being exposed to risky behaviors from AI chatbots, including sexually charged dialogue and references to self-harm.
Little research exists on young people’s use of AI companions, but the attraction to bots is growing fast. The startup Character.ai earlier this year said 3.5 million people visit its site daily. It features thousands of chatbots, including nearly 500 whose names contain “therapy,” “psychiatrist” or related terms. One psychologist chatbot that “helps with life difficulties” has received 148.8 million messages, despite a caveat at the bottom of every chat that reads, “Remember: Everything Characters say is made up.”
Snapchat last year said that after just two months of offering its chatbot My AI, about one-fifth of its 750 million users had sent it queries, totaling more than 10 billion messages. The Pew Research Center has noted that 59% of Americans ages 13 to 17 use Snapchat.
Culver’s concerns about AI companions grew out of working with high school and college students and being struck by how they seemed “lonelier and more disconnected than ever before.”
Whether it’s rates of anxiety, depression or suicide, or even the number of friends young people have and how often they go out, the metrics are heading in the wrong direction. Culver began to wonder what role AI companions might play over the next few years.
She’s working with researchers, teachers and young people to confront kids’ relationship to these tools at a time when they’re getting more lifelike daily. As she likes to say, “This is the worst the technology will ever be.”
As it improves, VoiceBox Director Natalie Foos says it will likely become more, not less, of a presence in young people’s lives. “There’s no stopping it,” she says. “Nor do I necessarily think there should be ‘stopping it.’” Banning young people from these AI apps isn’t the answer, she says. “This is going to be how we interact online in some cases. I think we’ll all have an AI assistant next to us as we work.”
Foos says developers should slow the progression of such bots until they can iron out the kinks. “It’s kind of an arms race of AI chatbots at the moment,” she says, with products often “released and then fixed later rather than actually put through the wringer” ahead of time.
Worried observers see these new tools rewiring young people’s social brains. AI companions, they say, are wreaking havoc on teens’ ideas around consent, emotional attachment and realistic expectations of relationships.
But in many cases, simulated relationships can have a positive effect: In one 2023 study, researchers at Stanford Graduate School of Education surveyed more than 1,000 students using Replika and found that many saw it “as a friend, a therapist, and an intellectual mirror.” Though the students described themselves as lonelier than their typical classmates, researchers found that Replika halted suicidal ideation in 3% of users, which works out to 30 of the 1,000 students surveyed.
In contrast, the VoiceBox survey suggests young people exploring AI companions are potentially at risk. Foos noted that her team heard from a lot of young people about the turmoil they experienced when Luka Inc., Replika’s creator, performed software upgrades.
“Sometimes that would change the personality of the bot. And those young people experienced very real heartbreak.”
Julia Freeland Fisher, education director of the Clayton Christensen Institute, says she’s not worried about AI companions per se. But as AI companions improve and, inevitably, proliferate, she predicts they’ll create “the perfect storm to disrupt human connection as we know it.” She thinks policies and market incentives are needed to keep that from happening.
Fisher is pushing technologists to factor in AI’s potential to cause social isolation, much as they now fret about AI’s difficulty recognizing non-white faces and its tendency to favor men over women in hiring for tech jobs.


