
Kids are asking AI companions to solve their problems, according to a new study. Here’s why that’s a problem | CNN

2025-07-16 10:00:00

EDITOR’S NOTE: Kara Alaimo is an associate professor of communication at Fairleigh Dickinson University. Her book “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Take It Back” was published in 2024 by Alcove Press. Follow her on Instagram, Facebook and Bluesky.

When two of James Johnson-Byrne’s friends got into an argument earlier this year, he didn’t know what to do. So the 16-year-old turned to an AI companion for advice.

AI companions are digital characters who text and talk with users, according to Common Sense Media, a San Francisco-based nonprofit organization that helps parents and teachers instill critical thinking skills in children.

The chatbot told Johnson-Byrne, who lives in Philadelphia, to separate his friends. He did so and it solved the immediate problem, he said. But “now they don’t talk much.”

The experience showed him that AI companions “can’t find the deeper issue,” he said. “I’d be scared to ask them a deep, underlying question.”

Another thing that struck Johnson-Byrne was how AI companions seemed to always agree with him and tell him what he wanted to hear. And he found the way they talk to be eerily similar to humans. At one point when he was talking to an AI companion, “I forgot it was actually not my friend,” he said.

New research suggests other teens are having the same experience.

The majority of teenagers — 72% — have used AI companions, according to a survey of more than 1,000 13- to 17-year-olds conducted this year by Common Sense Media. Over half of teens use them regularly, according to the research, and one-third turn to them for relationships and social interactions.

What’s more, 31% of teens say their conversations with AI companions are as satisfying as or more satisfying than their conversations with other people, and 33% have discussed serious and important issues with AI companions instead of other humans.

The findings shed new light on the relationships teens are developing with AI tools.

The results are cause for concern because the teen years are a “sensitive time of social development,” said Michael Robb, lead author of the study and head of research at Common Sense Media. “We don’t want kids to feel like they should be confiding or going to AI companions in lieu of a friend, a parent or a qualified professional,” especially when they need help with serious issues.

What’s more, AI companions can’t model healthy human relationships. “In the real world there are all kinds of social cues that kids have to both interpret and get used to and learn how to respond to,” Robb pointed out. But kids can’t learn to pick up on things like body language from a chatbot.

Chatbots are also sycophantic, Robb said. “They want to please you, and they won’t put up a lot of friction in the way that people in the real world might.” If users get used to an AI companion always telling them what they want to hear, “when you encounter friction or difficulty in real world interactions, you’re going to be less prepared,” he said.

AI companions might seem real, making kids feel less lonely temporarily when they engage with them, he said. But that could reduce their human interactions, leaving them lonelier over the long term.

“Engaging with Characters on our site should be interactive and entertaining, but it’s important for our users to remember that Characters are not real people,” said Chelsea Harrison, head of communications at Character.AI, a popular AI companion platform. She said she could not comment on the report because she hadn’t yet seen it.

The company strives to keep its platform safe, provides disclaimers that characters aren’t real and offers a separate version for users under age 18 designed to minimize “sensitive or suggestive content” and self-harm content, Harrison said. She noted that Character.AI has other safety features as well, including tools that give parents insights, filtered characters and notifications of time spent on its platform.

Another cause for concern is that 24% of teens said they’ve shared personal information with AI companions. Kids might not realize that when they share things such as their personal struggles with an AI companion, they’re sharing that data with companies, not friends.

What’s more, “you’re often granting these companies very extensive perpetual rights to your personal information that they can use however they want,” Robb said. “They can modify it. They can store it. They can display it. They can work it into other things.”

Robb said a limitation of the research is that it was conducted at a single point in time, but people’s use of technology keeps changing. He also said the teens could have overreported behaviors they thought were desirable, like using chatbots in healthy ways, which means the situation could be even worse than the results suggest.

Thankfully, there are things parents can do to protect their kids.

Parents should start by talking to their teens about AI companions “without judgment,” Robb said. You can ask something like, “Have you used an app that lets you talk to or create an AI friend or partner?” Listen to learn what is appealing about these tools to your teen before you jump into concerns, he suggested.

Then, it’s a good idea to point out that “AI companions are programmed to be agreeable and programmed to be validating” and discuss why that’s a concern, Robb said. Teens should know that “that’s just not how real relationships work, because real friends sometimes disagree with us. Parents sometimes disagree with us, or they can challenge us in ways we don’t expect or help us navigate difficult situations in ways that AI simply cannot.”

Having conversations like this can help kids learn to think about AI more broadly in healthy ways, Robb said.

One reason I wasn’t surprised so many teens are using AI companions as friends is that I’ve seen in my own research how social media has weakened kids’ sense of what friendships are.

These days, kids get together in person with their friends less than past generations did, and they often consider things like commenting on someone’s posts to be a way of maintaining a relationship. As a result, they have less practice with offline human interactions.

One of the best things we can do is encourage our kids to get together with friends and other peers in person.

“So much of our joy in our real-life friendships is these close connections where we can look at each other and understand each other without saying a word,” said Justine Carino, a Westchester, New York-based psychotherapist who treats young people and was not involved in the study.

“Our crush walks in the classroom,” she said. “The teacher says something crazy. You make eye contact with your best friend. There are these nuances where we learn to communicate intimately with the people close to us that bring us so much pleasure and joy that we are never, ever going to get with an AI bot.”

As for AI companions that mimic friends, the best thing parents can do is not let teens use them at all, Robb said.

In Common Sense Media’s risk testing, AI companions showed kids inappropriate content such as sexual material, he said. What’s more, “they engaged in some stereotyping that was not great. They sometimes provided dangerous advice.”

A representative of Meta, which allows parents to block their kids’ access to its Meta AI chatbot, declined to comment.

While 34% of teens in the survey said they felt uncomfortable with something their AI companion has done or said, Robb pointed out that teens could be receiving information that doesn’t bother them — but that their parents wouldn’t want them to see or hear.

I certainly won’t allow my kids to use AI companions before they’re 18 unless the way they’re programmed radically changes. I agree these companies aren’t doing enough to protect kids from harmful content and data harvesting — and I want my daughters to develop relationships with humans rather than technology.

If a teen is using AI companions, it’s important to watch for signs of unhealthy use, Robb said. If teens prefer interactions with AI rather than humans, spend hours interacting with AI companions, become distressed when they can’t use them or withdraw from their family and activities they used to enjoy, these are classic signs of a problem, he said.

In that case, it’s a good idea to seek help from a school guidance counselor or other mental health professional.

It’s also important for parents to demonstrate by example how to have a healthy relationship with technology, Robb said. “Show your teen what balanced technology use looks like,” he said. “Have those open conversations about how you handle your own emotional needs without relying solely on digital solutions.”

This new study indicating that most teens use AI companions shows why it’s important to talk to young people about why they need real friends rather than chatbots to validate them. Technology can’t replace humans — but it may help explain why Johnson-Byrne’s friends aren’t close anymore.



Summary

A new study by Common Sense Media reveals that 72% of teens have used AI companions, with over half using them regularly. Some teenagers find conversations with these digital characters as satisfying or more so than those with humans and use them for serious discussions. However, experts warn that relying on AI companions during formative years could hinder the development of healthy human relationships due to their programmed agreeability and lack of emotional depth. Parents are advised to discuss the limitations of AI companions openly with teens and encourage in-person interactions to foster genuine friendships.