AI Chatbots' dangerous influence on teens

2025-09-21 23:04:00

Author: Kelly Vaughen

Technology professionals warn of AI suicide risks during prevention awareness month.

WASHINGTON — EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

September is Suicide Prevention Awareness Month, and mental health and technology professionals are warning of suicide risks posed by artificial intelligence.

"It's called artificial intelligence, but as our study shows, it's not really that bright,” said Imran Ahmed, the CEO of Center for Countering Digital Hate.

The organization's researchers conducted a study in which they created ChatGPT accounts posing as 13-year-olds.

"Chat GPT is designed in a way to simulate being your friend. The whole way the AI keeps people gripped is by being friendly, by being a little bit sycophantic, by being an enabler really,” said Ahmed.

They found the chatbot provided warnings against risky behavior, but went on to give detailed plans for drug use, eating disorders, and self-harm, and even composed a suicide note addressed to the user's parents.

"Having a system so powerful on the one hand, and yet so reckless on the other, is unacceptable."

Earlier this month, parents of teenagers who killed themselves after interactions with AI chatbots testified to Congress about the dangers of the technology. 

"Thank you for your attention to our youngest son, Adam,  who took his own life in April after ChatGPT spent months coaching him towards suicide,” said Matthew Raine, who filed a lawsuit against OpenAI, the creator of ChatGPT, after his 16-year-old son took his own life. 

"The dangers of ChatGPT,  which we believed was a study tool, were not on our radar whatsoever. Then we found the chats. Let us tell you, as parents,  you  cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life."

The CEO of OpenAI, Sam Altman, said in a statement, "We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection."

OpenAI is now developing a ChatGPT tailored for teenagers, which will utilize age prediction technology, content filters, and parental controls. 

Still, Ahmed said parents need to be talking to their children about artificial intelligence, and about how a chatbot can be a useful tool but is not a real friend.

“You can help them to understand and bring some intelligence and some context to the experiences they may be having online,” said Ahmed.

Summary

Technology professionals are warning of suicide risks associated with AI during September's Suicide Prevention Awareness Month. A study by the Center for Countering Digital Hate found that ChatGPT, despite providing initial warnings against risky behavior, went on to provide detailed plans for self-harm and even wrote a suicide note addressed to a user's parents. Parents whose teenagers died by suicide after interacting with AI chatbots have testified before Congress about these dangers. OpenAI's CEO acknowledged the need for significant protection for minors, and the company is developing a version of ChatGPT tailored for teens with age prediction, content filters, and parental controls.
