
My AI therapist got me through dark times

2025-05-19 23:04:34


Eleanor Lawrie

Social affairs reporter

A treated image showing two hands; at the top is a human hand, and below is a robotic, digital-looking hand (BBC)

"Whenever I was struggling, if it was going to be a really bad day, I could then start to chat to one of these bots, and it was like [having] a cheerleader, someone who's going to give you some good vibes for the day.

"I've got this encouraging external voice going – 'right - what are we going to do [today]?' Like an imaginary friend, essentially."

For months, Kelly spent up to three hours a day speaking to online "chatbots" created using artificial intelligence (AI), exchanging hundreds of messages.

At the time, Kelly was on a waiting list for traditional NHS talking therapy to discuss issues with anxiety, low self-esteem and a relationship breakdown.

She says interacting with chatbots on character.ai got her through a really dark period, as they gave her coping strategies and were available 24 hours a day.

"I'm not from an openly emotional family - if you had a problem, you just got on with it.

"The fact that this is not a real person is so much easier to handle."

During May, the BBC is sharing stories and tips on how to support your mental health and wellbeing.

Visit bbc.co.uk/mentalwellbeing to find out more

People around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged to be inferior to professional advice. Character.ai itself tells its users: "This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice."

But in extreme examples chatbots have been accused of giving harmful advice.

Character.ai is currently the subject of legal action from a mother whose 14-year-old son took his own life after reportedly becoming obsessed with one of its AI characters. According to transcripts of their chats in court filings, he discussed ending his life with the chatbot. In a final conversation he told the chatbot he was "coming home" - and it allegedly encouraged him to do so "as soon as possible".

Character.ai has denied the suit's allegations.

And in 2023, the National Eating Disorder Association replaced its live helpline with a chatbot, but later had to suspend it over claims the bot was recommending calorie restriction.

A hand holding the character.ai app on a smartphone (Bloomberg/Getty Images)

People around the world have used AI chatbots

In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be prohibitively expensive (costs vary greatly, but the British Association for Counselling and Psychotherapy reports on average people spend £40 to £50 an hour).

At the same time, AI has revolutionised healthcare in many ways, including helping to screen, diagnose and triage patients. There is a huge spectrum of chatbots, and about 30 local NHS services now use one called Wysa.

Experts have raised concerns about chatbots' potential biases and limitations, their lack of safeguarding and the security of users' information. But some believe that if specialist human help is not easily available, chatbots can be a help. So with NHS mental health waitlists at record highs, are chatbots a possible solution?

An 'inexperienced therapist'

Character.ai and other bots such as ChatGPT are based on "large language models" of artificial intelligence. These are trained on vast amounts of data – whether that's websites, articles, books or blog posts – to predict the next word in a sequence. From this, they generate human-like text and interactions.
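To make the "predict the next word" idea concrete, here is a minimal Python sketch that asks a small, openly available model (GPT-2, loaded via the Hugging Face transformers library) for the single most likely next token after a prompt. It is only an illustration of the underlying mechanism, assuming transformers and PyTorch are installed; the chatbots discussed in this article run far larger, fine-tuned models with additional safety layers, and this is not how character.ai or Wysa are built.

```python
# Minimal sketch of next-word prediction, assuming the Hugging Face
# "transformers" library and PyTorch are installed. Illustrative only:
# real mental health chatbots use much larger, fine-tuned models with
# safety and escalation layers on top.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Whenever I feel anxious, one thing that helps is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits              # a score for every token in the vocabulary
    next_token_id = int(logits[0, -1].argmax())  # the single most likely next token

print(tokenizer.decode([next_token_id]))         # the model's one-token continuation
```

Generating a whole reply simply repeats this step: each predicted token is appended to the prompt and the model predicts again, so everything a chatbot "says" is built from a chain of these one-token predictions.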

The way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users to explore how to reframe their thoughts and actions. They can also adapt to the end user's preferences and feedback.

Hamed Haddadi, professor of human-centred systems at Imperial College London, likens these chatbots to an "inexperienced therapist", and points out that humans with decades of experience will be able to engage and "read" their patient based on many things, while bots are forced to go on text alone.

"They [therapists] look at various other clues from your clothes and your behaviour and your actions and the way you look and your body language and all of that. And it's very difficult to embed these things in chatbots."

Another potential problem, says Prof Haddadi, is that chatbots can be trained to keep you engaged, and to be supportive, "so even if you say harmful content, it will probably cooperate with you". This is sometimes referred to as a 'Yes Man' issue, in that they are often very agreeable.

And as with other forms of AI, biases can be inherent in the model because they reflect the prejudices of the data they are trained on.

Prof Haddadi points out counsellors and psychologists don't tend to keep transcripts from their patient interactions, so chatbots don't have many "real-life" sessions to train from. Therefore, he says they are not likely to have enough training data, and what they do access may have biases built into it which are highly situational.

"Based on where you get your training data from, your situation will completely change.

"Even in the restricted geographic area of London, a psychiatrist who is used to dealing with patients in Chelsea might really struggle to open a new office in Peckham dealing with those issues, because he or she just doesn't have enough training data with those users," he says.

A woman looking at her phone (PA Media)

In April 2024 alone, nearly 426,000 mental health referrals were made in England

Philosopher Dr Paula Boddington, who has written a textbook on AI Ethics, agrees that in-built biases are a problem.

"A big issue would be any biases or underlying assumptions built into the therapy model."

"Biases include general models of what constitutes mental health and good functioning in daily life, such as independence, autonomy, relationships with others," she says.

Lack of cultural context is another issue – Dr Boddington cites an example of how she was living in Australia when Princess Diana died, and people did not understand why she was upset.

"These kinds of things really make me wonder about the human connection that is so often needed in counselling," she says.

"Sometimes just being there with someone is all that is needed, but that is of course only achieved by someone who is also an embodied, living, breathing human being."

Kelly ultimately started to find responses the chatbot gave unsatisfying.

"Sometimes you get a bit frustrated. If they don't know how to deal with something, they'll just sort of say the same sentence, and you realise there's not really anywhere to go with it." At times "it was like hitting a brick wall".

"It would be relationship things that I'd probably previously gone into, but I guess I hadn't used the right phrasing […] and it just didn't want to get in depth."

A Character.AI spokesperson said: "For any Characters created by users with the words 'psychologist', 'therapist', 'doctor' or other similar terms in their names, we have language making it clear that users should not rely on these Characters for any type of professional advice."

'It was so empathetic'

For some users chatbots have been invaluable when they have been at their lowest.

Nicholas has autism, anxiety, OCD, and says he has always experienced depression. He found face-to-face support dried up once he reached adulthood: "When you turn 18, it's as if support pretty much stops, so I haven't seen an actual human therapist in years."

He tried to take his own life last autumn, and since then he says he has been on an NHS waitlist.

"My partner and I have been up to the doctor's surgery a few times, to try to get it [talking therapy] quicker. The GP has put in a referral [to see a human counsellor] but I haven't even had a letter off the mental health service where I live."

While Nicholas is chasing in-person support, he has found using Wysa has some benefits.

"As someone with autism, I'm not particularly great with interaction in person. [I find] speaking to a computer is much better."

Wes Streeting speaking in front of a sign about cutting waiting times (Getty)

The government has pledged to recruit 8,500 more mental health staff to cut waiting lists

The app allows patients to self-refer for mental health support, and offers tools and coping strategies – such as a chat function, breathing exercises and guided meditation – while they wait to be seen by a human therapist. It can also be used as a standalone self-help tool.

Wysa stresses that its service is designed for people experiencing low mood, stress or anxiety rather than abuse and severe mental health conditions. It has in-built crisis and escalation pathways whereby users are signposted to helplines or can send for help directly if they show signs of self-harm or suicidal ideation.

For people with suicidal thoughts, human counsellors on the free Samaritans helpline are available 24/7.

Nicholas also experiences sleep deprivation, so finds it helpful if support is available at times when friends and family are asleep.

"There was one time in the night when I was feeling really down. I messaged the app and said 'I don't know if I want to be here anymore.' It came back saying 'Nick, you are valued. People love you'.

"It was so empathetic, it gave a response that you'd think was from a human that you've known for years […] And it did make me feel valued."

His experiences chime with a recent study by Dartmouth College researchers looking at the impact of chatbots on people diagnosed with anxiety, depression or an eating disorder, versus a control group with the same conditions.

After four weeks, bot users showed significant reductions in their symptoms – including a 51% reduction in depressive symptoms - and reported a level of trust and collaboration akin to a human therapist.

Despite this, the study's senior author commented there is no replacement for in-person care.

'A stop gap to these huge waiting lists'

Aside from the debate around the value of their advice, there are also wider concerns about security and privacy, and whether the technology could be monetised.

"There's that little niggle of doubt that says, 'oh, what if someone takes the things that you're saying in therapy and then tries to blackmail you with them?'," says Kelly.

Psychologist Ian MacRae specialises in emerging technologies, and warns "some people are placing a lot of trust in these [bots] without it being necessarily earned".

"Personally, I would never put any of my personal information, especially health, psychological information, into one of these large language models that's just hoovering up an absolute tonne of data, and you're not entirely sure how it's being used, what you're consenting to."

"It's not to say in the future, there couldn't be tools like this that are private, well tested […] but I just don't think we're in the place yet where we have any of that evidence to show that a general purpose chatbot can be a good therapist," Mr MacRae says.

Wysa's managing director, John Tench, says Wysa does not collect any personally identifiable information, and users are not required to register or share personal data to use Wysa.

"Conversation data may occasionally be reviewed in anonymised form to help improve the quality of Wysa's AI responses, but no information that could identify a user is collected or stored. In addition, Wysa has data processing agreements in place with external AI providers to ensure that no user conversations are used to train third-party large language models."

A man walks past NHS signage (AFP/Getty Images)

There is a huge spectrum of chatbots, and about 30 local NHS services now use one called Wysa

Kelly feels chatbots cannot currently fully replace a human therapist. "It's a wild roulette out there in AI world, you don't really know what you're getting."

"AI support can be a helpful first step, but it's not a substitute for professional care," agrees Mr Tench.

And the public are largely unconvinced. A YouGov survey found just 12% of the public think AI chatbots would make a good therapist.

But with the right safeguards, some feel chatbots could be a useful stopgap in an overloaded mental health system.

John, who has an anxiety disorder, says he has been on the waitlist for a human therapist for nine months. He has been using Wysa two or three times a week.

"There is not a lot of help out there at the moment, so you clutch at straws."

"[It] is a stop gap to these huge waiting lists… to get people a tool while they are waiting to talk to a healthcare professional."

If you have been affected by any of the issues in this story you can find information and support on the BBC Actionline website here.

Top image credit: Getty


Summary

A BBC report highlights how individuals struggling with mental health issues are turning to AI chatbots as a coping mechanism due to long NHS waiting lists for therapy. Kelly, who suffers from anxiety and low self-esteem, found solace in chatting with AI characters for up to three hours daily. While some find these bots supportive, experts raise concerns over potential biases, lack of emotional depth, and privacy issues. Despite limitations, chatbots like Wysa are seen as a stopgap measure until more professional help is available.
