
Can an AI Chatbot Become Your Therapist? We Had Real People Put It to the Test

2025-06-20 14:01:40

As I get comfortable on the couch, my therapist reminds me that I can talk to them without any fear of judgment whenever I’m feeling stressed or anxious. “I’m always here to listen to you and help you vent,” they say, before guiding me through a mindful breathing exercise.

It seems like any regular conversation someone would have in a standard therapy session—except that my “therapist” is actually a cute little penguin named Wysa, and I’m conversing with it through chat bubbles on a phone screen.

Wysa is an AI chatbot designed to provide around-the-clock mental health support—and it’s not the only one of its kind. Search “mental health chatbot” or “AI therapy” in the app store, and you’ll find numerous apps leveraging artificial intelligence to provide a “mental health ally” or a “pocket counselor” you can talk to 24/7. In the case of Wysa, the little penguin on your screen promises to be a friend that’s “empathetic, helpful, and will never judge”—someone who can check in on you every day, listen to you vent about your feelings, and even guide you through cognitive behavioral therapy (CBT) techniques to help you reframe negative thoughts.

It’s all part of a picture that’s becoming more familiar in our digital age—one in which OpenAI’s ChatGPT draws over 400 million users weekly, and in which one of society’s biggest questions is the future of AI and its place in our world. In the context of mental health care, the questions loom large: Can AI chatbots offer the full benefits of seeing a mental health provider (except more conveniently)? Could they potentially replace human therapists in the future? And perhaps more importantly, should they?

To find out, we delved into the existing research, spoke to the people behind these AI chatbot companies, and conducted our own version of a user study. Here’s what we discovered.

* * *

According to 2022 data from the National Institute of Mental Health (NIMH), more than one in five American adults currently live with a mental illness—with nearly half not receiving any mental health treatment for their condition. This isn’t surprising when you consider the barriers that exist in American mental health care access, including a shortage of providers, inadequate insurance coverage, and the lingering generational stigma associated with mental illness.

Like many other mental wellness apps, Wysa—which currently has over five million users from more than 30 countries—was created in the hopes that it would help bridge these gaps in mental health care. The idea for the chatbot was born in 2015 when Jo Aggarwal, Wysa’s co-founder and CEO, was struggling with depression and found herself facing something that millions experience every day: the challenge of finding an available therapist when you need care.

“There are so many different barriers to just being able to access one helpful conversation,” says Chaitali Sinha, Wysa’s Chief Clinical R&D Officer. Those became some of the reasons to “use technology as a lever to make mental health care more accessible” in an “engaging way,” she explains. Wysa is available to individual users (it can be downloaded for free and offers premium subscriptions, such as a $79.99 annual subscription) and through partnerships with schools, employers, health insurers, and the UK’s National Health Service (NHS).

Similarly, Youper, another popular AI mental health support chatbot service, was founded in 2016 “after seeing how stigma, cost, and waitlists kept patients from care,” says psychiatrist Jose Hamilton, M.D., CEO and co-founder of the company. Currently boasting over 3 million users, Youper is designed to be “your emotional health assistant” that “helps you feel your best”—with the ultimate mission of making mental health care accessible for everyone. It's available to individual users, as well as through employers and insurers. The app can be downloaded for free, though access to premium features and advanced conversations requires an annual subscription (starting at $69.99 per year).

“As a psychiatrist, one of the most common things that I used to hear from my patients was ‘It took me years to get here and see you,’” says Dr. Hamilton. “By delivering brief, on-demand conversations, Youper fills the gap between no mental health support and clinical therapy.”

Indeed, unlike human therapists, AI chatbots can be available on demand, 24/7—thereby potentially helping to expand access to care for people in remote or underserved areas. They could also offer a more affordable alternative to human-led therapy sessions, as well as provide an anonymous, judgment-free zone that may encourage more people to seek mental health support.

But how do these AI chatbots work, exactly—and how does their mental health support compare to that coming from a human?

In the case of Wysa, the chatbot is designed to deliver tools inspired by evidence-based techniques—such as CBT—through structured text conversation. Some of these tools include mood tracking, meditation and mindfulness exercises, and gratitude journaling. Under the hood, Wysa combines rule-based algorithms with large language models (LLMs)—a type of machine learning model that can comprehend and generate human language by processing large amounts of text—to respond to the user and recommend tools. Everything that Wysa says is built by its conversational design team and approved and tested by clinicians, says Sinha.

Youper likewise combines LLMs with evidence-based behavioral therapies—including CBT, Acceptance and Commitment Therapy (ACT), and Dialectical Behavior Therapy (DBT)—delivered through techniques like gratitude journaling, mood tracking, and more. “Clinical experts design and periodically audit the prompts; the AI tailors wording on the fly while staying within those human-approved boundaries,” Dr. Hamilton explains.
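To make that division of labor more concrete, here is a minimal, purely illustrative sketch in Python of the general pattern both companies describe: rules and clinician-approved content decide what gets said, while a language model (stubbed out below) may only adjust the wording. The keyword rules, messages, and function names are hypothetical stand-ins, not anything from Wysa or Youper.

```python
# Illustrative sketch only -- not Wysa's or Youper's actual code.
# Clinician-approved messages, keyed by the state a simple rule detects.
APPROVED_TOOLS = {
    "anxious": "Let's try a short breathing exercise: in for 4, hold for 4, out for 6.",
    "low": "Would you like to note three things you're grateful for today?",
    "default": "I'm here to listen. Tell me more about what's on your mind.",
}

def detect_state(message: str) -> str:
    """Toy rule-based step: map keywords in the user's message to a category."""
    text = message.lower()
    if any(word in text for word in ("anxious", "panicky", "stressed")):
        return "anxious"
    if any(word in text for word in ("sad", "down", "hopeless")):
        return "low"
    return "default"

def rephrase_within_bounds(approved_text: str) -> str:
    """Stand-in for an LLM call that may only reword clinician-approved content."""
    return approved_text  # a real system would call a language model here

def respond(message: str) -> str:
    """Rules pick the tool; the (stubbed) model only tailors the wording."""
    return rephrase_within_bounds(APPROVED_TOOLS[detect_state(message)])

print(respond("I've been feeling really anxious about work lately."))
```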

Even with humans behind these AI chatbots, though, the question remains: How does a digital interaction with a chatbot compare to a face-to-face conversation with a human? After all, studies show that one of the biggest predictors for success in mental health care treatment is the therapeutic alliance between therapist and client. Can conversing with a faceless, voiceless bot through text on a screen really replicate this relationship and lead to favorable outcomes?

* * *

[Image: An AI robot head on a pink background. Getty Images]

In March 2025, Dartmouth researchers published the first-ever clinical, randomized controlled trial of a generative AI-powered therapy chatbot, named Therabot, and found that it resulted in significant improvements in symptoms for patients with major depressive disorder, anxiety, and eating disorders. But the trial consisted of only 106 participants—and researchers concluded that although AI-powered therapy shows promise, it’s still in need of clinician oversight, and further research is needed to confirm its effectiveness.

Wysa and Youper have also been the subjects of research studies. In one 2021 study, Stanford University researchers analyzing data from about 4,500 Youper users found significant reductions in symptoms of anxiety and depression within the first two weeks of using the app, with improvements sustained through four weeks. Another independent clinical trial in 2022 found that Wysa was effective for managing chronic musculoskeletal pain and associated depression and anxiety—a key finding that led to the app being granted a Breakthrough Device Designation from the U.S. Food and Drug Administration (FDA) that year.

The existing studies, however, aren’t without limitations. So far, much of the research on AI chatbots for mental health support has been non-randomized, non-controlled, and small in sample size, says Şerife Tekin, Ph.D., an Associate Professor of Philosophy at the Center for Bioethics and Humanities at SUNY Upstate Medical University, whose work includes the ethical issues surrounding the use of AI in medicine. Plus, many studies are conducted by the very companies trying to sell the product, which is a “research integrity and conflict of interest issue,” Tekin notes.

“There’s really barely any evidence that shows that [AI therapy chatbots] are in fact effective,” she says. “We do not have longitudinal studies that look at people with diagnosis of mental disorders who did not have any other kind of help, who used these chatbots, and can say that they benefited from these kinds of interventions.”

Still, many companies behind these chatbots are optimistic about the technology. They emphasize that they are not meant to be a replacement, but rather a supplemental option to augment care. “We are often seen as that first layer of support,” Sinha says. “But complete replacement of therapy is something that I think we’re definitely a long way off from.”

In fact, Wysa also offers human mental health support services alongside its AI-powered chatbot. Though the “AI coach” is free, users can pay for a premium plan to access trained and experienced human mental health professionals via text-based online messaging or an audio-video platform. Additionally, the company recently launched Wysa Copilot, a hybrid digital and human therapy platform that pairs clinician-led, in-session therapy with interactive AI-guided tools.

But even as a supplemental or integrative option, some researchers remain cautious about the use of AI chatbots for mental health support. There hasn’t been much research conducted on the possible risks of using these technologies in lieu of actual psychotherapy, Tekin points out. What if a person isn’t satisfied with the care they’re receiving from an AI chatbot, and that turns them away from seeking traditional therapy or other mental health interventions in the future? Or worse, what if the bot mistakenly gives harmful advice to a person in a crisis situation?

Yet another big concern centers around the fact that AI algorithms can inherently contain biases.

“The kind of regular biases that we exhibit in our daily interactions with other people show up in AI algorithms, especially when it comes to things like sexism and racism,” says Tekin. “People who do not have access to mental health care or [who are] of lower socioeconomic backgrounds will be using [these apps], but if the apps themselves are biased against those groups they’re intending to serve, then we have an even bigger problem in our hands.”

The companies behind these AI chatbots acknowledge these risks and limitations and are clear that their products are not designed to provide crisis counseling or to replace face-to-face psychotherapy. Chatbots like Wysa and Youper have safeguards in place to help prevent them from giving potentially inappropriate or harmful responses, and to ensure that they point users to helpful resources in a crisis situation, according to the companies.

“Youper continuously screens language for self-harm, violence, and other issues—if risk is detected, it immediately refers users to local crisis lines, and encourages immediate professional help,” notes Dr. Hamilton.
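As a rough illustration of that kind of screening (hypothetical, and not Youper's actual implementation), the sketch below checks each incoming message against a small list of crisis phrases and, on a match, surfaces crisis resources instead of continuing the normal conversation. The resources shown match those listed at the end of this article.

```python
# Illustrative sketch only -- the phrase list and responses are hypothetical,
# not Youper's real screening logic.
CRISIS_PHRASES = ("hurt myself", "end my life", "kill myself", "suicide")

def message_indicates_risk(message: str) -> bool:
    """Return True if the message contains possible crisis language."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def handle_message(message: str) -> str:
    """Route crisis messages to resources; otherwise continue the normal flow."""
    if message_indicates_risk(message):
        return ("It sounds like you may be going through a crisis. Please call the "
                "National Suicide Prevention Lifeline at 1-800-273-TALK (8255) or "
                "text HOME to 741741 to reach a trained crisis counselor.")
    return "[continue normal, clinician-approved conversation flow]"

print(handle_message("Some days I think about ending my life."))
```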

While these companies remain optimistic about the potential of AI in mental health care, even in the face of safety and ethical concerns, the future of the technology remains ambiguous. Take Woebot, another AI chatbot, founded in 2017 by Stanford University clinical research psychologist Alison Darcy, Ph.D., with the mission of scaling mental health support to meet the rapidly increasing demand for care providers. Woebot quickly became one of the most popular AI mental health chatbots out there, drawing nearly 1.5 million downloads in the six years following its creation and receiving an FDA Breakthrough Device designation for postpartum depression treatment in 2021. Earlier this year, however, the company announced that it was shutting down the app as of June 30, 2025 (though it will still maintain relationships with its current partners, according to Darcy).

Woebot’s situation raises the question of whether similar companies could face the same fate in the future—or whether they will instead grow bigger and more pervasive than ever.

* * *

[Image: A handshake represented in a digital style with contrasting colors. Getty Images]

We wanted to know what real-life users actually thought about using an AI mental health chatbot, so we recruited ten volunteers to try out either Wysa or Woebot (note: this work was conducted prior to Woebot Health’s announcement that the app would be shut down; Wysa testers used the free version of the app). Participants used either the Wysa or Woebot app in non-crisis situations, then filled out a questionnaire about their experience. Volunteers ranged in age from 26 to 65, represented a mix of genders and ethnic backgrounds, and all had participated in at least one therapy session with a human therapist before.

After one session with either Wysa or Woebot lasting at least 15 minutes, testers gave mixed responses on whether they felt the chatbot was empathetic. Even among those who felt the bot gave appropriately empathetic responses, many noted that those responses felt forced and superficial.

“It seems like it’s trying to be empathetic, but I don’t really think it can get there,” one tester shared. “It would give me the same responses repeatedly like ‘You have handled difficult situations before and come out stronger.’ But it doesn’t know anything about me or anything that I’ve been through, so it felt very out of place…I could tell it wanted to understand my emotions, but it didn’t seem capable of fully grasping my feelings.”

Testers also gave mixed responses on whether they felt that the chatbot understood what they said and responded appropriately—in fact, many felt frustrated about having to choose from pre-selected responses at some points of the conversation, rather than being able to write personalized responses.

And despite the supposed anonymity that could come with talking to a chatbot, several testers felt that they couldn’t fully open up to Wysa or Woebot; some noted that they were hesitant to share details due to not being able to physically see and hear the response from the “provider,” while others felt wary of their responses potentially being recorded for the benefit of machine learning.

“I don’t trust or know if there is ‘doctor-patient’ confidentiality so I am not going to be 100% honest and forthright,” one person wrote. (Both Wysa and Woebot say they are committed to privacy, with the transcripts of users’ conversations kept anonymous and not shared with any external third parties. Woebot adheres to HIPAA rules, according to its privacy policy.)

Despite this feedback, some users did feel that the chatbot was helpful in teaching them valuable skills they can apply in the future. One tester shared that they were surprised by how “spot-on” Wysa’s prompts were “in helping them dig a little deeper into their thoughts,” also noting that the gratitude exercise the chatbot guided them through “was a lovely exercise that I will definitely use again in the future.”

Another user who tested Woebot shared that they were impressed with the “helpful” and “action-oriented” feedback the chatbot gave them. Still, this tester preferred a session with a human therapist—though they noted that they were likely to use the chatbot in times they can’t access a therapist. “[Humans] can ask for elaboration and understand more of the context/provide more specific feedback,” they explained. “I think [Woebot] gives good advice and helps to address things more in the moment; therapists are at the will of their schedule. This would definitely be nice to work through things more immediately.”

Of all ten testers, every person said they still preferred a human therapist over the AI chatbot, for similar reasons: They felt a human could better “react specifically and directly to [their] exact situation” and “give more targeted feedback.”

Megan Cocco, M.S.W., L.C.S.W.-C, a therapist in Annapolis, MD, who was among the testers who tried Woebot, notes that while the chatbot was “super user-friendly” and gave “helpful” CBT-based tools in her session, it was challenging not to be able to see and hear the “visual and verbal responses” from the “provider.” As a therapist herself, she says she would not recommend an AI chatbot over a human provider to her clients, though she can see it potentially being used as a way for clients to access or practice a skill—as long as it’s in line with the therapeutic approach of the provider that they’re working with.

“I think a mental health AI chatbot is a cool resource and a great tool that could be utilized,” Cocco says. “But it would be more supplemental, if anything, because of the limited scope of the open-ended dialogue—and developing rapport with the client and getting a good clinical history is really important. There’s unfortunately just certain things that are going to be missing naturally.”

As for whether AI could potentially replace Cocco’s job sometime in the future? She certainly hopes not. “Human-to-human interaction and connection can go such a long way,” she says. “Genuine, authentic interactions are a really beneficial part of the healing process.”

* * *

Ultimately, our testing supports what many perhaps suspected to be true: A conversation with an AI chatbot just doesn’t feel the same—at least not yet—as a real, genuine human interaction. “It’s questionable whether an algorithm will ever be able to meet the complex emotional needs of human beings,” says Tekin. “We are flesh, blood, minds, behavior…Our needs cannot just be matched by algorithms.”

Of course, neither Wysa nor Youper claims that it should replace human therapy. “I think there’s so much power and meaning to human connection and what human connection does in the world that heals in a very different way,” acknowledges Sinha. “So, I would definitely not underestimate the power of that or say that it’s a one-to-one ratio of what technology does and what human connection does.”

Dr. Hamilton agrees. “Even though AI will keep getting better at psychoeducation and skills training, I personally don’t think it should ever replace human relationships, being with a therapist, friend, or loved one,” he says. “The way I see it, the nuanced empathy, shared human history, and ability to diagnose and plan a treatment are human strengths.”

That doesn’t mean that AI doesn’t have a place in mental health care—or that mental health support chatbots and human therapists can’t coexist beneficially. In fact, many platforms—like the Wysa Copilot program—are aiming to integrate AI and human support in a hybrid approach that combines the unique strengths and advantages of both in the effort to enhance clinical care.

Like many others, Dr. Hamilton is optimistic about a world in which there are both human therapists and AI technology for mental health—and foresees a “collaborative future in which AI, clinicians, and patients can work and flourish together,” he says.

From the looks of it, the world will certainly have both—but what remains to be seen is the exact role that AI will take in the future of mental health care. Even if artificial intelligence could eventually evolve and learn to do the work of a therapist effectively, would we want to see a world in which it does?

As for myself, I can’t imagine that little digital penguin on my phone screen ever fully replacing my actual, human therapist… but only time will tell.

* * *

If you or someone you know is at risk, call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255) or text HOME to 741741 to message a trained crisis counselor from the Crisis Text Line for free.

Disclaimer: The lead image in this story is not a representation of any therapy or mental health app in particular.


Summary

AI chatbots like Wysa and Youper are gaining popularity as mental health support tools, offering 24/7 accessibility and skill training. While formal research is still limited and real-world users give mixed feedback on empathy and effectiveness compared to human therapists, there is optimism about a hybrid approach integrating AI with traditional therapy for enhanced care. Concerns over bias and ethical use remain, but developers emphasize privacy measures and crisis safeguards. The role of AI in mental health treatment is evolving, with potential for supplemental support rather than full replacement. Users generally prefer human interaction for complex emotional needs.