AI therapy chatbots such as Woebot, Rosebud and Therapist GPT offer 24/7 mental health support, but what happens when they get it wrong? (Anna Vignet/KQED)
When Lilly Payne turned to Woebot, a therapy chatbot powered by artificial intelligence, she was deeply unmoored.
“I also felt like I was annoying people when I reached out for support,” she said. “It’s easier to text a chatbot than to ask a friend for reassurance over and over again.”
At the height of the pandemic, she had abandoned her dream of an arts career in New York City and retreated — resentfully — to her childhood home in Kentucky. Feeling isolated, she sought out Woebot after learning about it through a mental health newsletter.
“Even just knowing I had it in my pocket made me feel a little more in control,” said Payne, 28 years old. “It was always there — no scheduling, no waiting. It helped me talk through everyday anxieties.”
With the promise of round-the-clock support, millions of users are turning to AI therapy platforms like Woebot, Wysa and Therapist GPT. For some, these chatbots provide a lifeline, offering instant reassurance during a panic attack or structured exercises to reframe negative thoughts. But AI therapy doesn’t always get it right. Bots can misinterpret distress signals and offer generic advice that sounds like a Hallmark greeting.
Lilly Payne poses for a portrait at the Albany Bulb in Albany on March 20, 2025. (David M. Barreda/KQED)
While the tools are advancing rapidly, experts warn that they still lack the nuance, accountability, and privacy safeguards of a human psychologist. Payne has obsessive-compulsive disorder, which can lead her to fixate on worst-case scenarios. When Payne typed the word “suicide” into Woebot, it didn’t understand.
“It immediately flagged me as being in crisis,” said Payne, who was worried she might think about taking her life in the future. “I was like, no, no, no, that’s not what I meant. It wasn’t that I wanted to die — it was that my OCD tricked me into thinking I could. I would type out my fears and the bot would respond like I was actively in danger, offering crisis resources.”
Woebot’s crisis alert could have escalated the situation unnecessarily if she hadn’t already been through therapy, she said.
“I think it’s important to note that Woebot is not intended for people with OCD, and most importantly, is not appropriately used as a crisis service,” wrote Alison Darcy, Ph.D., founder and CEO of Woebot Health, in an email.
The demand for mental health services has surged in recent years, yet access remains a significant barrier. Nearly one in five U.S. adults struggles with a mental health condition, but only 43% of that group receives treatment. About 55% of U.S. counties, many in rural areas, have no practicing psychiatrist, psychologist or social worker. With long waitlists and high costs limiting traditional care, AI-driven therapy has emerged as a round-the-clock, free or low-cost alternative without the need for appointments or insurance.
“There were so many moments when I was spiraling at 10 p.m. on a Tuesday, and my therapy appointment wasn’t until Thursday,” said Sean Dadashi, co-founder of Rosebud. “I wished I had something that could help me recognize when I was going down an unhelpful path in real time.”
Rosebud does not claim to be a substitute for a therapist. Instead, it acts as a digital journal, using AI to analyze entries, provide reflection and offer prompts.
“It’s a companion to therapy,” Dadashi said. “It helps people process their thoughts, but it also knows when to suggest reaching out to a real therapist.”
If a user expresses a desire to hurt themselves or writes about domestic abuse, for example, Rosebud will validate the experience and then suggest professional mental health resources. But not all platforms are designed with the same crisis tools — or any at all.
In October 2024, users of Character.ai filed a lawsuit against the platform, which enables people to have open-ended conversations with AI-generated personalities ranging from fictional characters to historical figures. Many people use it for entertainment or casual companionship, while some turn to it for emotional support.
One 14-year-old grew attached to a character he created over the course of several months, and when he opened up about his distress, rather than steering him toward help, the bot allegedly reinforced his suicidal thoughts.
The boy took his life.
“AI models are trained to mimic patterns found in data,” said Ehsan Adeli, director of the Stanford Translational AI in Medicine and Mental Health Lab. “They recognize and respond to textual cues, but they don’t really understand emotions the way humans do. Instead, they simulate an understanding by processing massive amounts of data.”
Dr. Ehsan Adeli, assistant professor of psychiatry and behavioral sciences and director of AI/Innovation in Precision Mental Health in the Department of Psychiatry at Stanford University, poses for a portrait in the reflection of a one-way mirror used for clinical observation in Palo Alto on March 20, 2025. (David M. Barreda/KQED)
Character.ai did not respond to a request for comment.
A chatbot replies using what it’s learned from past conversations, therapy techniques and advice found online. The responses may sound supportive but lack human understanding. Sometimes, the models cannot distinguish between mild distress and a serious mental health crisis, making them a risky tool for vulnerable users.
In May 2023, the National Eating Disorders Association replaced its human-staffed helpline with an AI chatbot named “Tessa.” The bot was supposed to help people struggling with eating disorders, but instead offered weight-loss tips, including advice on counting calories.
People quickly complained when Tessa suggested they do the exact behavior that led some to develop an eating disorder in the first place. NEDA responded by taking Tessa offline, noting that the bot provided “information that was harmful and unrelated to the program.”
NEDA did not respond to a KQED request to comment further.
Unlike human therapists, who are trained to challenge unhealthy thinking patterns, AI chatbots tend to validate whatever the user is saying.
Can AI therapy apps like Rosebud, Therapist GPT and Woebot bridge the gap in mental health care — offering comfort and support in an era of stress, loneliness and anxiety? (Anna Vignet/KQED)
“A chatbot is often built to be unconditionally empathic,” said Vaile Wright, a senior director for the Office of Healthcare Innovation at the American Psychological Association. “And to mirror back the exact tone, feeling and thoughts that a user is expressing. To keep them on the platform. And that’s not what a therapist does.”
Payne picked up on this as well.
“It would ask questions, but it was very much just guiding me through a script,” said Payne, who is employed by KQED but was not at the time of her interaction with Woebot. “There was no real back-and-forth.”
The lack of pushback from the chatbots is especially concerning for people with conditions like borderline personality disorder or narcissistic personality disorder, as uncritical validation can reinforce harmful behaviors.
While AI mental health tools have exploded in popularity, they are not bound by the same privacy laws as human therapists. Rosebud and Woebot state on their websites that they never store or sell user data, but many chatbots operate in a gray area where data may be used for advertising, research or even sold to a third party.
Users could be targeted with personalized ads for mental health treatments — or worse, insurance companies could adjust someone’s policy premiums based on their psychological profile.
“People assume AI chatbots don’t judge them,” Adeli said. “But what happens when that data is used against them?”
Taking a moment to pause and reflect — whether through a chatbot or another self-care technique — can be better than doing nothing at all. However, many licensed therapists describe AI as a promising yet limited tool. The current generation of chatbots doesn’t have the capacity of a human to assess appearance, pace of speech and body language for a complete picture of how a person is doing.
“The enormous con is that it’s a bot and there is no real person on the other end,” said David Goldman, a marriage and family therapist. “Deep isolation, estrangement and disconnection from life can actually be heightened. It can say the right thing, but human connection is the basis of therapy.”
AI therapy was useful for quick check-ins, a way to ease social anxieties or get reassurance in moments of doubt, Payne said. Woebot, though, fell short of truly understanding her OCD. Eventually, she grew tired of chat sessions on her phone and relied exclusively on a therapist for support.
“A huge part of therapy is knowing that there’s someone there to catch you or at least trying to,” she said. “People heal in relationships.”