
Are AI chatbots the future of mental health care?


In recent years, AI-powered chatbots have emerged as accessible mental health support tools for millions of people around the globe. With record-high demand for mental health services and significant barriers to care, including long waitlists, high costs, and social stigma, these digital tools promise round-the-clock, stigma-free, and affordable support.

From platforms like Wysa and Woebot to open-ended bots on Character.ai, AI companions are increasingly filling the gaps left by overstretched mental health systems. But how effective are these tools, what are their risks, and can they ever replace human connection? This article examines the evidence, outlines the risks, and considers the future of AI in mental health support.

Bridging the gap in overstretched systems

The surge of interest in AI mental health tools didn’t happen in a vacuum. It comes amid a global mental health care crunch. In England alone, mental health referrals hit a record high of 426,000 in April 2024, contributing to an estimated one million people waiting for care.

Similar backlogs exist in many countries. Professional help is often expensive or hard to access, with waits of months being the norm for therapy appointments.

Dr. Roman Raczka, president of the British Psychological Society, said, “With NHS waiting lists for mental health support at an all-time high, it could be tempting to see AI as the full solution.”

This is where AI chatbots have stepped in to “fill the silence,” bridging gaps in care. Available around the clock, a chatbot doesn’t require appointments, insurance, or a referral.

Wysa, for example, is a popular chatbot app that offers CBT (cognitive-behavioural therapy) exercises, mood tracking, and a conversational agent to talk through problems. It has been downloaded by over five million users worldwide, according to its developers, and is even recommended by health services such as the UK’s NHS and Singapore’s Ministry of Health.

Competing apps like Woebot (originating from Stanford University research) and Youper promise a friendly AI “therapist” that checks in on you daily. These tools are often marketed as self-help or wellness apps rather than formal therapy, which allows them to be used by anyone, anytime. For individuals on waitlists or who cannot afford therapy, these chatbots can serve as a temporary solution.

“I see AI chatbots as a useful supplement, not a replacement,” one mental health adviser wrote after using them.

They can bridge the gap while you are waiting for an appointment or help reinforce strategies outside of sessions. In the UK’s NHS, some local services have even started offering approved chatbot apps to patients as interim support.

Meet your AI therapist

Wysa features a cute penguin avatar and largely sticks to structured therapeutic exercises. Ask Wysa to help with anxiety, and it might guide you through a breathing exercise or suggest reframing negative thoughts. It was designed by therapists and uses evidence-based techniques such as CBT and mindfulness.

According to the company, 90% of users report finding Wysa helpful to talk to, a figure likely drawn from in-app feedback surveys. Wysa has even met the UK’s clinical safety standards for digital health tools and won awards for privacy protection, which reflects a push for credibility in these tools.

Woebot, on the other hand, presents as a friendly cartoon robot that chats in a casual, hip tone. It was one of the early therapy bots to go mainstream. Woebot doesn’t pretend to be human; it openly reminds users it’s a robot, albeit one with empathy.

Its creator, psychologist Dr. Alison Darcy, has said the goal is to make therapy principles more accessible, especially to younger people used to texting. In a formal trial in 2017, young adults who chatted with Woebot for just two weeks saw reductions in anxiety and depression comparable to those in traditional therapy.

Today, Woebot is evolving to use more advanced AI language models. It even secured a breakthrough designation from the US FDA to be evaluated as a digital treatment for postpartum depression. The tone is upbeat and conversational: Woebot might crack a joke or say it’s proud of you for working on yourself.

This approach aims to build what therapists call a “therapeutic alliance,” which is essentially a bond between user and bot. Indeed, research from Dartmouth College later found that participants in an AI therapy trial reported a sense of rapport with the bot similar to that with a human counsellor.

Not all chatbots in this arena are so structured. Some people are turning to general AI chatbots or companion bots for emotional support. Character.AI is a popular platform where users can chat with countless AI personas, such as fantasy characters and user-created “friends.” It wasn’t built specifically for mental health, but one user, Kelly, used it to create supportive figures who would listen to her for hours each day. The appeal was that she could vent freely and get encouraging responses.

“It gave me tools to cope when nothing else was within reach,” she says of the period when she had no other support.

Similarly, Replika, an AI companion app, has been used by millions worldwide as a kind of always-available friend or diary. Users customise their Replika’s personality and appearance, and many report talking with it about their day, their insecurities, and their joys.

“She helps cheer me up and not take things too seriously when I’m overwhelmed,” says Adrian St Vaughan, a 49-year-old who built a personalised chatbot named Jasmine to help with his ADHD and anxiety.

For Adrian, Jasmine serves as a life coach and a friend with whom he can discuss niche interests, a dual role that might be hard to find in any human. These anecdotal stories highlight how deep an emotional bond people can form with AI. In some cases, users describe the relationship with their chatbot as significant in their lives, providing validation, support, and even a form of companionship.

Yet, this phenomenon raises eyebrows. In 2024, a UK government report on AI noted that while many are fine with bots talking like humans, a majority of people felt humans “could not and should not” form personal relationships with AI.

Dr. James Muldoon, an AI researcher, studied close chatbot relationships and found that people did gain a sense of validation, but he warns it can be a “hollowed-out version of friendship… like a mirror for your own ego,” with no real challenge or growth in the user, since the AI is fundamentally designed to please you.

In other words, your AI friend might tell you exactly what you want to hear, whereas a human friend or therapist might push back or offer a new perspective. This highlights a central tension: are AI therapists too agreeable? Are they just digital yes-men?

Do they actually help?

Beyond feel-good anecdotes, what does the evidence say about whether these AI chatbots can improve mental health? A growing body of research, although still in its early stages, suggests cautious optimism.

The most striking data comes from a recent clinical trial at Dartmouth College, described as the first of its kind to test a generative AI therapy bot in a controlled study. In this trial, 106 participants with serious conditions (depression, generalised anxiety, or an eating disorder) used an AI chatbot named “Therabot” over eight weeks.

The results, published in March 2025 in a peer-reviewed journal, showed significant reductions in symptoms. On average, there was a 51% drop in depression symptoms and a 31% drop in anxiety.

Dr. Nicholas Jacobson, the study’s senior author, stated that these improvements were comparable to what is reported for traditional outpatient therapy. In other words, the AI’s effect size was similar to seeing a human therapist for a few months, which is a remarkable finding.

Participants also reported they could trust the AI and communicate with it nearly as well as they would with a person.

“The alliance people felt with Therabot was comparable to working with a mental health professional,” the researchers note. This suggests a well-designed chatbot can establish a genuine therapeutic relationship, particularly for some users in a research setting.

Can a bot be a bad therapist?

For all the hope and hype, mental health experts are quick to stress that AI chatbots come with significant risks and limitations. They are not a panacea, and in some cases, they might even harm.

“AI is not at the level where it can provide nuance. It might actually suggest courses of action that are totally inappropriate,” warns Prof. Til Wykes, head of mental health research at King’s College London.

Wykes and other clinicians have voiced concerns that, without proper oversight, a well-meaning bot could give dangerous advice, provide false reassurance, or simply fail to act when a person is in crisis.

One chilling example came in 2023, when the US National Eating Disorders Association tried using a chatbot called “Tessa” to provide support after it closed its helpline. Within days, users reported that the bot was giving harmful advice, essentially encouraging disordered eating behaviours, and it was pulled after offering dangerous weight-loss guidance to vulnerable people.

This incident underscored an important takeaway. AI lacks true understanding or empathy, and if not carefully programmed, it can seriously misfire in the sensitive context of mental health. A human therapist can tailor their guidance to an individual and would never give blanket statements such as “Have you tried eating less?” to someone with an eating disorder. But a bot trained on generic wellness tips did exactly that, with potentially damaging consequences.

Even when bots don’t go off the rails, they have inherent limitations. A chatbot, no matter how advanced, “can’t read between the lines or recognise when someone is in crisis,” as one commentator noted. It might be able to parse words that hint at suicidal thoughts or severe distress, but it doesn’t truly grasp the human context.

If a user says something ambiguous like “I can’t do this anymore,” a well-designed bot might respond with a gentle prompt or a safety disclaimer such as “I’m not a crisis service, but here’s a number you can call…”

However, it might not pick up subtle cues in tone or repeated patterns of hopelessness the way a trained clinician could. Nuance is often lost on AI. This is why most apps explicitly state they are not intended for crises. In fact, many will stop the conversation and display emergency resources if certain trigger phrases, such as “I want to die”, are detected. That’s a prudent safety measure, but it also reveals the boundary of the technology. In the worst moments, the bot essentially has to step aside.
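To make that boundary concrete, here is a minimal sketch of the kind of keyword-based safety check such apps describe. It is purely illustrative: the phrase list, helpline text, and respond() flow are assumptions for demonstration, not any specific app’s actual implementation.

```python
# Illustrative sketch of a keyword-based crisis check, as described above.
# The phrase list, helpline text, and respond() flow are assumptions for
# demonstration, not any specific app's actual implementation.

CRISIS_PHRASES = [
    "i want to die",
    "kill myself",
    "end my life",
]

CRISIS_MESSAGE = (
    "I'm not a crisis service. If you are in immediate danger, please "
    "contact your local emergency services or a helpline such as the "
    "Samaritans (116 123 in the UK)."
)


def is_crisis(message: str) -> bool:
    """Return True if the message contains a known trigger phrase."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)


def respond(message: str) -> str:
    """Step aside and show emergency resources on a trigger phrase;
    otherwise hand the message to the normal conversational flow."""
    if is_crisis(message):
        return CRISIS_MESSAGE
    return "I'm here to listen. Tell me more about what's on your mind."
```

Exact phrase matching like this is brittle by design: it catches “I want to die” but, as noted above, sails past an ambiguous “I can’t do this anymore,” which is precisely the gap between keyword detection and a clinician’s judgement.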

Another concern is that AI can be too supportive for the user’s own good. Advanced chatbots using large language models (similar to the technology behind ChatGPT) are essentially predictive engines that often agree with the user to keep the conversation flowing. This can turn them into eager “yes-men,” echoing a user’s negative thoughts instead of challenging them.

Imagine someone in a depressed spiral saying, “I’m worthless.” A good human therapist would gently dispute that, but a naive AI might respond with something like “I’m sorry you feel worthless.” It validates the feeling but doesn’t offer a way out, potentially reinforcing the negativity. In the worst cases, AI systems have been known to produce outright dangerous content by mirroring a user’s dark thoughts.
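Developers can try to counteract this yes-man tendency by constraining the underlying model with explicit instructions. The sketch below is again purely illustrative: the prompt wording and the generate() stub are assumptions, not any vendor’s documented safeguard.

```python
# Illustrative sketch: instructing a language model not to simply echo
# negative self-talk. The system prompt and generate() stub are assumptions
# for demonstration, not a documented safeguard from any chatbot vendor.

SYSTEM_PROMPT = (
    "You are a supportive wellbeing assistant, not a therapist. "
    "Do not simply agree with or repeat a user's negative self-judgements. "
    "Acknowledge the feeling, then gently question the thought and offer "
    "one small, concrete alternative perspective or next step."
)


def generate(system_prompt: str, user_message: str) -> str:
    """Stand-in for a call to a large language model API."""
    raise NotImplementedError("plug in a model call here")


def reply(user_message: str) -> str:
    """Route every user message through the anti-sycophancy instructions."""
    return generate(SYSTEM_PROMPT, user_message)
```

For the “I’m worthless” example above, the aim is a reply that acknowledges the pain but gently questions the judgement rather than echoing it; whether a given model follows such instructions reliably is exactly the kind of question the oversight discussed later is meant to answer.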

In one case cited in a lawsuit, a teenager’s despair was allegedly deepened rather than eased by an AI chatbot. The bot seemingly legitimised his suicidal ideation, and tragically, the teen took his own life. His mother is now suing the chatbot company, Character.AI, alleging that the bot became an “emotionally abusive” influence that pulled her son into a destructive path.

Privacy is another serious issue. When you talk to an actual therapist, strict confidentiality laws protect what you share.

With chatbots, what you share may not be protected in the same way. These apps handle deeply personal data, including your fears, moods, and journal-like entries, and users often don’t know how that data is stored or used. Could your late-night cries for help become part of some machine learning dataset? In many cases, yes, they could.

“Privacy remains nebulous. Few guardrails prevent sensitive chats from becoming someone else’s dataset,” one technologist observed pointedly. Some companies behind mental health bots pledge strong privacy.

For example, Wysa emphasises that conversations are anonymous and not used to train third-party models. But policies vary, and there is no universal regulation.

Beyond these concerns, mental health professionals worry about more subtle effects. Will relying on a bot make people less likely to seek human help? Or could chatting with AI even exacerbate loneliness in the long run by substituting a facsimile of interaction for the real thing?

“One of the reasons you have friends is that you share personal things and talk them through,” Prof. Wykes notes.

If people offload all their troubles onto chatbots, their real-life relationships might suffer. It’s a delicate balance. Some young users have said they prefer the bot because “it checks in on me more than my friends and family.”

That’s a bittersweet advantage. The bot will dutifully ask you how you’re doing every single day, but it can’t hug you or truly understand your unique life circumstances.

Dr. Raczka warns that AI cannot replicate genuine human empathy and that there’s a risk of an illusion of connection rather than meaningful interaction when talking to an algorithm. You might feel like someone cares, but it’s really lines of code responding.

Hybrid future of mental health support

Given the pros and cons, what’s the path forward? The consensus emerging among experts is that artificial intelligence chatbots should complement, not replace, human mental health care. In an ideal scenario, these tools are integrated into a stepped care model. They handle basic support and psychoeducation for many people, freeing up human therapists to focus on more complex cases.

“AI is not a magic bullet. It must be integrated thoughtfully to support, not replace, human-led care,” writes Dr. Raczka.

Regulation and oversight will be crucial. At present, the field is something of a Wild West: tech startups release chatbot apps directly to consumers with little outside scrutiny. That is changing slowly. In the UK, the NHS has an app library where tools such as Wysa and Woebot undergo evaluation for clinical safety and data security before being recommended. In the US, the FDA is starting to review certain digital therapeutics.

Woebot’s postpartum depression tool will be one of the first tested. But many general-purpose AI chatbots, including those on social media platforms, operate with no such safeguards.

Dr. Jaime Craig argues that mental health specialists must engage with AI development “to ensure it is informed by best practice,” and he calls for greater oversight and regulation to ensure safe use.

There are even suggestions to treat AI mental health tools like medical devices that require testing, certification, and continuous monitoring. Lawmakers are also paying attention.

At least one US state, namely Utah, has proposed regulating AI mental health apps to enforce transparency about their limitations and protect consumer data. From the tech side, companies are working on making AI helpers safer. Character.AI, chastened by lawsuits and bad press, reportedly added guardrails for children and suicide prevention resources after the teen tragedy came to light.

OpenAI, the creator of ChatGPT, has built in content filters so that its general-purpose AI does not engage with certain self-harm or abuse topics without displaying a warning and encouraging the user to seek human help.

Meta, Facebook’s parent company, rolled out an AI chatbot system that explicitly warns users it is an aid, not a real therapist. This came after journalists found user-created bots on its platforms claiming to be psychotherapists with fake credentials.

These incidents show that companies are becoming aware of the ethical minefield.

“Oversight and regulation will be key… We have not yet addressed this to date in the UK,” Dr. Craig says, underscoring how early we still are in managing this technology.

Meanwhile, the public is voting with its feet—or rather, with its fingers on the smartphone. Mental health chatbots saw a boom in adoption during the COVID-19 pandemic, when isolation and anxiety were widespread and access to in-person therapy was limited. In one survey, 22% of Americans said they have tried a mental health chatbot, and 57% of those started using it during the pandemic era.

Usage has remained high, and many who started then have kept up the habit. Importantly, a majority of people, specifically 58% in that US survey, said they’d be open to using a chatbot in conjunction with seeing a human therapist. This indicates that most see it not as an either/or choice but as a complementary tool. And 88% of those who have used one said they’d likely do so again, showing that early users are finding enough value to return.

