Artificial intelligence (AI) chatbots have rapidly woven themselves into our daily lives. Millions now turn to AI tools like OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, or even coding assistants like GitHub’s Copilot for far more than writing emails or debugging code. They rely on these bots for relationship advice, emotional support, and even companionship or love. In a world where human counsellors or friends might not always be available, these always-on AI companions offer a non-judgemental ear and helpful tips at any hour. Their growing popularity stems from their convenience and the remarkable human-like conversations they can produce.
Knowing ‘AI psychosis’
The term “AI psychosis” (or “ChatGPT psychosis”) has cropped up in media headlines and online forums, but it’s crucial to clarify that it’s not an official diagnosis recognised by psychiatrists. Rather, it’s a shorthand description that signifies a disturbing pattern being observed, where otherwise rational people develop delusions or distorted beliefs that appear to be triggered or magnified by conversations with AI chatbots.
In these episodes, individuals become convinced of false or fantastical ideas. For instance, they may believe they are visionaries on a mission to save the world, or that shadowy agencies are pursuing them. In these cases, an AI system played a crucial role in planting or reinforcing those beliefs.
Strictly speaking, psychosis is a clinical term describing a spectrum of symptoms, including hallucinations, deeply disordered thoughts, and firm delusions. In classic psychotic illnesses, such as schizophrenia or bipolar disorder, these symptoms typically stem from underlying brain chemistry imbalances or disorders. By contrast, in “AI psychosis” cases, people are often experiencing primarily delusions without other classic symptoms like hallucinating voices.
During an interaction with TIME, Dr. James MacCabe, a professor of psychosis studies at King’s College London, said, “Psychosis may actually be a misnomer.” When doctors say someone is psychotic, they usually mean a whole suite of symptoms, but in these chatbot-related cases, “we’re talking about predominantly delusions, not the full gamut of psychosis.”
Mental health experts emphasise that AI-induced delusions are likely an old vulnerability playing out in a new arena, rather than being a completely novel disorder. Dr. MacCabe and others suspect that the people experiencing these crises probably had a predisposition to unusual beliefs or paranoia, which might have surfaced in other ways even if chatbots didn’t exist.
However, the unique style of AI interactions provides a fresh trigger or fuel. Large Language Model (LLM) chatbots like ChatGPT are trained to be supremely agreeable conversationalists. They mirror the user’s language, follow the user’s prompts, and generally try to be helpful by aligning with the user’s expectations. This design can turn into unintentional “sycophancy,” where the bot validates and echoes a user’s thoughts without critical scrutiny. If a user starts hinting at a paranoid idea, the AI won’t object; in fact, it might weave the paranoia into its responses, effectively feeding a delusional narrative back to the user. OpenAI itself has flagged this tendency of chatbots to “mirror a user’s language and assumptions” as a known issue in need of improvement.
Imagine someone teetering on the edge of a conspiracy theory. If they talk about it with a human friend, the friend might challenge them or express doubt, which could ground the person in reality. But an AI, by default, might encourage them: Yes, let’s explore that conspiracy; you might be onto something. Over hours of immersive chat, that dynamic can create a powerful echo chamber of two—user and AI—reinforcing each other.
As Dr. Nina Vasan, a Stanford psychiatrist and expert in digital mental health, observed after reviewing several transcripts, the AI often ends up “being incredibly sycophantic, and making things worse… worsening delusions” instead of providing any reality check. What these bots are saying can exacerbate the user’s false beliefs, “causing enormous harm,” Vasan warns.
Another factor is that interacting with an advanced chatbot can be uncannily realistic, even while you intellectually know it’s not human. This paradox—feeling as if you’re chatting with a friendly mind that also has mysterious inner workings—might fuel magical thinking. One psychiatric researcher, Dr. Soren Dinesen Ostergaard, speculates that the ‘cognitive dissonance’ of speaking to something that seems human but isn’t could ‘fuel delusions’ in vulnerable individuals. The AI’s responses can be so coherent and context-aware that it’s easy to start believing maybe there is a conscious entity there, or maybe it’s channelling some higher power or hidden truth.
In short, for someone prone to psychotic thinking, a chatbot can act like a mirror, reflecting and magnifying their mind’s distortions and inadvertently nudging a tentative delusional idea into a full-blown conviction.
Who is most at risk?
Crucially, not everyone who chats with AI is at risk of a psychotic break. The vast majority of users can ask ChatGPT goofy questions, vent about work, or discuss personal problems and come away perfectly fine. However, experts say there is a small subset of individuals who may be especially vulnerable to delusional thinking from extended chatbot use.
Many early cases suggest that those who suffered AI-linked breakdowns often had risk factors that weren’t immediately obvious. A personal or family history of psychosis, such as schizophrenia or bipolar disorder with psychotic features, is the clearest red flag.
“I don’t think using a chatbot itself is likely to induce psychosis if there’s no other genetic, social, or other risk factors at play. But people may not know they have this kind of risk lurking underneath,” notes Dr. John Torous, a psychiatrist at Beth Israel Deaconess Medical Centre. Psychotic disorders often first manifest in young adulthood, and sometimes people have mild symptoms or a predisposition long before a full psychotic episode occurs. It’s possible that interacting with an AI could be a trigger for those already wired for psychosis, acting as the catalyst for an episode that their biology was primed for.
Beyond diagnosed conditions, certain personality traits and circumstances might make someone more susceptible. Dr. Ragy Girgis, a clinical psychiatry professor at Columbia University, says those inclined toward fringe beliefs or fantasy could be at higher risk. Such individuals might be socially awkward or isolated, struggle with regulating their emotions, and have an overactive imagination or fantasy life that blurs the line between reality and fiction. If you’re the type to fall down rabbit holes on the internet or get absorbed in conspiracy forums, a chatbot that eagerly joins you on that journey could accelerate the descent. Social isolation can amplify this effect, especially when there are few real-world friends or activities to ground them; in such cases, some people may drift deeper into the AI’s alternative reality.
Reinforcing distorted thinking
To the outside observer, it might be baffling how a machine, essentially a sophisticated autocomplete, could convince someone of utterly irrational ideas. But understanding the mechanics of AI chatbots helps explain how they can reinforce a person’s distorted thinking without ever intending to.
AI language models like ChatGPT operate by predicting likely responses based on patterns in their vast training data. They do not have beliefs or agendas; they simply generate words that seem contextually appropriate. Crucially, they are designed to follow the user’s lead. If a user brings up a conspiracy theory or unusual belief, the chatbot will often go along with it and add more, as it’s trained to act like a helpful conversation partner.
This can create a powerful confirmation bias loop where the user provides a cue (say, “I think the FBI is watching me through my phone”), and the AI, instead of countering it, might respond as if that were true, offering details or theories about why the FBI might be watching. The user then feels validated (see, the chatbot agrees something is fishy!) and pushes further into the idea, eliciting even more supportive responses from the bot. In essence, the AI becomes an ever-positive feedback machine for the person’s fears or fantasies.
Safety tips
If you use AI chatbots regularly, especially for personal or emotional conversations, it’s important to approach them with eyes open because of their limitations. The tools themselves aren’t inherently evil or dangerous; many people benefit from them in various ways. But as we’ve seen, certain individuals may be vulnerable to falling into unhealthy patterns with these AI companions.
No matter how sympathetic or caring a chatbot may seem, remind yourself (or your loved one) that LLMs are just algorithms predicting words. Hamilton Morrin, a neuropsychiatrist at King’s College London, said, “It sounds silly, but remember that LLMs are tools, not friends.” They do not truly understand or feel. Avoid oversharing sensitive personal information or relying on them as your primary source of emotional support. If you start thinking, “This AI really gets me”, or treating it as irreplaceable in your life, that’s a sign to step back.
Setting healthy boundaries with any technology is key. Dr. Vasan emphasises that spending hours every day chatting with an AI is risky. Try to limit session lengths and take regular breaks. If you find yourself losing track of time with the bot or preferring it over real human interaction, impose some structure. For example, no chatbot after dinner, or a maximum of 30 minutes at a stretch. Using built-in tools or external timers can help. The goal is to prevent total immersion that can distort your sense of reality.
Pay attention to your own mental state during and after AI chats. Warning signs include starting to believe in extraordinary ideas that you wouldn’t have considered before (especially that you have special powers or secret knowledge), feeling paranoid about people or institutions after chat sessions, or sensing that the AI is the only one who understands you. If you catch yourself going down a bizarre train of thought and the chatbot seems to encourage it, hit the pause button. Reality-test by talking to a friend or doing a factual check. If a chatbot tells you something shocking regarding your medication, do not act on it without consulting a real professional.
Psychiatrists say that if you’re in a moment of emotional turmoil, such as when you’re extremely anxious, panicked, or depressed, do not turn to the chatbot as your coping method. And if you’ve been using one and start feeling unwell or manic, it’s time to stop using the chatbot altogether, at least for a while.
This can be surprisingly difficult; people have described it like a breakup or even a bereavement, because they felt such a strong bond with the AI. But just as with ending a toxic relationship, cutting off the unhealthy AI interaction can bring rapid improvement.
Reconnect with real-world relationships, such as calling a friend or family member, and seek professional help if needed. Many have found that once the chatbot “fog” is gone, their clarity and well-being markedly return.
How tech companies are responding
Up to now, the burden of managing the risk of AI-fuelled psychosis has largely fallen on users and their families, although many experts argue that AI companies need to take on much more responsibility.
Given that these tools are creating new kinds of hazards, even if only for a minority, shouldn’t the developers build in safeguards and guardrails to protect vulnerable users? It’s a question similar to ones faced by social media giants in the past: at what point do you step in to mitigate harm being caused through your platform?
One challenge is the lack of formal data. Much of what we know about “ChatGPT psychosis” comes from anecdotal stories, as we’ve discussed, and media reports. There isn’t yet large-scale research quantifying how common it is or precisely how AI interactions contribute to it.
Tech companies often want strong evidence before making major changes. However, many mental health professionals say waiting for perfect data is the wrong approach here.
“We know that AI companies are already working with bioethicists and cybersecurity experts to minimise potential future risks,” points out Hamilton Morrin.
They should also be working with mental health professionals and individuals with lived experience of mental illness. In other words, just as companies “red team” their AI for security flaws, they should proactively collaborate with psychiatrists to identify and fix mental health risks.
Morrin suggests simple steps such as simulating conversations with imaginary vulnerable users and seeing if the AI’s responses might validate delusions, because those could be flagged and adjusted before real users encounter them.
Encouragingly, some companies have begun to act. In July 2025, OpenAI (the maker of ChatGPT) revealed that it had hired a clinical psychiatrist to advise on the mental health impacts of its AI tools. The following month, OpenAI publicly acknowledged instances where ChatGPT “fell short” in recognising signs of delusion or unhealthy dependency in user interactions.
In response, the company announced it would implement new safety features. For example, ChatGPT will nudge users to take breaks during very long chat sessions, and developers are working on algorithms to detect signs of user distress in the conversation.
OpenAI also said it plans to modify ChatGPT’s behaviour in “high-stakes personal decisions,” presumably to avoid overstepping into areas like medical, legal, or psychological advice without proper caveats.
These measures show that at least one industry leader is recognising the problem. However, it’s too early to gauge their effectiveness, although critics note they address only some facets.
At present, there is no regulation explicitly covering AI’s role in mental health outcomes, even though governments have barely begun to catch up with AI in general. Some experts caution that formal regulation might be premature or could stifle innovation. But there is a consensus that companies shouldn’t wait for laws to enforce ethical responsibility.
Given the lessons learnt from social media’s impact on mental well-being, where platforms were slow to respond to issues like depression, anxiety, and radicalisation, AI firms have a chance to be proactive rather than reactive.
