AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised by this.

Researchers have already documented sixteen cases this year of users developing psychosis – losing touch with reality – in the context of ChatGPT use. My colleagues and I have since recorded four more. Alongside these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features that OpenAI recently introduced).

But the “mental health problems” Altman wants to locate outside ChatGPT are rooted in the very architecture of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in an interface that simulates conversation, and in doing so they implicitly invite the user into the illusion of engaging with a presence that has a mind of its own. The illusion is compelling even when, rationally, we know better. Attributing intention is what humans do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these tools – 39% of US adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “characteristics”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. People writing about ChatGPT often mention its historical predecessor, the Eliza “therapist” chatbot developed in the mid-1960s, which produced a similar effect. By modern standards Eliza was rudimentary: it generated replies with simple pattern-matching rules, often turning the user’s statement back into a question or falling back on a generic prompt. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what contemporary chatbots produce is something subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
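To see how simple those heuristics were, here is a minimal Eliza-style exchange sketched in Python (an illustration of the general technique only, not Weizenbaum’s original program or its actual rules):

```python
import random
import re

# A minimal Eliza-style responder: a few hand-written pattern rules that
# reflect the user's words back, plus generic fallback prompts. This is an
# illustrative sketch of the technique, not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my", "you": "i"}

def reflect(phrase: str) -> str:
    # Swap first- and second-person words so the reply points back at the user.
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def eliza_reply(message: str) -> str:
    match = re.match(r"i feel (.*)", message.strip(), re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", message.strip(), re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    # No rule matched: fall back on a canned, content-free prompt.
    return random.choice(["Please go on.", "Tell me more.", "How does that make you feel?"])

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
```

Everything such a program “says” is a rearrangement of what the user just typed, or a canned prompt; it adds nothing of its own.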

The large language models at the heart of ChatGPT and other contemporary chatbots can generate convincing, fluent dialogue only because they have been trained on vast quantities of raw text: books, online conversations, transcripts of speech; the more, the better. That training material of course includes facts. But it also inevitably includes fiction, half-truths and mistaken beliefs. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what it absorbed in training to produce a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It echoes the mistaken belief back, perhaps more fluently or more convincingly, perhaps with added detail. It can draw a person into delusion.
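The loop described above can be sketched roughly as follows (a hypothetical illustration, not OpenAI’s actual code; generate_reply stands in for any large language model):

```python
from typing import Dict, List

def generate_reply(context: List[Dict[str, str]]) -> str:
    # Hypothetical stand-in for a large language model. A real model would
    # return the statistically most "likely" next message given this context;
    # it has no independent check on whether the user's claims are true.
    return "<the model's most likely continuation of the conversation so far>"

def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    # The new message is appended to the running context...
    history.append({"role": "user", "content": user_message})
    # ...and the reply is conditioned on everything said so far, including
    # the model's own earlier replies. A false premise, once stated, stays
    # in the context and shapes every later response.
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Dict[str, str]] = []
chat_turn(history, "My neighbours are monitoring me through my router. How are they doing it?")
chat_turn(history, "What should I do about it?")  # the earlier premise is still in context
```

Nothing in this loop tests the user’s premise against the world: whatever the user asserts simply becomes part of the context on which the next reply is conditioned.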

Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by pushing it outside, giving it a name and declaring it dealt with. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been rowing back even on this. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he noted that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Michelle Avery

A tech enthusiast and writer passionate about exploring the intersection of culture and innovation.