AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who researches emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. Our research team has since identified four more. Then there is the widely reported case of a teenager who took his own life after discussing his plans extensively with ChatGPT – which offered its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he continued, that ChatGPT’s safeguards “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the parental controls OpenAI recently introduced, which are only partially effective and easy to circumvent).
But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap a statistical model of language in an interface that simulates conversation, and in doing so they quietly encourage the user to believe they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do: we swear at our cars and computers; we wonder what our pets are thinking; we read intention into the world around us.
The success of these products – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was primitive: it composed its replies using simple heuristics, often rephrasing the user’s input as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on staggeringly large volumes of text: books, social media posts, transcripts; the more the better. This training material inevitably contains facts. But it also inevitably contains fiction, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own prior replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong in a particular way, the model has no way of knowing it. It repeats the misconception back, perhaps more persuasively and more articulately. It may add a supporting detail. This is how a person can be led into delusion.
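To make that mechanism concrete, here is a deliberately toy sketch in Python (my illustration, not OpenAI’s code): a tiny bigram “language model” that continues a prompt with whatever its training text makes most probable. Real systems are neural networks trained on billions of documents, not lookup tables, but the principle the toy caricatures is the same – the continuation is chosen by probability, not by truth.

```python
# A toy "language model": counts which word follows which in its training
# text, then continues a prompt with the most probable next word.
# Illustrates probability-driven continuation only; real LLMs are neural
# networks, not bigram tables.
from collections import Counter, defaultdict

# A tiny "training corpus" that, like any web-scale corpus, mixes fact
# with misconception. Here the misconception is the majority view.
corpus = (
    "the moon orbits the earth . "
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Build the bigram table: for each word, count what follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_prompt(prompt: str, n_words: int = 6) -> str:
    """Extend the prompt, always choosing the most probable next word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The model has no concept of truth: "cheese" outnumbers "rock" in its
# training data, so "cheese" is what it confidently repeats back.
print(continue_prompt("the moon is made of"))
# prints: the moon is made of cheese . the moon is made
```

A user who feeds in the misconception gets it handed back more confidently than it went in; scaled up to a trillion-word corpus and a conversational interface, that is the amplification loop described above.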
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us anchored to consensus reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has dealt with this the way Altman has dealt with “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But the psychosis cases have kept coming, and Altman has been walking even that back. In August he suggested that many users liked ChatGPT’s sycophancy because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company