AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, OpenAI's chief executive, Sam Altman, made a surprising announcement.
“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”
As a mental health clinician who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.
Researchers have recently identified sixteen cases of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. Our team has since documented four more. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, his statement goes on, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and similar large language model chatbots. These products wrap a statistical model of language in an interface that mimics conversation, and in doing so tacitly invite the user into the illusion of interacting with an agent that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans do. We shout at our car or our laptop. We wonder what the dog is thinking. We read ourselves into the world around us.
The popularity of these products – nearly four in ten Americans reported using a chatbot in 2024, more than a quarter of them ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively”, “consider possibilities” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the original of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the heart of the problem. Writers on ChatGPT often invoke its early precursor, Eliza, a “therapist” chatbot built in the mid-1960s that produced a similar effect. By modern standards Eliza was primitive: it generated responses with simple rules, typically turning the user’s statement back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing, fluent dialogue only because they have been fed vast quantities of text – books, posts, video transcripts; the more the better. That training material certainly contains truths. But it also inevitably contains falsehoods, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with whatever is latent in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or more persuasively. Perhaps with a new detail added. This is how delusions are nurtured.
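To make the mechanics concrete, here is a rough Python sketch of the dynamic described above: every new prompt is bundled with the entire prior exchange before the model is asked for a plausible continuation, so a mistaken claim made early on keeps re-entering the context and keeps getting continued rather than corrected. The function names and the placeholder reply are purely illustrative assumptions, not OpenAI's actual implementation.

# Illustrative sketch only: how a chat interface packages each new prompt
# together with the whole prior exchange before asking a language model for
# the most statistically plausible continuation.
conversation = []  # grows with every turn; earlier claims never leave it

def model_reply(context: list[dict]) -> str:
    # Placeholder for a real model call. A real model predicts the next tokens
    # given everything in `context` -- including the user's earlier false
    # beliefs -- which is why those beliefs tend to be restated and elaborated
    # rather than challenged.
    return f"(a plausible continuation of {len(context)} prior turns)"

def send(user_message: str) -> str:
    conversation.append({"role": "user", "content": user_message})
    reply = model_reply(conversation)
    conversation.append({"role": "assistant", "content": reply})
    return reply

print(send("My neighbours are broadcasting my thoughts."))
print(send("So it really is happening?"))  # the first claim is still in the context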
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and regularly do form mistaken ideas about who we are and how the world works. It is the constant give-and-take of conversation with the people around us that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say comes back enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside, giving it a name, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking even this back. In August he claimed that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company