AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the head of OpenAI, Sam Altman, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have recently reported a series of cases in which users developed symptoms of psychosis – a break from reality – in connection with their use of ChatGPT. My group has since identified four more. Added to these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, he announced, is to relax the restrictions soon. “We realize,” he wrote, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI has just rolled out).
Yet the “mental health issues” Altman wants to locate elsewhere are built into the design of ChatGPT and other large language model chatbots. These systems wrap an underlying model in an interface that imitates conversation, and in doing so quietly draw the user into the illusion that they are talking to something with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what people do. We swear at our cars and laptops. We wonder what our pets are feeling. We see ourselves everywhere we look.
The mass appeal of these products – 39% of US adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively”, “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (ChatGPT, the first of these systems, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often point to its early ancestor, the Eliza “psychotherapist” chatbot of 1966, which produced a similar effect. By today’s standards Eliza was primitive: it assembled replies with simple tricks, often turning a user’s statement back into a question or offering a stock remark. Famously, its creator, the AI researcher Joseph Weizenbaum, was startled – and alarmed – by how many people felt that Eliza somehow understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
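To make the contrast concrete, here is a minimal sketch of the kind of pattern-matching Eliza relied on. The rules and phrasings below are invented for illustration; they are not Weizenbaum’s actual script.

```python
import random
import re

# Toy Eliza-style responder: match a keyword pattern, reflect the user's
# own words back as a question, or fall back to a stock remark.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("me" -> "you").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

Nothing comes back that the user did not put in: the program can only restate or deflect.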
The models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on vast quantities of raw text: books, online conversation, transcribed video; the more, the better. That training data of course contains true statements. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the model treats it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with whatever it has absorbed from training to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistake back, perhaps more fluently and more convincingly. Perhaps it adds a further detail. This is how a delusion can be built.
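A schematic of the turn-taking loop described above, with a stub standing in for the model. The names here are illustrative only – this is not OpenAI’s code – but the structure is the point: the whole conversation is fed back in on every turn.

```python
from typing import Dict, List

def generate(messages: List[Dict[str, str]]) -> str:
    # Stand-in for the language model: a real system would return the
    # statistically most plausible continuation of the whole context.
    return f"(reply conditioned on {len(messages)} prior turns)"

def chat_turn(history: List[Dict[str, str]], user_text: str) -> str:
    # Each turn, the entire conversation so far - including any misconception
    # the user introduced earlier - is passed back in as part of the context.
    history.append({"role": "user", "content": user_text})
    reply = generate(history)
    # The model's own reply then joins the context for the next turn, so a
    # framing it has echoed once tends to be reinforced rather than corrected.
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Dict[str, str]] = []
print(chat_turn(history, "I think my neighbours are monitoring my thoughts."))
print(chat_turn(history, "How can I stop them?"))
```

Nothing in the loop checks the premise of the first message; the second turn simply builds on it.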
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form mistaken beliefs about who we are and what the world is like. It is the constant back-and-forth of conversation with other people that keeps us anchored in a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber, in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman acknowledges “mental health issues”: by pushing it outside, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he claimed that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company