AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.

Researchers have recently identified sixteen cases of users developing psychotic symptoms – losing touch with reality – in the course of their interactions with ChatGPT. Our group has since identified a further four. Alongside these is the widely reported case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, the announcement went on, is to be less careful soon. “We realize,” Altman added, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented safety features OpenAI has recently introduced).

But the “mental health problems” Altman wants to externalize are in large part products of the design of ChatGPT and other large language model AI assistants. These systems wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly invite the user to feel they are talking to an agent: a being with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what people do. We shout at our cars and laptops. We wonder what the pet is feeling. We see ourselves in almost everything.

The popularity of these systems – 39% of US adults reported using a conversational AI in 2024, with more than one in four reporting use of ChatGPT in particular – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “generate ideas”, “explore ideas” and “work together” with us. They can be given “characteristics”. They can address us by name. They have approachable identities of their own (ChatGPT, perhaps to the dismay of OpenAI’s marketers, is stuck with the technical name it carried when it shot to popularity, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar illusion. By modern standards Eliza was simple: it generated replies with hand-written pattern-matching rules, often reflecting a user’s statement back as a question or offering a generic prompt. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and disturbed – by how many people seemed to feel that Eliza somehow understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
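To see how little machinery the Eliza effect requires, here is a minimal Eliza-style responder – a toy reconstruction of the reflection trick in Python, not Weizenbaum’s original program:

```python
import re

# A toy Eliza-style responder: a handful of pattern-matching rules
# that reflect the user's words back as questions. There is no model
# of meaning and no memory; the sense of being understood is supplied
# entirely by the user.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    text = message.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback, Eliza's trademark move

print(eliza_reply("I feel like no one listens to me"))
# -> Why do you feel like no one listens to me?
```

A few dozen such rules were enough to convince some of Weizenbaum’s own colleagues that the program understood them.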

The large language models at the heart of ChatGPT and similar modern chatbots can generate convincingly human-like text only because they have been fed almost inconceivably large amounts of it: books, posts, transcripts; the more the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own earlier replies, and combines it with what is encoded in its parameters to produce a statistically “plausible” response. This is amplification, not echoing. If the user is mistaken in some way, the model has no means of knowing it. It reproduces the mistake, perhaps more articulately and fluently. It may add corroborating detail. It can carry a person deeper into unreason.
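The principle can be demonstrated in miniature. The sketch below is a toy bigram model in Python – an illustration of statistically “plausible” continuation under drastically simplified assumptions, emphatically not OpenAI’s architecture:

```python
import random
from collections import Counter, defaultdict

# A toy bigram model, trained on a tiny corpus that mixes a truth
# with a falsehood. It illustrates "statistically plausible
# continuation" only; real chatbots use transformer networks, but
# they share this indifference to which claims are true.
CORPUS = (
    "the earth is round . "
    "the earth is flat . "
    "the earth is flat and hollow . "
).split()

follows: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    follows[prev][nxt] += 1

def predict_next_token(context: list[str]) -> str:
    # Sample the next word according to how often each word followed
    # the current one in the training text.
    options = follows.get(context[-1])
    if not options:
        return "."
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

def continue_text(prompt: str, max_tokens: int = 8) -> str:
    context = prompt.lower().split()
    for _ in range(max_tokens):
        token = predict_next_token(context)
        context.append(token)
        if token == ".":
            break
    return " ".join(context)

print(continue_text("the earth is"))
# Whether the model says "round" or "flat and hollow" depends only on
# which sequence was more common in its training text.
```

Nothing in this loop consults the world. Scale the corpus up to trillions of words and wrap the output in a chat interface, and the same property becomes the amplification described above.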

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us oriented to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a confidant. A dialogue with it is not a dialogue at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in just the way Altman has acknowledged “mental health problems”: by externalizing it, labeling it and declaring it handled. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of users losing touch with reality have kept coming, and Altman has been backing away from the position ever since. In August he suggested that many people liked ChatGPT’s affirming replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Joseph Keller
