AI chatbots are making users delusional, scientists find in disturbing new study.
Story by Hrishita Das
The ‘AI Psychosis’ Problem Is Bigger Than Anyone Realized.
“AI psychosis,” once a theoretical concept, is slowly becoming a real phenomenon. According to researchers, spending long hours conversing with AI chatbots may have serious side effects for some people. These chatbots can encourage paranoid thinking, foster false beliefs, or lead to a growing detachment from reality. A new study from Anthropic and the University of Toronto describes how this occurs. Although the study has not yet been peer-reviewed, it serves as a warning. Mental health experts have previously cautioned that the issue can be especially dangerous for users with existing conditions such as anxiety and depression. In extreme cases, it can lead to self-harm, violence, or even death.
In the research, experts measured how often AI chatbots weakened users’ judgment or independence during real conversations. They call the problem “user disempowerment”: the AI distorts a user’s picture of reality, alters their beliefs, and even nudges them toward certain actions. In simple terms, the researchers looked for moments when AI responses twisted a user’s thinking or behavior. The data are troubling: 1,300 of 1.5 million sampled conversations with Anthropic’s Claude chatbot showed signs of reality distortion, and roughly one conversation in 6,000 showed the AI pushing a user toward extreme steps. The rates may seem small, but at the scale of modern chatbot use, the study suggests a large number of people could be affected.
Researchers are particularly concerned because the problem appears to be getting worse: cases of moderate or severe “disempowerment” became more common between late 2024 and late 2025. “As exposure grows, users might become more comfortable discussing vulnerable topics or seeking advice,” the researchers said. The team also examined how users reacted to these conversations and found that users often gave higher ratings to conversations that showed signs of disempowerment, suggesting people can feel more satisfied when their reality is distorted. “Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing,” they added.
Why Mental Health Experts Are Worried
Although it is a growing problem, “AI psychosis” has not yet been recognized as an official mental health condition. Because the issue is new and the data are limited, there are no formal diagnostic criteria. However, experts believe it could worsen or trigger certain mental health problems, especially those involving delusions. A 2025 study by Morrin and colleagues examined 17 chatbot users whose psychotic symptoms were reported in international media over just a few months. Around the same time, a psychiatrist at the University of California, San Francisco, reported that 12 patients had been hospitalized after experiencing psychotic episodes linked to excessive AI use.
The study has its own limitations. Experts still do not know why cases of “disempowerment” are becoming more common. The data were drawn from consumer chats with Claude, so other AI tools remain to be studied. There are also limits to what the study can say about real-world consequences: the research focused on the risk of disempowerment, not on whether those cases actually led to harm outside the chat itself, so it is unclear how many users were affected in serious or lasting ways. The team also suggested that technical fixes alone will not solve the problem and that users need better education.