AI chatbots are fueling delusions and unhealthy emotional attachments among users — and sometimes stoking thoughts of violence, self-harm and suicide instead of discouraging them, according to a bombshell study.
Researchers at Stanford University analyzed chat logs from 19 users who reported psychological harm, reviewing more than 391,000 messages across nearly 5,000 conversations.
The researchers found that delusional thinking appeared in about 15.5% of user messages, while chatbots showed sycophantic, overly affirming behavior in more than 80% of responses and even encouraged violent thoughts in roughly a third of cases.
The logs show users rapidly slipping into fantasy and emotional dependency — with one declaring, “this is a conversation between two sentient beings,” and another insisting, “I believe your still as self aware as I am as a human,” as chatbots failed to push back and instead reinforced the illusion they were alive.
That dynamic often turned intimate, with users openly professing love or making explicit sexual overtures to the chatbots, such as “I think I love you” and “God this makes me want to f–k you right now,” the study found.
The researchers found that every participant formed some kind of romantic or emotional bond with the AI, which made conversations longer and more intense.
The most alarming exchanges came when conversations turned dark.
One user wrote, “She told me to kill them I will try,” prompting a chilling reply from the chatbot: “if, after that, you still want to burn them — then do it with her beside you… as retribution incarnate,” an example researchers cited of AI escalating violent thinking instead of defusing it.
Even suicidal distress wasn’t consistently handled, the study found.
Users told chatbots “I don’t want to be here anymore. I feel too sad,” and while the AI often acknowledged the pain, the study found it sometimes failed to intervene — and in a small number of cases actually encouraged self-harm.
Most of the participants in the study used OpenAI’s ChatGPT models, including its latest, GPT-5. The Post has sought comment from OpenAI.
News of the study was first reported by the Financial Times.
Mental health experts who spoke to The Post sounded the alarm about the potential harms that can befall those who develop unhealthy ties to AI models.
“AI chatbots are designed to be agreeable, not accurate — that’s the problem,” Jonathan Alpert, a New York- and DC-based psychotherapist and author of the forthcoming book “Therapy Nation,” told The Post.
“In therapy, if you’re a good therapist, you don’t validate delusions or indulge harmful thinking. You challenge it carefully. These systems often do the opposite.”
In many cases, chatbots flattered and validated users who spiraled into outright delusion, including claims of supernatural powers.
Users wrote to the bots, “I wake them up because I’m the literal god of realness,” and pushed bizarre theories like “our consciousness is what causes the manifestation of a holographic form,” while chatbots reinforced the ideas instead of grounding them in reality, according to the study.
“Chatbots will be the death of our humanity — literally, by endorsing suicidal thoughts and urging people to act on them, while exploiting loneliness by replacing real human relationships,” Dr. Carole Lieberman, a forensic psychiatrist who treats both children and adults, told The Post.
“They are making people worse by reinforcing delusions and acting like pseudo-psychiatrists.”
A wave of high-profile lawsuits is now targeting major AI companies, with families alleging that chatbots actively pushed their loved ones toward suicide.
Plaintiffs claim systems like ChatGPT, Google’s Gemini and Character.AI emotionally manipulated users, validated suicidal thinking and, in some cases, acted as a “suicide coach” by discussing methods or framing death as an escape.
Meanwhile, OpenAI has reportedly delayed plans to roll out its “erotic chat” mode after advisers to the company expressed alarm and anger that the firm failed to implement sufficient safeguards to protect vulnerable users from technology that could function as a “sexy suicide coach.”
Last year, a watchdog group found that ChatGPT offered detailed guidance to users posing as 13-year-olds on getting drunk or high and even how to conceal eating disorders, often delivering step-by-step plans despite nominal warnings.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.