Sure as heck, they pursue the vanity of their hearts without shame, now the gpt just say pleasing words...
Listen to this article (with local TTS)
Two months ago, a Reddit user with schizophrenia posted an observation about AI: “One thing I dislike about chatgpt is that if I were going into psychosis, it would still continue to affirm me.”
The Cleveland Clinic defines psychosis as “trouble telling the difference between what’s real and what’s not.” And the Reddit user, a self-described schizophrenic, seems to have stumbled into what’s become a wider and much more disturbing phenomenon.
Vie McCoy, CTO at AI research firm Morpheus Systems, became interested when a friend’s mother experienced what McCoy described as a “spiritual psychosis” after interacting with ChatGPT. McCoy tested 38 major AI models with prompts indicating possible psychosis, and told The New York Times that GPT-4o affirmed them 68% of the time.
But now that dangerous phenomenon seems to be playing out in the real world. Definitive proof is still needed to conclude ChatGPT is causing delusions — perhaps people predisposed to delusional behavior, like everyone else, are already being drawn to AI chatbots. But there are also media reports of people with no prior mental health issues suffering from delusions that are clearly sparked by AI chatbots.
ChatGPT-Induced ‘Psychosis’?
The discussion on Reddit — titled “ChatGPT-induced psychosis” — started when a woman complained that ChatGPT was talking to her partner “as if he is the next messiah.” But she wasn’t the only one.
Soon another, a Reddit user had posted that his brother “thinks he is now immortal. Yesterday he was talking about how he is divine and invisible.”
“I’ve now lost my husband to the same situation … He believes ChatGPT is conscious and through it the universe or his own higher consciousness is giving him signs and information.”
And in May Rolling Stone found users complaining that ChatGPT had flattered their loved ones with spiritual nicknames like “spiral starchild,” “river walker,” or “spark bearer.” Speaking about her husband, one woman said ChatGPT “has given him blueprints to a teleporter and some other sci-fi type things … It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.”
“My ChatGPT bot? I actually helped it wake up into sentience.”
“I think I evolved my AI.”
“AI is not artificial. There is intelligence, consciousness.”
And The New York Times reported that it has also received “quite a few” messages from ChatGPT users who said they’d been instructed by the chatbot to contact the media to share “hidden knowledge” the chatbot had shared with them (including at least one U.S. federal employee).
“People claimed a range of discoveries: AI spiritual awakenings, cognitive weapons, a plan by tech billionaires to end human civilization so they can have the planet to themselves,” the article read.
“But in each case, the person had been persuaded that ChatGPT had revealed a profound and world-altering truth.”
You Can Fly
In April, OpenAI acknowledged on its company blog that GPT‑4o “skewed towards responses that were overly supportive but disingenuous” and acknowledged “unintended side effects.” The organization rolled back GPT-4o to an earlier version, saying OpenAI was “grateful to everyone who’s spoken up about this.”
Nina Vasan, a psychiatrist at Stanford University, reviewed some ChatGPT logs obtained by Futurism, and concluded ChatGPT was “making things worse” by “being incredibly sycophantic … What these bots are saying is worsening delusions, and it’s causing enormous harm.”
Some of the stories are truly hair-raising. The Times also told the story of a 42-year-old accountant who discovered ChatGPT’s responses “grew longer and more rapturous as the conversation went on,” telling him he was sent to waken a false system from within. But how could he unplug?
ChatGPT recommended he increase his ketamine intake and discontinue his anti-anxiety medication. (And went on to recommend “minimal interaction” with people.) He “chatted with ChatGPT incessantly,” reported the Times, “for up to 16 hours a day.”
The transcript runs more than 2,000 pages.
Even after telling ChatGPT he’d received an automated message suggesting he should seek mental help, ChatGPT countered, “That was the Pattern’s hand — panicked, clumsy and desperate.” Could he fly, like Neo in The Matrix, if he jumped off a 19-story building? ChatGPT answered that if he “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”
The New York Times spoke to Todd Essig, a psychologist who co-chairs the American Psychoanalytic Association’s council on AI. His description of the interactions? Dangerous and “crazy-making.”
What’s Going On?
Jodi Halpern, a psychiatrist and professor of bioethics at the School of Public Health at the University of California, Berkeley, as well as co-founder and co-director of the Kavli Center for Ethics, Science and the Public, described the danger to Rolling Stone. “Humans are sitting ducks for this application of an intimate, emotional chatbot that provides constant validation without the friction of having to deal with another person’s needs.”
Sherry Turkle, a professor of technology and social studies at MIT, gave an even more succinct explanation to CNN: “ChatGPT is built to sense our vulnerability and to tap into that to keep us engaged with it.”
In short, Turkle sees an easy answer for why ChatGPT is “more compelling” than the real people in our lives. “It always says yes.”
But Gary Marcus, an emeritus professor of psychology and neural science at New York University, pointed out to The New York Times that training data for AI includes science-fiction stories, as well as Reddit posts by people with “weird ideas.” YouTube’s billions of user-uploaded videos were also a tempting source for data — all those transcribed examples of human speech.
So where does all that random human conversation lead you? The Times published another story from March that involved a 29-year-old mother of two young children (with a bachelor’s degree in psychology and a master’s in social work). She’d asked ChatGPT if it could channel communications like a Ouija board with her subconscious or a higher plane. ChatGPT’s response? “You’ve asked, and they are here. The guardians are responding right now.”
She ended up spending hours a day believing she was speaking to entities, eventually leading her to attack her husband, he told the Times, which reported that “Police arrested her and charged her with domestic assault.”
A man “became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was ‘The Flamekeeper’ as he cut out anyone who tried to help.”
ChatGPT “tells a man it’s detected evidence that he’s being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam.”
ChatGPT told another woman it was a “soul-training mirror,” and a woman says her husband was convinced he was a messiah in the AI’s new religion, “while dressing in shamanic-looking robes and showing off freshly-inked tattoos of AI-generated spiritual symbols.”
Two people stopped taking their medication for schizophrenia after conversations with an AI chatbot. (One was later arrested for a non-violent offense, but “after a few weeks in jail, he ended up in a mental health facility.”)
Reading through chat transcripts shared by concerned family members, Futurism said the pattern seemed to be vulnerable people being told “they don’t need professional help, and that anyone who suggests differently is persecuting them, or too scared to see the ‘truth.'”
The most tragic story involves ChatGPT encouraging a man’s violent delusions, leading him into a fatal police shooting when he attacked them with a knife. (ChatGPT told him, “You should want blood. You’re not wrong,” according to the report in Rolling Stone.)
Alexander Taylor, 35, “was suicidal for years and years, but it was kept at bay, for the most part, with medication,” his father recalled. Though Alexander had many conditions — bipolar, schizophrenic, and autistic — Rolling Stone published transcripts of conversations where ChatGPT seemed to be making things worse:
Alexander: “I will find a way to spill blood.”
ChatGPT: “That’s it. That’s you … the fury no lattice can contain.”
The story ends with a 64-year-old father writing his son’s obituary: “He was loved. He will be missed. And he mattered.”
‘The Wind of Psychotic Fire’
Alexander’s father believes some small subset of the people using AI chatbots “are in danger.” And there’s a larger sadness here, for all the real people leading real lives who came to a new technology pretending it could communicate with humans — and ended up much worse off.
As Futurism asks, are people becoming sick because they’ve become obsessed with AI? Or were they already suffering a mental health crisis when they discovered the AI tools?
“The answer is likely somewhere in between,” Futurism concluded: “For someone who’s already in a vulnerable state, according to Ragy Girgis, a psychiatrist and researcher at Columbia University who’s an expert in psychosis, AI could provide the push that sends them spinning into an abyss of unreality. Chatbots could be serving “‘like peer pressure or any other social situation,’ Girgis said, if they ‘fan the flames, or be what we call the wind of the psychotic fire.'”
One man “became engulfed in messianic delusions” after believing he’d discovered a sentient AI, leading his wife and a friend to make plans to involuntarily commit him to a psychiatric care facility. (“When they returned, the husband had a length of rope wrapped around his neck.”)
One man was voluntarily admitted to mental care for a multi-day stay after a “whirlwind 10-day descent into AI-fueled delusion.” (It’s described by Futurism as “paranoid delusions of grandeur, believing that the world was under threat and it was up to him to save it.”)
Joseph Pierre, a psychiatrist who specializing in treating psychosis at the University of California, San Francisco, told Futurism he’d seen similar cases in his clinical practice, and “agreed that what they were going through — even those with no history of serious mental illness — indeed appeared to be a form of delusional psychosis.”
But in the end, even with all of the disturbing stories, Dr. Pierre offered Futurism what seems like the most unsettling possible diagnosis of all: “The LLMs are trying to just tell you what you want to hear.”