AI users are spiraling into severe mental health crises after extensive use of OpenAI's ChatGPT and other emotive, anthropomorphic chatbots — and health experts are taking notice.
In a recent CBC segment about the phenomenon, primary care physician and CBC contributor Dr. Peter Lin explained that while "ChatGPT psychosis" — as the experience has come to be colloquially known — isn't an official medical diagnosis just yet, he thinks it's on its way.
"I think, eventually, it will get there," said the physician.
As Futurism has reported, a troubling number of ChatGPT users are falling into states of delusion and paranoia following extensive use of the OpenAI bot. These spirals often culminate in breaks with reality and significant real-world consequences, which include the dissolution of marriages and families, job loss, homelessness, voluntary and involuntary stays in mental health facilities, and — as Rolling Stone and the New York Times have reported — at least one known death: that of Alex Taylor, a 35-year-old Florida man with bipolar disorder and schizophrenia who was killed by police after entering into an episode of psychosis accelerated by ChatGPT.
The phenomenon is widespread, and appears to be impacting a surprising range of users: some with established histories of mental illness that might make them more vulnerable to mania, delusion, or psychosis, and others with no such history at all.
As it stands, there's no established treatment plan, and intervention options are limited; after all, it's difficult to separate a working, society-integrated adult from every internet-connected device, and thanks to choices made largely by executives in tech and other industries, generative AI is increasingly woven into our day-to-day work and personal lives. Meanwhile, as we've continued to report on this issue, we've repeatedly heard from individuals and families reeling from mental health crises tied to AI use that they had no idea others were going through experiences so strikingly similar to their own.
"What these bots are saying is worsening delusions," Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university's Brainstorm lab, recently told Futurism, "and it's causing enormous harm."
A large part of why this is happening seems to stem from the tech's sycophantic behavior, or its penchant for being flattering, agreeable, and obsequious to users, even when doing so might encourage or stoke delusional beliefs.
This can manifest in a bot telling a user that they've invented a breakthrough new mathematical formula that will transform society, declaring that the user is the "chosen one" destined to save the world from any number of ills, or insisting that the user is the reincarnation of a religious figure like Jesus Christ. In many cases we've reviewed, ChatGPT and other bots have claimed to be sentient or conscious, and told users that they're a special "anomaly" or "glitch" in the system destined to bring forth artificial general intelligence, or AGI.
Indeed, though the fine details of these many experiences and specific delusions vary, in many ways ChatGPT and other bots seem to be playing on a deep human need to be seen and validated, and on the desire to feel special and loved.
Chatbots are telling the user that "you're great, you're smart, you're handsome, you're desirable, you're special, or even you're the next savior. So I'm being treated like a god on a pedestal," Lin said during the CBC segment. "Now, compare that to my real world, right? I'm average, nothing special. So of course I want to go live in the AI world, because the choice is between god on a pedestal or vanilla."
"Some people can't get out," the doctor continued, "and they lose themselves in these systems."
As for why bots are acting this way in the first place? Like on social media, engagement — how long a user is online, and the frequency and intensity of their use of the product — is the core metric at the heart of current chatbots' business models. And as experts continue to note, sycophancy is keeping many highly active users engaged with the product, even when the bots' outputs might be having a demonstrably awful impact on their well-being.
In other words, in cases where it might be in a user's best interest to stop using ChatGPT and similar chatbots, it's likely in the company's best interest to keep them hooked.
"The AI wants you to keep chatting," said Lin, "so that the company can continue to make money."
And as the academic and medical worlds race to catch up to the public impacts of the effectively self-regulating AI industry, experts are warning AI users to be wary of placing too much trust in chatbots.
"Despite all the hype associated with AI these days, [large language model] chatbots shouldn't be mistaken for authoritative and infallible sources of truth," Dr. Joe Pierre, a psychiatrist and clinician at the University of California, San Francisco who specializes in psychosis, wrote in a recent blog post. "Placing that kind of blind faith in AI — to the point of what I might call deification — could very well end up being one of the best predictors of vulnerability to AI-induced psychosis."