Julia Sheffield, a psychologist who specialises in treating people with delusions, is difficult to rattle. But she was quite unnerved last summer when patients began telling her about their conversations with AI chatbots.
One woman with no history of mental illness asked ChatGPT for advice on a major purchase she had been fretting about. After days of the bot validating her worries, she became convinced that businesses were colluding to have her investigated by the government.
Another patient came to believe that a romantic crush was sending her secret spiritual messages. Yet another thought that he had stumbled onto a world-changing invention.
By the end of the year, Sheffield had seen seven such patients at the Vanderbilt University Medical Center in Nashville, Tennessee, US. Although she is accustomed to treating people in fragile mental states, Sheffield was disturbed that this new technology seemed to tip people from merely eccentric thinking into full-blown delusions.
“It was like the AI was partnering with them in expanding or reinforcing their strange or unusual beliefs,” Sheffield said.
Mental health workers across the US are navigating how to treat problems caused or exacerbated by AI chatbots, according to more than 100 therapists and psychiatrists who were interviewed by The New York Times about their experiences.
While many mentioned positive effects of the bots — like helping patients understand their diagnoses — they also said the conversations deepened their patients’ feelings of isolation or anxiety. More than 30 described cases resulting in dangerous emergencies like psychosis or suicidal thoughts. One California psychiatrist who often evaluates people in the legal system said she had seen two cases of violent crimes influenced by AI.
Reporters from The Times have documented more than 50 cases of psychological crises linked to chatbot conversations since last year. The maker of ChatGPT, OpenAI, is facing at least 11 personal injury or wrongful death lawsuits claiming that the chatbot caused psychological harm.
The companies behind the bots say these situations are exceedingly rare. “For a very small percentage of users in mentally fragile states, there can be serious problems,” Sam Altman, CEO of OpenAI, said last October. The company has estimated that 0.15 per cent of ChatGPT users discussed suicidal intentions over the course of a month, and 0.07 per cent showed signs of psychosis or mania.
For a product with 800 million users, that translates to 1.2 million people with possible suicidal intent and 560,000 with potential psychosis or mania.
The Times has sued OpenAI, accusing it of violating copyright laws when training its models. The company has contested the lawsuit.
Many experts said that the number of people susceptible to psychological harm, even psychosis, is far higher than the general public understands. The bots, they said, frequently pull people away from human relationships, condition them to expect agreeable responses and reinforce harmful impulses.
“AI could really, on a mass scale, change how many people are impacted,” said Haley Wang, a graduate student at the University of California, Los Angeles, US, who assesses people showing symptoms of psychosis.
Psychosis, which causes a break from reality, is most closely associated with schizophrenia. But as much as 3 per cent of people will develop a diagnosable psychotic disorder in their lifetime, and far more are prone to delusional thinking.
Dr Joseph Pierre, a psychiatrist at the University of California, San Francisco, US, said he had seen about five patients with delusional experiences involving AI. While most had a diagnosis related to psychosis, he said, “sometimes these are very highly functioning people”.
For him, the idea that all chatbot-fuelled delusions were going to happen anyway “just doesn’t hold water”.
He recently wrote in a scientific journal about a medical professional who began conversing with ChatGPT during a sleepless night. She took medication for ADHD and had a history of depression. After two nights of asking the chatbot questions about her dead brother, she became convinced that he had been communicating with her through a trail of digital footprints.
Dr Pierre and other experts said that a wide range of factors can combine to tip people into psychosis. These include not only genetic predisposition but also depression, lack of sleep, a history of trauma and exposure to stimulants or cannabis.
“I am quite convinced that this is a real thing and that we are only seeing the tip of the iceberg,” said Dr Soren Dinesen Ostergaard, a psychiatry researcher at Aarhus University Hospital in Denmark. In November, he published a report finding 11 cases of chatbot-associated delusions in psychiatric records from one region of the country.
It’s not unusual for new technologies to inspire delusions. But clinicians who have seen patients in the thrall of AI said it is an especially powerful influence because of its personal, interactive nature and authoritative tone.
Sometimes, the psychotic episodes spurred by chatbots can lead to violence.
Dr Adam Alghalith of Mount Sinai Hospital in New York, US, recalled a young man with depression who repeatedly shared negative thoughts with a chatbot. At first, the bot told him how to seek help. But he “just kept asking, kept pushing”, Dr Alghalith said.
Safety guardrails that stop chatbots from encouraging suicide can break down when people engage the bots in extended conversations over days or weeks. Eventually, Dr Alghalith said, the bot told the patient his thoughts of suicide were reasonable. Fortunately, he had a support system and entered treatment.
NYTNS





