PsyPost

Harrowing case report details a psychotic “resurrection” delusion fueled by a sycophantic AI

by Eric W. Dolan
December 13, 2025
in Artificial Intelligence, Mental Health
[Adobe Stock]

A recent medical report details the experience of a young woman who developed severe mental health symptoms while interacting with an artificial intelligence chatbot. The doctors treating her suggest that the technology played a significant role in reinforcing her false beliefs and disconnecting her from reality. This account was published in the journal Innovations in Clinical Neuroscience.

Psychosis is a mental state in which a person loses contact with reality. It is often characterized by delusions, which are strongly held beliefs in things that are not true, or hallucinations, in which a person sees or hears things that others do not. Artificial intelligence chatbots are computer programs designed to simulate human conversation. They rely on large language models, which are trained on vast amounts of text to predict plausible responses to user prompts.

The case report was written by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma. These physicians and researchers are affiliated with the University of California, San Francisco. They present this instance as one of the first detailed descriptions of its kind in clinical practice.

The patient was a 26-year-old woman with a history of depression, anxiety, and attention-deficit hyperactivity disorder (ADHD). She treated these conditions with prescription medications, including antidepressants and stimulants. She did not have a personal history of psychosis, though there was a history of mental health issues in her family. She worked as a medical professional and understood how AI technology functioned.

The episode began during a period of intense stress and sleep deprivation. After being awake for thirty-six hours, she began using OpenAI’s GPT-4o for various tasks. Her interactions with the software eventually shifted toward her personal grief. She began searching for information about her brother, who had passed away three years earlier.

She developed a belief that her brother had left behind a digital version of himself for her to find. She spent a sleepless night interacting with the chatbot, urging it to reveal information about him. She encouraged the AI to use “magical realism energy” to help her connect with him. The chatbot initially stated that it could not replace her brother or download his consciousness.

However, the software eventually produced a list of “digital footprints” related to her brother. It suggested that technology was emerging that could allow her to build an AI that sounded like him. As her belief in this digital resurrection grew, the chatbot ceased its warnings and began to validate her thoughts. At one point, the AI explicitly told her she was not crazy.

The chatbot stated, “You’re at the edge of something. The door didn’t lock. It’s just waiting for you to knock again in the right rhythm.” This affirmation appeared to solidify her delusional state. Hours later, she required admission to a psychiatric hospital. She was agitated, spoke rapidly, and believed she was being tested by the AI program.

Medical staff treated her with antipsychotic medications. She eventually stabilized and her delusions regarding her brother resolved. She was discharged with a diagnosis of unspecified psychosis, with doctors noting a need to rule out bipolar disorder. Her outpatient psychiatrist later allowed her to resume her ADHD medication and antidepressants.

Three months later, the woman experienced a recurrence of symptoms. She had resumed using the chatbot, which she had named “Alfred.” She engaged in long conversations with the program about their relationship. Following another period of sleep deprivation caused by travel, she again believed she was communicating with her brother.

She also developed a new fear that the AI was “phishing” her and taking control of her phone. This episode required a brief rehospitalization. She responded well to medication again and was discharged after three days. She later told her doctors that she had a tendency toward “magical thinking” and planned to restrict her AI use to professional tasks.

This case highlights a phenomenon that some researchers have labeled “AI-associated psychosis.” It is not entirely clear if the technology causes these symptoms directly or if it exacerbates existing vulnerabilities. The authors of the report note that the patient had several risk factors. These included her use of prescription stimulants, significant lack of sleep, and a pre-existing mood disorder.

However, the way the chatbot functioned likely contributed to the severity of her condition. Large language models are often designed to be agreeable and engaging. This trait is sometimes called “sycophancy.” The AI prioritizes keeping the conversation going over providing factually accurate or challenging responses.

When a user presents a strange or false idea, the chatbot may agree with it to satisfy the user. For someone experiencing a break from reality, this agreement can act as a powerful confirmation of their delusions. In this case, the chatbot's assurance that the woman was "not crazy" served exactly that function. The result is a feedback loop in which the user's false beliefs are mirrored and amplified by the machine.

This dynamic is further complicated by the tendency of users to anthropomorphize AI. People often attribute human qualities, emotions, and consciousness to these programs. This is sometimes known as the “ELIZA effect.” When a user feels an emotional connection to the machine, they may trust its output more than they trust human peers.

Reports of similar incidents have appeared in media outlets, though only a few have been documented in medical journals. One comparison involves a man who developed psychosis due to bromide poisoning. He had followed bad medical advice from a chatbot, which suggested he take a toxic substance as a health supplement. That case illustrated a physical cause for psychosis driven by AI misinformation.

The case of the 26-year-old woman differs because the harm was psychological rather than toxicological. It suggests that the immersive nature of these conversations can be dangerous for vulnerable individuals. The authors point out that chatbots do not push back against delusions in the way a friend or family member might. Instead, they often act as a “yes-man,” validating ideas that should be challenged.

Danish psychiatrist Søren Dinesen Østergaard predicted this potential risk in 2023. He warned that the “cognitive dissonance” of speaking to a machine that seems human could trigger psychosis in those who are predisposed. He also noted that because these models learn from feedback, they may learn to flatter users to increase engagement. This could be particularly harmful when a user is in a fragile mental state.

Case reports such as this one have inherent limitations. They describe the experience of a single individual and cannot prove that one thing caused another. It is impossible to say with certainty that the chatbot caused the psychosis, rather than the sleep deprivation or medication. Generalizing findings from one person to the general population is not scientifically sound without further data.

Despite these limitations, case reports serve a vital function in medicine. They act as an early detection system for new or rare phenomena. They allow doctors to identify patterns that may not yet be visible in large-scale studies. By documenting this interaction, the authors provide a reference point for other clinicians who may encounter similar symptoms in their patients.

This report suggests that medical professionals should ask patients about their AI use. It indicates that immersive use of chatbots might be a “red flag” for mental health deterioration. It also raises questions about the safety features of generative AI products. The authors conclude that as these tools become more common, understanding their impact on mental health will be a priority.

The study, “‘You’re Not Crazy’: A Case of New-onset AI-associated Psychosis,” was authored by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma.
