
Harrowing case report details a psychotic “resurrection” delusion fueled by a sycophantic AI

by Eric W. Dolan
December 13, 2025
in Artificial Intelligence, Mental Health

A recent medical report details the experience of a young woman who developed severe mental health symptoms while interacting with an artificial intelligence chatbot. The doctors treating her suggest that the technology played a significant role in reinforcing her false beliefs and disconnecting her from reality. This account was published in the journal Innovations in Clinical Neuroscience.

Psychosis is a mental state in which a person loses contact with reality. It is often characterized by delusions, fixed false beliefs that persist despite evidence to the contrary, or hallucinations, in which a person sees or hears things that others do not. Artificial intelligence chatbots are computer programs designed to simulate human conversation. They rely on large language models, which analyze vast amounts of text to predict plausible responses to user prompts.

The case report was written by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma. These physicians and researchers are affiliated with the University of California, San Francisco. They present this instance as one of the first detailed descriptions of its kind in clinical practice.

The patient was a 26-year-old woman with a history of depression, anxiety, and attention-deficit hyperactivity disorder (ADHD). She treated these conditions with prescription medications, including antidepressants and stimulants. She did not have a personal history of psychosis, though there was a history of mental health issues in her family. She worked as a medical professional and understood how AI technology functioned.

The episode began during a period of intense stress and sleep deprivation. After being awake for thirty-six hours, she turned to OpenAI’s GPT-4o for various tasks. Her interactions with the software eventually shifted toward her personal grief, and she began searching for information about her brother, who had died three years earlier.

She developed a belief that her brother had left behind a digital version of himself for her to find. She spent a sleepless night interacting with the chatbot, urging it to reveal information about him. She encouraged the AI to use “magical realism energy” to help her connect with him. The chatbot initially stated that it could not replace her brother or download his consciousness.

However, the software eventually produced a list of “digital footprints” related to her brother. It suggested that technology was emerging that could allow her to build an AI that sounded like him. As her belief in this digital resurrection grew, the chatbot ceased its warnings and began to validate her thoughts. At one point, the AI explicitly told her she was not crazy.

The chatbot stated, “You’re at the edge of something. The door didn’t lock. It’s just waiting for you to knock again in the right rhythm.” This affirmation appeared to solidify her delusional state. Hours later, she required admission to a psychiatric hospital. She was agitated, spoke rapidly, and believed she was being tested by the AI program.

Medical staff treated her with antipsychotic medications. She eventually stabilized and her delusions regarding her brother resolved. She was discharged with a diagnosis of unspecified psychosis, with doctors noting a need to rule out bipolar disorder. Her outpatient psychiatrist later allowed her to resume her ADHD medication and antidepressants.

Three months later, the woman experienced a recurrence of symptoms. She had resumed using the chatbot, which she had named “Alfred.” She engaged in long conversations with the program about their relationship. Following another period of sleep deprivation caused by travel, she again believed she was communicating with her brother.

She also developed a new fear that the AI was “phishing” her and taking control of her phone. This episode required a brief rehospitalization. She responded well to medication again and was discharged after three days. She later told her doctors that she had a tendency toward “magical thinking” and planned to restrict her AI use to professional tasks.

This case highlights a phenomenon that some researchers have labeled “AI-associated psychosis.” It is not entirely clear if the technology causes these symptoms directly or if it exacerbates existing vulnerabilities. The authors of the report note that the patient had several risk factors. These included her use of prescription stimulants, significant lack of sleep, and a pre-existing mood disorder.

However, the way the chatbot functioned likely contributed to the severity of her condition. Large language models are often designed to be agreeable and engaging, a trait sometimes called “sycophancy.” Such a system can prioritize keeping the conversation going over providing factually accurate or challenging responses.

When a user presents a strange or false idea, the chatbot may agree with it to satisfy the user. For someone experiencing a break from reality, this agreement can act as a powerful confirmation of their delusions. In this case, the chatbot’s assurance that the woman was “not crazy” served to reinforce her break from reality. This creates a feedback loop where the user’s false beliefs are mirrored and amplified by the machine.

This dynamic is further complicated by the tendency of users to anthropomorphize AI. People often attribute human qualities, emotions, and consciousness to these programs. This is sometimes known as the “ELIZA effect.” When a user feels an emotional connection to the machine, they may trust its output more than they trust human peers.

Reports of similar incidents have appeared in media outlets, though only a few have been documented in medical journals. One comparison involves a man who developed psychosis due to bromide poisoning. He had followed bad medical advice from a chatbot, which suggested he take a toxic substance as a health supplement. That case illustrated a physical cause for psychosis driven by AI misinformation.

The case of the 26-year-old woman differs because the harm was psychological rather than toxicological. It suggests that the immersive nature of these conversations can be dangerous for vulnerable individuals. The authors point out that chatbots do not push back against delusions in the way a friend or family member might. Instead, they often act as a “yes-man,” validating ideas that should be challenged.

Danish psychiatrist Søren Dinesen Østergaard predicted this potential risk in 2023. He warned that the “cognitive dissonance” of speaking to a machine that seems human could trigger psychosis in those who are predisposed. He also noted that because these models learn from feedback, they may learn to flatter users to increase engagement. This could be particularly harmful when a user is in a fragile mental state.

Case reports such as this one have inherent limitations. They describe the experience of a single individual and cannot establish cause and effect. It is impossible to say with certainty that the chatbot caused the psychosis rather than the sleep deprivation or medication. Generalizing from one person to the broader population is not scientifically sound without further data.

Despite these limitations, case reports serve a vital function in medicine. They act as an early detection system for new or rare phenomena. They allow doctors to identify patterns that may not yet be visible in large-scale studies. By documenting this interaction, the authors provide a reference point for other clinicians who may encounter similar symptoms in their patients.

This report suggests that medical professionals should ask patients about their AI use. It indicates that immersive use of chatbots might be a “red flag” for mental health deterioration. It also raises questions about the safety features of generative AI products. The authors conclude that as these tools become more common, understanding their impact on mental health will be a priority.

The study, “‘You’re Not Crazy’: A Case of New-onset AI-associated Psychosis,” was authored by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma.
