PsyPost
ChatGPT produces accurate psychiatric diagnoses from case vignettes, study finds

by Vladimir Hedrih
April 9, 2025
in Artificial Intelligence
[Adobe Stock]


An examination of ChatGPT’s responses to 100 clinical psychiatric case vignettes found that the model performs exceptionally well at producing psychiatric diagnoses from such material. It received the highest grade on 61 vignettes and the second-highest on an additional 31, and none of its responses was graded as unacceptable. The research was published in the Asian Journal of Psychiatry.

ChatGPT is an advanced language model developed by OpenAI, designed to understand and generate human-like text based on user input. It is trained on a diverse dataset to handle a wide range of topics. ChatGPT aims to assist users by providing information, facilitating learning, and engaging in thoughtful dialogue.

Shortly after its launch, ChatGPT became the fastest-growing internet application, reaching 1 million users just five days after its release in November 2022. Since then, the user base has grown substantially. Numerous scientific studies have evaluated its capabilities, and ChatGPT often passes assessments that were traditionally the domain of humans—frequently with impressive results. One of its most notable achievements is successfully passing the United States Medical Licensing Examination. In many studies assessing its performance in providing medical advice or interpreting clinical results, ChatGPT has performed on par with—or even better than—human professionals.

Study author Russell Franco D’Souza and his colleagues note that ChatGPT could potentially serve as a valuable AI-based tool for detecting, interpreting, and managing various medical conditions by assisting clinicians in making diagnostic and treatment decisions, particularly in psychiatry. To explore this potential, the researchers conducted a study assessing the performance of ChatGPT 3.5 on 100 psychiatric case vignettes.

The study used clinical case vignettes from 100 Cases in Psychiatry by Barry Wright and colleagues. Each vignette begins with a detailed description of a patient’s symptoms, along with relevant personal and medical history. This is followed by a series of questions designed to guide the reader through the diagnostic process and management planning, encouraging critical thinking and the application of psychiatric knowledge.

The researchers presented ChatGPT with each vignette and recorded its responses. These responses were then evaluated by two experienced psychiatrists who are also faculty members with substantial teaching and clinical backgrounds. Each of the 100 responses was compared to reference answers from the source material and graded based on quality. Grades ranged from A (the highest) to D (indicating an unacceptable response).

Overall, ChatGPT received an A grade for 61 vignettes, a B for 31, and a C for the remaining 8. It did not produce any responses that were considered unacceptable. The model performed best in proposing strategies for managing disorders and symptoms, followed by making diagnoses and considering differential diagnoses.

“It is evident from our study that ChatGPT 3.5 has appreciable knowledge and interpretation skills in Psychiatry. Thus, ChatGPT 3.5 undoubtedly has the potential to transform the field of Medicine and we emphasize its utility in Psychiatry through the finding of our study. However, for any AI model to be successful, assuring the reliability, validation of information, proper guidelines and implementation framework are necessary,” the study authors concluded.

The study contributes to the understanding of potential applications of ChatGPT and large language models more generally. However, it is unclear whether the vignettes from this book were included in ChatGPT’s training data. ChatGPT is aware of the book’s existence and can produce quite a few details about it, but the model cannot report what its training materials were. Results might differ if case descriptions were drawn from a source completely unknown to ChatGPT.

The paper, “Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes,” was authored by Russell Franco D’Souza, Shabbir Amanullah, Mary Mathew, and Krishna Mohan Surapaneni.
