ChatGPT produces accurate psychiatric diagnoses from case vignettes, study finds

by Vladimir Hedrih
April 9, 2025

An examination of ChatGPT’s responses to 100 vignettes of clinical psychiatric cases found that the model performs exceptionally well in producing psychiatric diagnoses from such material. It received the highest grade for 61 vignettes and the second-highest grade for an additional 31. Notably, none of its responses was judged unacceptable. The research was published in the Asian Journal of Psychiatry.

ChatGPT is an advanced language model developed by OpenAI, designed to understand and generate human-like text based on user input. It is trained on a diverse dataset to handle a wide range of topics. ChatGPT aims to assist users by providing information, facilitating learning, and engaging in thoughtful dialogue.

Shortly after its launch, ChatGPT became the fastest-growing internet application, reaching 1 million users just five days after its release in November 2022. Since then, the user base has grown substantially. Numerous scientific studies have evaluated its capabilities, and ChatGPT often passes assessments that were traditionally the domain of humans—frequently with impressive results. One of its most notable achievements is successfully passing the United States Medical Licensing Examination. In many studies assessing its performance in providing medical advice or interpreting clinical results, ChatGPT has performed on par with—or even better than—human professionals.

Study author Russell Franco D’Souza and his colleagues note that ChatGPT could potentially serve as a valuable AI-based tool for detecting, interpreting, and managing various medical conditions by assisting clinicians in making diagnostic and treatment decisions, particularly in psychiatry. To explore this potential, the researchers conducted a study assessing the performance of ChatGPT 3.5 on 100 psychiatric case vignettes.

The study used clinical case vignettes from 100 Cases in Psychiatry by Barry Wright and colleagues. Each vignette begins with a detailed description of a patient’s symptoms, along with relevant personal and medical history. This is followed by a series of questions designed to guide the reader through the diagnostic process and management planning, encouraging critical thinking and the application of psychiatric knowledge.

The researchers presented ChatGPT with each vignette and recorded its responses. These responses were then evaluated by two experienced psychiatrists who are also faculty members with substantial teaching and clinical backgrounds. Each of the 100 responses was compared to reference answers from the source material and graded based on quality. Grades ranged from A (the highest) to D (indicating an unacceptable response).
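
The paper describes the grading procedure but not the exact prompting workflow. As a purely illustrative sketch, the snippet below shows one way a case vignette could be submitted to a GPT-3.5-class model through OpenAI’s Python client and its answer recorded for later grading. The vignette text, the prompt wording, and the use of the API rather than the ChatGPT web interface are assumptions for illustration, not details reported in the study.

```python
# Illustrative sketch only: the vignette below is a made-up placeholder, and
# this is not necessarily how the study authors presented their material.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the
# environment; ChatGPT 3.5 is approximated here by the "gpt-3.5-turbo" model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vignette = (
    "A 24-year-old student presents with a six-month history of low mood, "
    "poor sleep, and loss of interest in her studies..."  # placeholder text
)
questions = "What is the most likely diagnosis, and how would you manage this patient?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are assisting with psychiatric case formulation."},
        {"role": "user", "content": f"{vignette}\n\n{questions}"},
    ],
)

# Record the model's answer so it can later be compared with the reference
# answers and graded (A through D) by the evaluating psychiatrists.
print(response.choices[0].message.content)
```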

Overall, ChatGPT received an A grade for 61 vignettes, a B for 31, and a C for the remaining 8. It did not produce any responses that were considered unacceptable. The model performed best in proposing strategies for managing disorders and symptoms, followed by making diagnoses and considering differential diagnoses.

“It is evident from our study that ChatGPT 3.5 has appreciable knowledge and interpretation skills in Psychiatry. Thus, ChatGPT 3.5 undoubtedly has the potential to transform the field of Medicine and we emphasize its utility in Psychiatry through the finding of our study. However, for any AI model to be successful, assuring the reliability, validation of information, proper guidelines and implementation framework are necessary,” the study authors concluded.

The study contributes to the understanding of potential applications of ChatGPT and large language models more generally. However, it is unclear whether the book’s vignettes were part of ChatGPT’s training data. ChatGPT is aware of the book and can produce quite a few details about it, but it cannot report on the contents of its own training data. Results might differ if the case descriptions came from a source entirely unknown to the model.

The paper, “Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes,” was authored by Russell Franco D’Souza, Shabbir Amanullah, Mary Mathew, and Krishna Mohan Surapaneni.
