ChatGPT produces accurate psychiatric diagnoses from case vignettes, study finds

by Vladimir Hedrih
April 9, 2025
in Artificial Intelligence
[Adobe Stock]


An examination of ChatGPT’s responses to 100 clinical psychiatric case vignettes found that the model performs exceptionally well at producing psychiatric diagnoses from such material. It received the highest grade on 61 vignettes and the second-highest grade on a further 31, and none of its responses were judged unacceptable. The research was published in the Asian Journal of Psychiatry.

ChatGPT is an advanced language model developed by OpenAI, designed to understand and generate human-like text based on user input. It is trained on a diverse dataset to handle a wide range of topics. ChatGPT aims to assist users by providing information, facilitating learning, and engaging in thoughtful dialogue.

Shortly after its launch, ChatGPT became the fastest-growing internet application, reaching 1 million users just five days after its release in November 2022. Since then, the user base has grown substantially. Numerous scientific studies have evaluated its capabilities, and ChatGPT often passes assessments that were traditionally the domain of humans—frequently with impressive results. One of its most notable achievements is successfully passing the United States Medical Licensing Examination. In many studies assessing its performance in providing medical advice or interpreting clinical results, ChatGPT has performed on par with—or even better than—human professionals.

Study author Russell Franco D’Souza and his colleagues note that ChatGPT could potentially serve as a valuable AI-based tool for detecting, interpreting, and managing various medical conditions by assisting clinicians in making diagnostic and treatment decisions, particularly in psychiatry. To explore this potential, the researchers conducted a study assessing the performance of ChatGPT 3.5 on 100 psychiatric case vignettes.

The study used clinical case vignettes from 100 Cases in Psychiatry by Barry Wright and colleagues. Each vignette begins with a detailed description of a patient’s symptoms, along with relevant personal and medical history. This is followed by a series of questions designed to guide the reader through the diagnostic process and management planning, encouraging critical thinking and the application of psychiatric knowledge.

The researchers presented ChatGPT with each vignette and recorded its responses. These responses were then evaluated by two experienced psychiatrists who are also faculty members with substantial teaching and clinical backgrounds. Each of the 100 responses was compared to reference answers from the source material and graded based on quality. Grades ranged from A (the highest) to D (indicating an unacceptable response).

Overall, ChatGPT received an A grade for 61 vignettes, a B for 31, and a C for the remaining 8. It did not produce any responses that were considered unacceptable. The model performed best in proposing strategies for managing disorders and symptoms, followed by making diagnoses and considering differential diagnoses.
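As a quick sanity check on the reported numbers, the grade counts can be tallied as follows (the counts are taken directly from the study; the variable names are illustrative, not from the paper):

```python
from collections import Counter

# Grade distribution reported in the study (A = highest, D = unacceptable).
grades = Counter({"A": 61, "B": 31, "C": 8, "D": 0})

total = sum(grades.values())                         # 100 vignettes in all
top_two_share = (grades["A"] + grades["B"]) / total  # share earning A or B
unacceptable = grades["D"]                           # responses graded D
```

The counts confirm the article's summary: 92% of responses earned one of the top two grades, and no response was graded unacceptable.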

“It is evident from our study that ChatGPT 3.5 has appreciable knowledge and interpretation skills in Psychiatry. Thus, ChatGPT 3.5 undoubtedly has the potential to transform the field of Medicine and we emphasize its utility in Psychiatry through the finding of our study. However, for any AI model to be successful, assuring the reliability, validation of information, proper guidelines and implementation framework are necessary,” the study authors concluded.

The study adds to our understanding of potential applications of ChatGPT and large language models in general. However, it remains unclear whether the vignettes from 100 Cases in Psychiatry were part of ChatGPT’s training data: the model can produce quite a few details about the book, but it cannot report on what it was trained on. Results might differ if case descriptions came from a source entirely unknown to the model.

The paper, “Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes,” was authored by Russell Franco D’Souza, Shabbir Amanullah, Mary Mathew, and Krishna Mohan Surapaneni.
