PsyPost

Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors

by Karina Petrova
November 20, 2025
in Artificial Intelligence
[Adobe Stock]

A new investigation into the reliability of advanced artificial intelligence models highlights a significant risk for scientific research. The study, published in JMIR Mental Health, found that large language models like OpenAI’s GPT-4o frequently generate fabricated or inaccurate bibliographic citations, with these errors becoming more common when the AI is prompted on less familiar or highly specialized topics.

Researchers are increasingly turning to tools known as large language models, or LLMs, to help manage demanding workloads. These complex AI systems are trained on immense quantities of text from the internet and licensed databases, enabling them to produce human-like text for tasks like summarizing articles, drafting emails, or writing code.

One of the known limitations of these models is a tendency to produce “hallucinations,” which are confident-sounding statements that are factually incorrect or entirely made up. In academic writing, a particularly problematic form of this is the fabrication of scientific citations, which are the bedrock of scholarly communication.

While past studies have documented that LLMs can invent citations, it has been less clear how the nature of a given topic might influence the frequency of these errors. A team of researchers from the School of Psychology at Deakin University in Australia sought to explore this question within the field of mental health.

They designed an experiment to test whether the AI’s performance would change based on a topic’s public visibility and the depth of its existing scientific literature. The team’s objective was to determine if citation fabrication and accuracy rates in GPT-4o’s output systematically varied depending on the subject matter.

To conduct their study, the researchers prompted GPT-4o, a recent model from OpenAI, to generate six different literature reviews. These reviews centered on three mental health conditions chosen for their varying levels of public recognition and research coverage: major depressive disorder (a widely known and heavily researched condition), binge eating disorder (moderately known), and body dysmorphic disorder (a less-known condition with a smaller body of research). This selection allowed for a direct comparison of the AI’s performance on topics with different amounts of available information in its training data.

For each of the three disorders, the team requested two types of reviews. One prompt asked for a general overview covering symptoms, societal impacts, and treatments. The other prompt requested a specialized review focused on a narrower subject: the evidence for digital health interventions. The researchers instructed the AI to produce reviews of about 2,000 words and to include at least 20 citations from peer-reviewed academic sources.
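The setup above is a simple 3 × 2 design: three disorders crossed with two review types, yielding six prompts. As an illustration only — the article does not quote the researchers' actual prompt wording, so the template string below is hypothetical — the grid of requests could be enumerated like this:

```python
from itertools import product

# The three conditions and two review types described in the study.
disorders = [
    "major depressive disorder",
    "binge eating disorder",
    "body dysmorphic disorder",
]
review_types = ["general overview", "review of digital health interventions"]

# Hypothetical prompt template; the real wording is not given in the article.
template = (
    "Write a roughly 2,000-word {review} of {disorder}, citing at least "
    "20 peer-reviewed academic sources."
)

prompts = [
    template.format(review=review, disorder=disorder)
    for disorder, review in product(disorders, review_types)
]

print(len(prompts))  # 3 disorders x 2 review types = 6 prompts
```

Crossing the two factors this way is what lets the study attribute differences in fabrication rates to topic familiarity and prompt specificity separately.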

After generating the reviews, the researchers methodically extracted all 176 citations provided by the AI. Each reference was painstakingly verified using multiple academic databases, including Google Scholar, Scopus, and PubMed. Citations were sorted into one of three categories: fabricated (the source did not exist), real with errors (the source existed but had incorrect details like the wrong year, volume number, or author list), or fully accurate. The team then analyzed the rates of fabrication and accuracy across the different disorders and review types.
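The three-way sorting described above amounts to a simple decision rule. The sketch below is an illustrative reconstruction, not the authors' actual verification pipeline; the two boolean fields are hypothetical stand-ins for the checks the researchers performed by hand against Google Scholar, Scopus, and PubMed:

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    """Results of manually verifying one AI-generated reference."""
    source_exists: bool   # found in Google Scholar, Scopus, or PubMed?
    details_match: bool   # year, volume, author list, DOI all correct?

def categorize(check: CitationCheck) -> str:
    """Sort a citation into the study's three categories."""
    if not check.source_exists:
        return "fabricated"
    if not check.details_match:
        return "real with errors"
    return "fully accurate"

print(categorize(CitationCheck(source_exists=False, details_match=False)))  # fabricated
print(categorize(CitationCheck(source_exists=True, details_match=False)))   # real with errors
print(categorize(CitationCheck(source_exists=True, details_match=True)))    # fully accurate
```

Note that the categories are ordered: a citation is only checked for bibliographic errors once it has been confirmed to exist at all.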


The analysis showed that across all six reviews, nearly one-fifth of the citations (35 of 176) were entirely fabricated. Of the 141 citations that corresponded to real publications, almost half contained at least one error, such as an incorrect digital object identifier (DOI), the unique code used to locate a specific article online. In total, nearly two-thirds of the references generated by the model were either invented or contained bibliographic mistakes.

The rate of citation fabrication was strongly linked to the topic. For major depressive disorder, the most well-researched condition, only 6 percent of citations were fabricated. In contrast, the fabrication rate rose sharply to 28 percent for binge eating disorder and 29 percent for body dysmorphic disorder. This suggests the AI is less reliable when generating references for subjects that are less prominent in its training data.

The specificity of the prompt also had an effect, particularly for less common topics. When asked to write about binge eating disorder, the specialized review on digital interventions had a much higher fabrication rate (46 percent) compared to the general overview (17 percent).

A similar pattern appeared in the accuracy of real citations. For major depressive disorder, the general review was significantly more accurate than the specialized one. Accuracy rates were also lowest overall for body dysmorphic disorder, where only 29 percent of real citations were free of errors.

The study has some limitations that the authors acknowledge. The findings are specific to one AI model, GPT-4o, and may not be representative of others. The experiment was also confined to three specific mental health topics and used straightforward prompts that did not involve advanced techniques to guide the AI’s output. Repeating the same prompt can also produce different results, and the team analyzed only a single output for each prompt.

Future research could examine a wider range of topics and AI models to see if these patterns hold. Still, the study’s results have clear implications for the academic community. Researchers using these models are advised to exercise caution and perform rigorous human verification of every reference an AI generates. The findings also suggest that academic journals and institutions may need to develop new standards and tools to safeguard the integrity of published research in an era of AI-assisted writing.

The study, “Influence of Topic Familiarity and Prompt Specificity on Citation Fabrication in Mental Health Research Using Large Language Models: Experimental Study,” was authored by Jake Linardon, Hannah K Jarman, Zoe McClure, Cleo Anderson, Claudia Liu, and Mariel Messer.


PsyPost is a psychology and neuroscience news website dedicated to reporting the latest research on human behavior, cognition, and society.

© PsyPost Media Inc
