Scholars: AI isn’t “hallucinating” — it’s bullshitting

by Eric W. Dolan
June 9, 2024
in Artificial Intelligence
(Photo credit: Adobe Stock)

Large language models, such as OpenAI’s ChatGPT, have revolutionized the way artificial intelligence interacts with humans, producing text that often seems indistinguishable from human writing. Despite their impressive capabilities, these models are known for generating persistent inaccuracies, often referred to as “AI hallucinations.” However, in a paper published in Ethics and Information Technology, scholars Michael Townsen Hicks, James Humphries, and Joe Slater from the University of Glasgow argue that these inaccuracies are better understood as “bullshit.”

Large language models (LLMs) are sophisticated computer programs designed to generate human-like text. They achieve this by analyzing vast amounts of written material and using statistical techniques to predict the likelihood of a particular word appearing next in a sequence. This process enables them to produce coherent and contextually appropriate responses to a wide range of prompts.
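To make the next-word-prediction idea concrete, here is a minimal, purely illustrative sketch: a tiny bigram model in Python that picks each next word according to how often it followed the previous word in a toy corpus. This is not the authors' method and is vastly simpler than a real neural language model; the corpus, the bigram simplification, and the eight-word generation length are assumptions chosen for brevity.

import random
from collections import defaultdict, Counter

# Toy "training data": a language model's corpus, shrunk to a few sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev` in training.
    counts = follow_counts[prev]
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Generate text one word at a time. Nothing here checks whether the output is
# true; the only objective is that each word be statistically plausible given
# the one before it.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)

print(" ".join(output))

In this sketch the output sounds like the training sentences but has no tie to any fact about cats or dogs, which is the sense in which the Glasgow scholars argue that large language models are indifferent to truth.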

Unlike human brains, which have a variety of goals and behaviors, LLMs have a singular objective: to generate text that closely resembles human language. This means their primary function is to replicate the patterns and structures of human speech and writing, not to understand or convey factual information.

The term “AI hallucination” is used to describe instances when an LLM like ChatGPT produces inaccurate or entirely fabricated information. This term suggests that the AI is experiencing a perceptual error, akin to a human seeing something that isn’t there. However, this metaphor is misleading, according to Hicks and his colleagues, because it implies that the AI has a perspective or an intent to perceive and convey truth, which it does not.

To better understand why these inaccuracies might be better described as bullshit, it is helpful to look at the concept of bullshit as defined by philosopher Harry Frankfurt. In his seminal work, Frankfurt distinguishes bullshit from lying. A liar, according to Frankfurt, knows the truth but deliberately chooses to say something false. In contrast, a bullshitter is indifferent to the truth. The bullshitter’s primary concern is not whether what they are saying is true or false but whether it serves their purpose, often to impress or persuade.

Frankfurt’s concept highlights that bullshit is characterized by a disregard for the truth. The bullshitter does not care about the accuracy of their statements, only that they appear convincing or fit a particular narrative.

The scholars argue that the output of LLMs like ChatGPT fits Frankfurt’s definition of bullshit better than the concept of hallucination. These models do not have an understanding of truth or falsity; they generate text based on patterns in the data they have been trained on, without any intrinsic concern for accuracy. This makes them akin to bullshitters — they produce statements that can sound plausible without any grounding in factual reality.

The distinction is significant because it influences how we understand and address the inaccuracies produced by these models. If we think of these inaccuracies as hallucinations, we might believe that the AI is trying and failing to convey truthful information.

But AI models like ChatGPT do not have beliefs, intentions, or understanding, Hicks and his colleagues explained. They operate purely on statistical patterns derived from their training data.

When they produce incorrect information, it is not due to a deliberate intent to deceive (as in lying) or a faulty perception (as in hallucinating). Rather, it is because they are designed to create text that looks and sounds right without any intrinsic mechanism for ensuring factual accuracy.

“Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated,” Hicks and his colleagues concluded. “Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived.”

“This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.”

“Calling chatbot inaccuracies ‘hallucinations’ feeds in to overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists,” the scholars wrote.

“It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.”

OpenAI, for its part, has said that improving the factual accuracy of ChatGPT is a key goal.

“Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we’re making progress,” the company wrote in a 2023 blog post. “By leveraging user feedback on ChatGPT outputs that were flagged as incorrect as a main source of data—we have improved the factual accuracy of GPT-4. GPT-4 is 40% more likely to produce factual content than GPT-3.5.”

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate. However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”

The paper, “ChatGPT is bullshit,” was published June 8, 2024.
