PsyPost
Scholars: AI isn’t “hallucinating” — it’s bullshitting

by Eric W. Dolan
June 9, 2024
in Artificial Intelligence
(Photo credit: Adobe Stock)

Large language models, such as OpenAI’s ChatGPT, have revolutionized the way artificial intelligence interacts with humans, producing text that often seems indistinguishable from human writing. Despite their impressive capabilities, these models are known for generating persistent inaccuracies, often referred to as “AI hallucinations.” However, in a paper published in Ethics and Information Technology, scholars Michael Townsen Hicks, James Humphries, and Joe Slater from the University of Glasgow argue that these inaccuracies are better understood as “bullshit.”

Large language models (LLMs) are sophisticated computer programs designed to generate human-like text. They achieve this by analyzing vast amounts of written material and using statistical techniques to predict the likelihood of a particular word appearing next in a sequence. This process enables them to produce coherent and contextually appropriate responses to a wide range of prompts.
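The next-word prediction described above can be illustrated with a toy bigram model — a deliberate simplification for intuition only (real LLMs use neural networks over subword tokens, not raw word counts), with a made-up miniature corpus:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (hypothetical, for demonstration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which — the "statistical technique"
# in its crudest possible form.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — the likeliest next word, true or not
```

Note what the model optimizes: frequency of co-occurrence, not truth. "The cat ate the fish" and "the cat ate the moon" differ only in how often each pattern appeared in training text — which is precisely the indifference to accuracy the Glasgow authors highlight.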

Unlike human brains, which have a variety of goals and behaviors, LLMs have a singular objective: to generate text that closely resembles human language. This means their primary function is to replicate the patterns and structures of human speech and writing, not to understand or convey factual information.

The term “AI hallucination” is used to describe instances when an LLM like ChatGPT produces inaccurate or entirely fabricated information. This term suggests that the AI is experiencing a perceptual error, akin to a human seeing something that isn’t there. However, this metaphor is misleading, according to Hicks and his colleagues, because it implies that the AI has a perspective or an intent to perceive and convey truth, which it does not.

To better understand why these inaccuracies might be better described as bullshit, it is helpful to look at the concept of bullshit as defined by philosopher Harry Frankfurt. In his seminal work, Frankfurt distinguishes bullshit from lying. A liar, according to Frankfurt, knows the truth but deliberately chooses to say something false. In contrast, a bullshitter is indifferent to the truth. The bullshitter’s primary concern is not whether what they are saying is true or false but whether it serves their purpose, often to impress or persuade.

Frankfurt’s concept highlights that bullshit is characterized by a disregard for the truth. The bullshitter does not care about the accuracy of their statements, only that they appear convincing or fit a particular narrative.

The scholars argue that the output of LLMs like ChatGPT fits Frankfurt’s definition of bullshit better than the concept of hallucination. These models do not have an understanding of truth or falsity; they generate text based on patterns in the data they have been trained on, without any intrinsic concern for accuracy. This makes them akin to bullshitters — they produce statements that can sound plausible without any grounding in factual reality.

The distinction is significant because it influences how we understand and address the inaccuracies produced by these models. If we think of these inaccuracies as hallucinations, we might believe that the AI is trying and failing to convey truthful information.

But AI models like ChatGPT do not have beliefs, intentions, or understanding, Hicks and his colleagues explained. They operate purely on statistical patterns derived from their training data.

When they produce incorrect information, it is not due to a deliberate intent to deceive (as in lying) or a faulty perception (as in hallucinating). Rather, it is because they are designed to create text that looks and sounds right without any intrinsic mechanism for ensuring factual accuracy.

“Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated,” Hicks and his colleagues concluded. “Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived.”

“This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.”

“Calling chatbot inaccuracies ‘hallucinations’ feeds into overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists,” the scholars wrote.

“It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.”

OpenAI, for its part, has said that improving the factual accuracy of ChatGPT is a key goal.

“Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we’re making progress,” the company wrote in a 2023 blog post. “By leveraging user feedback on ChatGPT outputs that were flagged as incorrect as a main source of data—we have improved the factual accuracy of GPT-4. GPT-4 is 40% more likely to produce factual content than GPT-3.5.”

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate. However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”

The paper, “ChatGPT is bullshit,” was published June 8, 2024.
