Scholars: AI isn’t “hallucinating” — it’s bullshitting

by Eric W. Dolan
June 9, 2024
in Artificial Intelligence
(Photo credit: Adobe Stock)

Large language models, such as OpenAI’s ChatGPT, have revolutionized the way artificial intelligence interacts with humans, producing text that often seems indistinguishable from human writing. Despite their impressive capabilities, these models persistently generate inaccuracies, often referred to as “AI hallucinations.” However, in a paper published in Ethics and Information Technology, scholars Michael Townsen Hicks, James Humphries, and Joe Slater from the University of Glasgow argue that these inaccuracies are better understood as “bullshit.”

Large language models (LLMs) are sophisticated computer programs designed to generate human-like text. They achieve this by analyzing vast amounts of written material and using statistical techniques to predict the likelihood of a particular word appearing next in a sequence. This process enables them to produce coherent and contextually appropriate responses to a wide range of prompts.
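To make the idea of “predicting the next word” concrete, here is a minimal toy sketch in Python. It is not how production LLMs are built (they use large neural networks over long contexts rather than raw word counts), and the sample text and function names are invented for illustration only. It simply counts how often each word follows another in a tiny corpus and then samples continuations from those counts, which captures the core objective the researchers describe: produce a statistically plausible next word, with no check on whether the result is true.

```python
# Toy bigram text generator: an illustration of next-word prediction,
# not an implementation of ChatGPT or any real LLM.
import random
from collections import defaultdict, Counter

# A tiny made-up "training corpus" for demonstration purposes.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text that merely "sounds right" given the statistics;
# nothing in the process represents truth or falsity.
word = "the"
generated = [word]
for _ in range(8):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```

Running the sketch yields fluent-looking strings such as “the cat slept on the mat and the cat” purely because those word sequences were statistically likely, which is the point at issue: plausibility, not accuracy, is what the process optimizes.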

Unlike human brains, which have a variety of goals and behaviors, LLMs have a singular objective: to generate text that closely resembles human language. This means their primary function is to replicate the patterns and structures of human speech and writing, not to understand or convey factual information.

The term “AI hallucination” is used to describe instances when an LLM like ChatGPT produces inaccurate or entirely fabricated information. This term suggests that the AI is experiencing a perceptual error, akin to a human seeing something that isn’t there. However, this metaphor is misleading, according to Hicks and his colleagues, because it implies that the AI has a perspective or an intent to perceive and convey truth, which it does not.

To better understand why these inaccuracies might be better described as bullshit, it is helpful to look at the concept of bullshit as defined by philosopher Harry Frankfurt. In his seminal work, Frankfurt distinguishes bullshit from lying. A liar, according to Frankfurt, knows the truth but deliberately chooses to say something false. In contrast, a bullshitter is indifferent to the truth. The bullshitter’s primary concern is not whether what they are saying is true or false but whether it serves their purpose, often to impress or persuade.

Frankfurt’s concept highlights that bullshit is characterized by a disregard for the truth. The bullshitter does not care about the accuracy of their statements, only that they appear convincing or fit a particular narrative.

The scholars argue that the output of LLMs like ChatGPT fits Frankfurt’s definition of bullshit better than the concept of hallucination. These models do not have an understanding of truth or falsity; they generate text based on patterns in the data they have been trained on, without any intrinsic concern for accuracy. This makes them akin to bullshitters — they produce statements that can sound plausible without any grounding in factual reality.

The distinction is significant because it influences how we understand and address the inaccuracies produced by these models. If we think of these inaccuracies as hallucinations, we might believe that the AI is trying and failing to convey truthful information.

But AI models like ChatGPT do not have beliefs, intentions, or understanding, Hicks and his colleagues explained. They operate purely on statistical patterns derived from their training data.

When they produce incorrect information, it is not due to a deliberate intent to deceive (as in lying) or a faulty perception (as in hallucinating). Rather, it is because they are designed to create text that looks and sounds right without any intrinsic mechanism for ensuring factual accuracy.

“Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated,” Hicks and his colleagues concluded. “Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived.”

“This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.”

“Calling chatbot inaccuracies ‘hallucinations’ feeds in to overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists,” the scholars wrote.

“It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.”

OpenAI, for its part, has said that improving the factual accuracy of ChatGPT is a key goal.

“Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we’re making progress,” the company wrote in a 2023 blog post. “By leveraging user feedback on ChatGPT outputs that were flagged as incorrect as a main source of data—we have improved the factual accuracy of GPT-4. GPT-4 is 40% more likely to produce factual content than GPT-3.5.”

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate. However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”

The paper, “ChatGPT is bullshit,” was published June 8, 2024.
