
Both humans and AI hallucinate — but not in the same way

by Sarah Vivienne Bentley and Claire Naughtin
July 11, 2023

The launch of ever-capable large language models (LLMs) such as GPT-3.5 has sparked much interest over the past six months. However, trust in these models has waned as users have discovered they can make mistakes – and that, just like us, they aren’t perfect.

An LLM that outputs incorrect information is said to be “hallucinating”, and there is now a growing research effort towards minimising this effect. But as we grapple with this task, it’s worth reflecting on our own capacity for bias and hallucination – and how this impacts the accuracy of the LLMs we create.

By understanding the link between AI’s hallucinatory potential and our own, we can begin to create smarter AI systems that will ultimately help reduce human error.

How people hallucinate

It’s no secret people make up information. Sometimes we do this intentionally, and sometimes unintentionally. The latter is a result of cognitive biases, or “heuristics”: mental shortcuts we develop through past experiences.

These shortcuts are often born out of necessity. At any given moment, we can only process a limited amount of the information flooding our senses, and only remember a fraction of all the information we’ve ever been exposed to.

As such, our brains must use learnt associations to fill in the gaps and quickly respond to whatever question or quandary sits before us. In other words, our brains guess what the correct answer might be based on limited knowledge. This is called a “confabulation” and is an example of a human bias.

Our biases can result in poor judgement. Take the automation bias, which is our tendency to favour information generated by automated systems (such as ChatGPT) over information from non-automated sources. This bias can lead us to miss errors and even act upon false information.

Other relevant heuristics include the halo effect, in which our initial impression of something affects our subsequent interactions with it, and the fluency bias, which describes how we favour information presented in an easy-to-read manner.

The bottom line is that human thinking is often coloured by its own cognitive biases and distortions, and these “hallucinatory” tendencies largely occur outside of our awareness.

How AI hallucinates

In an LLM context, hallucinating is different. An LLM isn’t trying to conserve limited mental resources to efficiently make sense of the world. “Hallucinating” in this context just describes a failed attempt to predict a suitable response to an input.

Nevertheless, there is still some similarity between how humans and LLMs hallucinate, since LLMs also do this to “fill in the gaps”.

LLMs generate a response by predicting which word is most likely to appear next in a sequence, based on what has come before, and on associations the system has learned through training.

Like humans, LLMs try to predict the most likely response. Unlike humans, they do this without understanding what they’re saying. This is how they can end up outputting nonsense.
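
To make that idea concrete, here is a deliberately tiny, illustrative sketch of next-word prediction. It is not how any production LLM works (real systems use neural networks over sub-word tokens, not a lookup table), and the toy corpus and function names are invented for illustration; it only shows the core idea of guessing the next word from learned associations.

```python
# A toy illustration of next-word prediction: count which word tends to follow
# which in a tiny corpus, then pick the most likely continuation. Real LLMs use
# neural networks over sub-word tokens, but the core idea -- predict the next
# token from what came before -- is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept and the cat purred".split()

# Learn bigram counts: how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    candidates = follows.get(word)
    if not candidates:
        return None  # no learned association: nothing to "fill the gap" with
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- the most frequent continuation in this corpus
print(predict_next("dog"))  # None -- outside its training data, it can only guess
```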

As to why LLMs hallucinate, there are a range of factors. A major one is being trained on data that are flawed or insufficient. Other factors include how the system is programmed to learn from these data, and how this programming is reinforced through further training under human supervision.

Doing better together

So, if both humans and LLMs are susceptible to hallucinating (albeit for different reasons), which is easier to fix?

Fixing the training data and processes underpinning LLMs might seem easier than fixing ourselves. But this fails to consider the human factors that influence AI systems (and is an example of yet another human bias known as the fundamental attribution error).

The reality is our failings and the failings of our technologies are inextricably intertwined, so fixing one will help fix the other. Here are some ways we can do this.

  • Responsible data management. Biases in AI often stem from biased or limited training data. Ways to address this include ensuring training data are diverse and representative, building bias-aware algorithms, and deploying techniques such as data balancing to remove skewed or discriminatory patterns (one such technique is sketched after this list).
  • Transparency and explainable AI. Despite the above actions, however, biases in AI can remain and can be difficult to detect. By studying how biases can enter a system and propagate within it, we can better explain the presence of bias in outputs. This is the basis of “explainable AI”, which is aimed at making AI systems’ decision-making processes more transparent.
  • Putting the public’s interests front and centre. Recognising, managing and learning from biases in an AI requires human accountability and having human values integrated into AI systems. Achieving this means ensuring stakeholders are representative of people from diverse backgrounds, cultures and perspectives.
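
As a concrete illustration of the data-balancing point above, the sketch below randomly oversamples an under-represented group so that each group contributes equally to a training set. It is a minimal, generic example: the "group" and "label" fields and the toy data are invented, and real pipelines would lean on more careful approaches such as stratified sampling or reweighting.

```python
# A minimal sketch of one data-balancing technique: random oversampling of
# under-represented groups so each group contributes equally to training.
# The "group"/"label" fields and the toy data are invented for illustration.
import random

random.seed(0)

# Toy dataset: group "b" is heavily under-represented relative to group "a".
records = [{"group": "a", "label": 1}] * 90 + [{"group": "b", "label": 0}] * 10

# Bucket records by group.
by_group = {}
for record in records:
    by_group.setdefault(record["group"], []).append(record)

# Oversample every group up to the size of the largest one.
target_size = max(len(members) for members in by_group.values())
balanced = []
for members in by_group.values():
    balanced.extend(members)
    # Top up smaller groups by resampling (with replacement) from their own records.
    balanced.extend(random.choices(members, k=target_size - len(members)))

print({g: sum(r["group"] == g for r in balanced) for g in by_group})  # {'a': 90, 'b': 90}
```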

By working together in this way, it’s possible for us to build smarter AI systems that can help keep all our hallucinations in check.

For instance, AI is being used within healthcare to analyse human decisions. These machine learning systems detect inconsistencies in human data and provide prompts that bring them to the clinician’s attention. As such, diagnostic decisions can be improved while maintaining human accountability.

In a social media context, AI is being used to help train human moderators when trying to identify abuse, such as through the Troll Patrol project aimed at tackling online violence against women.

In another example, combining AI and satellite imagery can help researchers analyse differences in nighttime lighting across regions, and use this as a proxy for the relative poverty of an area (wherein more lighting is correlated with less poverty).

Importantly, while we do the essential work of improving the accuracy of LLMs, we shouldn’t ignore how their current fallibility holds up a mirror to our own.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
