
The psychology behind our anxiety toward black box algorithms

by Paul Jones
January 2, 2026
in Artificial Intelligence

From ChatGPT crafting emails, to AI systems recommending TV shows and even helping diagnose disease, the presence of machine intelligence in everyday life is no longer science fiction.

And yet, for all the promises of speed, accuracy and optimisation, there’s a lingering discomfort. Some people love using AI tools. Others feel anxious, suspicious, even betrayed by them. Why?

The answer isn’t just about how AI works. It’s about how we work: we don’t trust what we can’t understand. Traditional tools feel familiar and legible: you turn a key, and a car starts. You press a button, and a lift arrives.

But many AI systems operate as black boxes: you type something in, and a decision appears. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like being able to interrogate decisions. When we can’t, we feel disempowered.

This is one reason for what’s called algorithm aversion, a term popularised by the marketing researcher Berkeley Dietvorst and colleagues, whose research showed that people often prefer flawed human judgement over algorithmic decision making, particularly after witnessing even a single algorithmic error.

We know, rationally, that AI systems don’t have emotions or agendas. But that doesn’t stop us from projecting them on to AI systems. When ChatGPT responds “too politely”, some users find it eerie. When a recommendation engine gets a little too accurate, it feels intrusive. We begin to suspect manipulation, even though the system has no self.

This is a form of anthropomorphism: attributing humanlike intentions to nonhuman systems. The communication professors Clifford Nass and Byron Reeves, along with others, have demonstrated that we respond socially to machines even when we know they’re not human.

We hate when AI gets it wrong

One curious finding from behavioural science is that we are often more forgiving of human error than machine error. When a human makes a mistake, we understand it. We might even empathise. But when an algorithm makes a mistake, especially if it was pitched as objective or data-driven, we feel betrayed.


This links to research on expectation violation, when our assumptions about how something “should” behave are disrupted. It causes discomfort and loss of trust. We trust machines to be logical and impartial. So when they fail, such as misclassifying an image, delivering biased outputs or recommending something wildly inappropriate, our reaction is sharper. We expected more.

The irony? Humans make flawed decisions all the time. But at least we can ask them “why?”

For some, AI isn’t just unfamiliar, it’s existentially unsettling. Teachers, writers, lawyers and designers are suddenly confronting tools that replicate parts of their work. This isn’t just about automation, it’s about what makes our skills valuable, and what it means to be human.

This can activate a form of identity threat, a concept explored by social psychologist Claude Steele and others. It describes the fear that one’s expertise or uniqueness is being diminished. The result? Resistance, defensiveness or outright dismissal of the technology. Distrust, in this case, is not a bug – it’s a psychological defence mechanism.

Craving emotional cues

Human trust is built on more than logic. We read tone, facial expressions, hesitation and eye contact. AI has none of these. It might be fluent, even charming. But it doesn’t reassure us the way another person can.

This is similar to the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori to describe the eerie feeling when something is almost human, but not quite. It looks or sounds right, but something feels off. That emotional absence can be interpreted as coldness, or even deceit.

In a world full of deepfakes and algorithmic decisions, that missing emotional resonance becomes a problem. Not because the AI is doing anything wrong, but because we don’t know how to feel about it.

It’s important to say: not all suspicion of AI is irrational. Algorithms have been shown to reflect and reinforce bias, especially in areas like recruitment, policing and credit scoring. If you’ve been harmed or disadvantaged by data systems before, you’re not being paranoid, you’re being cautious.

This links to a broader psychological idea: learned distrust. When institutions or systems repeatedly fail certain groups, scepticism becomes not only reasonable, but protective.

Telling people to “trust the system” rarely works. Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.

If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we’re invited to join.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

(c) PsyPost Media Inc
