Assimilation-induced dehumanization: Psychology research uncovers a dark side effect of AI

by Eric W. Dolan
August 11, 2025
in Artificial Intelligence

A series of experiments published in the Journal of Consumer Psychology indicates that when people interact with autonomous agents—such as robots or virtual assistants—that display strong emotional intelligence, they tend to see those machines as more humanlike. But this shift comes with an unintended consequence: people may also begin to see other humans as less human, leading to a higher likelihood of mistreating them.

Autonomous agents have moved beyond purely functional roles. Virtual assistants can chat naturally, social robots can comfort patients, and large language models can engage in humanlike dialogue. These advances have been accompanied by a growing effort to give machines emotional intelligence—the ability to detect and respond to human emotions.

While this shift can make AI interactions more engaging and satisfying, the researchers wanted to explore a less obvious question: what happens to our perceptions of other people when we interact with emotionally capable machines?

Psychological research has long shown that the way we attribute mental capacities—such as the ability to think, plan, and feel—to others affects how we treat them. Emotional capacity, in particular, is seen as central to being human. The team theorized that when people encounter AI with humanlike emotional abilities, they may mentally group it closer to humans. Because AI is still generally perceived as having less of a mind than people, this mental “assimilation” could drag down humanness ratings of actual humans, leading to subtle forms of dehumanization.

The researchers called this phenomenon assimilation-induced dehumanization. They also wanted to know whether this effect could be mitigated, for example, if AI’s abilities were so extreme that people clearly saw them as nonhuman, or if the AI displayed only cognitive skills without emotional understanding.

“With recent technological advancements, we’ve been fascinated by how humanlike AI technologies have become—not just in what they can do, but in how they’re presented and how they interact with people,” explained study author Hye-young Kim, an assistant professor at the London School of Economics and Political Science.

“What’s interesting is that people respond to even subtle humanlike cues in AI, often treating them much like they would other humans. This led us to ask: while AI’s humanlike qualities clearly make machines seem more human, could they also make real people seem more like machines? Our research was driven by the idea that the more human we perceive AI to be, the more it may quietly reshape our understanding of what it means to be human.”

The researchers conducted five main experiments, supplemented with additional studies, involving both embodied AI (like humanoid robots) and disembodied systems (like customer service chatbots). Participants were exposed to AI agents with either high or low socio-emotional capabilities, then asked to evaluate the humanness of the AI and of real people. The researchers also measured behavior toward humans.

The first study aimed to establish a basic link between perceiving socio-emotional ability in a robot and the willingness to mistreat human employees. For this experiment, 195 online participants were divided into two groups. One group watched a video of a humanoid robot, named Atlas, dancing energetically to music—a hedonic and expressive activity. The other group watched the same robot performing parkour—a more mechanical and utilitarian task.

Afterward, all participants read three scenarios describing potentially negative changes to employee welfare, such as replacing workers’ hot meals with meal-replacement shakes, housing them in tiny “micro-capsule” rooms, and making them wear tracking devices to monitor their every move.

The researchers found that participants who watched the dancing robot perceived it as having more of a “mind” than those who watched the parkour robot. These same participants were also significantly more supportive of the harsh and dehumanizing workplace practices, like replacing meals and using invasive tracking.

The second study was designed to get at the psychological mechanism behind the effect, testing the concepts of assimilation and contrast. The experiment involved 451 participants and again used the dancing and parkour robot videos to represent high and low socio-emotional capabilities.

However, a second factor was added: the robot’s capabilities were described as either moderate (similar to what was used in the first study) or extreme. In the extreme condition, the robot was said to have superhuman abilities like infrared, UV, and X-ray vision. After this exposure, participants completed a dehumanization scale, which assesses the extent to which other people are seen as cold and mechanical or as unsophisticated and animal-like.

The second study shed light on the boundary conditions of assimilation-induced dehumanization. When the robot’s capabilities were moderate and comparable to a human’s, the initial finding was replicated: seeing the emotional, dancing robot led to greater dehumanization of people.

However, when the robot had extreme, superhuman abilities, the effect reversed. In that case, seeing the dancing robot made participants perceive people as more human. This suggests that when a machine is clearly in a different category from humans, we contrast ourselves against it, reinforcing our own humanity. But when the line is blurry, assimilation occurs.

The third study sought to pinpoint whether it was socio-emotional skills, specifically, or any advanced human-like skill that caused the effect. A group of 651 participants was divided into three conditions. They read about a fictitious artificial intelligence service. One group read about “EmpathicMind,” a virtual therapy program that could respond to subtle emotions. A second group read about “InsightMind,” a medical diagnosis program with powerful cognitive abilities to analyze complex data. A control group read about a simple survey analysis program. All participants then completed the same dehumanization scale from the previous study.

Results from the third study confirmed that socio-emotional capability is the key ingredient. Only the “EmpathicMind” artificial intelligence, the one with emotional skills, led to increased dehumanization among participants. The highly intelligent “InsightMind” program, which had advanced cognitive but not emotional skills, produced no more dehumanization than the control condition. This demonstrates that it is the perception of a capacity to feel, not just to think, that triggers the assimilation process.

The fourth study was constructed to directly measure the entire proposed chain of events and test it with a consequential choice. The 280 participants first read a fictitious article that framed artificial intelligence as either having remarkable emotional capabilities or having significant limitations in that area. They then used a slider scale to rate the “humanness” of both artificial intelligence agents and a typical person.

For the final part of the study, they were told they could win a $25 gift card and were asked to choose between one for Amazon or one for Costco. Before they chose, they read a real news article detailing Amazon’s allegedly dehumanizing working conditions. The researchers predicted that people who dehumanized workers would be less bothered by this information and more likely to choose the Amazon gift card.

The fourth study provided direct evidence for the full psychological process. Participants exposed to the emotionally capable artificial intelligence rated the technology as more human-like, and critically, they rated actual people as less human-like. This shift in perception had a real-world consequence: they were significantly more likely to choose the Amazon gift card, suggesting they were less troubled by the reports of poor employee treatment.
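To make the chain described above more concrete, the sketch below shows how a serial mediation of this general form (condition → perceived AI humanness → perceived human humanness → gift card choice) is commonly examined. It is purely illustrative: the variable names and the synthetic data are placeholders invented for this example, not the authors’ analysis, measures, or dataset.

```python
# Purely illustrative sketch: synthetic placeholder data and hypothetical
# variable names, not the study's actual analysis or dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 280  # sample size mentioned in the article

# 1 = read about emotionally capable AI, 0 = read about AI with emotional limitations
ai_emotional = rng.integers(0, 2, n)

# Hypothetical 0-100 slider ratings; effect directions mirror those reported in the article
ai_humanness = 40 + 15 * ai_emotional + rng.normal(0, 10, n)       # AI seems more human
human_humanness = 85 - 0.25 * ai_humanness + rng.normal(0, 10, n)  # people seem less human
logit_p = -1.0 + 0.04 * (100 - human_humanness)                    # lower humanness -> Amazon choice
chose_amazon = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"ai_emotional": ai_emotional,
                   "ai_humanness": ai_humanness,
                   "human_humanness": human_humanness,
                   "chose_amazon": chose_amazon})

# Path a: condition -> perceived humanness of the AI
path_a = smf.ols("ai_humanness ~ ai_emotional", df).fit()
# Path b: perceived AI humanness -> perceived humanness of people (controlling for condition)
path_b = smf.ols("human_humanness ~ ai_humanness + ai_emotional", df).fit()
# Path c: perceived humanness of people -> gift card choice (logistic regression)
path_c = smf.logit("chose_amazon ~ human_humanness + ai_humanness + ai_emotional", df).fit(disp=0)

for model in (path_a, path_b, path_c):
    print(model.params.round(3))
```

In practice, researchers typically estimate the indirect effect with bootstrapped confidence intervals rather than inspecting individual paths, but the regressions above capture the structure of the argument: the manipulation shifts how human the AI seems, that shift pulls down how human other people seem, and that lowered rating predicts the consequential choice.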

A final study aimed to demonstrate the effect within a specific company setting. A total of 331 participants read about a company’s new conversational virtual assistant for customer service. For one group, the assistant was described as having high socio-emotional skills, able to adapt its tone to a customer’s emotional needs. For the other group, its abilities were described as purely functional. Participants then rated the humanness of both the virtual assistant and a human customer service operator. As a behavioral measure, they were told they would receive a $0.25 bonus and were given the option to donate it to a fundraising campaign to support the mental health of the company’s human customer service agents.

The fifth study replicated these findings in a direct consumer context. Participants who read about the emotionally intelligent virtual assistant perceived it as more human, which in turn led them to rate the human operator as less human. This translated directly into behavior: they were less likely to donate their bonus payment to a fund supporting the mental health of those same human employees.

“The more we perceive social and emotional capabilities in AI, the more likely we are to see real people as machine-like—less deserving of care and respect,” Kim told PsyPost. “As consumers increasingly interact with AI in customer-facing roles, we should be mindful that this AI-induced dehumanization can make us more prone to mistreating employees or frontline workers without even realizing it.”

However, “this research does not suggest that AI itself is inherently harmful to the quality of our social experiences,” she added. “The issue arises when people unconsciously judge and compare humans and AI agents using the same standard of ‘humanness.’ While intentionally making AI appear more humanlike may help consumers feel more comfortable adopting and using it, we should be mindful of the unintended negative consequences that may follow.”

The authors identify several avenues for future research to clarify and extend their findings on assimilation-induced dehumanization. One priority is to examine how the physical appearance of autonomous agents interacts with their perceived socio-emotional capabilities. The studies here focused on agents with limited humanlike embodiment, which likely minimized identity-threat effects.

Another suggested direction is to explore self-perception. The current work measured changes in perceptions of others, but it remains unclear whether interacting with emotionally capable AI would cause individuals to apply the same dehumanizing shift to themselves. One possibility is that people might accept less humane treatment in workplaces where AI is present, especially if the AI appears to match human socio-emotional abilities. Alternatively, they might engage in self-affirmation strategies, reinforcing their own humanness even if they dehumanize others.

“Going forward, we’re interested in exploring how AI might influence not only how people treat others, but also how we see ourselves—and how that, in turn, shapes our values, preferences, and choices,” Kim explained. “Understanding how these effects unfold over time would be especially important as AI becomes increasingly embedded in our daily lives. Ultimately, we aim to shed light on the responsible design and deployment of AI technologies so they can enhance, rather than erode, our social relationships and human values.”

The study, “AI-induced dehumanization,” was authored by Hye-young Kim and Ann L. McGill.
