PsyPost

Job seekers mask their emotions and act more analytical when evaluated by artificial intelligence

by Karina Petrova
April 3, 2026
in Artificial Intelligence
[Adobe Stock]


When people know an artificial intelligence program is evaluating them for a job or college program, they alter their behavior to appear more analytical and less emotional. These strategic changes could lead to inaccurate evaluations and change who ultimately gets selected for important positions. The findings were published in the Proceedings of the National Academy of Sciences.

With the rapid advancement of computing technology, organizations increasingly use automated tools to sort through applicants. Hiring managers and university admissions officers adopt these automated systems to increase efficiency and handle large volumes of candidates. Often, these programs take the form of video interview software, personality assessments, or automated resume screeners.

At the same time, governments are passing new laws requiring transparency in hiring and admissions. For example, the Artificial Intelligence Act in the European Union mandates that organizations disclose whenever they use algorithmic systems in high-stakes situations. Similar local laws, such as one recently enacted in New York City, also require employers to tell job seekers when automated tools are active.

Because of these transparency rules, candidates are often fully aware that a machine is judging their performance. Researchers wanted to know if this knowledge changes how applicants behave during tests and interviews. People naturally try to manage the impression they make on others, especially when the outcome involves a major life event like a new job or a degree program.

When dealing with human evaluators, applicants often guess what the judge wants to see and act accordingly. The study authors suspected that applicants apply the same strategy to automated systems. They hypothesized that people hold common assumptions about how computers operate and process information.

Specifically, people tend to view machines as purely rational processors of numbers and facts, completely lacking emotional intelligence or intuition. In psychology, these common assumptions are known as lay beliefs. The research team predicted that the lay belief about machine rationality would lead applicants to highlight their logical and analytical traits. They termed this anticipated change in self-presentation “the AI assessment effect.”

Jonas Goergen, a researcher at the Institute of Behavioral Science and Technology at the University of St. Gallen in Switzerland, led the investigation. He collaborated with Emanuel de Bellis, also at the University of St. Gallen, and Anne-Kathrin Klesse at the Rotterdam School of Management in the Netherlands. Together, they conducted an extensive series of experiments to see how machine evaluators influence human self-presentation.

The research team designed twelve separate experiments involving more than 13,000 participants. Before running the formal experiments, they analyzed data from an employment software company called Equalture. They surveyed over 1,400 job applicants who had completed a game-based skills assessment. The candidates who strongly suspected algorithmic involvement in the hiring process reported making the strongest adjustments to their own behavior.


After observing this real-world trend, the authors launched a pilot field study. They posted job advertisements for an event planner position on the freelancing platform Upwork. They divided the postings into two groups and invited freelancers to apply. Half of the applicants were told a human would review their application, while the other half were told a machine would assess their skills.

Applicants then wrote self-descriptions as part of the screening process. They rated themselves on a numerical scale from strongly intuitive to strongly analytical. The applicants who thought a machine was reading their files described themselves as much more analytical than those who thought a human was reading the same text.

To establish clear causality, the team moved to a controlled experimental setting. In one experiment, participants imagined applying for admission to a fictional college called Horizon University. They were randomly assigned to face either a human or a machine reviewer. The participants then answered survey questions about how they mentally approach tasks.

Under the automated reviewer condition, participants reported approaching tasks in a much more analytical and logical manner while suppressing their intuitive side. The researchers asked follow-up questions to understand why this happened. They found that the shift was entirely driven by the participants’ lay belief that the machine program prioritized analytical traits over emotional ones.

The researchers wanted to know if certain types of people were more prone to this behavior shift. In a survey of a representative sample of the United States population, they tested various personality and demographic factors. They found that the tendency to act more analytical in front of a computer was stronger among younger cohorts. Additionally, participants who expressed a high fear of negative evaluation were especially likely to change their self-presentation.

Another experiment tested whether the shift in behavior held true in a more realistic decision-making task involving a clear reward. The researchers invited participants to apply for an exclusive research pool, which offered genuine financial incentives. To gain entry, participants had to rank a list of eight personal attributes to describe themselves. Half of the attributes represented logical traits, and the other half represented intuitive ones.

When the participants thought a machine was reviewing their rankings, they consistently pushed the logical attributes to the top of the list. They assigned the lowest ranks to intuitive characteristics. When participants thought a human was looking at the exact same screen, the rankings were much more balanced across both categories.

The team also tested what happens when people participate in both types of evaluations over time. In a multi-stage experiment, participants answered questions about their true baseline behavior in a standard survey context with no assessment at all. Days later, they went through a simulated application process.

Being evaluated by a machine caused participants to deviate sharply from their baseline true selves. The gap between their honest baseline and their machine-assessed persona was large. The researchers noted that this level of distortion could easily change which candidates pass a hiring threshold.

In their mathematical simulation, more than a quarter of the people who met a strict logical cutoff score under machine assessment would fail to meet that same cutoff if assessed by a human.
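The selection effect described above can be illustrated with a toy simulation. Note that the score distribution, the size of the self-presentation shift, and the cutoff below are invented for illustration; they are not the study's actual model or parameters:

```python
import random

random.seed(0)

def simulate(n=10_000, shift=0.5, cutoff=1.0):
    """Toy model: each candidate has a latent analytical score
    (standard normal). Under machine assessment, candidates
    self-present with an upward shift; under human assessment,
    they report the latent score unchanged. Returns the share
    of machine-condition passers who would have failed the same
    cutoff under human assessment."""
    flipped = 0
    passed_machine = 0
    for _ in range(n):
        latent = random.gauss(0, 1)
        machine_score = latent + shift  # inflated self-presentation
        if machine_score >= cutoff:
            passed_machine += 1
            if latent < cutoff:  # would fail under human assessment
                flipped += 1
    return flipped / passed_machine

print(simulate())  # fraction of machine-passers below the human cutoff
```

Even a modest uniform shift moves a sizable fraction of candidates across a fixed threshold, which is the basic logic behind the researchers' finding that more than a quarter of machine-assessed passers would not have cleared the same cutoff when judged by a human.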

To verify the underlying psychological mechanism, the researchers manipulated the participants’ assumptions directly. In two experimental variations, they asked participants to actively imagine a computer program capable of deep emotional and intuitive understanding. They instructed the participants to write out reasons why a machine might be good at reading feelings.

When participants were prompted to view the machine as emotionally capable, the behavior shift completely disappeared. They stopped exaggerating their logical skills. In some cases, they actually presented themselves as less analytical and more intuitive than a human-assessed control group.

The researchers also tested a hybrid approach, which is common in modern corporate hiring. They told participants that a computer would perform the first round of screening, but a human manager would make the final hiring decision. This combination reduced the behavior change, but applicants still exaggerated their logical traits to a noticeable degree.

These shifts in applicant behavior could create serious problems for the validity of automated hiring systems. If candidates present an artificial version of themselves, organizations will not get an accurate measure of their actual skills. The researchers suggest that employers should be aware of this effect and take steps to address it. A company looking for a highly empathetic social worker, for instance, might end up hiring an overly analytical person who was merely trying to appease a software program.

The research team noted that future work should look at varied contexts outside of business hiring. For instance, automated evaluations are sometimes used to determine who is granted access to public services or loan approvals. It remains to be seen whether applicants for government benefits would alter their behavior in the same way.

Most of the current experiments focused exclusively on the spectrum between logic and intuition. The researchers explained that other personality dimensions might also be affected by automated screening. Preliminary data from their exploratory models hinted that applicants might also downplay their creativity and risk-taking when faced with a machine auditor. Participants also seemed to adjust their ethical and social considerations.

Additionally, new technological advances might change how people react to automated assessment systems over time. The sudden rise of advanced generative conversational chatbots could alter the public perception of what machines can understand. Future studies will need to track whether candidates respond differently to software programs that appear more conversational and human-like.

The study, “AI assessment changes human behavior,” was authored by Jonas Goergen, Emanuel de Bellis, and Anne-Kathrin Klesse.

(c) PsyPost Media Inc
