PsyPost

Personalization algorithms create an illusion of competence, study finds

by Eric W. Dolan
December 2, 2025
in Cognitive Science, Social Media

A new study published in the Journal of Experimental Psychology: General suggests that personalization algorithms used by online content platforms may actively hinder learning. The findings provide evidence that when algorithms tailor information to a user’s behavior, the user may develop a biased understanding of the subject while simultaneously feeling overconfident in that inaccurate knowledge.

The study was conducted by Giwon Bahg from the Department of Psychology at Vanderbilt University, alongside Vladimir M. Sloutsky and Brandon M. Turner from the Department of Psychology at The Ohio State University. Previous scientific inquiries into personalization have often focused on how these systems reinforce existing beliefs, such as political ideologies or social attitudes. This phenomenon is often referred to as a “filter bubble.”

The research team sought to determine if these algorithms affect basic cognitive processes when a person attempts to learn about an entirely new topic where they have no prior opinions. They investigated whether the mechanism of tailoring content to increase consumption might inadvertently limit exposure to the broader environment.

This restriction could prevent users from forming an accurate mental map of reality. The researchers aimed to simulate how an individual might try to learn about a new domain, such as foreign cinema or a scientific concept, through a curated feed.

To test their hypothesis, the researchers recruited 343 participants through an online platform. After excluding data from sessions that were incomplete or failed to meet specific quality standards, the final analysis included 200 participants.

The researchers designed a task involving completely fictional categories to ensure that prior knowledge did not influence the results. Participants were asked to learn how to categorize strange, crystal-like “aliens.”

These digital creatures possessed six distinct visual features that defined their category. The features included location on a line, the radius of a circle, brightness, orientation, curvature, and spatial frequency. The goal for the participants was to learn the structure of these alien categories by observing various examples.

The experiment consisted of a learning phase followed by a testing phase. During the learning phase, the specific features of the aliens were initially hidden behind gray boxes. Participants had to click on the boxes to reveal specific features, a process the researchers called information sampling. This setup allowed the team to track exactly what information the participants chose to look at and what they ignored.


The researchers divided the participants into different groups to test the specific effects of algorithmic personalization. One group served as a control and viewed a random assortment of items with all features available to inspect. Another group engaged in active learning, where they freely chose which categories to study without algorithmic interference.

The experimental groups interacted with a personalization algorithm modeled after the collaborative filtering systems used by video-sharing platforms like YouTube. This algorithm tracked which features a participant tended to click during the trials.

It then recommended subsequent items that made it easier to continue that specific pattern of clicking. Consequently, the system created a feedback loop that presented items similar to those the user had already engaged with.

This setup mimicked how online platforms prioritize content engagement over information diversity to maximize revenue. The algorithm was trained to predict which items would result in the most clicks from the user. It then populated the user’s feed with those high-engagement items.
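The paper’s exact algorithm is not reproduced in the article, but the feedback loop it describes can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration: the function names (`recommend`, `simulate_user`), the exploration rate, and the click-maximizing heuristic are invented stand-ins for a collaborative-filtering system, not the researchers’ implementation.

```python
import random
from collections import Counter

# The six alien feature dimensions described in the study
FEATURES = ["location", "radius", "brightness",
            "orientation", "curvature", "frequency"]

def recommend(click_history, k=3, explore=0.1):
    """Foreground k features, favoring whatever the user has clicked
    most often (an engagement-maximizing heuristic)."""
    if not click_history or random.random() < explore:
        return random.sample(FEATURES, k)
    counts = Counter(click_history)
    # Rank features by past clicks; break ties randomly
    ranked = sorted(FEATURES, key=lambda f: (-counts[f], random.random()))
    return ranked[:k]

def simulate_user(recommended):
    """A user who clicks one of the recommended features."""
    return random.choice(recommended)

random.seed(0)
history = []
for _ in range(100):
    recs = recommend(history)
    history.append(simulate_user(recs))

# The loop concentrates clicks on a narrow subset of features
print(Counter(history).most_common(3))
```

Because each recommendation is conditioned on past clicks, early random preferences get amplified, which is the self-reinforcing narrowing the study attributes to personalization.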

The data analysis revealed significant differences in how the different groups gathered information. Participants in the personalized conditions sampled substantially fewer features than those in the control or active learning groups.

As the learning phase progressed, these participants narrowed their focus even further. The data suggests that they tended to ignore dimensions of the aliens that the algorithm did not prioritize.

The analysis of sampling diversity used a measure called Shannon entropy. This metric showed that the personalized environment effectively trained users to pay attention to a limited slice of the available information. The algorithm successfully constrained the diversity of the categories presented to the users.
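Shannon entropy itself is a standard quantity, so the narrowing effect can be shown concretely. The sample data below is hypothetical, assuming the six feature names from the task; the point is only that a feed concentrated on a few features yields lower entropy than broad sampling.

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Entropy (in bits) of a sequence of sampled feature labels.
    Higher values mean more diverse sampling."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Broad sampling across all six features vs. an algorithm-narrowed focus
broad = ["location", "radius", "brightness",
         "orientation", "curvature", "frequency"] * 10
narrow = ["brightness"] * 50 + ["radius"] * 10

broad_H = shannon_entropy(broad)    # log2(6) ≈ 2.585 bits, the maximum
narrow_H = shannon_entropy(narrow)  # well below broad_H
```

Tracking this value across learning trials is one way to quantify the progressive narrowing of attention the researchers report.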

Following the learning phase, the researchers administered a categorization task to measure what the participants had learned. Participants were shown new alien examples and asked to sort them into the correct groups.

The researchers found that individuals who learned through the personalized algorithm made more errors than those in the control group. Their internal representation of the alien categories was distorted.

The algorithm had prevented them from seeing the full variety of the alien population. This led to inaccurate generalizations about how the different features related to one another. The participants effectively learned a skewed version of the reality presented in the experiment.

In addition to accuracy, the study measured the participants’ confidence in their decisions using a rating scale from zero to ten. The analysis showed that participants in the personalized groups frequently reported high confidence levels even when their answers were wrong. This effect was particularly distinct when they encountered items from categories they had rarely or never seen during the learning phase.

Instead of recognizing their lack of knowledge regarding these unfamiliar items, the participants incorrectly applied their limited experience. The results show that when a test item came from an unobserved category, the participants did not report low confidence. They felt sure that their biased knowledge applied to these novel situations.

This indicates a disconnection between actual competence and perceived competence caused by the filtered learning environment. The participants were unaware that the algorithm had hidden significant portions of the information landscape from them. They assumed the limited sample they viewed was representative of the whole.

The authors note that the study utilized a highly controlled, artificial task to isolate the cognitive effects of the algorithms. Real-world interactions with personalization often involve complex semantic content and emotional preferences, which were not present in this experiment. The synthetic nature of the stimuli was a necessary design choice to rule out the influence of pre-existing beliefs.

Future research could investigate how these findings translate to more naturalistic settings, such as news consumption or educational tools. The researchers also suggest exploring how different types of user goals might mitigate the negative effects of personalization. For instance, an algorithm designed to maximize diversity rather than engagement might yield different cognitive outcomes.

The findings provide evidence that the structure of information delivery systems plays a significant role in shaping human cognition. By optimizing for engagement, current algorithms may inadvertently sacrifice the accuracy of user knowledge. This trade-off suggests that online platforms can shape not just what people see, but how they reason about the world.

The study, “Algorithmic Personalization of Information Can Cause Inaccurate Generalization and Overconfidence,” was authored by Giwon Bahg, Vladimir M. Sloutsky, and Brandon M. Turner.

(c) PsyPost Media Inc
