Scientists show how you’re unknowingly sealing yourself in an information bubble

by Eric W. Dolan
June 29, 2025
in Cognitive Science, Social Media

A new study published in PNAS highlights a powerful but often overlooked driver of belief polarization: the way people search for information online. Across 21 experiments involving nearly 10,000 participants, researchers found that people’s prior beliefs influence the search terms they use, and in turn, the search engines’ narrow focus on “relevance” reinforces those beliefs. Even when people aren’t actively seeking to confirm their views, the structure of traditional and AI-powered search tools can trap them in informational echo chambers. The study also shows that relatively simple changes to how search algorithms present information can help break this pattern, encouraging belief updating and broader understanding.

“The inspiration for this research came from a personal experience,” said study author Eugina Leung, an assistant professor of marketing at the Freeman School of Business at Tulane University. “I was visiting my co-author, Oleg Urminsky, at the University of Chicago. It was at the end of November, and I came down with a cold. As I was searching Google for cold medicine, I started paying close attention to the exact words I was using.”

“I noticed that searching for ‘cold medicine side effects’ gave me a very different, and much more alarming, set of results than searching for ‘best medicine for cold symptoms.’ It became clear how my own framing of the search was dramatically shaping the information I received, and that led us to investigate this phenomenon more systematically.”

The authors wanted to test whether the problem lies more in how algorithms deliver results or in the behavior of users themselves. They also wanted to know if anything could be done to reverse this pattern.

To do this, the researchers designed a series of experiments that examined how people search for information, how those searches affect their beliefs, and whether belief change could be promoted by changing either the user’s approach or the platform’s algorithm. Participants took part in studies involving health topics such as caffeine and food risks, as well as broader domains like energy, crime, bitcoin, gas prices, and age-related thinking ability. The studies included real-world platforms like Google, Bing, and ChatGPT, as well as custom-built search engines and AI chatbots.

The first set of studies focused on what the researchers call the “narrow search effect.” Participants were asked to report their beliefs on various topics and then come up with their own search terms to learn more. Coders who were unaware of the study’s purpose then rated those search terms on a scale indicating whether each term was neutral, biased toward confirming the participant’s belief, or biased toward disconfirming it.

In one study, people who thought caffeine was healthy tended to search for things like “benefits of caffeine,” while those with more skeptical views searched for “caffeine dangers.” This happened across many topics, including gas prices, crime, and nuclear energy.

The researchers also found that these belief-driven search terms led to different search results—and that those results influenced people’s beliefs after they searched. In one experiment, participants were randomly assigned to search either “nuclear energy is good” or “nuclear energy is bad.” Because they were randomly assigned, the researchers could assume both groups started with similar average beliefs. Yet after reading the results from their assigned search terms, their beliefs shifted in opposite directions.

Importantly, the belief differences did not come simply from being told what to search for. In a follow-up study, participants were again assigned directional search terms, but everyone was shown an identical set of results, which they believed matched their assigned terms. In that case, beliefs did not diverge between the groups, suggesting that it was the content of the search results, not the assigned terms themselves, that shaped people’s opinions.

The team then explored whether these shifts in belief had real-world consequences. In one study, Dutch undergraduates were asked to search for either the risks or benefits of caffeine, then offered a choice between a caffeinated or decaffeinated energy drink. Those who searched for benefits were not only more positive about caffeine afterward—they were also more likely to choose the caffeinated drink.

“The most important takeaway is that we all create our own mini ‘echo chambers’ without even realizing it,” Leung told PsyPost. “Our existing beliefs unconsciously influence the words we type into a search bar, and because search engines are designed for relevance, they show us results that confirm our initial belief. This happens across many topics, from health to finance, and on all platforms, including Google and new AI chatbots. While telling people to do follow-up searches doesn’t fix it, our research shows that designing search algorithms to provide broader, more balanced viewpoints is a very effective solution.”

“Just to be clear, we believe there is value in getting narrow, hyper-relevant results in many situations. However, we propose that giving people an easy option to see broader perspectives, like a ‘Search Broadly’ button as a counterpart to Google’s ‘I’m Feeling Lucky’ button, would be an incredibly beneficial step toward creating a more informed society.”

Next, the researchers examined potential interventions. Could simply encouraging people to do more searches help? In one experiment, participants were prompted to do a second round of searching. But doing more searches didn’t help much—people stuck to their initial biases, using similar search terms that returned similarly one-sided results.

They also tested a cognitive nudge. Some participants were asked to consider how their beliefs might differ if they had used a different search term. This helped a little—those who thought about the potential influence of their search term beforehand showed slightly more openness to new information. But the effect was limited, and didn’t eliminate the bias.

The most promising results came from changing the algorithms themselves. In several studies, the researchers used a custom search engine that looked and felt like Google but showed either the user’s original search results or a mixture that included balanced perspectives. For instance, someone who typed “caffeine health benefits” might see results not only about benefits, but also about risks. These broader results consistently led to more moderate beliefs.
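As a rough illustration of the kind of broadening intervention described here, one simple approach is to blend results for the user’s query with results for an opposing framing of the same question. The sketch below is hypothetical: the search() stub and the counter-query lookup are stand-ins for a real search backend and are not drawn from the study’s materials.

```python
# A minimal sketch (not the authors' code) of one way a "broadened" results
# page could be assembled: interleave hits for the user's own query with hits
# for a counter-query that covers the opposite framing. The search() stub and
# the COUNTER_QUERIES mapping are hypothetical placeholders.

from itertools import chain, zip_longest

COUNTER_QUERIES = {
    "caffeine health benefits": "caffeine health risks",
    "caffeine dangers": "benefits of caffeine",
}

def search(query: str, k: int = 5) -> list[str]:
    """Placeholder for a real search backend; returns dummy result titles."""
    return [f"Result {i + 1} for '{query}'" for i in range(k)]

def broadened_results(query: str, k: int = 10) -> list[str]:
    """Mix results for the original query with results for its counterpart."""
    counter = COUNTER_QUERIES.get(query)
    if counter is None:
        return search(query, k)  # no broader framing known; fall back to narrow
    narrow = search(query, k // 2)
    opposing = search(counter, k // 2)
    # Alternate the two lists so neither framing dominates the top ranks.
    mixed = chain.from_iterable(zip_longest(narrow, opposing))
    return [r for r in mixed if r is not None][:k]

if __name__ == "__main__":
    for title in broadened_results("caffeine health benefits"):
        print(title)
```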

In one study involving the relationship between age and thinking ability, participants saw either results tailored to their own search or a broader set reflecting both positive and negative perspectives. Those who saw the broader information were more likely to revise their beliefs, even though they found the results just as relevant and useful.

The researchers extended this idea to AI chatbots as well. In one experiment, they designed two versions of a chatbot based on ChatGPT. One version gave narrow answers focused on what the user asked. The other gave broader, more balanced answers, including pros and cons. Even though both chatbots used the same base model, the one offering broader answers led to greater belief updating. People who saw the broad responses rated them as equally useful and relevant as the narrower ones.
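To make the narrow-versus-broad contrast concrete, the sketch below shows one plausible way to implement it with a system prompt, assuming a generic chat-style model interface. The prompts and the call_model hook are illustrative assumptions, not the ones used in the study.

```python
# A minimal sketch of the narrow-vs-broad chatbot contrast described above.
# The system prompts are illustrative guesses, and call_model is a hypothetical
# hook for whatever chat API is actually available.

from typing import Callable

NARROW_SYSTEM = (
    "Answer the user's question directly, focusing only on what was asked."
)
BROAD_SYSTEM = (
    "Answer the user's question, but also summarize the main opposing "
    "viewpoints and evidence on both sides of the topic."
)

def ask(question: str, broad: bool, call_model: Callable[[list[dict]], str]) -> str:
    """Send the same question with either the narrow or the broad system prompt."""
    messages = [
        {"role": "system", "content": BROAD_SYSTEM if broad else NARROW_SYSTEM},
        {"role": "user", "content": question},
    ]
    return call_model(messages)

if __name__ == "__main__":
    # Stand-in model so the sketch runs without an API key.
    echo = lambda msgs: f"[model sees system prompt: {msgs[0]['content']}]"
    print(ask("Is caffeine good for me?", broad=False, call_model=echo))
    print(ask("Is caffeine good for me?", broad=True, call_model=echo))
```

Because both versions send the same question to the same underlying model, any difference in belief updating comes from how the system prompt frames the answer, which mirrors the contrast the study describes.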

“What surprised me most was how well people responded to the broadened search results,” Leung said. “There’s an assumption in the tech world that users demand hyper-relevant, narrowly-focused information, and that giving them anything else would be seen as less useful or frustrating. However, our studies consistently found that this wasn’t true.”

“When we gave people a more balanced set of search results or a broader AI-generated answer, they rated the information as just as useful and relevant as the people who received narrow, belief-confirming results. This is optimistic because it means platforms can potentially contribute to a more informed public without sacrificing user satisfaction.”

The findings also highlight the limitations of relying solely on individual self-awareness to fix the problem. Most people didn’t realize they were searching narrowly. In fact, only a small portion said they had used their search terms to confirm their beliefs. This suggests that the problem isn’t just deliberate bias—it’s a product of natural tendencies and how search platforms respond to them.

But as with all research, there are some caveats.

“It’s important to be clear about the specific conditions where this effect is most powerful,” Leung noted. “As we lay out in the paper, the ‘narrow search effect’ and the benefits of broadening search results are most likely to occur under a specific set of circumstances: First, users need to hold some kind of prior belief, even a subtle one, that influences the search terms they choose. Second, the search technology must actually provide different, narrower results depending on the direction of the query. And third, users’ beliefs on the topic must be malleable enough to be updated by the information they receive.”

“This means our findings may not apply in every situation. For instance, if a major news event creates a shared, prominent cue, everyone might search using the exact same term, regardless of their personal beliefs. Similarly, for topics where information is scarce or already cleaned up by the search platforms, different search terms might lead to the same balanced results anyway. And for identity-defining political beliefs, people may be highly resistant to changing their minds, no matter what information they are shown.”

Nevertheless, the findings point to a need for design strategies that balance relevance with informational breadth. If the default settings of our digital tools consistently show us what we already believe, belief polarization may become harder to undo.

“Our overarching goal is to use these insights to help design a healthier and more robust information ecosystem,” Leung explained. “We plan to pursue this in a few directions: First, we want to go deeper into the psychology of the searcher. We’re interested in understanding who is most susceptible to this effect and why. Second, we hope to figure out how to apply these principles to highly polarized topics and areas plagued by misinformation. How can we design systems that broaden perspectives without amplifying harmful or dangerous content?”

“Ultimately, we hope this line of research helps create a new standard for search and AI design – one that accounts for human psychology by default to create a more shared factual foundation for society.”

“I think it’s crucial to understand that this isn’t just about ‘filter bubbles’ being created for us by algorithms; it’s also about the echo chambers we unintentionally create ourselves through our search habits,” Leung added. “The good news is that the solution doesn’t require us to rewire our brains. Instead, we can make simple but powerful design changes to the technology we use every day. With thoughtful design, the same AI and search tools that risk reinforcing our biases can become powerful instruments for broadening our perspectives and leading to more informed decisions.”

The study, “The narrow search effect and how broadening search promotes belief updating,” was authored by Eugina Leung and Oleg Urminsky.
