A new study published in PNAS highlights a powerful but often overlooked driver of belief polarization: the way people search for information online. Across 21 experiments involving nearly 10,000 participants, researchers found that people’s prior beliefs influence the search terms they use, and in turn, the search engines’ narrow focus on “relevance” reinforces those beliefs. Even when people aren’t actively seeking to confirm their views, the structure of traditional and AI-powered search tools can trap them in informational echo chambers. The study also shows that relatively simple changes to how search algorithms present information can help break this pattern, encouraging belief updating and broader understanding.
“The inspiration for this research came from a personal experience,” said study author Eugina Leung, an assistant professor of marketing at the Freeman School of Business at Tulane University. “I was visiting my co-author, Oleg Urminsky, at the University of Chicago. It was at the end of November, and I came down with a cold. As I was searching Google for cold medicine, I started paying close attention to the exact words I was using.”
“I noticed that searching for ‘cold medicine side effects’ gave me a very different, and much more alarming, set of results than searching for ‘best medicine for cold symptoms.’ It became clear how my own framing of the search was dramatically shaping the information I received, and that led us to investigate this phenomenon more systematically.”
The authors wanted to test whether the problem lies more in how algorithms deliver results or in the behavior of users themselves. They also wanted to know if anything could be done to reverse this pattern.
To do this, the researchers designed a series of experiments that examined how people search for information, how those searches affect their beliefs, and whether belief change could be promoted by changing either the user’s approach or the platform’s algorithm. Participants took part in studies involving health topics such as caffeine and food risks, as well as broader domains like energy, crime, bitcoin, gas prices, and age-related thinking ability. The studies included real-world platforms like Google, Bing, and ChatGPT, as well as custom-built search engines and AI chatbots.
The first set of studies focused on what the researchers call the “narrow search effect.” Participants were asked to report their beliefs on various topics and then come up with their own search terms to learn more. Coders who were blind to the study’s purpose rated those search terms on a scale indicating whether each term was neutral, biased toward confirming the participant’s beliefs, or biased toward disconfirming them.
In one study, people who thought caffeine was healthy tended to search for things like “benefits of caffeine,” while those with more skeptical views searched for “caffeine dangers.” This happened across many topics, including gas prices, crime, and nuclear energy.
The researchers also found that these belief-driven search terms led to different search results—and that those results influenced people’s beliefs after they searched. In one experiment, participants were randomly assigned to search either “nuclear energy is good” or “nuclear energy is bad.” Because they were randomly assigned, the researchers could assume both groups started with similar average beliefs. Yet after reading the results from their assigned search terms, their beliefs shifted in opposite directions.
Importantly, the differences in belief didn’t come simply from being told what to search. In a follow-up study, participants were assigned different search terms but all saw identical search results, which they believed matched their queries. In that case, the groups’ beliefs did not end up differing, suggesting that it was the content of the search results, not the assigned terms themselves, that shaped their opinions.
The team then explored whether these shifts in belief had real-world consequences. In one study, Dutch undergraduates were asked to search for either the risks or benefits of caffeine, then offered a choice between a caffeinated and a decaffeinated energy drink. Those who searched for benefits were not only more positive about caffeine afterward—they were also more likely to choose the caffeinated drink.
“The most important takeaway is that we all create our own mini ‘echo chambers’ without even realizing it,” Leung told PsyPost. “Our existing beliefs unconsciously influence the words we type into a search bar, and because search engines are designed for relevance, they show us results that confirm our initial belief. This happens across many topics, from health to finance, and on all platforms, including Google and new AI chatbots. While telling people to do follow-up searches doesn’t fix it, our research shows that designing search algorithms to provide broader, more balanced viewpoints is a very effective solution.”
“Just to be clear, we believe there is value in getting narrow, hyper-relevant results in many situations. However, we propose that giving people an easy option to see broader perspectives, like a ‘Search Broadly’ button as a counterpart to Google’s ‘I’m Feeling Lucky’ button, would be an incredibly beneficial step toward creating a more informed society.”
Next, the researchers examined potential interventions. Could simply encouraging people to do more searches help? In one experiment, participants were prompted to do a second round of searching. But doing more searches didn’t help much—people stuck to their initial biases, using similar search terms that returned similarly one-sided results.
They also tested a cognitive nudge. Some participants were asked to consider how their beliefs might differ if they had used a different search term. This helped a little—those who thought about the potential influence of their search term beforehand showed slightly more openness to new information. But the effect was limited and did not eliminate the bias.
The most promising results came from changing the algorithms themselves. In several studies, the researchers used a custom search engine that looked and felt like Google but showed either the user’s original search results or a mixture that included balanced perspectives. For instance, someone who typed “caffeine health benefits” might see results not only about benefits, but also about risks. These broader results consistently led to more moderate beliefs.
In one study involving the relationship between age and thinking ability, participants saw either results tailored to their own search or a broader set reflecting both positive and negative perspectives. Those who saw the broader information were more likely to revise their beliefs, even though they found the results just as relevant and useful.
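The article does not describe the custom engine’s internals, so the snippet below is only a minimal sketch of the broadening idea: retrieve results for the user’s own query, retrieve results for a reframed version of it, and interleave the two ranked lists. The `broaden_results`, `reframe`, and `search` names, and the string-replacement reframing in the demo, are illustrative assumptions rather than the authors’ implementation.

```python
from itertools import chain, zip_longest

def broaden_results(user_query, reframe, search, k=10):
    """Return a result list that mixes the user's framing with an alternative framing.

    user_query -- the term the user typed (e.g. "caffeine health benefits")
    reframe    -- function producing a counter- or neutrally framed query
                  (e.g. "caffeine health risks"); how to reframe is an assumption here
    search     -- any function mapping a query string to a ranked list of results
    """
    original = search(user_query)              # results matching the user's own framing
    alternative = search(reframe(user_query))  # results for the other framing

    # Interleave the two ranked lists so both perspectives appear near the top,
    # skipping duplicates and stopping at k results.
    merged, seen = [], set()
    for result in chain.from_iterable(zip_longest(original, alternative)):
        if result is not None and result not in seen:
            seen.add(result)
            merged.append(result)
        if len(merged) == k:
            break
    return merged


# Toy usage with stubbed-in components (purely illustrative):
if __name__ == "__main__":
    fake_index = {
        "caffeine health benefits": ["benefit article 1", "benefit article 2"],
        "caffeine health risks": ["risk article 1", "risk article 2"],
    }
    results = broaden_results(
        "caffeine health benefits",
        reframe=lambda q: q.replace("benefits", "risks"),
        search=lambda q: fake_index.get(q, []),
    )
    print(results)  # ['benefit article 1', 'risk article 1', 'benefit article 2', 'risk article 2']
```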
The researchers extended this idea to AI chatbots as well. In one experiment, they designed two versions of a chatbot based on ChatGPT. One version gave narrow answers focused on what the user asked. The other gave broader, more balanced answers, including pros and cons. Even though both chatbots used the same base model, the one offering broader answers led to greater belief updating. People who saw the broad responses rated them as equally useful and relevant as the narrower ones.
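The article does not say how the two chatbot variants were built, but one plausible way to reproduce the contrast, assumed here purely for illustration, is to run a single base model behind two different system prompts: one asking for a narrowly relevant answer and one asking for a balanced answer that covers pros and cons. The sketch below uses the official OpenAI Python client; the prompt wording and the model name are placeholders, not the study’s materials.

```python
from openai import OpenAI  # assumes the official OpenAI Python client (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompts; the study's actual instructions are not published in the article.
NARROW_PROMPT = (
    "Answer the user's question directly, focusing only on the specific angle they ask about."
)
BROAD_PROMPT = (
    "Answer the user's question, but present a balanced view: cover both the benefits and the "
    "risks or counterarguments relevant to the topic, even if the question is framed one-sidedly."
)

def ask(question: str, broad: bool = False, model: str = "gpt-4o-mini") -> str:
    """Query the same base model with either a narrow or a broadened system prompt."""
    response = client.chat.completions.create(
        model=model,  # model name is an assumption, not the one used in the study
        messages=[
            {"role": "system", "content": BROAD_PROMPT if broad else NARROW_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example: the same belief-laden question, answered narrowly vs. broadly.
# print(ask("Is caffeine good for your health?", broad=False))
# print(ask("Is caffeine good for your health?", broad=True))
```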
“What surprised me most was how well people responded to the broadened search results,” Leung said. “There’s an assumption in the tech world that users demand hyper-relevant, narrowly-focused information, and that giving them anything else would be seen as less useful or frustrating. However, our studies consistently found that this wasn’t true.”
“When we gave people a more balanced set of search results or a broader AI-generated answer, they rated the information as just as useful and relevant as the people who received narrow, belief-confirming results. This is optimistic because it means platforms can potentially contribute to a more informed public without sacrificing user satisfaction.”
The findings also highlight the limitations of relying solely on individual self-awareness to fix the problem. Most people didn’t realize they were searching narrowly. In fact, only a small portion said they had used their search terms to confirm their beliefs. This suggests that the problem isn’t just deliberate bias—it’s a product of natural tendencies and how search platforms respond to them.
But as with all research, there are some caveats.
“It’s important to be clear about the specific conditions where this effect is most powerful,” Leung noted. “As we lay out in the paper, the ‘narrow search effect’ and the benefits of broadening search results are most likely to occur under a specific set of circumstances: First, users need to hold some kind of prior belief, even a subtle one, that influences the search terms they choose. Second, the search technology must actually provide different, narrower results depending on the direction of the query. And third, users’ beliefs on the topic must be malleable enough to be updated by the information they receive.”
“This means our findings may not apply in every situation. For instance, if a major news event creates a shared, prominent cue, everyone might search using the exact same term, regardless of their personal beliefs. Similarly, for topics where information is scarce or already cleaned up by the search platforms, different search terms might lead to the same balanced results anyway. And for identity-defining political beliefs, people may be highly resistant to changing their minds, no matter what information they are shown.”
Nevertheless, the findings point to a need for design strategies that balance relevance with informational breadth. If the default settings of our digital tools consistently show us what we already believe, belief polarization may become harder to undo.
“Our overarching goal is to use these insights to help design a healthier and more robust information ecosystem,” Leung explained. “We plan to pursue this in a few directions: First, we want to go deeper into the psychology of the searcher. We’re interested in understanding who is most susceptible to this effect and why. Second, we hope to figure out how to apply these principles to highly polarized topics and areas plagued by misinformation. How can we design systems that broaden perspectives without amplifying harmful or dangerous content?”
“Ultimately, we hope this line of research helps create a new standard for search and AI design – one that accounts for human psychology by default to create a more shared factual foundation for society.”
“I think it’s crucial to understand that this isn’t just about ‘filter bubbles’ being created for us by algorithms; it’s also about the echo chambers we unintentionally create ourselves through our search habits,” Leung added. “The good news is that the solution doesn’t require us to rewire our brains. Instead, we can make simple but powerful design changes to the technology we use every day. With thoughtful design, the same AI and search tools that risk reinforcing our biases can become powerful instruments for broadening our perspectives and leading to more informed decisions.”
The study, “The narrow search effect and how broadening search promotes belief updating,” was authored by Eugina Leung and Oleg Urminsky.