A major new study charts the surprising ways people weigh AI’s risks and benefits

by Eric W. Dolan
August 28, 2025
in Artificial Intelligence

New research provides evidence that people’s evaluations of artificial intelligence are shaped more strongly by perceived usefulness than by fears about safety or harm. The study, published in Technological Forecasting and Social Change, sheds light on how the public navigates the trade-offs between expected benefits and potential risks across a wide range of AI applications—from healthcare and transportation to surveillance and warfare.

Rapid advances in artificial intelligence, particularly in tools like large language models, have generated a mix of optimism and anxiety. While these technologies promise to improve efficiency, support medical diagnoses, and enable new forms of creativity, they also raise concerns about surveillance, misinformation, automation, and power imbalances. Public sentiment, in this context, plays a key role in determining how AI is adopted, governed, and developed. If the public sees AI as untrustworthy or irrelevant to daily life, even highly capable systems may struggle to gain acceptance.

Despite the growing literature on AI ethics and trust, most previous research has either examined attitudes toward specific technologies—such as self-driving cars—or asked people for general impressions of AI. Few studies have attempted to map the full terrain of public opinion across a broad range of future possibilities. This study aimed to fill that gap by systematically analyzing how people balance expectations, perceived risks, and potential benefits across a wide array of imagined AI applications.

“We were motivated by both personal interests and the real-world challenges posed by AI’s rapid expansion. Personally, I use AI in many ways: it helps me write more efficiently, creates playful drawing prompts for my toddlers, and accelerates my learning,” said study author Philipp Brauner, a postdoctoral researcher at the Human-Computer Interaction Center and Chair of Communication Science at RWTH Aachen University.

“At the same time, I’m concerned about its downsides: How much should AI know about me? Who regulates it, and how effective is that regulation? Will humans remain in control? These questions are widely discussed not only in academic circles but also in the media and the public sphere.”

“Given the unprecedented speed of AI development, it is difficult even for experts to maintain an overview of AI and its implications. Our goal was to capture how people currently view AI, across a broad range of applications, and to put those perceptions into context.”

For their study, the researchers recruited 1,100 participants from Germany, using an online survey that was designed to reflect the national population in terms of age, gender, and regional background. The median age of participants was 51 years. The researchers presented each participant with a randomized subset of 15 micro-scenarios, drawn from a pool of 71 brief statements describing hypothetical AI developments expected within the next ten years.
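
The article does not reproduce the authors' assignment procedure, but a minimal sketch of such a randomized-subset design, written in Python with illustrative constants and per-participant seeding (both assumptions, not details from the paper), could look like this:

```python
# Illustrative sketch (not the authors' code): each respondent rates a
# random subset of 15 of the 71 micro-scenarios.
import random

N_SCENARIOS = 71      # pool of brief AI statements
SUBSET_SIZE = 15      # scenarios shown to each participant
N_PARTICIPANTS = 1100

def assign_scenarios(participant_id: int) -> list[int]:
    """Draw a reproducible random subset of scenario indices for one participant."""
    rng = random.Random(participant_id)  # per-participant seed, for illustration only
    return sorted(rng.sample(range(N_SCENARIOS), SUBSET_SIZE))

assignments = {pid: assign_scenarios(pid) for pid in range(N_PARTICIPANTS)}
```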

These scenarios covered a wide range of domains, including AI-generated art, autonomous warfare, healthcare, criminal justice, and emotional relationships. Participants rated each scenario along several dimensions: how likely they thought it was to occur, how personally risky it seemed, how useful or beneficial they believed it would be, and whether they viewed it as positive or negative overall.

In addition to scenario ratings, the researchers collected demographic data and administered short measures of personality traits, including technology readiness, interpersonal trust, self-efficacy, openness, and familiarity with AI systems.

Statistical models were used to analyze both how individual characteristics shaped AI evaluations and how the scenarios themselves were rated in terms of public sentiment. Risk–benefit maps and expectancy–value plots were also created to help visualize the landscape of AI perception.
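
To give a sense of what a risk–benefit map involves, the sketch below averages each scenario's risk and benefit ratings across respondents and plots one against the other. It is not the authors' code, and the ratings and scenario labels are invented placeholders:

```python
# Minimal sketch of a risk-benefit map; all data here are invented placeholders.
import pandas as pd
import matplotlib.pyplot as plt

# Long format: one row per (participant, scenario) rating.
ratings = pd.DataFrame({
    "scenario": ["health coach", "autonomous weapons", "everyday helper"] * 2,
    "risk":     [2.1, 5.6, 2.4, 1.9, 5.8, 2.2],
    "benefit":  [5.2, 1.4, 4.9, 5.0, 1.2, 5.1],
})

# Aggregate to one point per scenario: mean perceived risk and benefit.
means = ratings.groupby("scenario")[["risk", "benefit"]].mean()

fig, ax = plt.subplots()
ax.scatter(means["risk"], means["benefit"])
for name, row in means.iterrows():
    ax.annotate(name, (row["risk"], row["benefit"]))
ax.set_xlabel("Mean perceived risk")
ax.set_ylabel("Mean perceived benefit")
plt.show()
```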

“AI is widely recognized as a transformative technology affecting individuals, organizations, and society in countless ways,” Brauner told PsyPost. “This has led to a surge of research on public perceptions, expectations, and design requirements. However, most studies either focus narrowly on specific applications or address AI in very general terms.”

“Our study stands out in that it covers a wide variety of applications and imaginary futures and visualizes them in a ‘cognitive map’ of public perceptions. This perspective helps us understand overall risk–benefit tradeoffs, while also allowing other researchers to situate their more focused studies within the broader landscape. For instance, they can see whether their work addresses domains viewed as relatively safe and valuable, or ones considered more controversial.”

Across all scenarios, people tended to view future AI developments as fairly likely to occur. However, perceived risk was higher than perceived benefit on average, and overall sentiment was generally negative. Only a minority of scenarios were rated positively, and even those were often seen as risky.

The most positively rated scenarios included AI assisting with health improvement, providing conversation support for the elderly, and serving as a helper for everyday tasks. These were seen as both beneficial and relatively safe. In contrast, scenarios involving AI being used in warfare, making life-and-death decisions, or monitoring private life were rated as highly risky and strongly negative.

“We are often surprised by how participants evaluate certain topics,” Brauner said. “To look at just one example, the statement ‘AI will know everything about me’ was unsurprisingly rated as negative. Yet many people freely share personal details on social media or even with AI chatbots, and much of this data is indeed used to train models. This resembles the ‘privacy paradox’ observed in online privacy research: people express concerns yet behave in ways that contradict them.”

Some scenarios were viewed as likely to occur but also undesirable — such as AI being misused by criminals or automating surveillance. Others, such as AI helping people have better relationships or acting with a sense of responsibility, were seen as unlikely and received little support.

Notably, the perceived likelihood of a scenario had little or no correlation with its perceived benefit or risk. People seemed to evaluate the usefulness and moral desirability of AI independently from whether they believed the technology would become reality.

“Our representative sample of people in Germany believes AI is here to stay,” Brauner said. “However, across the many topics we surveyed, they tended to see AI as risky, with limited benefits and lower overall value. This creates a gap between public perceptions and the optimism often voiced by business leaders and politicians. Whether this reflects a specifically cultural perspective (driven by ‘German Angst?’) or our choice of topics remains an open question, and we would like to replicate the study in other countries.”

When the researchers used regression analysis to predict overall value judgments, they found that perceived benefit was the strongest predictor by far. Risk also played a role, but it had less weight in shaping general attitudes. Together, these two factors explained over 96 percent of the variation in how people evaluated each AI scenario. Perceived likelihood had no significant predictive power.
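
As a rough sketch of this kind of regression (not the authors' model or data), one can fit ordinary least squares at the scenario level. The synthetic data below are constructed so that benefit dominates, risk contributes modestly, and likelihood adds nothing, mimicking the reported pattern:

```python
# OLS sketch with synthetic data mimicking the reported pattern:
# value driven mostly by benefit, weakly by risk, not at all by likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 71  # one aggregated observation per scenario (an assumption for illustration)
benefit    = rng.uniform(1, 6, n)
risk       = rng.uniform(1, 6, n)
likelihood = rng.uniform(1, 6, n)
value = 0.8 * benefit - 0.3 * risk + rng.normal(0, 0.2, n)  # synthetic outcome

X = sm.add_constant(np.column_stack([benefit, risk, likelihood]))
fit = sm.OLS(value, X).fit()
print(fit.rsquared)  # near 1 here; the study reports >96% of variance explained
print(fit.params)    # benefit coefficient dominates; likelihood is near zero
```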

“Somewhat surprisingly, value perceptions were influenced more strongly by perceived benefits than by perceived risks,” Brauner explained. “In other words, if developers or policymakers want to improve acceptance of AI, highlighting tangible benefits may be more effective than trying to mitigate risks. This is a controversial finding, given the real dangers of issues like data breaches or AI-generated misinformation.”

Individual differences also mattered. Older participants tended to rate AI as more risky and less beneficial, and their overall evaluations were more negative than those of younger participants. Gender played a minor role, with women reporting slightly lower overall sentiment toward AI.

However, the strongest individual predictors of positive attitudes were technology readiness and AI familiarity. People who reported feeling comfortable with technology or who had more exposure to AI systems were more likely to rate AI scenarios as beneficial and less likely to view them as threatening.

“Older participants tended to see more risks, fewer benefits, and lower value, while women generally assigned lower value to AI,” Brauner told PsyPost. “However, these differences were moderated by ‘AI readiness’ (knowledge and familiarity with the technology). This points to the importance of education: AI literacy should be integrated into school and university curricula, and individuals should also seek to better understand AI’s possibilities and limitations. AI will remain part of our lives, so society needs to make informed decisions about where it can be trusted (e.g., navigation) and where human oversight is indispensable (e.g., life-and-death decisions).”

The researchers also asked participants to identify the most important focus for AI governance. The top response, chosen by 45 percent, was ensuring human control and oversight. Other priorities included transparency, data protection, and social well-being.

Although the study offers a broad overview of AI attitudes across a wide range of domains, there are some limitations to consider. Because the survey included 71 different scenarios, participants rated only a small portion of them. Each scenario was evaluated briefly, with single-item scales, which may not capture the full complexity of people’s thoughts or values.

“The breadth of our approach is both its strength and its limitation,” Brauner said. “We can provide a ‘bird’s-eye view’ of AI as a transformative technology, but we cannot uncover the detailed motivations, benefits, and barriers for specific applications. That is the task of more focused studies, many of which are already published or underway.”

“Another limitation is the cultural context. Our survey reflects German perspectives, which may be influenced by cultural tendencies such as ‘German angst.’ It is essential to replicate this work in other countries and regions.”

“We have already conducted a small exploratory study comparing German and Chinese students and found both different tradeoffs and differences in the absolute evaluations, but that sample was not representative and we could not control for key variables such as technology use,” Brauner continued. “Much more cross-cultural research is needed, and we would welcome collaborations with other researchers.”

“Looking ahead, we hope to refine and update our ‘map’ regularly to track how perceptions evolve as AI becomes more integrated into everyday life. But we also plan to focus more closely on specific applications, such as AI as a decision aid, and to investigate how evaluations change when people experience errors or failures in these systems and how trust can be restored.”

The study, “Mapping public perception of artificial intelligence: Expectations, risk–benefit tradeoffs, and value as determinants for societal acceptance,” was authored by Philipp Brauner, Felix Glawe, Gian Luca Liehner, Luisa Vervier, and Martina Ziefle.
