New research provides evidence that people’s evaluations of artificial intelligence are shaped more strongly by perceived usefulness than by fears about safety or harm. The study, published in Technological Forecasting and Social Change, sheds light on how the public navigates the trade-offs between expected benefits and potential risks across a wide range of AI applications—from healthcare and transportation to surveillance and warfare.
Rapid advances in artificial intelligence, particularly in tools like large language models, have generated a mix of optimism and anxiety. While these technologies promise to improve efficiency, support medical diagnoses, and enable new forms of creativity, they also raise concerns about surveillance, misinformation, automation, and power imbalances. Public sentiment, in this context, plays a key role in determining how AI is adopted, governed, and developed. If the public sees AI as untrustworthy or irrelevant to daily life, even highly capable systems may struggle to gain acceptance.
Despite the growing literature on AI ethics and trust, most previous research has either examined attitudes toward specific technologies—such as self-driving cars—or asked people for general impressions of AI. Few studies have attempted to map the full terrain of public opinion across a broad range of future possibilities. This study aimed to fill that gap by systematically analyzing how people balance expectations, perceived risks, and potential benefits across a wide array of imagined AI applications.
“We were motivated by both personal interests and the real-world challenges posed by AI’s rapid expansion. Personally, I use AI in many ways: it helps me write more efficiently, creates playful drawing prompts for my toddlers, and accelerates my learning,” said study author Philipp Brauner, a postdoctoral researcher at the Human-Computer Interaction Center and Chair of Communication Science at RWTH Aachen University.
“At the same time, I’m concerned about its downsides: How much should AI know about me? Who regulates it, and how effective is that regulation? Will humans remain in control? These questions are widely discussed not only in academic circles but also in the media and the public sphere.”
“Given the unprecedented speed of AI development, it is difficult even for experts to maintain an overview of AI and its implications. Our goal was to capture how people currently view AI, across a broad range of applications, and to put those perceptions into context.”
For their study, the researchers recruited 1,100 participants from Germany, using an online survey that was designed to reflect the national population in terms of age, gender, and regional background. The median age of participants was 51 years. The researchers presented each participant with a randomized subset of 15 micro-scenarios, drawn from a pool of 71 brief statements describing hypothetical AI developments expected within the next ten years.
These scenarios covered a wide range of domains, including AI-generated art, autonomous warfare, healthcare, criminal justice, and emotional relationships. Participants rated each scenario along several dimensions: how likely they thought it was to occur, how personally risky it seemed, how useful or beneficial they believed it would be, and whether they viewed it as positive or negative overall.
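For concreteness, here is a minimal Python sketch of this randomized-subset design: each participant is assigned 15 scenarios sampled from the pool of 71. The scenario labels, function name, and per-participant seeding are invented for illustration; the article does not describe the survey software the researchers actually used.

```python
# Illustrative sketch of the randomized-subset design: each participant is
# shown 15 scenarios drawn from the pool of 71. Labels and seeding are
# placeholders; the study's actual survey implementation is not specified.
import random

N_POOL, N_SHOWN = 71, 15
scenario_pool = [f"scenario_{i:02d}" for i in range(1, N_POOL + 1)]

def scenarios_for(participant_seed: int) -> list[str]:
    """Draw the random subset of scenarios one participant rates."""
    return random.Random(participant_seed).sample(scenario_pool, N_SHOWN)

# Example: the subsets assigned to the first two participants.
for pid in (1, 2):
    print(pid, scenarios_for(pid))
```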
In addition to scenario ratings, the researchers collected demographic data and administered short measures of personality traits, including technology readiness, interpersonal trust, self-efficacy, openness, and familiarity with AI systems.
Statistical models were used to analyze both how individual characteristics shaped AI evaluations and how the scenarios themselves were rated in terms of public sentiment. Risk–benefit maps and expectancy–value plots were also created to help visualize the landscape of AI perception.
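A risk–benefit map of the kind the researchers describe plots each scenario as a point positioned by its mean ratings. The short Python sketch below shows the idea with made-up numbers; the four scenario labels echo findings reported later in this article, but the coordinates are illustrative assumptions, not the study's data.

```python
# Illustrative risk-benefit map: one point per scenario, positioned by its
# mean perceived risk (x) and mean perceived benefit (y). The scenarios and
# their coordinates are invented for demonstration only.
import numpy as np
import matplotlib.pyplot as plt

scenarios = ["health support", "elder companionship",
             "autonomous weapons", "private-life monitoring"]
mean_risk = np.array([0.4, 0.3, 2.3, 2.1])
mean_benefit = np.array([1.8, 1.5, -1.4, -1.1])

fig, ax = plt.subplots()
ax.scatter(mean_risk, mean_benefit)
for name, x, y in zip(scenarios, mean_risk, mean_benefit):
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))
ax.axhline(0, linewidth=0.5)  # neutral-benefit line
ax.axvline(0, linewidth=0.5)  # neutral-risk line
ax.set_xlabel("mean perceived risk")
ax.set_ylabel("mean perceived benefit")
ax.set_title("Risk-benefit map (illustrative data)")
plt.show()
```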
“AI is widely recognized as a transformative technology affecting individuals, organizations, and society in countless ways,” Brauner told PsyPost. “This has led to a surge of research on public perceptions, expectations, and design requirements. However, most studies either focus narrowly on specific applications or address AI in very general terms.”
“Our study stands out in that it covers a wide variety of applications and imaginary futures and visualizes them in a ‘cognitive map’ of public perceptions. This perspective helps us understand overall risk–benefit tradeoffs, while also allowing other researchers to situate their more focused studies within the broader landscape. For instance, they can see whether their work addresses domains viewed as relatively safe and valuable, or ones considered more controversial.”
Across all scenarios, people tended to view future AI developments as fairly likely to occur. However, perceived risk was higher than perceived benefit on average, and overall sentiment was generally negative. Only a minority of scenarios were rated positively, and even those were often seen as risky.
The most positively rated scenarios included AI assisting with health improvement, providing conversation support for the elderly, and serving as a helper for everyday tasks. These were seen as both beneficial and relatively safe. In contrast, scenarios involving AI being used in warfare, making life-and-death decisions, or monitoring private life were rated as highly risky and strongly negative.
“We are often surprised by how participants evaluate certain topics,” Brauner said. “To look at just one example, the statement ‘AI will know everything about me’ was unsurprisingly rated as negative. Yet many people freely share personal details on social media or even with AI chatbots, and much of this data is indeed used to train models. This resembles the ‘privacy paradox’ observed in online privacy research: people express concerns yet behave in ways that contradict them.”
Some scenarios, such as AI being misused by criminals or automating surveillance, were viewed as likely to occur yet undesirable. Others, such as AI helping people have better relationships or acting with a sense of responsibility, were seen as unlikely and received little support.
Notably, the perceived likelihood of a scenario had little or no correlation with its perceived benefit or risk. People seemed to evaluate the usefulness and moral desirability of AI independently from whether they believed the technology would become reality.
“Our representative sample of people in Germany believes AI is here to stay,” Brauner said. “However, across the many topics we surveyed, they tended to see AI as risky, with limited benefits and lower overall value. This creates a gap between public perceptions and the optimism often voiced by business leaders and politicians. Whether this reflects a specifically cultural perspective (driven by ‘German Angst?’) or our choice of topics remains an open question, and we would like to replicate the study in other countries.”
When the researchers used regression analysis to predict overall value judgments, they found that perceived benefit was the strongest predictor by far. Risk also played a role, but it had less weight in shaping general attitudes. Together, these two factors explained over 96 percent of the variation in how people evaluated each AI scenario. Perceived likelihood had no significant predictive power.
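To make the analysis concrete, here is a hedged Python sketch of a regression of this general shape: scenario-level value ratings modeled as a linear function of mean benefit, risk, and likelihood ratings. Everything in it is synthetic; the variable names and coefficients are chosen only to mimic the reported pattern, not taken from the paper.

```python
# Illustrative regression in the spirit of the analysis described above.
# All data are synthetic; names and coefficients are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_scenarios = 71  # size of the study's scenario pool

# Synthetic scenario-level mean ratings on a roughly -3..+3 scale.
benefit = rng.uniform(-2, 2, n_scenarios)
risk = rng.uniform(-2, 2, n_scenarios)
likelihood = rng.uniform(-1, 3, n_scenarios)

# Outcome built to echo the reported pattern: benefit carries most of the
# weight, risk a smaller (negative) share, likelihood essentially none.
value = 0.9 * benefit - 0.3 * risk + rng.normal(0, 0.2, n_scenarios)

predictors = sm.add_constant(
    pd.DataFrame({"benefit": benefit, "risk": risk, "likelihood": likelihood})
)
fit = sm.OLS(value, predictors).fit()
print(fit.summary())  # reports R^2 and a coefficient per predictor
```

With data generated this way, the summary shows a large, significant benefit coefficient, a smaller risk coefficient, and a likelihood coefficient near zero, echoing (but not reproducing) the pattern the study reports.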
“Somewhat surprisingly, value perceptions were influenced more strongly by perceived benefits than by perceived risks,” Brauner explained. “In other words, if developers or policymakers want to improve acceptance of AI, highlighting tangible benefits may be more effective than trying to mitigate risks. This is a controversial finding, given the real dangers of issues like data breaches or AI-generated misinformation.”
Individual differences also mattered. Older participants tended to rate AI as more risky and less beneficial, and their overall evaluations were more negative than those of younger participants. Gender played a minor role, with women reporting slightly lower overall sentiment toward AI.
However, the strongest individual predictors of positive attitudes were technology readiness and AI familiarity. People who reported feeling comfortable with technology or who had more exposure to AI systems were more likely to rate AI scenarios as beneficial and less likely to view them as threatening.
“Older participants tended to see more risks, fewer benefits, and lower value, while women generally assigned lower value to AI,” Brauner told PsyPost. “However, these differences were moderated by ‘AI readiness’ (knowledge and familiarity with the technology). This points to the importance of education: AI literacy should be integrated into school and university curricula, and individuals should also seek to better understand AI’s possibilities and limitations. AI will remain part of our lives, so society needs to make informed decisions about where it can be trusted (e.g., navigation) and where human oversight is indispensable (e.g., life-and-death decisions).”
The researchers also asked participants to identify the most important focus for AI governance. The top response, chosen by 45 percent, was ensuring human control and oversight. Other priorities included transparency, data protection, and social well-being.
Although the study offers a broad overview of AI attitudes across a wide range of domains, there are some limitations to consider. Because the survey included 71 different scenarios, participants rated only a small portion of them. Each scenario was evaluated briefly, with single-item scales, which may not capture the full complexity of people’s thoughts or values.
“The breadth of our approach is both its strength and its limitation,” Brauner said. “We can provide a ‘bird’s-eye view’ of AI as a transformative technology, but we cannot uncover the detailed motivations, benefits, and barriers for specific applications. That is the task of more focused studies, many of which are already published or underway.”
“Another limitation is the cultural context. Our survey reflects German perspectives, which may be influenced by cultural tendencies such as ‘German Angst’. It is essential to replicate this work in other countries and regions.”
“We have already conducted a small exploratory study comparing German and Chinese students and found both different tradeoffs and differences in the absolute evaluations, but that sample was not representative and we could not control for key variables such as technology use,” Brauner continued. “Much more cross-cultural research is needed, and we would welcome collaborations with other researchers.”
“Looking ahead, we hope to refine and update our ‘map’ regularly to track how perceptions evolve as AI becomes more integrated into everyday life. But we also plan to focus more closely on specific applications, such as AI as a decision aid, and to investigate how evaluations change when people experience errors or failures in these systems and how trust can be restored.”
The study, “Mapping public perception of artificial intelligence: Expectations, risk–benefit tradeoffs, and value as determinants for societal acceptance,” was authored by Philipp Brauner, Felix Glawe, Gian Luca Liehner, Luisa Vervier, and Martina Ziefle.