A recent study provides evidence that artificial intelligence can successfully generate customized images designed to trigger specific human emotions. The findings suggest that these computer-generated pictures work just as well as traditional photographs, while offering the added benefit of being adaptable to different cultures, ages, and genders. The research was published in the journal Advances in Methods and Practices in Psychological Science.
Generative AI refers to computer systems that can create new content, such as text or pictures, based on simple written instructions. Scientists often use collections of photographs to study human emotions, a process known as affect induction.
By showing participants specific images, researchers can reliably trigger feelings like fear, joy, or disgust in a laboratory setting. This allows scientists to study how emotions influence human behavior and decision making.
However, older image collections have started to show their age. Many traditional photographs suffer from low resolution or feature outdated fashion styles that might distract participants from the intended emotion. Existing image databases also tend to lack cultural diversity. Most traditional photos feature Western settings and people, which can limit how well they work for individuals from other parts of the world.
To solve these problems, a massive international team of 46 scientists joined forces. They wanted to see if generative AI could create a more flexible and up-to-date collection of emotional images.
“What led me to explore this topic was really two things. First, in affective science we have long worked with image sets that are valuable, but also come with clear limitations — for example in diversity, flexibility, cultural fit, and the ability to update or expand them for new research questions,” said corresponding author Maciej Behnke, an associate professor of psychology at Adam Mickiewicz University.
“Second, this project also grew out of a personal step in my academic career. After securing tenure in Poland, I decided to take a step back intellectually and begin a bachelor’s degree in computer science, because I wanted to understand AI more deeply rather than only follow it from a distance. Over time, it became natural to combine these two paths: my background in affective science and my growing interest in artificial intelligence.”
The scientists used ChatGPT-4o to write detailed descriptions of existing emotional photographs, and then fed those descriptions into image-generating tools, such as Midjourney and Freepik, to produce the new pictures. They produced 847 distinct images designed to trigger twelve target states: amusement, awe, anger, attachment love, craving, disgust, excitement, fear, joy, neutral, nurturant love, and sadness.
The research team did not rely on the computer programs alone. They used a collaborative process where local cultural experts reviewed the images and asked the AI to make specific adjustments.
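The describe-generate-review loop can be sketched in pseudocode-like Python. This is a minimal illustration, not the authors' actual code: every function name here is a hypothetical stand-in for a model prompt, an image-generation call, or a human expert's judgment.

```python
# Illustrative sketch of a human-in-the-loop stimulus pipeline.
# All functions are hypothetical placeholders, not real APIs.

CULTURES = {"Asian", "African", "Arabic", "Indian", "Latin American", "Western"}

def describe_image(source_label: str) -> str:
    # Stand-in for prompting a language model (e.g., GPT-4o) to write
    # a detailed scene description of an existing emotional photograph.
    return f"detailed scene description derived from {source_label}"

def generate_image(prompt: str, culture: str) -> dict:
    # Stand-in for an image-generation call (e.g., Midjourney or Freepik).
    return {"prompt": prompt, "culture": culture, "approved": False}

def expert_review(image: dict) -> tuple:
    # Stand-in for a local cultural expert's check; here we simply
    # approve images tagged with one of the six target regions.
    ok = image["culture"] in CULTURES
    return ok, ("" if ok else "adjust cultural details")

def human_in_the_loop(source_label: str, culture: str, max_rounds: int = 3) -> dict:
    prompt = describe_image(source_label)
    image = generate_image(prompt, culture)
    for _ in range(max_rounds):
        ok, feedback = expert_review(image)
        if ok:
            image["approved"] = True
            return image
        # Fold the expert's feedback into the prompt and regenerate.
        prompt = f"{prompt}; revision: {feedback}"
        image = generate_image(prompt, culture)
    return image  # still unapproved after max_rounds

result = human_in_the_loop("photo_042", "Latin American")
print(result["approved"])  # True
```

The key design point the study emphasizes is the loop itself: generation output is never accepted directly, but cycles through expert review until it passes or the revision budget runs out.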
This allowed the team to adapt the base images for different demographic groups. They created specific versions of the pictures to reflect six broad cultural regions: Asian, African, Arabic, Indian, Latin American, and Western contexts.
They also made sure the images could feature different types of people. The scientists generated matching variations that changed the sex and age of the people in the photos without altering the overall emotional scene.
To test how well the pictures worked, the researchers recruited 2,470 participants from 58 different countries. The participants were divided across six separate experiments. During the experiments, participants looked at both traditional photographs and the newly generated pictures. Each image appeared on a screen for four seconds.
After seeing a picture, participants rated how strongly they felt different emotions on a scale from one to seven. They also rated whether the image made them feel positive or negative, and whether it made them feel calm or energized.
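Aggregating ratings of this kind typically means averaging each scale across participants per image type. The snippet below is an illustration with made-up numbers and field names, not the study's dataset or analysis code.

```python
# Illustrative aggregation of hypothetical 1-7 ratings; the trial data
# below is invented for demonstration and is not from the study.
from statistics import mean

trials = [
    {"image_type": "ai",    "emotion": "awe", "intensity": 6, "valence": 6, "arousal": 5},
    {"image_type": "ai",    "emotion": "awe", "intensity": 5, "valence": 6, "arousal": 4},
    {"image_type": "photo", "emotion": "awe", "intensity": 5, "valence": 5, "arousal": 4},
    {"image_type": "photo", "emotion": "awe", "intensity": 4, "valence": 5, "arousal": 3},
]

def mean_rating(trials, image_type, scale):
    # Average one rating scale (intensity, valence, or arousal)
    # over all trials of a given image type.
    return mean(t[scale] for t in trials if t["image_type"] == image_type)

print(mean_rating(trials, "ai", "intensity"))     # 5.5
print(mean_rating(trials, "photo", "intensity"))  # 4.5
```

Comparisons like the study's (AI-generated versus traditional images, matched versus mismatched cultures) reduce to contrasts between such per-condition means.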
The scientists found that the computer-generated images were just as effective at eliciting emotional responses as the traditional photographs. For positive emotions like amusement and awe, the AI pictures often produced even stronger reactions.
When participants viewed images tailored to their specific cultural background, they reported slightly stronger emotional responses compared to viewing mismatched images. This suggests that customizing visual materials to fit a person’s culture can improve how well those materials work.
“Our findings showed that culturally adjusted images produced slightly stronger emotional responses, which reinforces the idea that people respond more strongly to stimuli that better reflect their own context,” Behnke told PsyPost. “That also carries a broader message: in psychological research, we should move beyond the habit of showing the same White, Western-centered imagery everywhere.”
The researchers also found that changing the sex or age of the people in the images did not reduce the emotional impact. This provides evidence that scientists can safely change demographic details to fit their specific research needs.
The scientists also estimated the smallest difference in rated emotional intensity that a person can actually notice. They found that even when participants rated two images as producing similar emotional reactions, small shifts in their feelings could still be detected statistically.
The computer-generated images were slightly less effective at triggering negative emotions like sadness or anger. The researchers noted that safety filters built into the AI programs often prevented the creation of mildly graphic or upsetting content.
“I think the key message is that AI can help us do better science,” Behnke said. “We found that AI-generated images were good enough to evoke emotional responses at a level comparable to traditional research image sets, and that they can also be adapted to different cultural contexts without losing their impact. That matters not only for researchers, but also for the public, because it shows that AI-generated images are already powerful enough to influence how people feel.”
One potential misinterpretation of this study is that creating emotional images is now a fully automated process. The researchers point out that human oversight remains strictly necessary to ensure the pictures are psychologically useful and ethically acceptable.
“AI can help generate and refine stimuli, but it still takes human creativity to come up with meaningful ideas and human expertise to judge whether an image is psychologically useful, culturally appropriate, and ethically acceptable,” Behnke said. “Our study really supports a human-in-the-loop model, not a replacement-of-humans model.”
The study also has some limitations. All the experiments were conducted in English, and the broad cultural categories used by the researchers do not capture the full diversity within specific global regions. The AI programs also struggled with certain visual details. Some generated faces appeared unrealistically attractive, and the software sometimes produced anatomical errors like poorly rendered hands.
Because AI technology advances very quickly, specific computer programs change from month to month. This rapid evolution means that scientists will need to constantly update their image collections to keep up with changing standards.
In the future, the research team hopes to explore how AI might generate dynamic videos rather than just still photographs. They also plan to investigate ways to personalize emotional images for individual participants to make scientific studies even more accurate.
“One of our long-term goals is to understand more broadly what AI can contribute to affective science,” Behnke explained. “In related work, we are already examining whether AI models can predict viewers’ emotional reactions to images, which opens up a new set of questions about how AI might support emotion research beyond stimulus creation.”
“More broadly, I think AI could help move the field toward personalization. At the moment, affective science largely relies on one-size-fits-all stimuli, even though people differ a lot in what actually makes them feel happy, sad, afraid, or angry. In the future, AI may allow us to tailor stimuli to the individual, so that emotion induction becomes more reliable and more meaningful for each participant.”
“I would also like to emphasize that this study was only possible because so many people were willing to contribute their time, expertise, and energy,” Behnke added. “For me, that is part of a bigger lesson: I believe large-scale team science is the future of affective science. If we want to understand emotions across people and cultures, we need to stop thinking so locally and start building more global, collaborative research efforts.”
The study, “Using Artificial Intelligence to Generate Affective Images: Methodology and Initial Library,” was authored by Maciej Behnke, Maciej Kłoskowski, Michał Klichowski, Wadim Krzyżaniak, Kacper Szymański, Patryk Maciejewski, Patrycja Chwiłkowska, Marta Kowal, Rafał Jończyk, Jan Nowak, Szymon Kupiński, Dominika Kunc, Stanisław Saganowski, Aakash A. Chowkase, Farida Guemaz, Kevin S. Kertechian, Ameer I. M. T. Maadal, Leonardo A. Aguilar, Barnabas T. Alayande, Vimala Balakrishnan, Dana M. Basnight-Brown, Jordane Boudesseul, Tomás A. D’Amelio, Jovi C. Dacanay, Abhishek Dedhe, Shan Gao, Joao F. G. B. Takayanagi, Md. Rohmotul Islam, Alvaro Mailhos, Christine M. Mpyangu, Moises Mebarak, Arooj Najmussaqib, Ju Hee Park, Ekaterine Pirtskhalava, Eli Rice, Sohrab Sami, Yuki Yamada, Jan Baczyński, Lilianna Dera, Szymon Jęśko-Białek, Jakub Łączkowski, Hubert Marciniak, Filip Nowicki, Bartosz Wilczek, James J. Gross, and Nicholas A. Coles.