Machine learning tools can predict emotion in voices in just over a second

by Deborah Pirchner
March 20, 2024
in Artificial Intelligence
(Photo credit: Adobe Stock)


Can machine learning (ML) tools be used to identify which mood we’re in – and, if so, how accurate are these predictions? A team of researchers has investigated whether very short audio segments are enough for ML models to tell how we feel, independently of the words being spoken. They found that certain models can identify emotion in sound segments with approximately the same accuracy as humans. These models could enable continuous emotion classification in real-time scenarios, the researchers said.

Words are important for expressing ourselves. What we don’t say, however, may be even more instrumental in conveying emotions. Humans can often tell how the people around them feel through non-verbal cues embedded in their voices.

Now, researchers in Germany wanted to find out whether technical tools, too, can accurately predict emotional undertones in fragments of voice recordings. To do so, they compared the accuracy of three ML models in recognizing diverse emotions in audio excerpts. Their results were published in Frontiers in Psychology.

“Here we show that machine learning can be used to recognize emotions from audio clips as short as 1.5 seconds,” said the article’s first author Hannes Diemerling, a researcher at the Center for Lifespan Psychology at the Max Planck Institute for Human Development. “Our models achieved an accuracy similar to humans when categorizing meaningless sentences with emotional coloring spoken by actors.”

Hearing how we feel

The researchers drew nonsensical sentences from two datasets – one Canadian, one German – which allowed them to investigate whether ML models can accurately recognize emotions regardless of language, cultural nuances, and semantic content. Each clip was shortened to a length of 1.5 seconds, as this is how long humans need to recognize emotion in speech. It is also the shortest duration in which overlapping emotions can be avoided. The emotions included in the study were joy, anger, sadness, fear, disgust, and neutral.
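To make that preprocessing step concrete, here is a minimal sketch of how 1.5-second clips could be prepared in Python with librosa. The file names, sampling rate, and spectrogram settings are illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch only: file names, sampling rate, and spectrogram settings
# are assumptions for demonstration, not the study's actual pipeline.
import numpy as np
import librosa

EMOTIONS = ["joy", "anger", "sadness", "fear", "disgust", "neutral"]
CLIP_SECONDS = 1.5      # clip length used in the study
SAMPLE_RATE = 16000     # assumed sampling rate

def load_clip(path: str) -> np.ndarray:
    """Load an audio file and trim or pad it to exactly 1.5 seconds."""
    y, sr = librosa.load(path, sr=SAMPLE_RATE)
    target_len = int(CLIP_SECONDS * sr)
    if len(y) >= target_len:
        return y[:target_len]                        # keep the first 1.5 s
    return np.pad(y, (0, target_len - len(y)))       # pad short clips with silence

def to_mel_spectrogram(y: np.ndarray) -> np.ndarray:
    """Log-mel spectrogram, a common 'visual' input for CNN-style models."""
    mel = librosa.feature.melspectrogram(y=y, sr=SAMPLE_RATE, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

# Example usage with a hypothetical file name:
# y = load_clip("actor_01_anger_sentence_03.wav")
# spec = to_mel_spectrogram(y)   # shape: (64 mel bands, time frames)
```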

Based on training data, the researchers generated ML models that worked in one of three ways: Deep neural networks (DNNs) are like complex filters that analyze sound components such as frequency or pitch – for example, when a voice is louder because the speaker is angry – to identify underlying emotions. Convolutional neural networks (CNNs) scan for patterns in the visual representation of the audio, much like identifying emotions from the rhythm and texture of a voice. The hybrid model (C-DNN) merges both techniques, using both the audio and its visual spectrogram to predict emotions. The models were then tested for effectiveness on both datasets.
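For readers curious what these three approaches look like in code, the following PyTorch sketch mirrors the general idea: a feature-based DNN, a spectrogram CNN, and a hybrid combining both. Layer sizes, feature choices, and the fusion strategy are assumptions made for illustration, not the architectures reported in the paper.

```python
# Minimal sketch of the three model families described above, using PyTorch.
# Layer sizes and the fusion strategy are illustrative assumptions only.
import torch
import torch.nn as nn

NUM_EMOTIONS = 6  # joy, anger, sadness, fear, disgust, neutral

class FeatureDNN(nn.Module):
    """DNN over per-clip acoustic features (e.g., pitch, loudness, MFCC statistics)."""
    def __init__(self, n_features: int = 40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, NUM_EMOTIONS),
        )

    def forward(self, x):          # x: (batch, n_features)
        return self.net(x)

class SpectrogramCNN(nn.Module):
    """CNN over the log-mel spectrogram, treated as a one-channel image."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # global pooling -> (batch, 32, 1, 1)
        )
        self.head = nn.Linear(32, NUM_EMOTIONS)

    def forward(self, x):          # x: (batch, 1, mel_bands, time_frames)
        return self.head(self.conv(x).flatten(1))

class HybridCDNN(nn.Module):
    """Hybrid model: combines the feature branch and the spectrogram branch."""
    def __init__(self, n_features: int = 40):
        super().__init__()
        self.dnn_branch = FeatureDNN(n_features)
        self.cnn_branch = SpectrogramCNN()
        # Concatenate each branch's pre-softmax scores as a simple fusion scheme.
        self.classifier = nn.Linear(2 * NUM_EMOTIONS, NUM_EMOTIONS)

    def forward(self, feats, spec):
        fused = torch.cat([self.dnn_branch(feats), self.cnn_branch(spec)], dim=1)
        return self.classifier(fused)
```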

“We found that DNNs and C-DNNs achieve a better accuracy than only using spectrograms in CNNs,” Diemerling said. “Regardless of model, emotion classification was correct with a higher probability than can be achieved through guessing and was comparable to the accuracy of humans.”


As good as any human

“We wanted to set our models in a realistic context and used human prediction skills as a benchmark,” Diemerling explained. “Had the models outperformed humans, it could mean that there might be patterns that are not recognizable by us.” The fact that untrained humans and models performed similarly may mean that both rely on similar recognition patterns, the researchers said.

The present findings also show that it is possible to develop systems that can instantly interpret emotional cues to provide immediate and intuitive feedback in a wide range of situations. This could lead to scalable, cost-efficient applications in various domains where understanding emotional context is crucial, such as therapy and interpersonal communication technology.

The researchers also pointed to some limitations in their study, for example, that actor-spoken sample sentences may not convey the full spectrum of real, spontaneous emotion. They also said that future work should investigate audio segments that last longer or shorter than 1.5 seconds to find out which duration is optimal for emotion recognition.
