PsyPost

Scientists uncover cross-cultural regularities in songs

by Eric W. Dolan
June 20, 2024
in Music
(Photo credit: Adobe Stock)

Language and music are universal aspects of human culture, yet they manifest in highly diverse forms across different societies. A recent study published in Science Advances aimed to understand the shared features and distinct differences between speech and song across cultures. The findings revealed significant cross-cultural regularities in the acoustic features of songs.

The motivation behind this study stemmed from a longstanding curiosity about the evolutionary functions of language and music. Both forms of vocal communication use rhythm and pitch, leading researchers to speculate on their possible coevolution. Despite these speculations, there has been a lack of empirical data to determine what similarities and differences exist between music and language on a global scale.

Previous studies have explored neural mechanisms and identified some universal features within music and language, but comparative analyses of their acoustic attributes, especially across diverse cultures, have been limited. The new study aimed to fill that gap by examining the acoustic features of speech and song from various cultural contexts to identify potential universal patterns and unique distinctions.

To achieve this, a diverse team of 75 researchers, representing speakers of 55 languages from across Asia, Africa, the Americas, Europe, and the Pacific, was assembled. These researchers included experts in ethnomusicology, music psychology, linguistics, and evolutionary biology. Each researcher recorded themselves performing four types of vocalization: singing a traditional song, reciting the song’s lyrics, describing the song verbally, and performing the song’s melody instrumentally.

One of the most striking results was the consistent use of higher pitches in songs compared to speech. This pattern was observed across all cultures studied, suggesting that higher pitch is a defining characteristic of musical vocalization universally.

Additionally, songs were found to have a slower temporal rate than speech. This slower pace may facilitate synchronization and social bonding, which are essential functions of music in many cultural contexts.

Another significant finding was the greater pitch stability observed in songs compared to speech. Stable pitches are a hallmark of music, and this consistency likely aids in harmonization and the creation of melodious sequences.

Interestingly, speech and song displayed similar timbral brightness, indicating shared vocal mechanisms, and there was no significant difference in pitch interval size between the two forms of vocalization. These similarities suggest that speech and song use pitch in comparable ways, despite their different communicative purposes.

However, the study’s initial hypothesis that pitch declination would differ significantly between speech and song was not supported. This feature, the gradual lowering of pitch over the course of an utterance, did not vary significantly across the vocalizations, indicating that both forms may use pitch declination in similar ways, contrary to what was previously thought.

The study provides “strong evidence for cross-cultural regularities,” according to senior author Patrick Savage, the Director of the CompMusic Lab at the University of Auckland.

The similarities in timbral brightness and pitch interval size suggest underlying constraints on vocalization that apply broadly to both speech and music. On the other hand, the differences in pitch height, temporal rate, and pitch stability highlight the unique characteristics of musical vocalization, which may have evolved to fulfill specific social and communicative functions distinct from those of speech.

Savage suggested that songs are more predictably regular than speech because they serve to facilitate social bonding. This regularity in rhythm and pitch likely helps individuals harmonize and connect with one another. “Slow, regular, predictable melodies make it easier for us to sing together in large groups,” he explained. “We’re trying to shed light on the cultural and biological evolution of two systems that make us human: music and language.”

In addition to their original recordings, the researchers analyzed an alternative dataset consisting of 418 previously published recordings of adult-directed songs and speech. These recordings were collected from 209 individuals who spoke 16 different languages. This additional dataset provided a valuable opportunity to validate the study’s findings and explore whether the observed patterns held true across an even broader range of languages and cultural contexts.

The analysis of the alternative dataset confirmed many of the key findings from the original recordings. Similar to the primary dataset, songs in this collection generally used higher pitches, were slower, and exhibited more stable pitches than speech. These consistent results across two independent datasets reinforce the conclusion that these acoustic features are robust indicators of the differences between song and speech globally.

The study, “Globally, songs and instrumental melodies are slower and higher and use more stable pitches than speech: A Registered Report,” was authored by Yuto Ozaki, Adam Tierney, Peter Q. Pfordresher, John M. McBride, Emmanouil Benetos, Polina Proutskova, Gakuto Chiba, Fang Liu, Nori Jacoby, Suzanne C. Purdy, Patricia Opondo, W. Tecumseh Fitch, Shantala Hegde, Martín Rocamora, Rob Thorne, Florence Nweke, Dhwani P. Sadaphal, Parimal M. Sadaphal, Shafagh Hadavi, Shinya Fujii, Sangbuem Choo, Marin Naruse, Utae Ehara, Latyr Sy, Mark Lenini Parselelo, Manuel Anglada-Tort, Niels Chr. Hansen, Felix Haiduk, Ulvhild Færøvik, Violeta Magalhães, Wojciech Krzyżanowski, Olena Shcherbakova, Diana Hereld, Brenda Suyanne Barbosa, Marco Antonio Correa Varella, Mark van Tongeren, Polina Dessiatnitchenko, Su Zar Zar, Iyadh El Kahla, Olcay Muslu, Jakelin Troy, Teona Lomsadze, Dilyana Kurdova, Cristiano Tsope, Daniel Fredriksson, Aleksandar Arabadjiev, Jehoshaphat Philip Sarbah, Adwoa Arhine, Tadhg Ó Meachair, Javier Silva-Zurita, Ignacio Soto-Silva, Neddiel Elcie Muñoz Millalonco, Rytis Ambrazevičius, Psyche Loui, Andrea Ravignani, Yannick Jadoul, Pauline Larrouy-Maestri, Camila Bruder, Tutushamum Puri Teyxokawa, Urise Kuikuro, Rogerdison Natsitsabui, Nerea Bello Sagarzazu, Limor Raviv, Minyu Zeng, Shahaboddin Dabaghi Varnosfaderani, Juan Sebastián Gómez-Cañón, Kayla Kolff, Christina Vanden Bosch der Nederlanden, Meyha Chhatwal, Ryan Mark David, I. Putu Gede Setiawan, Great Lekakul, Vanessa Nina Borsan, Nozuko Nguqu, and Patrick E. Savage.
