
New neuroscience research upends traditional theories of early language learning in babies

by Eric W. Dolan
December 1, 2023
in Cognitive Science, Developmental Psychology, Neuroimaging
Infant electrical brain responses were recorded using a special headcap. (Credit: Image courtesy of the Centre for Neuroscience in Education, University of Cambridge)

New research suggests that babies learn language primarily through rhythmic rather than phonetic information in their first months of life. The finding challenges the conventional understanding of early language acquisition and highlights the importance of sing-song speech, such as nursery rhymes, for babies. The study was published in Nature Communications.

Traditional theories have posited that phonetic information, the smallest sound elements of speech, forms the foundation of language learning. In language development, acquiring phonetic information means learning to produce and understand these different sounds, recognizing how they form words and convey meaning.

Infants were believed to learn these individual sound elements to construct words. However, recent findings from the University of Cambridge and Trinity College Dublin suggest a different approach to understanding how babies learn languages.

The new study set out to better understand how infants process speech in their first year of life, focusing on the neural encoding of phonetic categories in continuous natural speech. Previous research in this field relied predominantly on behavioral methods and discrete stimuli, which limited insight into how infants perceive and process continuous speech. These traditional methods were often restricted to simple listening scenarios and a handful of phonetic contrasts, which did not fully represent natural speech conditions.

To address these gaps, the researchers used neural tracking measures to assess the neural encoding of the full phonetic feature inventory of continuous speech. This method allowed them to explore how infants’ brains process acoustic and phonetic information in a more naturalistic listening environment.

The study involved a group of 50 infants, monitored at four, seven, and eleven months of age. Each baby was full-term and without any diagnosed developmental disorders. The research team also included 22 adult participants for comparison, though data from five were later excluded.

In a carefully controlled environment, the infant participants were seated in a highchair, about one meter from their caregiver, inside a soundproof chamber. The adults were seated similarly, in an ordinary chair. Each participant, whether infant or adult, was presented with eighteen nursery rhymes played via video recordings. These rhymes, sung or chanted by a native English speaker, were carefully selected to cover a range of phonetic features, and the sounds were delivered at a consistent volume.

To capture how the infants’ brains responded to these nursery rhymes, the researchers used a method called electroencephalography (EEG), which records patterns of brain activity. This technique is non-invasive and involved placing a soft cap with sensors on the infants’ heads to measure their brainwaves.
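For readers curious about what such an analysis involves in practice, here is a minimal sketch (not the authors’ pipeline) of how a continuous infant EEG recording might be prepared for neural-tracking analysis. It assumes the open-source MNE-Python library and a recording stored in FIF format; the file name, filter band, and target sampling rate are illustrative assumptions.

# Minimal EEG preparation sketch, assuming MNE-Python; file name and
# parameters are illustrative, not taken from the study.
import mne

raw = mne.io.read_raw_fif("infant_nursery_rhymes_raw.fif", preload=True)
raw.filter(l_freq=0.5, h_freq=15.0)   # keep the low-frequency activity that tracks speech
raw.resample(64)                      # downsample to match a 64 Hz stimulus representation
eeg = raw.get_data(picks="eeg")       # channels x samples array, ready for a decoding step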

The brainwave data were then analyzed using a decoding algorithm that extracted the phonological information, allowing the researchers to create a “readout” of how the infants’ brains were processing the different sounds in the nursery rhymes. This approach is significant because it moves beyond the traditional method of comparing reactions to individual sounds or syllables, allowing a more comprehensive picture of how continuous speech is processed.
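To give a sense of how such a decoding analysis can work, the sketch below shows a backward (stimulus-reconstruction) model in the spirit of neural-tracking approaches: ridge regression maps time-lagged EEG onto a time-varying phonetic-feature signal derived from the speech. The data arrays, lag range, and regularization strength are illustrative placeholders, not the authors’ exact algorithm or data.

# Backward decoding sketch: reconstruct a phonetic-feature time course from
# time-lagged EEG with ridge regression. Placeholder data, illustrative settings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

fs = 64                                # sampling rate (Hz) after downsampling
lags = np.arange(0, int(0.4 * fs))     # use 0-400 ms of EEG following each stimulus sample

def lagged_design(eeg, lags):
    """Stack time-lagged copies of each EEG channel into one design matrix."""
    n_ch, n_samp = eeg.shape
    X = np.zeros((n_samp, n_ch * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, -lag, axis=1)   # EEG 'lag' samples after the stimulus
        shifted[:, n_samp - lag:] = 0          # zero out the wrapped-around tail
        X[:, i * n_ch:(i + 1) * n_ch] = shifted.T
    return X

# eeg: channels x samples; feature: one phonetic feature (e.g. nasal vs. not)
# coded over time. Random placeholders stand in for real recordings here.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 5000))
feature = rng.standard_normal(5000)

X = lagged_design(eeg, lags)
scores = cross_val_score(Ridge(alpha=1.0), X, feature, cv=5, scoring="r2")
print("cross-validated reconstruction accuracy:", scores.mean())

With real data, the cross-validated reconstruction accuracy indicates how strongly the brain response tracks a given phonetic feature, which is the kind of “readout” described above.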

Contrary to what was previously thought, the researchers found that infants do not process individual speech sounds reliably until they are about seven months old. Even at eleven months, when many babies start to say their first words, the processing of these sounds is still sparse.

Furthermore, the study discovered that phonetic encoding in babies emerged gradually over the first year. The readout of brain activity showed that the processing of speech sounds in infants started with simpler sounds, such as labial and nasal ones, and became more adult-like as the infants grew older.

“Our research shows that the individual sounds of speech are not processed reliably until around seven months, even though most infants can recognize familiar words like ‘bottle’ by this point,” said study co-author Usha Goswami, a professor at the University of Cambridge. “From then individual speech sounds are still added in very slowly – too slowly to form the basis of language.”

The current study is part of the BabyRhythm project, which is led by Goswami.

First author Giovanni Di Liberto, a professor at Trinity College Dublin, added: “This is the first evidence we have of how brain activity relates to phonetic information changes over time in response to continuous speech.”

The researchers propose that rhythmic speech – the pattern of stress and intonation in spoken language – is crucial for language learning in infants. They found that rhythmic speech information was processed by babies as early as two months old, and this processing predicted later language outcomes.

The findings challenge traditional theories of language acquisition that emphasize the rapid learning of phonetic elements. Instead, the study suggests that the individual sounds of speech are not processed reliably until around seven months, and the addition of these sounds into language is a gradual process.

The study underscores the importance of parents talking and singing to their babies, using rhythmic speech patterns such as those found in nursery rhymes. This could significantly influence language outcomes, as rhythmic information serves as a framework for adding phonetic information.

“We believe that speech rhythm information is the hidden glue underpinning the development of a well-functioning language system,” said Goswami. “Infants can use rhythmic information like a scaffold or skeleton to add phonetic information on to. For example, they might learn that the rhythm pattern of English words is typically strong-weak, as in ‘daddy’ or ‘mummy,’ with the stress on the first syllable. They can use this rhythm pattern to guess where one word ends and another begins when listening to natural speech.”

“Parents should talk and sing to their babies as much as possible or use infant directed speech like nursery rhymes because it will make a difference to language outcome,” she added.

While this study offers valuable insights into infant language development, it’s important to recognize its limitations. The research focused on a specific demographic – full-term infants without developmental disorders, mainly from a monolingual English-speaking environment. Future research could look into how infants from different linguistic and cultural backgrounds, or those with developmental challenges, process speech.

Additionally, the study opens up new avenues for exploring how early speech processing relates to language disorders, such as dyslexia. This could be particularly significant in understanding and potentially intervening in these conditions early in life.

The study, “Emergence of the cortical encoding of phonetic features in the first year of life”, was authored by Giovanni M. Di Liberto, Adam Attaheri, Giorgia Cantisani, Richard B. Reilly, Áine Ní Choisdealbha, Sinead Rocha, Perrine Brusini, and Usha Goswami.
