PsyPost

New neuroscience research upends traditional theories of early language learning in babies

by Eric W. Dolan
December 1, 2023
in Cognitive Science, Developmental Psychology, Neuroimaging
Infant electrical brain responses were recorded using a special headcap. (Credit: Image courtesy of the Centre for Neuroscience in Education, University of Cambridge)


New research suggests that babies primarily learn languages through rhythmic rather than phonetic information in their initial months. This finding challenges the conventional understanding of early language acquisition and emphasizes the significance of sing-song speech, like nursery rhymes, for babies. The study was published in Nature Communications.

Traditional theories have posited that phonetic information, the smallest sound elements of speech, forms the foundation of language learning. In language development, acquiring phonetic information means learning to produce and understand these different sounds, recognizing how they form words and convey meaning.

Infants were believed to learn these individual sound elements to construct words. However, recent findings from the University of Cambridge and Trinity College Dublin suggest a different approach to understanding how babies learn languages.

The new study was motivated by the desire to better understand how infants process speech in their first year of life, specifically focusing on the neural encoding of phonetic categories in continuous natural speech. Previous research in this field predominantly used behavioral methods and discrete stimuli, which limited insights into how infants perceive and process continuous speech. These traditional methods were often constrained to simple listening scenarios and few phonetic contrasts, which didn’t fully represent natural speech conditions.

To address these gaps, the researchers used neural tracking measures to assess the neural encoding of the full phonetic feature inventory of continuous speech. This method allowed them to explore how infants’ brains process acoustic and phonetic information in a more naturalistic listening environment.

The study involved a group of 50 infants, monitored at four, seven, and eleven months of age. Each baby was full-term and without any diagnosed developmental disorders. The research team also included 22 adult participants for comparison, though data from five were later excluded.

In a carefully controlled environment, the infant participants were seated in a highchair, a meter away from their caregiver, inside a sound-proof chamber. The adult participants sat under the same conditions in a standard chair. Each participant, whether infant or adult, was presented with eighteen nursery rhymes played via video recordings. These rhymes, sung or chanted by a native English speaker, were carefully selected to cover a range of phonetic features, and the sounds were delivered at a consistent volume.

To capture how the infants’ brains responded to these nursery rhymes, the researchers used a method called electroencephalography (EEG), which records patterns of brain activity. This technique is non-invasive and involved placing a soft cap with sensors on the infants’ heads to measure their brainwaves.

The researchers then analyzed the brainwave data with a decoding algorithm that extracted the phonological information, allowing them to create a “readout” of how the infants’ brains were processing the different sounds in the nursery rhymes. This technique is significant because it moved beyond the traditional method of simply comparing reactions to individual sounds or syllables, allowing a more comprehensive understanding of how continuous speech is processed.
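The study does not publish its analysis code here, but decoding approaches of this general kind in neural-tracking research are often linear "backward models" that map many EEG channels back onto a stimulus feature. The sketch below is purely illustrative, using synthetic data, a hypothetical binary phonetic feature, and closed-form ridge regression; it is not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 1000 time samples from 32 simulated "EEG" channels.
n_samples, n_channels = 1000, 32

# Hypothetical binary phonetic-feature time course (e.g., "nasal sound present").
feature = (rng.random(n_samples) > 0.7).astype(float)

# Simulated EEG: each channel mixes the feature with independent noise.
true_weights = rng.normal(size=n_channels)
eeg = np.outer(feature, true_weights) + rng.normal(scale=2.0, size=(n_samples, n_channels))

# Backward model: ridge regression from EEG channels to the feature.
lam = 1.0  # regularization strength
XtX = eeg.T @ eeg + lam * np.eye(n_channels)
Xty = eeg.T @ feature
w = np.linalg.solve(XtX, Xty)

# The "readout": reconstruct the feature from brain data and score the fit
# with a Pearson correlation between reconstruction and true feature.
readout = eeg @ w
r = np.corrcoef(readout, feature)[0, 1]
print(f"decoding correlation: {r:.2f}")
```

In real studies the score on held-out data indicates how reliably the brain encodes that feature; the developmental result here is that such scores for phonetic features stay low in young infants and rise gradually over the first year.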

Contrary to what was previously thought, the researchers found that infants do not process individual speech sounds reliably until they are about seven months old. Even at eleven months, when many babies start to say their first words, the processing of these sounds is still sparse.

Furthermore, the study found that phonetic encoding in babies emerged gradually over the first year. The “readout” of brain activity showed that infants’ processing of speech sounds began with simpler labial and nasal sounds, and became more adult-like as they grew older.

“Our research shows that the individual sounds of speech are not processed reliably until around seven months, even though most infants can recognize familiar words like ‘bottle’ by this point,” said study co-author Usha Goswami, a professor at the University of Cambridge. “From then individual speech sounds are still added in very slowly – too slowly to form the basis of language.”

The current study is part of the BabyRhythm project, which is led by Goswami.

First author Giovanni Di Liberto, a professor at Trinity College Dublin, added: “This is the first evidence we have of how brain activity relates to phonetic information changes over time in response to continuous speech.”

The researchers propose that rhythmic speech – the pattern of stress and intonation in spoken language – is crucial for language learning in infants. They found that rhythmic speech information was processed by babies as early as two months old, and this processing predicted later language outcomes.

The findings challenge traditional theories of language acquisition that emphasize the rapid learning of phonetic elements. Instead, the study suggests that the individual sounds of speech are not processed reliably until around seven months, and the addition of these sounds into language is a gradual process.

The study underscores the importance of parents talking and singing to their babies, using rhythmic speech patterns such as those found in nursery rhymes. This could significantly influence language outcomes, as rhythmic information serves as a framework for adding phonetic information.

“We believe that speech rhythm information is the hidden glue underpinning the development of a well-functioning language system,” said Goswami. “Infants can use rhythmic information like a scaffold or skeleton to add phonetic information on to. For example, they might learn that the rhythm pattern of English words is typically strong-weak, as in ‘daddy’ or ‘mummy,’ with the stress on the first syllable. They can use this rhythm pattern to guess where one word ends and another begins when listening to natural speech.”

“Parents should talk and sing to their babies as much as possible or use infant directed speech like nursery rhymes because it will make a difference to language outcome,” she added.

While this study offers valuable insights into infant language development, it’s important to recognize its limitations. The research focused on a specific demographic – full-term infants without developmental disorders, mainly from a monolingual English-speaking environment. Future research could look into how infants from different linguistic and cultural backgrounds, or those with developmental challenges, process speech.

Additionally, the study opens up new avenues for exploring how early speech processing relates to language disorders, such as dyslexia. This could be particularly significant in understanding and potentially intervening in these conditions early in life.

The study, “Emergence of the cortical encoding of phonetic features in the first year of life”, was authored by Giovanni M. Di Liberto, Adam Attaheri, Giorgia Cantisani, Richard B. Reilly, Áine Ní Choisdealbha, Sinead Rocha, Perrine Brusini, and Usha Goswami.
