Findings from a machine learning study suggest that some of the speech differences associated with autism are consistent across languages, while others are language-specific. The study, published in the journal PLOS ONE, was conducted with separate samples of English speakers and Cantonese speakers.
Autism spectrum disorder (ASD) is often accompanied by differences in speech prosody. Speech prosody describes aspects of speech, like rhythm and intonation, that help us express emotions and convey meaning with our words. Atypical speech prosody can interfere with a person’s communication and social abilities, for example, by causing a person to misunderstand others or be misunderstood themselves. The reason these speech differences commonly present among autistic people is not fully understood.
Study author Joseph C. Y. Lau and his team wanted to shed light on this topic by studying prosodic features associated with autism across two typologically distinct languages.
“As a speech scientist who is bilingual and born in a foreign country, how different cultures and language experience shape human beings always fascinate me. When studying autism, there are so many novel insights that a cross-cultural and cross-linguistic approach could enlighten us with,” said Lau, a research assistant professor at Northwestern University and member of Molly Losh’s Neurodevelopmental Disabilities Laboratory.
While most related studies have examined English-speaking groups, prosody is used differently across languages. Accordingly, there is reason to believe that the prosodic features associated with autism are also language-specific. With their study, Lau and his colleagues investigated which aspects of speech prosody are reliably associated with autism across languages and which are not.
The participants of the study were native English speakers from the United States and Cantonese speakers from Hong Kong. Among the English group, 55 of the participants were autistic, and 39 were neurotypical. Among the Cantonese group, 28 participants were autistic, and 24 were neurotypical. All participants were asked to narrate the story of a wordless picture book. Their speech was recorded, transcribed, and then divided into individual utterances for further examination.
The researchers used a computer program to extract speech rhythm and intonation measures from the narrative samples. Rhythm refers to variations in the timing and loudness of speech, while intonation refers to variations in voice pitch. They then applied machine learning, a technique in which algorithms learn statistical patterns from data, to attempt to classify each participant as autistic or neurotypical based on these prosodic measures.
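The general pipeline described here, extracting per-speaker rhythm and intonation measures and then training a classifier on them, can be sketched in a few lines. This is an illustrative toy, not the authors' code: the two feature definitions and the nearest-centroid classifier are hypothetical stand-ins (the study used more detailed prosodic measures and its own machine learning setup).

```python
import numpy as np

def prosodic_features(intensity, pitch):
    """Toy prosody measures for one utterance: 'rhythm' as variability in
    loudness change, 'intonation' as variability in voice pitch.
    (Illustrative stand-ins, not the paper's exact feature set.)"""
    rhythm = np.std(np.diff(intensity))    # variation in timing/loudness changes
    intonation = np.std(pitch[pitch > 0])  # pitch variation over voiced frames
    return np.array([rhythm, intonation])

def fit_centroids(X, y):
    """Minimal classifier: remember the mean feature vector of each group."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, X):
    """Assign each speaker to the group with the nearest centroid."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array([labels[i] for i in dists.argmin(axis=0)])

# Synthetic demo: 40 "speakers" whose two prosodic features separate by group.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (20, 2)),   # group 0 (e.g., neurotypical)
               rng.normal(1.0, 0.2, (20, 2))])  # group 1 (e.g., autistic)
y = np.repeat([0, 1], 20)
centroids = fit_centroids(X, y)
accuracy = (predict(centroids, X) == y).mean()
```

A real pipeline would extract the intensity and pitch contours from recordings with a phonetics toolkit and evaluate the classifier with cross-validation rather than scoring it on its own training data, as the synthetic demo does here.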
The findings revealed that speech rhythm reliably distinguished autistic from neurotypical participants in both the English and the Cantonese samples. Speech intonation, however, did so only in the English sample. Further, when the researchers analyzed a combined dataset of English and Cantonese speakers, only speech rhythm reliably separated autistic from neurotypical participants.
These results indicate that there were features of speech rhythm that offered enough information for the machine learning algorithm to distinguish between an autistic speaker and a neurotypical speaker. The authors said this falls in line with past research suggesting that autistic people demonstrate reliable differences in stress patterns, speech rate, and loudness of speech. Moreover, the findings suggest that these differences are consistent across two distinct languages.
“When we use an AI-based analytic method to study features of autism across languages in an objective and holistic way, we can see there are features that are strikingly common in autistic individuals from different parts of the world; meanwhile there are also some other features of autism that are manifested differently, as shaped by their language and culture,” Lau told PsyPost.
“In our case, we found that speech rhythm (i.e., the periodicity of speech patterns) showed such commonalities, whereas intonation (i.e., the variation of pitch when we speak) demonstrated differences cross-linguistically. Identifying common features may give us a pathway to examine the deep biological bases of autism that strongly influence language and behavior in a homogenous way in autism across cultures. On the other hand, cross-culturally or cross-linguistically different features may reflect features of autism that can be more easily changed by experience, which may reflect potentially good targets for clinical intervention.”
Notably, intonation only predicted an autism diagnosis among the English sample, but not the Cantonese sample. The study authors say this could be because Cantonese is a tone language, which means that pitch can be used to change the meaning of words. “It is possible that the prolific usage of linguistic pitch in tone languages provides a compensatory effect ameliorating intonational differences in ASD,” the authors wrote. While further research in this area is needed, this may suggest that autistic people who speak non-tonal languages can benefit from speech interventions that focus on pitch and intonation.
“We believe that the cross-linguistic commonality of rhythmic patterns of autistic speech implicated in our study leads to an important follow-up area of enquiry: whether rhythm is a potentially universal feature of ASD that is less malleable by experience such as language,” Lau explained. “Testing this hypothesis will require a much larger sample size and many other languages, including languages from different language families around the world. We look forward to expanding our research programs and fostering international collaborations that will one day make such an investigation possible.”
“Although the immediate benefit of this study to the autistic community may seem little as it currently stands, we do hope that other than its theoretical implications, this machine learning study can inspire future scientific and technical developments that can make more direct benefits to the autistic community, such as in the realm of AI-assisted healthcare,” the researcher added.
The study, “Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study”, was authored by Joseph C. Y. Lau, Shivani Patel, Xin Kang, Kritika Nayar, Gary E. Martin, Jason Choy, Patrick C. M. Wong, and Molly Losh.