PsyPost

A neuroscientist explains why it’s impossible for AI to “understand” language

by Veena D. Dwivedi
June 12, 2025
in Artificial Intelligence
[Adobe Stock]


As meaning-makers, we use spoken or signed language to understand our experiences in the world around us. The emergence of generative artificial intelligence such as ChatGPT (built on large language models) calls into question the very notion of how to define “meaning.”

One popular characterization of AI tools is that they “understand” what they are doing. Nobel laureate and AI pioneer Geoffrey Hinton said: “What’s really surprised me is how good neural networks are at understanding natural language — that happened much faster than I thought…. And I’m still amazed that they really do understand what they’re saying.”

Hinton repeated this claim in an interview with Adam Smith, chief scientific officer for Nobel Prize Outreach. In it, Hinton stated that “neural nets are much better at processing language than anything ever produced by the Chomskyan school of linguistics.”

Chomskyan linguistics refers to American linguist Noam Chomsky’s theories about the nature of human language and its development. Chomsky proposes that there is a universal grammar innate in humans, which allows for the acquisition of any language from birth.

I’ve been researching how humans understand language since the 1990s, including more than 20 years of studies on the neuroscience of language. This has included measuring brainwave activity as people read or listen to sentences. Given my experience, I have to respectfully disagree with the idea that AI can “understand” — despite the growing popularity of this belief.

Generating text

First, it’s unfortunate that most people conflate text on a screen with natural language. Written text is related to — but not the same thing as — language.

The same language can be represented by vastly different visual symbols. Take Hindi and Urdu, for instance. At conversational levels, these are mutually intelligible and are therefore considered the same language by linguists. However, they use entirely different writing scripts. The same is true for Serbian and Croatian. Written text is not the same thing as “language.”

Next let’s take a look at the claim that machine learning algorithms “understand” natural language. Linguistic communication mostly happens face-to-face, in a particular environmental context shared between the speaker and listener, alongside cues such as spoken tone and pitch, eye contact and facial and emotional expressions.

The importance of context

There is a lot more to understanding what a person is saying than merely being able to comprehend their words. Even babies, who are not experts in language yet, can comprehend context cues.

Take, for example, the simple sentence: “I’m pregnant,” and its interpretations in different contexts. If uttered by me, at my age, it’s likely my husband would drop dead with disbelief. Compare that level of understanding and response to a teenager telling her boyfriend about an unplanned pregnancy, or a wife telling her husband the news after years of fertility treatments.

In each case, the message recipient ascribes a different sort of meaning — and understanding — to the very same sentence.

In my own recent research, I have shown that even an individual’s emotional state can alter brainwave patterns when processing the meaning of a sentence. Our brains (and thus our thoughts and mental processes) are never without emotional context, as other neuroscientists have also pointed out.

So, while some computer code can respond to human language in the form of text, it does not come close to capturing what humans — and their brains — accomplish in their understanding.

It’s worth remembering that when workers in AI talk about neural networks, they mean computer algorithms, not the actual, biological brain networks that characterize brain structure and function. Imagine constantly confusing the word “flight” (as in birds migrating) versus “flight” (as in airline routes) — this could lead to some serious misunderstandings!

Finally, let’s examine the claim about neural networks processing language better than theories produced by Chomskyan linguistics. This field assumes that all human languages can be understood via grammatical systems (in addition to context), and that these systems are related to some universal grammar.

Chomsky conducted research on syntactic theory as a paper-and-pencil theoretician. He did not conduct experiments on the psychological or neural bases of language comprehension. His ideas in linguistics are absolutely silent on the mechanisms underlying sentence processing and understanding.

What the Chomskyan school of linguistics does do, however, is ask questions about how human infants and toddlers can learn language with such ease, barring any neurobiological deficits or physical trauma.

There are at least 7,000 languages on the planet, and no one gets to pick where they are born. That means the human brain must be ready at birth to comprehend and learn the language of its community.

From this fact about language development, Chomsky posited an (abstract) innate module for language learning — not processing. From a neurobiological standpoint, the brain has to be ready to understand language from birth.

While there are plenty of examples of language specialization in infants, the precise neural mechanisms are still unknown, but not unknowable. But objects of study become unknowable when scientific terms are misused or misapplied. And this is precisely the danger: conflating AI with human understanding can lead to dangerous consequences.

 

This article is republished from The Conversation under a Creative Commons license. Read the original article.


