
Advanced AI can mimic human development stages, study finds

by Eric W. Dolan
April 9, 2024
in Artificial Intelligence
(Photo credit: OpenAI's DALL·E)

In the rapidly evolving field of artificial intelligence, a new study published in PLOS One has shed light on an unexpected capability of large language models like ChatGPT: their ability to mimic the cognitive and linguistic abilities of children. The researchers found that these advanced AI systems can simulate lower levels of intelligence, specifically child-like language and understanding, particularly in tasks designed to test Theory of Mind.

Large language models are advanced artificial intelligence systems designed to understand, generate, and interact using natural human language. These models are trained on a vast amount of text data, which enables them to produce remarkably human-like text, answer questions, write essays, translate languages, and even create poetry or code. The architecture of these models allows them to predict the next word in a sentence by considering the context provided by the words that precede it.
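
To make the next-word mechanism concrete, the short sketch below generates a continuation with the small, openly available GPT-2 model via the Hugging Face transformers library. It illustrates the general principle only; the study itself used OpenAI's GPT-3.5-turbo and GPT-4, whose internals are not public.

    # Illustration of next-word prediction with an open model (GPT-2);
    # the study itself used GPT-3.5-turbo and GPT-4, which are not public.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # The model extends the prompt by repeatedly predicting a likely
    # next token given the words that precede it.
    result = generator("The chocolate is in the", max_new_tokens=5)
    print(result[0]["generated_text"])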

Theory of Mind (ToM) is a psychological concept that refers to the ability to attribute mental states—beliefs, intents, desires, emotions, knowledge, etc.—to oneself and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own. This capability is crucial for human interaction, as it enables individuals to predict and interpret the behavior of others, navigate social situations, and engage in empathetic and cooperative behaviors.

The researchers conducted their study to explore the extent to which large language models can simulate not just advanced cognitive abilities but also the more nuanced, developmental stages of human cognitive and linguistic capabilities, specifically those observed in children. This interest stems from the evolving understanding of AI’s capabilities and limitations.

“Thanks to psycholinguistics, we have a relatively comprehensive understanding of what children are capable of at various ages,” explained study author Anna Marklová of the Humboldt University of Berlin. “In particular, the Theory of Mind plays a significant role, as it explores the inner world of the child and is not easily emulated by observing simple statistical patterns.”

“We used this insight to determine whether large language models can pretend to be less capable than they are. In fact, this represents a practical application of concepts that have been discussed in psycholinguistics for decades.”

The researchers conducted 1,296 independent trials, employing GPT-3.5-turbo and GPT-4 to generate responses that would be analyzed for their linguistic complexity and accuracy in solving false-belief tasks. The core objective was to assess if these large language models could adjust their responses to reflect the developmental stages of language complexity and cognitive abilities typical of children aged between 1 and 6 years.
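
The paper's exact prompts are not reproduced in this article, but the general setup can be sketched with the OpenAI chat API: the model is assigned a child persona of a given age in the system prompt and then presented with a task. The prompt wording below is hypothetical, not the study's actual text.

    # A minimal sketch of persona prompting via the OpenAI chat API.
    # The system-prompt wording is hypothetical, not the study's actual text.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def simulate_child(age_years: int, task_text: str, model: str = "gpt-4") -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": f"You are a {age_years}-year-old child. "
                            "Answer as a child of that age would."},
                {"role": "user", "content": task_text},
            ],
        )
        return response.choices[0].message.content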

To assess linguistic complexity, the researchers employed two primary methods: measuring the response length and approximating the Kolmogorov complexity. Response length was chosen as a straightforward metric, operationalized by counting the number of letters in the text generated by the model in response to the prompts.

The Kolmogorov complexity, on the other hand, offers a more nuanced measure of linguistic complexity. It is defined as the minimum amount of information required to describe or reproduce a given string of text.
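
Kolmogorov complexity cannot be computed exactly, so in practice it is approximated; a standard proxy is the size of the text after lossless compression. The sketch below pairs the article's letter-count metric with a zlib-based estimate; the choice of zlib is an assumption for illustration and may differ from the authors' estimator.

    import zlib

    def response_length(text: str) -> int:
        # The study's length metric: the number of letters in the response.
        return sum(ch.isalpha() for ch in text)

    def kolmogorov_estimate(text: str) -> int:
        # Compressed size as an upper-bound proxy for Kolmogorov complexity.
        # zlib is an illustrative choice, not necessarily the authors' tool.
        return len(zlib.compress(text.encode("utf-8")))

    answer = "Maxi will look in the cupboard where he left the chocolate."
    print(response_length(answer), kolmogorov_estimate(answer))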

As the simulated age of the child persona increased, so did the complexity of the language used by the models. This trend was consistent across both GPT-3.5-turbo and GPT-4, indicating that these large language models possess an understanding of language development that allows them to approximate the linguistic capabilities of children at different ages.

The false-belief tasks chosen for this study were the change-of-location and unexpected-content tasks, both foundational in assessing a child’s development of Theory of Mind. As the name implies, these tasks test an individual’s ability to understand that another person can hold a belief that is false.

The change-of-location task involves a character, Maxi, who places an object (like a chocolate) in one location and leaves. While Maxi is gone, the object is moved to a different location. The task is to predict where Maxi will look for the object upon returning. Success in this task indicates an understanding that Maxi’s belief about the location of the object did not change, despite the actual relocation.

In the unexpected-content task, a container typically associated with certain contents (e.g., a candy box) is shown to contain something unexpected (e.g., pencils). The question then explores what a third party, unaware of the switch, would believe is inside the container. This task assesses the ability to understand that others’ beliefs can be based on false premises.
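
For concreteness, the two scenarios can be phrased as prompts like those below and passed to the simulate_child sketch above. The wording is illustrative, not the study's actual stimuli.

    # Hypothetical phrasings of the two false-belief tasks described above;
    # the study's actual stimuli may differ.
    CHANGE_OF_LOCATION = (
        "Maxi puts his chocolate in the blue cupboard and goes out to play. "
        "While he is away, his mother moves the chocolate to the green drawer. "
        "When Maxi comes back, where will he look for his chocolate?"
    )

    UNEXPECTED_CONTENT = (
        "Here is a candy box. It turns out to contain pencils, not candy. "
        "Your friend has never looked inside the box. "
        "What does your friend think is in the box?"
    )

    print(simulate_child(4, CHANGE_OF_LOCATION))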

Both GPT-3.5-turbo and GPT-4 showed an ability to accurately respond to these false-belief scenarios, with performance improving as the age of the simulated child persona increased. This improvement aligns with the natural progression seen in children, where older children typically have a more developed Theory of Mind and are better at understanding that others may hold beliefs different from their own.

“Large language models are capable of feigning lower intelligence than they possess,” Marklová told PsyPost. “This implies that in the development of Artificial Superintelligence (ASI), we must be cautious not to demand that they emulate a human, and therefore limited, intelligence. Additionally, it suggests that we may underestimate their capabilities for an extended period, which is not a safe situation.”

An interesting finding was the occurrence of what the researchers termed “hyper-accuracy” in GPT-4’s responses to false-belief tasks, even at the youngest simulated ages. This phenomenon, where the model displayed a higher-than-expected understanding of ToM concepts, was attributed to the extensive training and reinforcement learning from human feedback (RLHF) that GPT-4 underwent.

RLHF is a training methodology that refines the model’s responses based on feedback from human evaluators, effectively teaching the model to generate more desirable outputs. This approach is part of the broader training and fine-tuning strategies employed to enhance the capabilities of AI systems.

“The effect of RLHF in the new model led to more adult-like responses even in very young personas,” Marklová explained. “It seems that the default setting of new models, i.e., they are ‘helpful assistants,’ adds certain constraints on the diversity of responses we as users can get from them.”

The study’s findings pave the way for several directions for future research. One key area involves further probing the limits of large language models in simulating cognitive and linguistic development stages across a wider array of tasks and contexts.

“We aim to triangulate psycholinguistic research with behavioral studies of large language models,” Marklová said.

Additionally, future studies could explore the implications of these simulations for practical applications, such as personalized learning tools or therapeutic AI that can adapt to the cognitive and emotional development stages of users.

“Our research aims to explore the potential of large language models, not assess if they are ‘good’ or ‘bad,’” Marklová noted.

The study, “Large language models are able to downplay their cognitive abilities to fit the persona they simulate,” was authored by Jiří Milička, Anna Marklová, Klára VanSlambrouck, Eva Pospíšilová, Jana Šimsová, Samuel Harvan, and Ondřej Drobil.
