
“Catastrophic effects”: Can AI turn us into imbeciles? This scientist fears the worst

by Eric W. Dolan
February 13, 2024
in Artificial Intelligence, Cognitive Science
(Photo credit: OpenAI's DALL·E)

Have you ever pondered the impact of relying on machines to do your thinking? With the rapid advancement of technology, this scenario is moving out of science fiction and into the realm of possibility.

In a new scientific paper, University of Monterrey professor Umberto León Domínguez explores the potential of artificial intelligence (AI) not only to mimic human conversation but to fundamentally supplant many aspects of human cognition. The work, published in the journal Neuropsychology, raises concerns about the risks that AI chatbots might pose to higher-order executive functions.

Artificial intelligence, in simple terms, refers to machines programmed to mimic human intelligence—learning, reasoning, and problem-solving. Among the AI models, ChatGPT stands out. It’s a tool designed to understand and generate human-like text based on the data it’s fed. Unlike older AI models that struggled to grasp the nuances of language, ChatGPT uses something called a transformer model, which allows it to understand context and produce responses that can be startlingly similar to those a human might give.
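
For readers curious about what a “transformer” actually does, here is a minimal, illustrative sketch of scaled dot-product attention, the core mechanism that lets such models weigh every word in the context when interpreting each new one. This is a toy with random numbers, not ChatGPT’s actual code; the array sizes and inputs are arbitrary assumptions.

```python
# Toy illustration of scaled dot-product attention (the heart of transformer models).
# Each token builds its representation as a weighted mix of every token in the context,
# which is how transformers keep track of context.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (num_tokens, dim)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # pairwise relevance of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the context
    return weights @ V                                 # context-weighted mix of values

# Example: 4 tokens with random 8-dimensional embeddings, attending to themselves.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)                                       # (4, 8)
```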

Domínguez’s interest in ChatGPT stems from its potential as a technological milestone. He sees it as an early sign of the technological singularity, the hypothesized point at which AI development begins to advance beyond human control, potentially merging human and machine intelligence.

“As a university professor, I design my activities as intellectual challenges to stimulate and train cognitive functions that are useful in the daily lives of my students, such as problem-solving and planning abilities,” explained Domínguez, director of the Human Cognition and Brain Studies Lab and a researcher in the Artificial Intelligence Group.

“The emergence of a tool like ChatGPT raised concerns for me about its potential use by students to complete tasks, thereby preventing the stimulation of these cognitive functions. From this observation, I began to explore and generalize the impact, not only as a student but as humanity, of the catastrophic effects these technologies could have on a significant portion of the population by blocking the development of these cognitive functions.”

“Consequently, I researched how ChatGPT or other AI chatbots could interfere with higher-order executive functions to understand how to also train these skills, even with the use of ChatGPT.”

One of the paper’s striking assertions is that AI can act as a “cognitive prosthesis,” a concept introduced in a 2019 study by Falk Lieder and his colleagues. In essence, this means AI could perform cognitive tasks on behalf of humans, much as a prosthetic limb stands in for a lost one. This goes beyond simple tasks like calculating numbers or organizing schedules: the research suggests that AI’s capabilities might extend to more complex cognitive functions, such as problem-solving and decision-making, traditionally seen as distinctly human traits.

Lieder and his colleagues specifically highlighted scenarios where people’s natural inclination towards short-term rewards leads them away from actions that would be more beneficial in the long term. For example, choosing to watch TV and relax instead of working on a challenging but rewarding project. To address this, they proposed using AI to “gamify” the decision-making process. Gamification involves adding game-like elements such as points, levels, and badges to non-game activities.
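
To make the gamification idea concrete, here is a minimal, hypothetical sketch: an AI assigns points that mirror each option’s long-term value, so a decision-maker who only weighs immediate payoffs is nudged toward the choice that pays off later. The options, payoff numbers, and scoring rule below are illustrative assumptions, not the system Lieder and his colleagues actually built.

```python
# Hypothetical sketch of gamified decision support: points reflect long-term value,
# so the locally attractive option and the genuinely beneficial option coincide.

options = {
    # option: (immediate payoff, long-term value)  -- made-up numbers
    "watch TV":            (5.0,  0.0),
    "work on the project": (-2.0, 10.0),
}

def myopic_choice(opts, points=None):
    """Pick the option with the highest immediate payoff plus any awarded points."""
    points = points or {}
    return max(opts, key=lambda o: opts[o][0] + points.get(o, 0.0))

# Without gamification, the short-term reward wins.
print(myopic_choice(options))                     # -> "watch TV"

# AI-assigned points proportional to long-term value flip the myopic choice.
ai_points = {o: long_term for o, (_, long_term) in options.items()}
print(myopic_choice(options, ai_points))          # -> "work on the project"
```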

Through a series of experiments, Lieder and his colleagues provided initial evidence of the benefits of this approach. They found that the AI-assigned incentives helped individuals make better choices more quickly, procrastinate less, and focus more on important tasks.

But Domínguez’s paper warns of the potential risks associated with integrating AI so closely into our cognitive processes. A key concern is “cognitive offloading,” where humans might become overly reliant on AI, leading to a decline in our ability to perform cognitive tasks independently. Just as muscles can weaken without exercise, cognitive skills can deteriorate if they’re not regularly used.

The danger, as Domínguez’s paper outlines, is not just about becoming lazy thinkers. There’s a more profound risk that our cognitive development and problem-solving abilities could be stunted. Over time, this could lead to a society where critical thinking and creativity are in short supply, as people become accustomed to letting AI do the heavy lifting.

“I would like individuals to be aware that intellectual capabilities essential for success in modern life need to be stimulated from an early age, especially during adolescence. For the effective development of these capabilities, individuals must engage in cognitive effort,” Domínguez told PsyPost.

“Cognitive offloading can serve as a beneficial mechanism because it frees up cognitive load that can then be directed towards more complex cognitions. However, with technologies like ChatGPT, we face, for the first time in history, a technology capable of providing a complete plan, from start to finish.”

“Consequently, there is a genuine risk that individuals might become complacent and overlook even the most complex cognitive tasks. Just as one cannot become skilled at basketball without actually playing the game, the development of complex intellectual abilities requires active participation and cannot solely rely on technological assistance.”

But don’t all technologies pose a risk of cognitive offloading? The researcher argues that ChatGPT’s ability to independently generate ideas and solutions, and even hold conversations, sets it apart. Traditional tools, in contrast, still require human input to derive results.

“Many people argue that there have been other technologies that allowed for cognitive offloading, such as calculators, computers, and more recently, Google search,” Domínguez explained. “However, even then, these technologies did not solve the problem for you; they assisted with part of the problem and/or provided information that you had to integrate into a plan or decision-making process.

“With ChatGPT, we encounter a tool that (1) is accessible to everyone for free (global impact) and (2) is capable of planning and making decisions on your behalf. ChatGPT represents a logarithmic amplifier of cognitive offloading compared to the classical technologies previously available.”

The paper was titled “Potential cognitive risks of generative transformer-based AI chatbots on higher order executive functions.”
