PsyPost
The secret to sustainable AI may have been in our brains all along

by Karina Petrova
October 31, 2025
in Artificial Intelligence
[Image credit: Adobe Stock]

Researchers have developed a new method for training artificial intelligence that dramatically improves its speed and energy efficiency by mimicking the structured wiring of the human brain. The approach, detailed in the journal Neurocomputing, creates AI models that can match or even exceed the accuracy of conventional networks while using a small fraction of the computational resources.

The study was motivated by a growing challenge in the field of artificial intelligence: sustainability. Modern AI systems, such as the large language models that power generative AI, have become enormous. They are built with billions of connections, and training them can require vast amounts of electricity and cost tens of millions of dollars. As these models continue to expand, their financial and environmental costs are becoming a significant concern.

“Training many of today’s popular large AI models can consume over a million kilowatt-hours of electricity, which is equivalent to the annual use of more than a hundred US homes, and cost tens of millions of dollars,” said Roman Bauer, a senior lecturer at the University of Surrey and a supervisor on the project. “That simply isn’t sustainable at the rate AI continues to grow. Our work shows that intelligent systems can be built far more efficiently, cutting energy demands without sacrificing performance.”

To find a more efficient design, the research team looked to the human brain. While many artificial neural networks are “dense,” meaning every neuron in one layer is connected to every neuron in the next, the brain operates differently. Its connectivity is highly sparse and structured. For instance, in the visual system, neurons in the retina form localized and orderly connections to process information, creating what are known as topographical maps. This design is exceptionally efficient, avoiding the need for redundant wiring. The brain also refines its connections during development, pruning away unnecessary pathways to optimize its structure.

Inspired by these biological principles, the researchers developed a new framework called Topographical Sparse Mapping, or TSM. Instead of building a dense network, TSM configures the input layer of an artificial neural network with a sparse, structured pattern from the very beginning. Each input feature, such as a pixel in an image, is connected to only one neuron in the following layer in an organized, sequential manner. This immediately reduces the number of trainable connections, or parameters, that the model must manage.
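The input-layer scheme described above can be sketched as a binary connectivity mask. This is a minimal, hypothetical reading of the paper's description (the function name and the exact sequential assignment rule are assumptions, not the authors' code):

```python
import numpy as np

def topographic_input_mask(n_inputs: int, n_hidden: int) -> np.ndarray:
    """Binary connectivity mask for the input layer.

    Each input feature connects to exactly one hidden neuron,
    assigned in spatial order so that neighboring inputs map to
    neighboring neurons (a simple topographical map).
    """
    mask = np.zeros((n_inputs, n_hidden), dtype=bool)
    # Map input i to hidden neuron floor(i * n_hidden / n_inputs),
    # preserving the ordering of the input features.
    targets = (np.arange(n_inputs) * n_hidden) // n_inputs
    mask[np.arange(n_inputs), targets] = True
    return mask

# e.g. 784 MNIST pixels feeding 128 hidden units:
mask = topographic_input_mask(784, 128)
print(mask.sum())  # 784 connections instead of 784 * 128 = 100,352 in a dense layer
```

Each row of the mask has exactly one active entry, so the input layer holds as many connections as there are input features, rather than the full dense product.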

The team then developed an enhanced version of the framework, named Enhanced Topographical Sparse Mapping, or ETSM. This version introduces a second brain-inspired process. After the network trains for a short period, it undergoes a dynamic pruning stage. During this phase, the model identifies and removes the least important connections throughout its layers, based on their magnitude. This process is analogous to the synaptic pruning that occurs in the brain as it learns and matures, resulting in an even leaner and more refined network.
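Magnitude-based pruning itself is a standard technique; the sketch below shows the generic operation of removing the smallest-magnitude weights, not the paper's exact pruning schedule or layer-wise criteria:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of smallest-magnitude weights.

    A generic sketch in the spirit of ETSM's post-training pruning
    phase; the threshold rule here is illustrative.
    """
    k = int(round(weights.size * sparsity))
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))
W_sparse = magnitude_prune(W, 0.99)  # keep only the strongest 1% of weights
```

In practice this step would run after a brief training phase, with gradients for pruned entries frozen at zero so the removed connections never regrow.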

To evaluate their approach, the scientists built and trained a type of network known as a multilayer perceptron. They tested its ability to perform image classification tasks using several standard benchmark datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100. This setup allowed for a direct comparison of the TSM and ETSM models against both conventional dense networks and other leading techniques designed to create sparse, efficient AI.
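In such a setup, the sparse input layer can be implemented by element-wise masking of a weight matrix during the forward pass. The sketch below is an assumed implementation detail, not the authors' code:

```python
import numpy as np

def sparse_mlp_forward(x, W1, mask, b1, W2, b2):
    """One forward pass of a multilayer perceptron whose input
    layer is masked to keep only the topographical connections.

    During training, gradients for masked entries would be zeroed
    as well, so pruned connections stay pruned.
    """
    h = np.maximum(0.0, x @ (W1 * mask) + b1)  # sparse input layer, ReLU
    return h @ W2 + b2                         # dense readout layer

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 784))                 # batch of flattened images
mask = np.zeros((784, 128))
mask[np.arange(784), (np.arange(784) * 128) // 784] = 1.0
W1, b1 = rng.normal(size=(784, 128)) * 0.01, np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)) * 0.01, np.zeros(10)
logits = sparse_mlp_forward(x, W1, mask, b1, W2, b2)
```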

The results showed a remarkable balance of efficiency and performance. The ETSM model was able to achieve extreme levels of sparsity, in some cases removing up to 99 percent of the connections found in a standard network. Despite this massive reduction in complexity, the sparse models performed just as well as, and sometimes better than, their dense counterparts. For the more difficult CIFAR-100 dataset, the ETSM model achieved a 14 percent improvement in accuracy over the next best sparse method while using far fewer connections.

“The brain achieves remarkable efficiency through its structure, with each neuron forming connections that are spatially well-organised,” said Mohsen Kamelian Rad, a PhD student at the University of Surrey and the study’s lead author. “When we mirror this topographical design, we can train AI systems that learn faster, use less energy and perform just as accurately. It’s a new way of thinking about neural networks, built on the same biological principles that make natural intelligence so effective.”

The efficiency gains were substantial. Because the network starts with a sparse structure and does not require complex phases of adding back connections, it trains much more quickly. The researchers’ analysis of computational costs revealed that their method consumed less than one percent of the energy and used significantly less memory than a conventional dense model. This combination of speed, low energy use, and high accuracy sets it apart from many existing methods that often trade performance for efficiency.

A key part of the investigation was to confirm the importance of the orderly, topographical wiring. The team compared their models to networks that had a similar number of sparse connections but were arranged randomly. The results demonstrated that the brain-inspired topographical structure consistently produced more stable training and higher accuracy, indicating that the specific pattern of connectivity is a vital component of its success.
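For that kind of ablation, a random baseline with the same connection budget can be generated as follows (illustrative; the paper's exact baseline construction may differ):

```python
import numpy as np

def random_input_mask(n_inputs: int, n_hidden: int,
                      n_connections: int, seed: int = 0) -> np.ndarray:
    """Random sparse mask with a fixed connection budget:
    the unstructured counterpart to a topographical mask."""
    rng = np.random.default_rng(seed)
    # Sample distinct flat positions, then scatter them into the mask.
    flat = rng.choice(n_inputs * n_hidden, size=n_connections, replace=False)
    mask = np.zeros((n_inputs, n_hidden), dtype=bool)
    mask[np.unravel_index(flat, mask.shape)] = True
    return mask

rand_mask = random_input_mask(784, 128, n_connections=784)
```

Comparing networks trained under both masks isolates the effect of the wiring *pattern* from the mere count of connections, which is the contrast the study reports.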

The researchers acknowledge that their current framework applies the topographical mapping only to the model’s input layer. A potential direction for future work is to extend this structured design to deeper layers within the network, which could lead to even greater gains in efficiency. The team is also exploring how the approach could be applied to other AI architectures, such as the large models used for natural language processing, where the efficiency improvements could have a profound impact.

The study, “Topographical sparse mapping: A neuro-inspired sparse training framework for deep learning models,” was authored by Mohsen Kamelian Rad, Ferrante Neri, Sotiris Moschoyiannis, and Roman Bauer.
