Prediction is crucial for brain function: without forecasting, our actions would always be too late because of delays in neural processing. However, there has been limited theoretical work explaining how our brains perform perceptual predictions over time.
In the latest issue of Proceedings of the National Academy of Sciences (PNAS), New York University neuroscientist David Heeger offers a new framework to explain how the brain makes predictions. He outlines how “prediction” may be a general principle of cortical function, alongside the already-established role of inference.
“It has long been recognized that the brain performs a kind of inference, combining sensory information with expectations,” explains Heeger, a professor in NYU’s Center for Neural Science. “Those expectations can come from the current context, from memory recall, or as an ongoing prediction over time. This new theory puts all of this together and formalizes it mathematically.”
Largely missing from our understanding of brain function has been a class of models akin to those routinely employed by meteorologists: in making their forecasts, meteorologists rely on past weather measurements to project conditions over the next several days.
“Similarly, the neural networks in our brains embody a type of model of our surroundings,” Heeger observes. “However, we don’t have a clear understanding of how they operate to make predictions.”
Existing theories of brain function, like the neural networks used in artificial intelligence, rely on a hierarchical structure: sensory input comes in at one end, and progressively more abstract representations are computed along the hierarchy.
But this “feedforward,” pipeline-like processing architecture, Heeger argues, cannot account for the brain’s predictive capabilities: unlike a weather model, it provides no way to feed earlier information back into ongoing processing, a dynamic that Heeger’s theory includes.
“It’s possible to run this process the other way around: to take an abstract representation at the top of the hierarchy and run it backwards, from top to bottom through the neural net, to generate something like a sensory prediction or expectation,” he explains.
Overall, Heeger’s theory blends inference with prediction. The brain’s neural network can run in a feedforward mode, from sensory input to an abstract representation, like a typical AI neural network. It can also run in a feedback mode, generating a sensory prediction from an abstract representation (a kind of memory recall or mental imagery). Or it can run in a combination of the two, in which inferences mix sensory input with prediction.
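To make the three modes concrete, here is a minimal Python sketch. It is not Heeger’s published model: the two-layer linear hierarchy, the weight matrix W, and the mixing parameter lam are illustrative assumptions. The network settles its abstract representation by descending an energy that weighs a bottom-up term against a top-down term; lam = 0 gives pure feedforward inference, lam = 1 pure feedback generation, and intermediate values mix the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear hierarchy (illustrative, not Heeger's equations).
# W maps the abstract (top) representation down to the sensory (bottom)
# layer; its transpose serves as the feedforward map.
n_sensory, n_abstract = 8, 3
W = rng.normal(size=(n_sensory, n_abstract))

def infer(x_sensory, expectation, lam, steps=200, lr=0.05):
    """Settle the abstract representation y by gradient descent on an
    energy mixing a bottom-up term ||x - W y||^2 (weight 1 - lam) with
    a top-down term ||y - expectation||^2 (weight lam)."""
    y = np.zeros(n_abstract)
    for _ in range(steps):
        grad = -(1 - lam) * W.T @ (x_sensory - W @ y) + lam * (y - expectation)
        y -= lr * grad
    return y

x = rng.normal(size=n_sensory)          # current sensory input
prior = rng.normal(size=n_abstract)     # top-down expectation

y_ff = infer(x, prior, lam=0.0)         # feedforward: input alone
y_fb = infer(x, prior, lam=1.0)         # feedback: expectation alone
y_mix = infer(x, prior, lam=0.5)        # inference: input + prediction

# Running the feedback map on its own generates a sensory "prediction,"
# akin to memory recall or mental imagery:
x_pred = W @ y_fb
```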
The theory also posits a role for noise, or variability, in neural activity: it allows the brain to explore different possible interpretations, even when the sensory input and the prediction are the same.
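The exploratory role of noise can be illustrated with an equally stripped-down sketch (again a toy under stated assumptions, not drawn from the paper): a double-well energy stands in for a network state with two competing interpretations of the same input, as with a Necker cube, and noisy gradient descent lets the state visit both.

```python
import numpy as np

rng = np.random.default_rng(1)

# Double-well energy (y^2 - 1)^2 with minima at y = -1 and y = +1,
# standing in for two equally valid interpretations of one input.
def grad_energy(y):
    return 4 * y**3 - 4 * y

def settle(noise, steps=5000, lr=0.01):
    """Noisy gradient descent; the input and prediction never change,
    only the injected variability does."""
    y, visits = 0.1, []
    for _ in range(steps):
        y += -lr * grad_energy(y) + np.sqrt(2 * lr) * noise * rng.normal()
        visits.append(y > 0)
    return visits

# Without noise the state stays in whichever interpretation it reaches
# first; with noise it intermittently explores the alternative.
for noise in (0.0, 0.6):
    frac = np.mean(settle(noise))
    print(f"noise={noise}: fraction of time in the +1 interpretation = {frac:.2f}")
```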
“This noise-driven process of exploration may be the neural basis of creativity,” he speculates.
Heeger hypothesizes that neuromodulators like acetylcholine change the brain state to control which mode is operating at each moment in time: purely feedforward, purely feedback, or a combination of the two. The amount of noise, which determines the amount of exploration, is controlled by other neuromodulators such as noradrenaline and by oscillations in brain activity, he suggests.
The theory, which uses the model embedded in the neural network to predict and explore over time, offers promise for both neuroscience research and AI.
One of these areas is understanding the factors underlying autism. No single genetic mechanism causes autism (so-called “autism genes” account for only a small percentage of cases). Rather, individuals with autism are hypothesized to have a neurocomputational deficit: some aspect of the brain’s processing functions atypically. By understanding more fully how the brain computes, Heeger says, we can potentially shed light on exactly what those neurocomputational differences are. The same holds for other developmental disorders and psychiatric illnesses.
In addition, while AI seeks to replicate human decision-making and processing, it cannot fully capture these phenomena if it draws exclusively on a “feedforward” architecture, ignoring prediction and exploration.
“The theory of neural function that I’m outlining aims to fill in some of the significant dynamics that AI is missing,” Heeger says.