EPFL scientists propose a new way of understanding how the brain processes unconscious information into consciousness. According to the model, consciousness arises only in time intervals of up to 400 milliseconds, with gaps of unconsciousness in between.
The driver ahead suddenly stops, and you find yourself stomping on your brakes before you even realize what is going on. We would call this a reflex, but the underlying reality is much more complex, forming a debate that goes back centuries: Is consciousness a constant, uninterrupted stream or a series of discrete bits – like the 24 frames per second of a movie reel? Scientists from EPFL and the universities of Ulm and Zurich have now put forward a new model of how the brain processes unconscious information, suggesting that consciousness arises only in intervals of up to 400 milliseconds, with no consciousness in between. The work is published in PLOS Biology.
Continuous or discrete?
Consciousness seems to work as a continuous stream: one image, sound, smell or touch smoothly follows another, providing us with a seamless picture of the world around us. As far as we are concerned, sensory information appears to be continuously translated into conscious perception: we see objects move smoothly, we hear sounds continuously, and we smell and feel without interruption. However, another school of thought argues that the brain collects sensory information only at discrete time points, like a camera taking snapshots. Even though there is a growing body of evidence against “continuous” consciousness, it also appears that the “discrete” theory of snapshots is too simple to be true.
A two-stage model
Michael Herzog at EPFL, working with Frank Scharnowski at the University of Zurich, has now developed a new paradigm, or “conceptual framework”, of how consciousness might actually work. They did this by reviewing data from previously published psychological and behavioral experiments that aimed to determine whether consciousness is continuous or discrete. Such experiments can involve showing a person two images in rapid succession and asking them to distinguish between the images while their brain activity is monitored.
The new model proposes a two-stage processing of information. First comes the unconscious stage: the brain processes specific features of objects, e.g. color or shape, and analyzes them quasi-continuously and unconsciously with very high time-resolution. However, the model suggests that there is no perception of time during this unconscious processing: even temporal features, such as duration or a change in color, are not perceived during this period. Instead, the brain represents duration as a kind of “number”, just as it does color and shape.
Then comes the conscious stage: once unconscious processing is completed, the brain simultaneously renders all the features conscious. This produces the final “picture”, which the brain presents to our consciousness, making us aware of the stimulus.
The whole process, from stimulus to conscious perception, can last up to 400 milliseconds, which is a considerable delay from a physiological point of view. “The reason is that the brain wants to give you the best, clearest information it can, and this demands a substantial amount of time,” explains Michael Herzog. “There is no advantage in making you aware of its unconscious processing, because that would be immensely confusing.” This model focuses on visual perception, but the time delay might be different for other sensory information, e.g. auditory or olfactory.
This is the first two-stage model of how consciousness arises, and it provides a more complete picture of how the brain manages consciousness than the “continuous versus discrete” debate envisages. Above all, it offers useful insights into the way the brain processes time and relates it to our perception of the world.