New research published in the cognitive science journal Open Mind provides insight into how 19-month-old infants interpret animations on a screen. The study suggests that infants understand by this age that the animations they see are decoupled from their immediate environment and independent from the physical screen they appear on.
“To my mind, the way in which young infants make sense of symbolic objects in their environment is a rich topic to investigate for at least two reasons,” said study author Barbu Revencu of the Central European University.
“First, while there is a lot of research on infants’ and children’s understanding of drawings, pictures, or scale models, not much work has been dedicated to the cognitive mechanism underlying these processes. What is the input to this mechanism? And what is the output? How do we tag the drawing of a pipe as a pipe while fully aware that it is not a pipe? Is there a common process underlying our interpretation of diagrams, animations, puppet shows, and (internet) memes? Why is it the case that we use visual symbols in communication to begin with, and what information can we efficiently convey through them?”
“Second, the topic has methodological consequences for the field I am working in,” Revencu explained. “In developmental psychology, we often bring infants to the lab and show them simple animated events. We then measure their reactions to these animations under the assumption that the reactions will tell us something about infants’ underlying cognitive processes. But simple animated events are so different from the real world!”
“I am not referring only to visual differences between an animation and a real-world scene, but to the fact that animations are typically communicative while real-world scenes typically are not. If infants view animations and screens as representational, we are also testing their communicative inferences when presenting them with these stimuli, which is interesting in itself.”
In a series of experiments, the researchers tested various hypotheses about how infants interpret on-screen animations. Their first experiment confirmed that 19-month-old infants could accurately and reliably follow the trajectory of a ball as it fell off a wooden seesaw and into a box.
The researchers then had infants watch as an animated ball appeared to fall from a cartoon seesaw on a television into one of two real boxes below the screen. When asked where the ball was, the infants “often preferred to point to the screen” rather than the boxes. “When they did provide a response, however, they chose boxes at random instead of basing their answers on the side in which the ball was seen falling,” the researchers said.
The results suggest that the infants did not expect the falling animated balls to end up in real boxes. However, it is also possible that the infants did not understand that the question “where is the ball” referred to the animated ball on the screen. The researchers ruled out this possibility in their third experiment — when the real off-screen boxes were replaced with animated boxes on the TV screen, the infants overwhelmingly pointed to the correct box.
In their fourth experiment, the researchers tested whether infants accept that an event displayed on one screen can move to a different screen.
The infants viewed two different TV screens, which were placed side-by-side. One screen depicted an animated bear leaving and entering a house, while the other screen depicted an animated rabbit doing the same. Although both houses were identical, the backgrounds for each animation were noticeably different.
The researchers first confirmed that the infants had learned which animal lived on which screen, then physically moved the location of the two screens. During this process, the screens were covered and the two backgrounds were surreptitiously swapped. The screens were then uncovered and the infants were asked to identify where the animals lived. (The characters were not visible on the screen at this point.)
Infants tended to link the animated characters to their virtual environments rather than to their physical locations. In other words, they selected a screen based on its background image rather than its physical position.
The findings indicate “that by 19 months, infants have figured out that on-screen animated events are not happening in the here-and-now,” Revencu told PsyPost.
However, “the pattern of findings only provides negative evidence for infants’ understanding of screens as representational, because it rules out alternative accounts,” Revencu explained. “Experiments 1-3 suggest that infants do not think that on-screen events can extend beyond the screen, while Experiment 4 suggests that they do not track animations by the physical device on which they are presented. While this pattern of findings is compatible with a representational understanding of animations, it would be ideal if we gathered direct evidence for the representational hypothesis as well.”
The study, “For 19-Month-Olds, What Happens On-Screen Stays On-Screen,” was authored by Barbu Revencu and Gergely Csibra.