Why this is so is not entirely clear, but gesture seems to lighten the load on our cognitive systems. Cook has shown, for instance, that if you ask people to do two things at once — explain a math problem while remembering a sequence of letters — they do a far better job if permitted to gesture while explaining.
Research suggests that when we see and use gestures, we recruit more parts of the brain than when we use language alone, and we may activate more memory systems – such as procedural memory (the type that stores automatic processes such as how to type or ride a bike) in addition to our memory for events and experiences.
Cook is among a cadre of researchers who study learning in the context of “embodied cognition” – the theory that our thoughts are shaped by the physical experiences of our body. According to this view, even when we think about abstract ideas, our brains link them to concrete, physical things that we experience through our hands, our senses and other body parts.
Studies that use functional magnetic resonance imaging (fMRI) and other brain imaging techniques provide fascinating evidence for embodied cognition. For instance, when we hear verbs such as lick, pick and kick, they activate parts of the brain associated with the tongue, the hands and the legs, respectively. When we read about a happy event, there is greater activity in the nerves and muscles that control smiling.
One of the more remarkable findings in this field is that people who get Botox injections to reduce frown lines actually take longer to read sad and angry passages right after the injections than before, although there is no change of pace for reading happy tales.
Arthur Glenberg, a professor of psychology at Arizona State University and an author of the Botox study along with many other studies of embodied cognition, is applying the theory to help struggling readers succeed.
For more than a decade, Glenberg and colleagues have been developing systems that allow novice readers to physically simulate the content of books to enhance their understanding. The latest version is an iPad-based system called EMBRACE in which children can move characters and props around on a touch screen to bring the text alive. Unlike some multimedia picture books in which bells and whistles can distract from the story, the EMBRACE actions are tightly aligned with the text. If the story says that a farmer puts a pig in the pen, the child can slide a finger to do the same. If the text explains how blood flows from the heart’s right ventricle to the lungs, the reader can make it happen onscreen.
Glenberg has tested this system and an earlier version called Moved by Reading with struggling readers, including kids with learning disabilities, and has found sizeable increases in comprehension. The kids begin by acting out what they are reading — with support from a teacher or from the EMBRACE programming. Later they learn to simply “imagine” the physical actions.
The approach works across a variety of content areas — including story problems in math. In a 2011 study with 97 third- and fourth-graders, kids trained in the method solved 44 percent of math problems versus 33 percent for those in a control group. The trained kids were also much less likely (38 percent versus 61 percent) to mistakenly use irrelevant information in their calculations.
Word problems are notoriously hard for many students. “Kids sort of give up on trying to figure out what the meaning is and go right to playing with the numbers,” Glenberg explains. What the embodied approach does, he says, is help them develop “a sensorimotor representation” of the math problem. It “forces you to imagine the situation and that makes doing the math much easier.”
The same is true in reading. Many kids are able to sound out the text, but don’t actually understand it. This is particularly true of English language learners, Glenberg says. He has been testing the EMBRACE system for such students in the U.S. and in China. In a 2017 study with 93 native Spanish-speaking children in Arizona, he reports a “large positive benefit in story comprehension.” An enhanced version of the system offers some basic support in the child’s native language.
A big question about the approach is whether kids who learn to read on this platform can make the leap to reading fluently without its support, internalizing the habit of picturing the story in their mind’s eye. Glenberg is in the process of studying this.
Using our bodies and gestures to teach is something parents and preschool teachers do instinctively (just think of rhymes like “The Eensy-weensy Spider”). But work by Glenberg, Cook and many others indicates that the benefits can go far beyond preschool and extend to teaching advanced and abstract concepts.
Cook’s quick advice to teachers: “Use your hands. Make sure you don’t always have your smartboard controller in your hand. And if the students have their backs to you, it’s not as good.” She hopes that her work with gesturing avatars will eventually improve digital instruction, much of which makes poor use of body language.
As more and more of education comes to depend on technology and virtual instruction, it will be vital to capture under-appreciated aspects of human interaction that engage both body and mind.