Lots of factors can influence how well you can learn, remember, and perceive the things around you — even the position of your body. For example, if you see someone pinch a fake hand near where your hand is positioned, you may think you feel real pain in your hand. We naturally pay more attention to objects close to our hands, too.
Christopher Davoli, James Brockmole, and Annabelle Goujon wondered if the location of our hands can also affect how we remember and learn visual information, so they designed a task to test that question.
They showed complex fractal images to student volunteers and asked them to search for a tiny letter, either a T or an L, embedded in each image. The students had to press one button if they spotted a T and another button for an L. Below is an example image; can you find the letter?
If you look closely, you should be able to spot a tiny L in the lower left of the image. Davoli’s team showed the volunteers dozens of images like these and measured how quickly they responded. Half of the students pressed buttons on their laps, while the other half pressed buttons attached to the side of the visual display.
Over the course of the experiment, most images were different, but a few key images were repeated, with the same letter in the same spot in the image. By the end of the experiment, everyone could find the letters faster for these repeated images. They had learned the location of the letters and so could respond faster. But did the location of the hands affect their learning? Here are the results:
It might take a second to see what this graph shows. It charts the performance of the students on repeated images compared to images they hadn't seen before. A higher "improvement" score means they were able to spot the letter T or L in the repeated images faster than in the new images. The horizontal axis charts the number of times the repeated images had been seen.
As you can see, performance on these repeated images gets better the more the students have seen the same image. But critically, the students whose hands were next to the images didn’t improve any more (or less) than those whose hands were in their laps. So placing your hands nearer the image didn’t help participants to learn to search for the letter.
In a second experiment, the researchers made one critical change. Instead of repeating identical images, they changed the colors in the repeated images. The fractal pattern was the same, and the letter was in the same position on the pattern, but the colors were different in each repeated image, like this:
Otherwise, the experiment was exactly the same as the first one. As before, the students improved when they saw the repeated images, but this time there was a difference in the results depending on where their hands were during the experiment:
When their hands were in their laps, the students improved about as much with each repeated image as they had in the first experiment. But with hands up by the computer display, improvement for repeated images was significantly reduced.
Davoli, Brockmole, and Goujon think the reason might be related to how we perceive details when they are near our hands. Suppose you are holding an apple; you might pay close attention to the color and texture of the fruit in order to decide whether it is good to eat. But when an apple is far from your hand, you may just think “that’s an apple,” without paying attention to the details. Similarly, when the viewers’ hands were near the computer screen, they may have been distracted by the color of the pattern and therefore had fewer mental resources to devote to locating and identifying the letter T or L hidden in the image.
So the position of our hands can affect not only how we perceive things, but also how we learn and remember them. Paradoxically, bringing our hands nearer to an object can draw our attention toward its surface details and away from the qualities that matter most for the task at hand, causing us to learn more slowly than we otherwise would.
Davoli, C. C., Brockmole, J. R., & Goujon, A. (2011). A bias to detail: How hand position modulates visual learning and visual memory. Memory & Cognition, 40(3), 352–359. DOI: http://dx.doi.org/10.3758/s13421-011-0147-3