Listener, Speaker’s Brains in Sync When Speech is Predicted
“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” said lead author Suzanne Dikker, Ph.D., a postdoctoral researcher at New York University’s Department of Psychology and at Utrecht University.
“Crucially, this happens even before a sentence is spoken and heard.”
Scientists have traditionally believed that our brains process the world around us from the “bottom up” — when we listen to a person speak, we first process the sounds, and then other areas in the brain put those sounds together into words and then sentences. From there, it was thought that we figured out the content and meaning.
However, in recent years, many neuroscientists have shifted to a “top-down” view of the brain.
For example, they believe the brain works as a “prediction machine”: we are constantly anticipating events in the world around us so that we can respond to them quickly and accurately. We can predict words and sounds based on context, for instance, and the brain takes advantage of this. When we hear “Grass is…” we can easily predict “green.”
For the study, researchers wanted to find out how this predictability might affect the speaker’s brain, and the interaction between speaker and listener.
“A lot of what we’ve learned about language and the brain has been from controlled laboratory tests that tend to look at language in the abstract — you get a string of words or you hear one word at a time,” said study co-author Jason Zevin, Ph.D., an associate professor of psychology and linguistics at the University of Southern California.
“They’re not so much about communication, but about the structure of language. The current experiment is really about how we use language to express common ground or share our understanding of an event with someone else.”
For the study, published in the Journal of Neuroscience, researchers measured the brain activity of a speaker as she described a variety of images. Another group of participants listened to those descriptions while viewing the same images, and their brain activity was measured as well.
For some of the images, the description would be difficult for listeners to predict, while for others it would be much easier.
For example, one image showed a penguin hugging a star — an image for which a speaker’s description is relatively easy to predict. Another image, however, depicted a guitar stirring a bicycle tire submerged in a boiling pot of water — a picture much less likely to yield a predictable description: Is it “a guitar cooking a tire,” “a guitar boiling a wheel,” or “a guitar stirring a bike”?
Researchers compared the brain activity of the speaker to the listeners’ brain activity and found that activity patterns were more similar between the listeners and the speaker when the listeners could predict what the speaker was going to say.
When listeners were able to predict what the speaker was going to say, said the authors, their brains took advantage of this by sending a signal to their auditory cortex to expect sound patterns corresponding to predicted words (for example, “green” while hearing “grass is…”).
Furthermore, the speaker’s brain showed a similar pattern as she was planning what she would say: Brain activity in her auditory language areas was affected by how predictable her description would be for her listeners.
Source: New York University
Pedersen, T. (2015). Listener, Speaker’s Brains in Sync When Speech is Predicted. Psych Central. Retrieved on July 26, 2016, from http://psychcentral.com/news/2014/05/05/brain-activity-of-listener-speaker-similar-when-speech-is-predicted/69365.html