To follow our own speech, our brains use a system of volume settings that dims and amplifies the sounds we both emit and hear, according to a new study from the University of California, Berkeley.
“We used to think that the human auditory system is mostly suppressed during speech, but we found closely knit patches of cortex with very different sensitivities to our own speech that paint a more complicated picture,” said Adeen Flinker, lead author and doctoral student in neuroscience at UC Berkeley.
These findings may also help explain how auditory hallucinations work, he said, adding that individuals with schizophrenia often can’t distinguish their own internal voice from the voices of others, which may point to a dysfunction in this selective auditory mechanism.
By studying electrical signals from epileptic patients’ brains, neuroscientists from UC Berkeley, UC San Francisco and Johns Hopkins University found that neurons in a certain area of the individuals’ hearing mechanism were muted during speech, while neurons in other areas perked up.
“We found evidence of millions of neurons firing together every time you hear a sound right next to millions of neurons ignoring external sounds but firing together every time you speak,” Flinker added.
“Such a mosaic of responses could play an important role in how we are able to distinguish our own speech from that of others.”
These findings provide new insight into how we are able to hear ourselves above surrounding noise and how we monitor our own voices and words. Prior studies on monkeys have revealed that a selective hearing mechanism magnifies their mating, danger and food calls, and yet, until this current study, it was unknown how the human version of this system worked.
Although the study doesn’t have an answer for why humans would need to track their own speech so closely, Flinker believes that following our own speech is necessary for language development, monitoring our words and adjusting to different types of noise environments.
“Whether it’s learning a new language or talking to friends in a noisy bar, we need to hear what we say and change our speech dynamically according to our needs and environment,” Flinker said.
Furthermore, these findings may help physicians better navigate brain surgery by offering a better understanding of the auditory cortex, an area of the brain’s temporal lobe associated with sound. During hearing, the ear converts vibrations into electrical signals that are channeled to the brain’s auditory cortex where they are refined and processed.
In the study, scientists observed the electrical activity of healthy brain tissue in seizure patients; these patients volunteered to participate in the research during their time off between treatments, as they already had electrodes implanted on their auditory cortices to track their seizures.
Participants carried out tasks such as listening to words and vowels and then repeating them back. When scientists compared the electrical signals given off during speaking with those during hearing, they discovered that certain regions of the auditory cortex were less active while the participants were speaking, while other areas were equally or more active.
“This shows that our brain has a complex sensitivity to our own speech that helps us distinguish between our vocalizations and those of others, and makes sure that what we say is actually what we meant to say,” Flinker said.
This study is published in the Journal of Neuroscience.
Source: University of California