In a promising step toward allowing severely paralyzed people to communicate, University of Utah researchers were able to decipher unspoken words through brain signal patterns alone.
The scientists used a new kind of nonpenetrating microelectrode that sits on top of the brain without poking into it. The electrodes are known as microECoGs because they are a smaller adaptation of the large electrodes used in electrocorticography, or ECoG, a technique developed several decades ago.
“We have been able to decode spoken words using only signals from the brain with a device that has promise for long-term use in paralyzed patients who cannot now speak,” said Bradley Greger, assistant professor of bioengineering.
A volunteer who had already undergone a craniotomy (a temporary partial skull removal) as part of treatment for severe epileptic seizures took part in the research. Scientists placed grids of microelectrodes over his brain’s speech centers: on the facial motor cortex, which controls the muscles involved in speaking, and on Wernicke’s area, which is associated with language comprehension.
Because the microelectrodes do not pierce brain matter, they are considered safe to place on speech areas of the brain. With the grids in place, the scientists detected and recorded the electrical signals generated by a few thousand neurons, or nerve cells.
After the volunteer repeatedly read each of 10 words that might be useful to a paralyzed person – yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less – the researchers worked out which brain signals represented each of the 10 words by analyzing changes in the strength of different frequency components in each signal.
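The release does not detail the researchers’ exact analysis pipeline, but the idea of measuring “changes in strength of the different frequencies” can be illustrated. The following is a minimal sketch only; the function names, sampling rate, and band choices are assumptions, not taken from the study.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average spectral power of `signal` within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[mask].mean()

def features(signal, fs, bands=((4, 8), (8, 13), (13, 30), (30, 90))):
    """Feature vector: power in several illustrative frequency bands."""
    return np.array([band_power(signal, fs, b) for b in bands])

# Synthetic demonstration: two "signals" dominated by different frequencies
fs = 1000
t = np.arange(0, 1, 1.0 / fs)
low = np.sin(2 * np.pi * 10 * t)    # energy near 10 Hz
high = np.sin(2 * np.pi * 40 * t)   # energy near 40 Hz
```

Comparing `features(low, fs)` and `features(high, fs)` shows the power shifting between bands, which is the kind of difference a classifier could use to tell one word’s signal pattern from another’s.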
When any two brain signals were compared – such as those generated when the man said the words “yes” and “no” – the scientists were able to tell the difference between each word 76 percent to 90 percent of the time.
When the scientists used only the five microelectrodes on each 16-electrode grid that were most accurate in decoding brain signals from the facial motor cortex, their accuracy in distinguishing between words rose to almost 90 percent.
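The release does not say how the five most accurate electrodes per grid were identified. As a hypothetical illustration of electrode selection (the scoring rule and names here are my own, not the study’s), one could rank electrodes by how well their recordings separate two word classes and keep the top few:

```python
import numpy as np

def top_electrodes(class_a, class_b, k=5):
    """Return indices of the k electrodes whose mean responses best
    separate two word classes, using a crude separability score.
    class_a, class_b: arrays of shape (trials, electrodes)."""
    mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
    spread = class_a.std(axis=0) + class_b.std(axis=0) + 1e-12
    score = np.abs(mean_a - mean_b) / spread
    return np.argsort(score)[::-1][:k]

# Synthetic demonstration: 16 electrodes, only #3 and #7 carry a difference
rng = np.random.default_rng(0)
word_a = rng.normal(0, 1, (50, 16))
word_b = rng.normal(0, 1, (50, 16))
word_b[:, 3] += 5
word_b[:, 7] += 5
best = top_electrodes(word_a, word_b, k=2)
```

Discarding uninformative electrodes in this way reduces noise in the classifier’s input, which is consistent with the accuracy gain the researchers report.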
When the scientists looked at all 10 brain signal patterns at once, however, they were able to correctly label each word only 28 percent to 48 percent of the time. This was better than chance (10 percent) but not yet accurate enough for practical use.
“It doesn’t mean the problem is completely solved and we can all go home,” Greger says. “It means it works, and we now need to refine it so that people with ‘locked-in’ syndrome could really communicate.”
“The obvious next step – and this is what we are doing right now – is to do it with bigger microelectrode grids. We can make the grid bigger, have more electrodes and get a tremendous amount of data out of the brain, which probably means more words and better accuracy,” says Greger.
“This is proof of concept. We’ve proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful,” he adds.
Because the method needs much more improvement and involves placing electrodes on the brain, Greger expects it will be a few years before there are clinical trials on paralyzed people unable to speak.
There is hope, however, that continued research in this area will eventually bring about a wireless device that can convert a person’s thoughts into computer-spoken words, says Greger. Right now, the only way that people who are ‘locked in’ can communicate is through movement, such as blinking an eye or moving a hand slightly, or by painstakingly choosing letters or words from a list.
University of Utah colleagues who conducted the study with Greger included electrical engineers Spencer Kellis, a doctoral student, and Richard Brown, dean of the College of Engineering; and Paul House, an assistant professor of neurosurgery. Another coauthor was Kai Miller, a neuroscientist at the University of Washington in Seattle.
The research was funded by the National Institutes of Health, the Defense Advanced Research Projects Agency, the University of Utah Research Foundation and the National Science Foundation.
The Journal of Neural Engineering’s September issue will publish Greger’s study showing the feasibility of translating brain signals into computer-spoken words.
Source: University of Utah