Sunday, September 26, 2010

Turning Thoughts into Words

A new approach allows more information to be extracted from the brain.


Brain-computer interfaces could someday provide a lifeline to "locked-in" patients, who are unable to talk or move but are aware and awake. Many of these patients can communicate by blinking their eyes, but turning blinks into words is time-consuming and exhausting.

Scientists in Utah have now demonstrated a way to determine which of 10 distinct words a person is thinking by recording the electrical activity from the surface of the brain.
Brain interface: The microelectrodes shown here were used to record brain signals in order to decode ten words from a patient's thoughts.
Credit: Spencer Kellis, University of Utah


The new technique involves training algorithms to recognize specific brain signals picked up by an array of nonpenetrating electrodes placed over the language centers of the brain, says Spencer Kellis, one of the bioengineers who carried out the work at the University of Utah, in Salt Lake City. The approach used is known as electrocorticography (ECoG). The group was able to identify the words "yes," "no," "hot," "cold," "thirsty," "hungry," "hello," "goodbye," "more," and "less" with an accuracy of 48 percent.
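
The article does not spell out the decoding pipeline, but the general recipe for this kind of study is to extract features from the multichannel recordings, train a classifier on labeled trials, and measure accuracy on held-out trials. Here is a minimal Python sketch of that recipe; the synthetic data, feature dimensions, and choice of linear discriminant analysis are all illustrative assumptions, not details from the Utah study.

```python
# Minimal sketch of decoding one of 10 words from ECoG-like features.
# Everything here is illustrative: the data are synthetic, and LDA is a
# stand-in for whatever classifier the Utah group actually used.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

WORDS = ["yes", "no", "hot", "cold", "thirsty",
         "hungry", "hello", "goodbye", "more", "less"]

rng = np.random.default_rng(0)
n_trials_per_word, n_features = 50, 160   # e.g., 32 channels x 5 freq bands

# Synthetic features with a small word-specific offset so the toy
# problem is decodable above chance.
y = np.repeat(np.arange(len(WORDS)), n_trials_per_word)
X = rng.normal(size=(len(y), n_features))
X += 0.3 * rng.normal(size=(len(WORDS), n_features))[y]

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"10-way accuracy: {acc:.2f} (chance = {1 / len(WORDS):.2f})")
```

On real recordings, the cross-validated accuracy of such a pipeline is what a figure like the 48 percent reported above refers to.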

"The accuracy definitely needs to be improved," says Kellis. "But we have shown the information is there."

Individual words have been decoded from brain signals in the past using functional magnetic resonance imaging (fMRI), says Eric Leuthardt, director of the Center for Innovation in Neuroscience and Technology at Washington University School of Medicine in St. Louis, Missouri. This is the first time that the feat has been performed using ECoG, a far more practical and portable approach than fMRI, he says.

Working with colleagues Bradley Greger and Paul House, Kellis placed 16 electrodes on the surface of the brain of a patient being treated for epilepsy. The electrodes recorded signals from the facial motor cortex--an area of the brain that controls face muscles during speech--and over Wernicke's area, part of the cerebral cortex that is linked with language. To train the algorithm, the researchers analyzed the recorded signals as the patient repeatedly uttered the 10 words.

ECoG has long been used to locate the source of epileptic seizures in the brain. But the electrodes typically used are several hundred microns in size and are positioned centimeters apart, says Kellis. "The brain is doing processing at a much finer spatial scale than is really detectable by these standard clinical electrodes," he says. The Utah team used a new type of microelectrode array developed by PMT Neurosurgical. The electrodes are much smaller--40 microns in size--and are separated by a couple of millimeters.

It's possible to use less invasive techniques, such as electroencephalography (EEG), which places electrodes on the scalp, to enable brain-to-computer communications. Adrian Owen, a senior scientist in the Cognition and Brain Sciences Unit at the University of Cambridge, UK, has shown that EEG signals can be used to allow people in a persistent vegetative state to communicate "yes" and "no."

But with EEG, many of the signals are filtered out by the skull, says Leuthardt. "What's really nice about ECoG is its potential to give us a lot more information," he says.

Decoding 10 words is "very cool," says Owen, but the accuracy will need to improve dramatically, given the patients the technology is aimed at. "I don't think even 60 percent or 70 percent accuracy is going to work for patients who cannot communicate in any other way and where there is no other margin for verification," he says.

Ultimately, the hope is that ECoG will enable much more sophisticated communication. Last year Leuthardt showed that ECoG could be used to decode vowel and consonant sounds--an approach that might eventually be used to reconstruct a much larger number of complete words.

Wednesday, September 8, 2010

The Brain Speaks: Scientists Decode Words from Brain Signals


In an early step toward letting severely paralyzed people speak with their thoughts, University of Utah researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain.
This photo shows two kinds of electrodes sitting atop a severely epileptic patient's brain after part of his skull was removed temporarily. The larger, numbered, button-like electrodes are ECoGs used by surgeons to locate and then remove brain areas responsible for severe epileptic seizures. While the patient had to undergo that procedure, he volunteered to let researchers place two small grids -- each with 16 tiny "microECoG" electrodes -- over two brain areas responsible for speech. These grids are at the end of the green and orange wire bundles, and the grids are represented by two sets of 16 white dots since the actual grids cannot be seen easily in the photo. University of Utah scientists used the microelectrodes to translate speech-related brain signals into actual words -- a step toward future machines to allow severely paralyzed people to speak. (Credit: University of Utah Department of Neurosurgery)

"We have been able to decode spoken words using only signals from the brain with a device that has promise for long-term use in paralyzed patients who cannot now speak," says Bradley Greger, an assistant professor of bioengineering.

Because the method needs much more improvement and involves placing electrodes on the brain, he expects it will be a few years before clinical trials can begin on paralyzed people who cannot speak due to so-called "locked-in syndrome."

The Journal of Neural Engineering's September issue is publishing Greger's study showing the feasibility of translating brain signals into computer-spoken words.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy -- temporary partial skull removal -- so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals -- such as those generated when the man said the words "yes" and "no" -- they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time -- better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person's thoughts into words spoken by a computer.
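
To make those two figures concrete: comparing any two words is a binary task, so guessing alone yields 50 percent, while picking one word out of all 10 yields only 10 percent by chance. The sketch below, on placeholder data, shows how the two evaluations differ; the classifier and features are assumptions, not the study's pipeline.

```python
# Illustrates pairwise vs. 10-way decoding; data and classifier are
# placeholders, not the study's pipeline. With random labels, both
# accuracies will hover near their chance levels (0.50 and 0.10).
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))        # 500 trials, 40 features
y = rng.integers(0, 10, size=500)     # labels for the 10 words

clf = LogisticRegression(max_iter=1000)

# Pairwise: distinguish each pair of words in isolation (chance = 1/2).
pair_acc = np.mean([
    cross_val_score(clf, X[np.isin(y, pair)], y[np.isin(y, pair)], cv=3).mean()
    for pair in combinations(range(10), 2)
])

# 10-way: pick the right word out of all ten at once (chance = 1/10).
ten_acc = cross_val_score(clf, X, y, cv=3).mean()
print(f"pairwise: {pair_acc:.2f}  10-way: {ten_acc:.2f}")
```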

"This is proof of concept," Greger says, "We've proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful."

People who eventually could benefit from a wireless device that converts thoughts into computer-spoken words include those paralyzed by stroke, Lou Gehrig's disease and trauma, Greger says. People who are now "locked in" often communicate with any movement they can make -- blinking an eye or moving a hand slightly -- to arduously pick letters or words from a list.

University of Utah colleagues who conducted the study with Greger included electrical engineers Spencer Kellis, a doctoral student, and Richard Brown, dean of the College of Engineering; and Paul House, an assistant professor of neurosurgery. Another coauthor was Kai Miller, a neuroscientist at the University of Washington in Seattle.

The research was funded by the National Institutes of Health, the Defense Advanced Research Projects Agency, the University of Utah Research Foundation and the National Science Foundation.

Nonpenetrating Microelectrodes Read Brain's Speech Signals

The study used a new kind of nonpenetrating microelectrode that sits on the brain without poking into it. These electrodes are known as microECoGs because they are a small version of the much larger electrodes used for electrocorticography, or ECoG, developed a half century ago.

For patients with severe epileptic seizures uncontrolled by medication, surgeons remove part of the skull and place a silicone mat containing ECoG electrodes over the brain for days to weeks while the cranium is held in place but not reattached. The button-sized ECoG electrodes don't penetrate the brain but detect abnormal electrical activity and allow surgeons to locate and remove a small portion of the brain causing the seizures.

Last year, Greger and colleagues published a study showing the much smaller microECoG electrodes could "read" brain signals controlling arm movements. One of the epileptic patients involved in that study also volunteered for the new study.

Because the microelectrodes do not penetrate brain matter, they are considered safe to place on speech areas of the brain -- something that cannot be done with penetrating electrodes that have been used in experimental devices to help paralyzed people control a computer cursor or an artificial arm.

EEG electrodes used on the skull to record brain waves are too big and record too many brain signals to be used easily for decoding speech signals from paralyzed people.

Translating Nerve Signals into Words

In the new study, the microelectrodes were used to detect weak electrical signals from the brain generated by a few thousand neurons or nerve cells.

Each of the two grids, with 16 microECoGs spaced 1 millimeter (about one-25th of an inch) apart, was placed over one of two speech areas of the brain: first, the facial motor cortex, which controls movements of the mouth, lips, tongue and face -- basically the muscles involved in speaking; and second, Wernicke's area, a little-understood part of the human brain tied to language comprehension and understanding.

The study was conducted during one-hour sessions on four consecutive days. Researchers told the epilepsy patient to repeat one of the 10 words each time they pointed at the patient. Brain signals were recorded via the two grids of microelectrodes. Each of the 10 words was repeated from 31 to 96 times, depending on how tired the patient was. Then the researchers "looked for patterns in the brain signals that correspond to the different words" by analyzing changes in strength of different frequencies within each nerve signal, says Greger.
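
The frequency analysis Greger describes can be pictured as converting each recorded trial into a vector of band-power features: one number per electrode per frequency band. A minimal sketch follows, with an assumed sampling rate and illustrative band edges rather than the study's actual parameters.

```python
# Hypothetical sketch of the frequency-power analysis described above:
# for each trial, estimate how much signal power each channel carries
# in a set of frequency bands. The sampling rate and band edges are
# assumptions, not values taken from the study.
import numpy as np
from scipy.signal import welch

FS = 1000                      # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30),
         "gamma": (30, 70), "high-gamma": (70, 150)}

def band_power_features(trial):
    """trial: array of shape (n_channels, n_samples) -> flat feature vector."""
    freqs, psd = welch(trial, fs=FS, nperseg=256, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        sel = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, sel].mean(axis=-1))   # mean power per channel
    return np.concatenate(feats)

# Example: one 1-second trial from two 16-electrode grids (32 channels).
trial = np.random.default_rng(2).normal(size=(32, FS))
print(band_power_features(trial).shape)   # 32 channels x 5 bands -> (160,)
```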

The researchers found that each spoken word produced varying brain signals, and thus the pattern of electrodes that most accurately identified each word varied from word to word. They say that supports the theory that closely spaced microelectrodes can capture signals from single, column-shaped processing units of neurons in the brain.

One unexpected finding: When the patient repeated words, the facial motor cortex was most active and Wernicke's area was less active. Yet Wernicke's area "lit up" when the patient was thanked by researchers after repeating words. It shows Wernicke's area is more involved in high-level understanding of language, while the facial motor cortex controls facial muscles that help produce sounds, Greger says.

The researchers were most accurate -- 85 percent -- in distinguishing brain signals for one word from those for another when they used signals recorded from the facial motor cortex. They were less accurate -- 76 percent -- when using signals from Wernicke's area. Combining data from both areas didn't improve accuracy, showing that brain signals from Wernicke's area don't add much to those from the facial motor cortex.

When the scientists selected the five microelectrodes on each 16-electrode grid that were most accurate in decoding brain signals from the facial motor cortex, their accuracy in distinguishing one of two words from the other rose to almost 90 percent.

In the more difficult test of distinguishing brain signals for one word from signals for the other nine words, the researchers initially were accurate 28 percent of the time -- not good, but better than the 10 percent random chance of accuracy. However, when they focused on signals from the five most accurate electrodes, they identified the correct word almost half (48 percent) of the time.
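
The electrode-selection step amounts to scoring each channel by how well it decodes the words on its own, then combining the best few. A rough sketch of that idea is below; the classifier and synthetic data are placeholders, and only the selection logic mirrors the description above.

```python
# Score each electrode individually, keep the top five, then decode from
# the combined features. The data here are synthetic noise, so the printed
# accuracy will sit near the 10% chance level; in the real recordings,
# this top-five subset is what lifted accuracy from 28% toward 48%.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_bands = 400, 16, 5
X = rng.normal(size=(n_trials, n_channels, n_bands))
y = rng.integers(0, 10, size=n_trials)       # labels for the 10 words

def channel_score(c):
    """Cross-validated decoding accuracy using channel c alone."""
    return cross_val_score(LinearDiscriminantAnalysis(),
                           X[:, c, :], y, cv=5).mean()

best5 = sorted(range(n_channels), key=channel_score, reverse=True)[:5]
X_best = X[:, best5, :].reshape(n_trials, -1)
acc = cross_val_score(LinearDiscriminantAnalysis(), X_best, y, cv=5).mean()
print(f"top-5 electrodes {best5}: accuracy {acc:.2f} (chance 0.10)")
```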

"It doesn't mean the problem is completely solved and we can all go home," Greger says. "It means it works, and we now need to refine it so that people with locked-in syndrome could really communicate."

"The obvious next step -- and this is what we are doing right now -- is to do it with bigger microelectrode grids" with 121 micro electrodes in an 11-by-11 grid, he says. "We can make the grid bigger, have more electrodes and get a tremendous amount of data out of the brain, which probably means more words and better accuracy."