
Saturday, October 29, 2011

Scientists Measure Dream Content for the First Time: Dreams Activate the Brain in a Similar Way to Real Actions


The ability to dream is a fascinating aspect of the human mind. However, how the images and emotions that we experience so intensively when we dream form in our heads remains a mystery. Up to now it has not been possible to measure dream content. Max Planck scientists working with colleagues from the Charité hospital in Berlin have now succeeded, for the first time, in analysing the activity of the brain during dreaming.

Top: Patient in a functional magnetic resonance imaging machine. Bottom: Activity in the motor cortex during the movement of the hands while awake (left) and during a dreamed movement (right). Blue areas indicate the activity during a movement of the right hand, which is clearly demonstrated in the left brain hemisphere, while red regions indicate the corresponding left-hand movements in the opposite brain hemisphere. (Credit: © MPI of Psychiatry)

They were able to do this with the help of lucid dreamers, i.e. people who become aware of their dreaming state and are able to alter the content of their dreams. The scientists found that the brain activity measured during the dreamed movement matched the activity observed during a real movement executed while awake.

The research is published in the journal Current Biology.

Methods like functional magnetic resonance imaging have enabled scientists to visualise and identify the precise spatial location of brain activity during sleep. However, up to now, researchers have not been able to analyse specific brain activity associated with dream content, as measured brain activity can only be traced back to a specific dream if the precise temporal coincidence of the dream content and measurement is known. Whether a person is dreaming is something that could only be reported by the individual himself.

Scientists from the Max Planck Institute of Psychiatry in Munich, the Charité hospital in Berlin and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig drew on the ability of lucid dreamers to dream consciously for their research. Lucid dreamers were asked to become aware of their dream while sleeping in a magnetic resonance scanner and to report this "lucid" state to the researchers by means of eye movements. They were then asked to voluntarily "dream" that they were repeatedly clenching first their right fist and then their left one for ten seconds.



This enabled the scientists to measure the entry into REM sleep -- a phase in which dreams are perceived particularly intensively -- with the help of the subject's electroencephalogram (EEG) and to detect the beginning of a lucid phase. The brain activity measured from this time onwards corresponded with the arranged "dream" involving the fist clenching. A region in the sensorimotor cortex of the brain, which is responsible for the execution of movements, was actually activated during the dream. This is directly comparable with the brain activity that arises when the hand is moved while the person is awake. Even if the lucid dreamer just imagines the hand movement while awake, the sensorimotor cortex reacts in a similar way.

The coincidence of the brain activity measured during dreaming and the conscious action shows that dream content can be measured. "With this combination of sleep EEGs, imaging methods and lucid dreamers, we can measure not only simple movements during sleep but also the activity patterns in the brain during visual dream perceptions," says Martin Dresler, a researcher at the Max Planck Institute for Psychiatry.

The researchers were able to confirm the data obtained with MR imaging in another subject, using a different technology. With the help of near-infrared spectroscopy, they also observed increased activity in a region of the brain that plays an important role in the planning of movements. "Our dreams are therefore not a 'sleep cinema' in which we merely observe an event passively, but involve activity in the regions of the brain that are relevant to the dream content," explains Michael Czisch, research group leader at the Max Planck Institute for Psychiatry.

Friday, September 2, 2011

Word Association: Study Matches Brain Scans With Complex Thought


In an effort to understand what happens in the brain when a person reads or considers such abstract ideas as love or justice, Princeton researchers have for the first time matched images of brain activity with categories of words related to the concepts a person is thinking about. The results could lead to a better understanding of how people consider meaning and context when reading or thinking.


Princeton researchers developed a method to determine the probability of various words being associated with the object a person thought about during a brain scan. They produced color-coded figures that illustrate the probability of words within the Wikipedia article about the object the participant saw during the scan actually being associated with the object. The more red a word is, the more likely a person is to associate it, in this case, with "cow." On the other hand, bright blue suggests a strong correlation with "carrot." Black and grey "neutral" words had no specific association or were not considered at all. (Credit: Illustration courtesy of Francisco Pereira)

The researchers report in the journal Frontiers in Human Neuroscience that they used functional magnetic resonance imaging (fMRI) to identify areas of the brain activated when study participants thought about physical objects such as a carrot, a horse or a house. The researchers then generated a list of topics related to those objects and used the fMRI images to determine the brain activity that words within each topic shared. For instance, thoughts about "eye" and "foot" produced similar neural stirrings as other words related to body parts.

Once the researchers knew the brain activity a topic sparked, they were able to use fMRI images alone to predict the subjects and words a person likely thought about during the scan. This capability to put people's brain activity into words provides an initial step toward further exploring themes the human brain touches upon during complex thought.

"The basic idea is that whatever subject matter is on someone's mind -- not just topics or concepts, but also, emotions, plans or socially oriented thoughts -- is ultimately reflected in the pattern of activity across all areas of his or her brain," said the team's senior researcher, Matthew Botvinick, an associate professor in Princeton's Department of Psychology and in the Princeton Neuroscience Institute.

"The long-term goal is to translate that brain-activity pattern into the words that likely describe the original mental 'subject matter,'" Botvinick said. "One can imagine doing this with any mental content that can be verbalized, not only about objects, but also about people, actions and abstract concepts and relationships. This study is a first step toward that more general goal.

"If we give way to unbridled speculation, one can imagine years from now being able to 'translate' brain activity into written output for people who are unable to communicate otherwise, which is an exciting thing to consider. In the short term, our technique could be used to learn more about the way that concepts are represented at the neural level -- how ideas relate to one another and how they are engaged or activated."

The research, which was published Aug. 23, was funded by a grant from the National Institute of Neurological Disease and Stroke, part of the National Institutes of Health.

Depicting a person's thoughts through text is a "promising and innovative method" that the Princeton project introduces to the larger goal of correlating brain activity with mental content, said Marcel Just, a professor of psychology at Carnegie Mellon University. The Princeton researchers worked from brain scans Just had previously collected in his lab, but he had no active role in the project.

"The general goal for the future is to understand the neural coding of any thought and any combination of concepts," Just said. "The significance of this work is that it points to a method for interpreting brain activation patterns that correspond to complex thoughts."

Tracking the brain's 'semantic threads'

Largely designed and conducted in Botvinick's lab by lead author and Princeton postdoctoral researcher Francisco Pereira, the study takes a currently popular approach to neuroscience research in a new direction, Botvinick said. He, Pereira and coauthor Greg Detre, who earned his Ph.D. from Princeton in 2010, based their work on various research endeavors during the past decade that used brain-activity patterns captured by fMRI to reconstruct pictures that participants viewed during the scan.

"This 'generative' approach -- actually synthesizing something, an artifact, from the brain-imaging data -- is what inspired us in our study, but we generated words rather than pictures," Botvinick said.

"The thought is that there are many things that can be expressed with language that are more difficult to capture in a picture. Our study dealt with concrete objects, things that are easy to put into a picture, but even then there was an interesting difference between generating a picture of a chair and generating a list of words that a person associates with 'chair.'"

Those word associations, lead author Pereira explained, can be thought of as "semantic threads" that can lead people to think of objects and concepts far from the original subject matter yet strangely related.

"Someone will start thinking of a chair and their mind wanders to the chair of a corporation then to Chairman Mao -- you'd be surprised," Pereira said. "The brain tends to drift, with multiple processes taking place at the same time. If a person thinks about a table, then a lot of related words will come to mind, too. And we thought that if we want to understand what is in a person's mind when they think about anything concrete, we can follow those words."

Pereira and his co-authors worked from fMRI images of brain activity that a team led by Just and fellow Carnegie Mellon researcher Tom Mitchell, a professor of computer science, published in the journal Science in 2008. For those scans, nine people were presented with the word and picture of five concrete objects from 12 categories. The drawing and word for the 60 total objects were displayed in random order until each had been shown six times. Each time an image and word appeared, participants were asked to visualize the object and its properties for three seconds as the fMRI scanner recorded their brain activity.

Matching words and brain activity with related topics

Separately, Pereira and Detre constructed a list of topics with which to categorize the fMRI data. They used a computer program developed by Princeton Associate Professor of Computer Science David Blei to condense 3,500 articles about concrete objects from the online encyclopedia Wikipedia into all the topics the articles covered. The articles included a broad array of subjects, such as an airplane, heroin, birds and manual transmission. The program came up with 40 possible topics -- such as aviation, drugs, animals or machinery -- with which the articles could relate. Each topic was defined by the words most associated with it.
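
The topic-modeling step can be approximated with standard tools. Below is a minimal sketch, assuming scikit-learn's LatentDirichletAllocation stands in for the program developed in Blei's group and that the 3,500 Wikipedia articles are available as plain-text strings; the loader, vocabulary size and other settings are illustrative, and only the 40-topic count comes from the description above:

```python
# Sketch: derive 40 word topics from Wikipedia articles about concrete objects.
# scikit-learn's LDA is used here as a stand-in for the topic-modeling program
# described above; load_wikipedia_articles() is a hypothetical loader.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = load_wikipedia_articles()          # hypothetical: returns the article texts as strings

vectorizer = CountVectorizer(stop_words="english", max_features=20000)
counts = vectorizer.fit_transform(articles)   # document-term count matrix

lda = LatentDirichletAllocation(n_components=40, random_state=0)
doc_topics = lda.fit_transform(counts)        # per-article mixture over the 40 topics

# Each topic is characterized by the words most strongly associated with it.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[::-1][:10]]
    print(f"topic {k}: {', '.join(top_words)}")
```

The per-article topic mixtures and the per-topic word weights are the two pieces the rest of the pipeline needs: they link each concrete object to topics and each topic to words.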

The computer ultimately created a database of topics and associated words that were free from the researchers' biases, Pereira said.

"We let the software discern the factors that make up meaning rather than stipulating it ourselves," he said. "There is always a danger that we could impose our preconceived notions of the meaning words have. Plus, I can identify and describe, for instance, a bird, but I don't think I can list all the characteristics that make a bird a bird. So instead of postulating, we let the computer find semantic threads in an unsupervised manner."




The topic database let the researchers objectively arrange the fMRI images by subject matter, Pereira said. To do so, the team searched the brain scans of related objects for similar activity to determine common brain patterns for an entire subject, Pereira said. The neural response for thinking about "furniture," for example, was determined by the common patterns found in the fMRI images for "table," "chair," "bed," "desk" and "dresser." At the same time, the team established all the words associated with "furniture" by matching each fMRI image with related words from the Wikipedia-based list.
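
The notion of a common pattern for an entire subject area can be pictured as a simple voxel-wise average over the member objects' scans. A minimal numpy sketch under that assumption follows; the object names echo the examples above, but the patterns themselves are random stand-ins, not study data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 500  # illustrative voxel count

# Hypothetical per-object fMRI patterns (random stand-ins for real scans).
objects = ["table", "chair", "bed", "desk", "dresser", "carrot", "celery", "corn"]
patterns = {name: rng.normal(size=n_voxels) for name in objects}

categories = {
    "furniture": ["table", "chair", "bed", "desk", "dresser"],
    "vegetables": ["carrot", "celery", "corn"],
}

# The category-level neural response is approximated here by the voxel-wise
# mean of the patterns for its member objects.
category_patterns = {
    cat: np.mean([patterns[obj] for obj in members], axis=0)
    for cat, members in categories.items()
}
```

In the study the corresponding word lists came from the Wikipedia-based topics; only the pattern-averaging idea is shown here.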

Based on the similar brain activity and related words, Pereira, Botvinick and Detre concluded that the same neural response would appear whenever a person thought of any of the words related to furniture, Pereira said. And a scientist analyzing that brain activity would know that person was thinking of furniture. The same would follow for any topic.

Using images to predict the words on a person's mind

Finally, to ensure their method was accurate, the researchers conducted a blind comparison of each of the 60 fMRI images against each of the others. Without knowing the objects the pair of scans pertained to, Pereira and his colleagues estimated the presence of certain topics on a participant's mind based solely on the fMRI data. Knowing the applicable Wikipedia topics for a given brain image, and the keywords for each topic, they could predict the most likely set of words associated with the brain image.
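
The blind pairwise test can be illustrated with a correlation-based matching rule: given two held-out scans and the patterns predicted for the two objects, the correct pairing should correlate better than the swapped one. The sketch below is a hedged illustration of that idea, not the study's exact scoring procedure; the synthetic data are only there to make it runnable:

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation between two flattened activity patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pairwise_match(scan_1, scan_2, predicted_1, predicted_2):
    """True if the correct pairing (scan_1 with predicted_1, scan_2 with
    predicted_2) scores higher than the swapped pairing."""
    correct = correlation(scan_1, predicted_1) + correlation(scan_2, predicted_2)
    swapped = correlation(scan_1, predicted_2) + correlation(scan_2, predicted_1)
    return correct > swapped

# Toy demonstration: the "predictions" are noisy copies of the scans.
rng = np.random.default_rng(1)
scan_a, scan_b = rng.normal(size=500), rng.normal(size=500)
pred_a = scan_a + rng.normal(scale=0.5, size=500)
pred_b = scan_b + rng.normal(scale=0.5, size=500)
print(pairwise_match(scan_a, scan_b, pred_a, pred_b))   # expected: True
```

Averaging this decision over every pair of the 60 images gives the kind of pairwise accuracy reported above; in the study the predicted side came from the topic-and-word model rather than noisy copies.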

The researchers found that they could confidently determine from an fMRI image the general topic on a participant's mind, but that deciphering specific objects was trickier, Pereira said. For example, they could compare the fMRI scan for "carrot" against that for "cow" and safely say that at the time the participant had thought about vegetables in the first example instead of animals. In turn, they could say that the person most likely thought of other words related to vegetables, as opposed to words related to animals.

On the other hand, when the scan for "carrot" was compared to that for "celery," Pereira and his colleagues knew the participant had thought of vegetables, but they could not identify related words unique to either object.

One aim going forward, Pereira said, is to fine-tune the group's method to be more sensitive to such detail. In addition, he and Botvinick have begun performing fMRI scans on people as they read in an effort to observe the various topics the mind accesses.

"Essentially," Pereira said, "we have found a way to generally identify mental content through the text related to it. We can now expand that capability to even further open the door to describing thoughts that are not amenable to being depicted with pictures."

Story Source:
The above story is reprinted (with editorial adaptations) from materials provided by Princeton University.

Sunday, July 31, 2011

Brain Cap Technology Turns Thought Into Motion; Mind-Machine Interface Could Lead to New Life-Changing Technologies for Millions of People


"Brain cap" technology being developed at the University of Maryland allows users to turn their thoughts into motion. Associate Professor of Kinesiology José 'Pepe' L. Contreras-Vidal and his team have created a non-invasive, sensor-lined cap with neural interface software that soon could be used to control computers, robotic prosthetic limbs, motorized wheelchairs and even digital avatars.
University of Maryland associate professor of kinesiology Jose "Pepe" Contreras-Vidal wears his Brain Cap, a noninvasive, sensor-lined cap with neural interface software that soon could be used to control computers, robotic prosthetic limbs, motorized wheelchairs and even digital avatars. (Credit: John Consoli, University of Maryland)

"We are on track to develop, test and make available to the public- within the next few years -- a safe, reliable, noninvasive brain computer interface that can bring life-changing technology to millions of people whose ability to move has been diminished due to paralysis, stroke or other injury or illness," said Contreras-Vidal of the university's School of Public Health.

The potential and rapid progression of the UMD brain cap technology can be seen in a host of recent developments, including a just published study in the Journal of Neurophysiology, new grants from the National Science Foundation (NSF) and National Institutes of Health, and a growing list of partners that includes the University of Maryland School of Medicine, the Veterans Affairs Maryland Health Care System, the Johns Hopkins University Applied Physics Laboratory, Rice University and Walter Reed Army Medical Center's Integrated Department of Orthopaedics & Rehabilitation.

"We are doing something that few previously thought was possible," said Contreras-Vidal, who is also an affiliate professor in Maryland's Fischell Department of Bioengineering and the university's Neuroscience and Cognitive Science Program. "We use EEG [electroencephalography] to non-invasively read brain waves and translate them into movement commands for computers and other devices.

Peer Reviewed

Contreras-Vidal and his team have published three major papers on their technology over the past 18 months, the latest a just released study in the Journal of Neurophysiology in which they successfully used EEG brain signals to reconstruct the complex 3-D movements of the ankle, knee and hip joints during human treadmill walking. In two earlier studies they showed (1) similar results for 3-D hand movement and (2) that subjects wearing the brain cap could control a computer cursor with their thoughts.

Alessandro Presacco, a second-year doctoral student in Contreras-Vidal's Neural Engineering and Smart Prosthetics Lab, Contreras-Vidal and co-authors write that their Journal of Neurophysiology study indicated "that EEG signals can be used to study the cortical dynamics of walking and to develop brain-machine interfaces aimed at restoring human gait function."

There are other brain computer interface technologies under development, but Contreras-Vidal notes that these competing technologies are either very invasive, requiring electrodes to be implanted directly in the brain, or, if noninvasive, require much more training to use than does UMD's EEG-based, brain cap technology.

Partnering to Help Sufferers of Injury and Stroke

Contreras-Vidal and his team are collaborating on a rapidly growing cadre of projects with researchers at other institutions to develop thought-controlled robotic prosthetics that can assist victims of injury and stroke. Their latest partnership is supported by a new $1.2 million NSF grant. Under this grant, Contreras-Vidal's Maryland team is embarking on a four-year project with researchers at Rice University, the University of Michigan and Drexel University to design a prosthetic arm that amputees can control directly with their brains, and which will allow users to feel what their robotic arm touches.



"There's nothing fictional about this," said Rice University co-principal investigator Marcia O'Malley, an associate professor of mechanical engineering. "The investigators on this grant have already demonstrated that much of this is possible. What remains is to bring all of it -- non-invasive neural decoding, direct brain control and [touch] sensory feedback -- together into one device."

In an NIH-supported project now underway, Contreras-Vidal and his colleagues are pairing their brain cap's EEG-based technology with a DARPA-funded next-generation robotic arm designed by researchers at the Johns Hopkins Applied Physics Laboratory to function like a normal limb. And the UMD team is developing a new collaboration with the New Zealand start-up Rexbionics, the developer of a powered lower-limb exoskeleton called Rex that could be used to restore gait after spinal cord injury.

Two of the earliest partnerships formed by Contreras-Vidal and his team are with the University of Maryland School of Medicine in Baltimore and the Veterans Affairs Medical Center in Baltimore. A particular focus of this research is the use of the brain cap technology to help stroke victims whose brain injuries affect their motor-sensory control. Originally funded by a seed grant from the University of Maryland, College Park and the University of Maryland, Baltimore, the work now also is supported by a VA merit grant (anklebot BMI) and an NIH grant (Stroke).

"There is a big push in brain science to understand what exercise does in terms of motor learning or motor retraining of the human brain," says Larry Forrester, an associate professor of physical therapy and rehabilitation science at the University of Maryland School of Medicine.

For more than a year, Forrester and the UMD team have tracked the neural activity of people on a treadmill doing precise tasks like stepping over dotted lines. The researchers are matching specific brain activity recorded in real time with exact lower-limb movements.

This data could help stroke victims in several ways, Forrester says. One is a prosthetic device, called an "anklebot," or ankle robot, that stores data from a normal human gait and assists partially paralyzed people. People who are less mobile commonly suffer from other health issues such as obesity, diabetes or cardiovascular problems, Forrester says, "so we want to get [stroke survivors] up and moving by whatever means possible."

The second use of the EEG data in stroke victims is more complex, yet offers exciting possibilities. "By decoding the motion of a normal gait," Contreras-Vidal says, "we can then try and teach stroke victims to think in certain ways and match their own EEG signals with the normal signals." This could "retrain" healthy areas of the brain in what is known as neuroplasticity.

One potential method for retraining comes from one of the Maryland research team's newest members, Steve Graff, a first-year bioengineering doctoral student. He envisions a virtual reality game that matches real EEG data with on-screen characters. "It gives us a way to train someone to think the right thoughts to generate movement from digital avatars. If they can do that, then they can generate thoughts to move a device," says Graff, who brings a unique personal perspective to the work. He has congenital muscular dystrophy and uses a motorized wheelchair. The advances he's working on could allow him to use both hands -- to put on a jacket, dial his cell phone or throw a football while operating his chair with his mind.

No Surgery Required

During the past two decades a great deal of progress has been made in the study of direct brain to computer interfaces, most of it through studies using monkeys with electrodes implanted in their brains. However, for use in humans such an invasive approach poses many problems, not the least of which is that most people don't want holes in their heads and wires attached to their brains. "EEG monitoring of the brain, which has a long, safe history for other applications, has been largely ignored by those working on brain-machine interfaces, because it was thought that the human skull blocked too much of the detailed information on brain activity needed to read thoughts about movement and turn those readings into movement commands for multi-functional high-degree of freedom prosthetics," said Contreras-Vidal. He is among the few who have used EEG, MEG or other sensing technologies to develop non-invasive neural interfaces, and the only one to have demonstrated decoding results comparable to those achieved by researchers using implanted electrodes.

A paper Contreras-Vidal and colleagues published in the Journal of Neuroscience in March 2010 showed the feasibility of Maryland's EEG-based technology to infer multidimensional natural movement from noninvasive measurements of brain activity. In their two latest studies, Contreras-Vidal and his team have further advanced the development of their EEG brain interface technology, and provided powerful new evidence that it can yield brain computer interface results as good as or better than those from invasive studies, while also requiring minimal training to use.

In a paper published in April in the Journal of Neural Engineering, the Maryland team demonstrated that people wearing the EEG brain cap could, after minimal training, control a computer cursor with their thoughts and achieve performance levels comparable to those of subjects using invasive implanted electrode brain computer interface systems. Contreras-Vidal and his co-authors write that this study also shows that compared to studies of other noninvasive brain control interface systems, training time with their system was substantially shorter, requiring only a single 40-minute session.

Friday, July 1, 2011

How Social Pressure Can Affect What We Remember: Scientists Track Brain Activity as False Memories Are Formed



How easy is it to falsify memory? New research at the Weizmann Institute shows that a bit of social pressure may be all that is needed. The study, which appears in the journal Science, reveals a unique pattern of brain activity when false memories are formed -- one that hints at a surprising connection between our social selves and memory.
New research reveals a unique pattern of brain activity when false memories are formed -- one that hints at a surprising connection between our social selves and memory. (Credit: Image courtesy of Weizmann Institute of Science)

The experiment, conducted by Prof. Yadin Dudai and research student Micah Edelson of the Institute's Neurobiology Department with Prof. Raymond Dolan and Dr. Tali Sharot of University College London, took place in four stages. In the first, volunteers watched a documentary film in small groups. Three days later, they returned to the lab individually to take a memory test, answering questions about the film. They were also asked how confident they were in their answers.

They were later invited back to the lab to retake the test while being scanned in a functional MRI (fMRI) that revealed their brain activity. This time, the subjects were also given a "lifeline": the supposed answers of the others in their film viewing group (along with social-media-style photos). Planted among these were false answers to questions the volunteers had previously answered correctly and confidently. The participants conformed to the group on these "planted" responses, giving incorrect answers nearly 70% of the time.

But were they simply conforming to perceived social demands, or had their memory of the film actually undergone a change? To find out, the researchers invited the subjects back to the lab to take the memory test once again, telling them that the answers they had previously been fed were not those of their fellow film watchers, but random computer generations. Some of the responses reverted back to the original, correct ones, but close to half remained erroneous, implying that the subjects were relying on false memories implanted in the earlier session.

An analysis of the fMRI data showed differences in brain activity between the persistent false memories and the temporary errors of social compliance. The most outstanding feature of the false memories was a strong co-activation and connectivity between two brain areas: the hippocampus and the amygdala. The hippocampus is known to play a role in long-term memory formation, while the amygdala, sometimes known as the emotion center of the brain, plays a role in social interaction. The scientists think that the amygdala may act as a gateway connecting the social and memory processing parts of our brain; its "stamp" may be needed for some types of memories, giving them approval to be uploaded to the memory banks. Thus social reinforcement could act on the amygdala to persuade our brains to replace a strong memory with a false one.


Prof. Yadin Dudai's research is supported by the Norman and Helen Asher Center for Human Brain Imaging, which he heads; the Nella and Leon Benoziyo Center for Neurological Diseases; the Carl and Micaela Einhorn-Dominic Institute of Brain Research, which he heads; the Marc Besen and the Pratt Foundation, Australia; Lisa Mierins Smith, Canada; Abe and Kathryn Selsky Memorial Research Project; and Miel de Botton, UK. Prof. Dudai is the incumbent of the Sara and Michael Sela Professorial Chair of Neurobiology.

Friday, April 29, 2011

Microsleep: Brain Regions Can Take Short Naps During Wakefulness, Leading to Errors



If you've ever lost your keys or stuck the milk in the cupboard and the cereal in the refrigerator, you may have been the victim of a tired brain region that was taking a quick nap.
A photo of rats with objects introduced into their cages to keep them awake. (Credit: Giulio Tononi, M.D., Ph.D., University of Wisconsin-Madison)

Researchers at the University of Wisconsin-Madison have a new explanation. They've found that some nerve cells in a sleep-deprived yet awake brain can briefly go "off line," into a sleep-like state, while the rest of the brain appears awake.

"Even before you feel fatigued, there are signs in the brain that you should stop certain activities that may require alertness," says Dr. Chiara Cirelli, professor of psychiatry at the School of Medicine and Public Health. "Specific groups of neurons may be falling asleep, with negative consequences on performance."

Until now, scientists thought that sleep deprivation generally affected the entire brain. Electroencephalograms (EEGs) show network brain-wave patterns typical of either being asleep or awake.

"We know that when we are sleepy, we make mistakes, our attention wanders and our vigilance goes down," says Cirelli. "We have seen with EEGs that even while we are awake, we can experience shorts periods of 'micro sleep.' "

Periods of micro sleep were thought to be the most likely cause of people falling asleep at the wheel while driving, Cirelli says.

But the new research found that even before that stage, brains are already showing sleep-like activity that impairs them, she says.

As reported in the current issue of Nature, the researchers inserted probes into specific groups of neurons in the brains of freely-behaving rats. After the rats were kept awake for prolonged periods, the probes showed areas of "local sleep" despite the animals' appearance of being awake and active.

"Even when some neurons went off line, the overall EEG measurements of the brain indicated wakefulness in the rats," Cirelli says.

And there were behavioral consequences to the local sleep episodes.

"When we prolonged the awake period, we saw the rats start to make mistakes," Cirelli says.

When animals were challenged to do a tricky task, such as reaching with one paw to get a sugar pellet, they began to drop the pellets or miss in reaching for them, indicating that a few neurons might have gone off line.

"This activity happened in few cells," Cirelli adds. "For instance, out of 20 neurons we monitored in one experiment, 18 stayed awake. From the other two, there were signs of sleep -- brief periods of activity alternating with periods of silence."

The researchers tested only motor tasks, so they concluded from this study that neurons affected by local sleep are in the motor cortex.

Wednesday, February 23, 2011

Scientists Steer Car With the Power of Thought


You need to keep your thoughts from wandering, if you drive using the new technology from the AutoNOMOS innovation labs of Freie Universität Berlin. The computer scientists have developed a system making it possible to steer a car with your thoughts. Using new commercially available sensors to measure brain waves -- sensors for recording electroencephalograms (EEG) -- the scientists were able to distinguish the bioelectrical wave patterns for control commands such as "left," "right," "accelerate" or "brake" in a test subject.
Computer scientists have developed a system making it possible to steer a car with your thoughts. (Credit: Image courtesy of Freie Universitaet Berlin)

They then succeeded in developing an interface to connect the sensors to their otherwise purely computer-controlled vehicle, so that it can now be "controlled" via thoughts. Driving by thought control was tested on the site of the former Tempelhof Airport.

The scientists from Freie Universität first adapted the brain-wave sensors so that a person could move a virtual cube in different directions with the power of his or her thoughts. The test subject thinks of four situations that are associated with driving, for example, "turn left" or "accelerate." In this way the person trained the computer to interpret bioelectrical wave patterns emitted from his or her brain and to link them to a command that could later be used to control the car. The computer scientists then connected the measuring device with the steering, accelerator, and brakes of a computer-controlled vehicle, which made it possible for the subject to influence the movement of the car using his or her thoughts alone.
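
The control loop described here follows a familiar pattern: extract features from a short EEG window, classify them into one of the four trained commands, and forward the result to the vehicle's computer-controlled steering, accelerator and brakes. The sketch below illustrates that pattern only; the feature choices, classifier and vehicle interface are assumptions, not details of the AutoNOMOS system:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

COMMANDS = ["left", "right", "accelerate", "brake"]

def band_power_features(eeg_window, fs=256):
    """Average spectral power per channel in a few coarse bands.

    eeg_window: array of shape (n_channels, n_samples). The band edges are
    illustrative choices, not the project's actual features.
    """
    freqs, psd = welch(eeg_window, fs=fs, axis=-1)
    bands = [(4, 8), (8, 13), (13, 30)]          # theta, alpha, beta
    return np.concatenate(
        [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]
    )

# Training phase: windows recorded while the subject imagined each command.
# training_windows and training_labels (integers 0-3 indexing COMMANDS) are
# hypothetical arrays standing in for the calibration recordings.
clf = LogisticRegression(max_iter=1000)
X_train = np.stack([band_power_features(w) for w in training_windows])
clf.fit(X_train, training_labels)

# Online phase: classify the latest window and forward the command to the car.
def on_new_window(eeg_window, car):
    command = COMMANDS[int(clf.predict([band_power_features(eeg_window)])[0])]
    car.send(command)            # hypothetical drive-by-wire interface
```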

"In our test runs, a driver equipped with EEG sensors was able to control the car with no problem -- there was only a slight delay between the envisaged commands and the response of the car," said Prof. Raúl Rojas, who heads the AutoNOMOS project at Freie Universität Berlin. In a second test version, the car drove largely automatically, but via the EEG sensors the driver was able to determine the direction at intersections.

The AutoNOMOS Project at Freie Universität Berlin is studying the technology for the autonomous vehicles of the future. With the EEG experiments they investigate hybrid control approaches, i.e., those in which people work with machines.

The computer scientists have made a short film about their research, which is available at: http://tinyurl.com/BrainDriver

Sunday, September 26, 2010

Turning Thoughts Into Words: A New Approach Allows More Information to Be Extracted from the Brain


Brain-computer interfaces could someday provide a lifeline to "locked-in" patients, who are unable to talk or move but are aware and awake. Many of these patients can communicate by blinking their eyes, but turning blinks into words is time-consuming and exhausting.

Scientists in Utah have now demonstrated a way to determine which of 10 distinct words a person is thinking by recording the electrical activity from the surface of the brain.
Brain interface: The microelectrodes shown here were used to record brain signals in order to decode ten words from a patient's thoughts. (Credit: Spencer Kellis, University of Utah)


The new technique involves training algorithms to recognize specific brain signals picked up by an array of nonpenetrating electrodes placed over the language centers of the brain, says Spencer Kellis, one of the bioengineers who carried out the work at the University of Utah, in Salt Lake City. The approach used is known as electrocorticography (ECoG). The group was able to identify the words "yes," "no," "hot," "cold," "thirsty," "hungry," "hello," "goodbye," "more," and "less" with an accuracy of 48 percent.

"The accuracy definitely needs to be improved," says Kellis. "But we have shown the information is there."

Individual words have been decoded from brain signals in the past using functional magnetic resonance imaging (fMRI), says Eric Leuthardt, director of the Center for Innovation in Neuroscience and Technology at Washington University School of Medicine in St. Louis, Missouri. This is the first time that the feat has been performed using ECoG, a far more practical and portable approach than fMRI, he says.

Working with colleagues Bradley Greger and Paul House, Kellis placed 16 electrodes on the surface of the brain of a patient being treated for epilepsy. The electrodes recorded signals from the facial motor cortex--an area of the brain that controls face muscles during speech--and over Wernicke's area, part of the cerebral cortex that is linked with language. To train the algorithm, signals were analyzed as the patient was asked to repeatedly utter the 10 words.

ECoG has long been used to locate the source of epileptic seizures in the brain. But the electrodes used are typically several hundred microns in size and are positioned centimeters apart, says Kellis. "The brain is doing processing at a much finer spatial scale than is really detectable by these standard clinical electrodes," he says. The Utah team used a new type of microelectrode array developed by PMT Neurosurgical. The electrodes are much smaller--40 microns in size--and are separated by a couple of millimeters.

It's possible to use less invasive techniques, such as electroencephalography (EEG), which places electrodes on the scalp, to enable brain-to-computer communications. Adrian Owen, a senior scientist in the Cognition and Brain Sciences Unit at the University of Cambridge, UK, has shown that EEG signals can be used to allow people in a persistent vegetative state to communicate "yes" and "no."

But with EEG, many of the signals are filtered out by the skull, says Leuthardt. "What's really nice about ECoG is its potential to give us a lot more information," he says.

Decoding 10 words is "very cool," says Owen, but the accuracy will need to improve dramatically, given the patients the technology is aimed at. "I don't think even 60 percent or 70 percent accuracy is going to work for patients who cannot communicate in any other way and where there is no other margin for verification," he says.

Ultimately, the hope is that ECoG will enable much more sophisticated communication. Last year Leuthardt showed that ECoG could be used to decode vowel and consonant sounds--an approach that might eventually be used to reconstruct a much larger number of complete words.

Wednesday, September 8, 2010

The Brain Speaks: Scientists Decode Words from Brain Signals


In an early step toward letting severely paralyzed people speak with their thoughts, University of Utah researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain.
This photo shows two kinds of electrodes sitting atop a severely epileptic patient's brain after part of his skull was removed temporarily. The larger, numbered, button-like electrodes are ECoGs used by surgeons to locate and then remove brain areas responsible for severe epileptic seizures. While the patient had to undergo that procedure, he volunteered to let researchers place two small grids -- each with 16 tiny "microECoG" electrodes -- over two brain areas responsible for speech. These grids are at the end of the green and orange wire bundles, and the grids are represented by two sets of 16 white dots since the actual grids cannot be seen easily in the photo. University of Utah scientists used the microelectrodes to translate speech-related brain signals into actual words -- a step toward future machines to allow severely paralyzed people to speak. (Credit: University of Utah Department of Neurosurgery)

"We have been able to decode spoken words using only signals from the brain with a device that has promise for long-term use in paralyzed patients who cannot now speak," says Bradley Greger, an assistant professor of bioengineering.

Because the method needs much more improvement and involves placing electrodes on the brain, he expects it will be a few years before clinical trials on paralyzed people who cannot speak due to so-called "locked-in syndrome."

The Journal of Neural Engineering's September issue is publishing Greger's study showing the feasibility of translating brain signals into computer-spoken words.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy -- temporary partial skull removal -- so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals -- such as those generated when the man said the words "yes" and "no" -- they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time -- better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person's thoughts into words spoken by a computer.

"This is proof of concept," Greger says, "We've proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful."

People who eventually could benefit from a wireless device that converts thoughts into computer-spoken words include those paralyzed by stroke, Lou Gehrig's disease and trauma, Greger says. People who are now "locked in" often communicate with any movement they can make -- blinking an eye or moving a hand slightly -- to arduously pick letters or words from a list.

University of Utah colleagues who conducted the study with Greger included electrical engineers Spencer Kellis, a doctoral student, and Richard Brown, dean of the College of Engineering; and Paul House, an assistant professor of neurosurgery. Another coauthor was Kai Miller, a neuroscientist at the University of Washington in Seattle.

The research was funded by the National Institutes of Health, the Defense Advanced Research Projects Agency, the University of Utah Research Foundation and the National Science Foundation.

Nonpenetrating Microelectrodes Read Brain's Speech Signals

The study used a new kind of nonpenetrating microelectrode that sits on the brain without poking into it. These electrodes are known as microECoGs because they are a small version of the much larger electrodes used for electrocorticography, or ECoG, developed a half century ago.

For patients with severe epileptic seizures uncontrolled by medication, surgeons remove part of the skull and place a silicone mat containing ECoG electrodes over the brain for days to weeks while the cranium is held in place but not reattached. The button-sized ECoG electrodes don't penetrate the brain but detect abnormal electrical activity and allow surgeons to locate and remove a small portion of the brain causing the seizures.

Last year, Greger and colleagues published a study showing the much smaller microECoG electrodes could "read" brain signals controlling arm movements. One of the epileptic patients involved in that study also volunteered for the new study.

Because the microelectrodes do not penetrate brain matter, they are considered safe to place on speech areas of the brain -- something that cannot be done with penetrating electrodes that have been used in experimental devices to help paralyzed people control a computer cursor or an artificial arm.

EEG electrodes used on the skull to record brain waves are too big and record too many brain signals to be used easily for decoding speech signals from paralyzed people.

Translating Nerve Signals into Words

In the new study, the microelectrodes were used to detect weak electrical signals from the brain generated by a few thousand neurons or nerve cells.

Each of two grids with 16 microECoGs spaced 1 millimeter (about one-25th of an inch) apart, was placed over one of two speech areas of the brain: First, the facial motor cortex, which controls movements of the mouth, lips, tongue and face -- basically the muscles involved in speaking. Second, Wernicke's area, a little understood part of the human brain tied to language comprehension and understanding.

The study was conducted during one-hour sessions on four consecutive days. Researchers told the epilepsy patient to repeat one of the 10 words each time they pointed at the patient. Brain signals were recorded via the two grids of microelectrodes. Each of the 10 words was repeated from 31 to 96 times, depending on how tired the patient was. Then the researchers "looked for patterns in the brain signals that correspond to the different words" by analyzing changes in strength of different frequencies within each nerve signal, says Greger.
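
The frequency analysis Greger describes can be pictured as extracting band-power features from each electrode and feeding them to a multi-class classifier over the 10 words. The following is a hedged sketch of that idea under illustrative assumptions; the band edges, classifier and data arrays are not the team's actual choices:

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

WORDS = ["yes", "no", "hot", "cold", "hungry", "thirsty",
         "hello", "goodbye", "more", "less"]

def electrode_band_powers(trial, fs=1000):
    """Per-electrode power in a few frequency bands for one word repetition.

    trial: array of shape (n_electrodes, n_samples); the bands are illustrative.
    """
    freqs, psd = welch(trial, fs=fs, axis=-1)
    bands = [(8, 30), (30, 70), (70, 150)]
    return np.concatenate(
        [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]
    )

# trials: hypothetical list of (n_electrodes, n_samples) recordings, one per repetition;
# labels: hypothetical integer index into WORDS for the word spoken on each repetition.
X = np.stack([electrode_band_powers(t) for t in trials])
y = np.asarray(labels)

# Cross-validated 10-way accuracy; chance level for 10 words is 1/10 = 10%.
scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5)
print(f"mean 10-way accuracy: {scores.mean():.2f} (chance 0.10)")
```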

The researchers found that each spoken word produced varying brain signals, and thus the pattern of electrodes that most accurately identified each word varied from word to word. They say that supports the theory that closely spaced microelectrodes can capture signals from single, column-shaped processing units of neurons in the brain.

One unexpected finding: When the patient repeated words, the facial motor cortex was most active and Wernicke's area was less active. Yet Wernicke's area "lit up" when the patient was thanked by researchers after repeating words. It shows Wernicke's area is more involved in high-level understanding of language, while the facial motor cortex controls facial muscles that help produce sounds, Greger says.

The researchers were most accurate -- 85 percent -- in distinguishing brain signals for one word from those for another when they used signals recorded from the facial motor cortex. They were less accurate -- 76 percent -- when using signals from Wernicke's area. Combining data from both areas didn't improve accuracy, showing that brain signals from Wernicke's area don't add much to those from the facial motor cortex.

When the scientists selected the five microelectrodes on each 16-electrode grid that were most accurate in decoding brain signals from the facial motor cortex, their accuracy in distinguishing one of two words from the other rose to almost 90 percent.

In the more difficult test of distinguishing brain signals for one word from signals for the other nine words, the researchers initially were accurate 28 percent of the time -- not good, but better than the 10 percent random chance of accuracy. However, when they focused on signals from the five most accurate electrodes, they identified the correct word almost half (48 percent) of the time.
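
Selecting the most informative electrodes, as described above, can be sketched as ranking each electrode by how well its features alone decode the words and then keeping the top few. Again this is an illustration of the idea rather than the study's procedure, and the data arrays are hypothetical:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def rank_electrodes(features_by_electrode, y, cv=5):
    """Rank electrodes by single-electrode cross-validated decoding accuracy.

    features_by_electrode: array of shape (n_trials, n_electrodes, n_features).
    Returns electrode indices sorted from most to least informative.
    """
    n_electrodes = features_by_electrode.shape[1]
    scores = [
        cross_val_score(LinearSVC(max_iter=10000),
                        features_by_electrode[:, e, :], y, cv=cv).mean()
        for e in range(n_electrodes)
    ]
    return np.argsort(scores)[::-1]

# Keep the five most informative electrodes and re-score on their combined features.
# features_by_electrode and y are hypothetical arrays built from the recordings.
best = rank_electrodes(features_by_electrode, y)[:5]
X_top = features_by_electrode[:, best, :].reshape(len(y), -1)
top_scores = cross_val_score(LinearSVC(max_iter=10000), X_top, y, cv=5)
print(f"accuracy with 5 best electrodes: {top_scores.mean():.2f}")
```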

"It doesn't mean the problem is completely solved and we can all go home," Greger says. "It means it works, and we now need to refine it so that people with locked-in syndrome could really communicate."

"The obvious next step -- and this is what we are doing right now -- is to do it with bigger microelectrode grids" with 121 micro electrodes in an 11-by-11 grid, he says. "We can make the grid bigger, have more electrodes and get a tremendous amount of data out of the brain, which probably means more words and better accuracy."