Showing posts with label Brain Injury. Show all posts

Friday, October 10, 2014

Manipulating memory with light: Scientists erase specific memories in mice


Just look into the light: not quite, but researchers at the UC Davis Center for Neuroscience and Department of Psychology have used light to erase specific memories in mice, and proved a basic theory of how different parts of the brain work together to retrieve episodic memories.
During memory retrieval, cells in the hippocampus connect to cells in the brain cortex. (Credit: Photo illustration by Kazumasa Tanaka and Brian Wiltgen/UC Davis)
Optogenetics, pioneered by Karl Deisseroth at Stanford University, is a new technique for manipulating and studying nerve cells using light. The techniques of optogenetics are rapidly becoming the standard method for investigating brain function.

Kazumasa Tanaka, Brian Wiltgen and colleagues at UC Davis applied the technique to test a long-standing idea about memory retrieval. For about 40 years, Wiltgen said, neuroscientists have theorized that retrieving episodic memories -- memories about specific places and events -- involves coordinated activity between the cerebral cortex and the hippocampus, a small structure deep in the brain.

"The theory is that learning involves processing in the cortex, and the hippocampus reproduces this pattern of activity during retrieval, allowing you to re-experience the event," Wiltgen said. If the hippocampus is damaged, patients can lose decades of memories.

But this model has been difficult to test directly, until the arrival of optogenetics.

Wiltgen and Tanaka used mice genetically modified so that when nerve cells are activated, they both fluoresce green and express a protein that allows the cells to be switched off by light. They were therefore able both to follow exactly which nerve cells in the cortex and hippocampus were activated in learning and memory retrieval, and switch them off with light directed through a fiber-optic cable.

They trained the mice by placing them in a cage where they got a mild electric shock. Normally, mice placed in a new environment will nose around and explore. But when placed in a cage where they have previously received a shock, they freeze in place in a "fear response."

Tanaka and Wiltgen first showed that they could label the cells involved in learning and demonstrate that they were reactivated during memory recall. Then they were able to switch off the specific nerve cells in the hippocampus, and show that the mice lost their memories of the unpleasant event. They were also able to show that turning off other cells in the hippocampus did not affect retrieval of that memory, and to follow fibers from the hippocampus to specific cells in the cortex.

"The cortex can't do it alone, it needs input from the hippocampus," Wiltgen said. "This has been a fundamental assumption in our field for a long time and Kazu’s data provides the first direct evidence that it is true."

They could also see how the specific cells in the cortex were connected to the amygdala, a structure in the brain that is involved in emotion and in generating the freezing response.

Co-authors are Aleksandr Pevzner, Anahita B. Hamidi, Yuki Nakazawa and Jalina Graham, all at the Center for Neuroscience. The work was funded by grants from the Whitehall Foundation, McKnight Foundation, Nakajima Foundation and the National Science Foundation.

Story Source:
The above story is based on materials provided by University of California - Davis. Note: Materials may be edited for content and length.

Journal Reference:
Kazumasa Z. Tanaka, Aleksandr Pevzner, Anahita B. Hamidi, Yuki Nakazawa, Jalina Graham, Brian J. Wiltgen. Cortical Representations Are Reinstated by the Hippocampus during Memory Retrieval. Neuron, 2014 DOI: 10.1016/j.neuron.2014.09.037

Saturday, June 29, 2013

A Telescope for Your Eye: New Contact Lens Design May Improve Sight of Patients With Macular Degeneration


Contact lenses correct many people's eyesight but do nothing to improve the blurry vision of those suffering from age-related macular degeneration (AMD), the leading cause of blindness among older adults in the western world. That's because simply correcting the eye's focus cannot restore the central vision lost from a retina damaged by AMD. Now a team of researchers from the United States and Switzerland led by University of California San Diego Professor Joseph Ford has created a slim, telescopic contact lens that can switch between normal and magnified vision. With refinements, the system could offer AMD patients a relatively unobtrusive way to enhance their vision.

This image shows five views of the switchable telescopic contact lens. a) From front. b) From back. c) On the mechanical model eye. d) With liquid crystal glasses. Here, the glasses block the unmagnified central portion of the lens. e) With liquid crystal glasses. Here, the central portion is not blocked. (Credit: Optics Express)

The team reports its work in the Optical Society's (OSA) open-access journal Optics Express.

Visual aids that magnify incoming light help AMD patients see by spreading light around to undamaged parts of the retina. These optical magnifiers can assist patients with a variety of important everyday tasks such as reading, identification of faces, and self-care. But these aids have not gained widespread acceptance because they either use bulky spectacle-mounted telescopes that interfere with social interactions, or micro-telescopes that require surgery to implant into the patient's eye.

"For a visual aid to be accepted it needs to be highly convenient and unobtrusive," says co-author Eric Tremblay of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. A contact lens is an "attractive compromise" between the head-mounted telescopes and surgically implanted micro-telescopes, Tremblay says.

The new lens system developed by Ford's team uses tightly fitting mirror surfaces to make a telescope that has been integrated into a contact lens just over a millimeter thick. The lens has a dual modality: the center of the lens provides unmagnified vision, while the ring-shaped telescope located at the periphery of the regular contact lens magnifies the view 2.8 times.

To switch back and forth between the magnified view and normal vision, users would wear a pair of liquid crystal glasses originally made for viewing 3-D televisions. These glasses selectively block either the magnifying portion of the contact lens or its unmagnified center. The liquid crystals in the glasses electrically change the orientation of polarized light, allowing light with one orientation or the other to pass through the glasses to the contact lens.

The team tested their design both with computer modeling and by fabricating the lens. They also created a life-sized model eye that they used to capture images through their contact lens-eyeglasses system. In constructing the lens, researchers relied on a robust material commonly used in early contact lenses called polymethyl methacrylate (PMMA). The team needed that robustness because they had to place tiny grooves in the lens to correct for the chromatic aberration caused by the lens' shape, which is designed to conform to the human eye.

Tests showed that the magnified image quality through the contact lens was clear and provided a much larger field of view than other magnification approaches, but refinements are necessary before this proof-of-concept system could be used by consumers. The researchers report that the grooves used to correct color had the side effect of degrading image quality and contrast. These grooves also made the lens unwearable unless surrounded by a smooth, soft "skirt," something commonly used with rigid contact lenses today. Finally, the robust material they used, PMMA, is not ideal for contact lenses because it is gas-impermeable, limiting wear to short periods of time.

The team is currently pursuing a similar design that will still be switchable from normal to telescopic vision, but that will use gas-permeable materials and will correct chromatic aberration without the need for grooves to bend the light. They say they hope their design will offer improved performance and better sight for people with macular degeneration, at least until a more permanent remedy for AMD is available.

"In the future, it will hopefully be possible to go after the core of the problem with effective treatments or retinal prosthetics," Tremblay says. "The ideal is really for magnifiers to become unnecessary. Until we get there, however, contact lenses may provide a way to make AMD a little less debilitating."

Wednesday, June 12, 2013

Brain-Computer Interfaces: Just Wave a Hand


Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease.

This image shows the changes that took place in the brain for all patients participating in the study using a brain-computer interface. Changes in activity were distributed widely throughout the brain. (Credit: Jeremiah Wander, UW)
Now, University of Washington researchers have demonstrated that when humans use this technology -- called a brain-computer interface -- the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed.

"What we're seeing is that practice makes perfect with these tasks," said Rajesh Rao, a UW professor of computer science and engineering and a senior researcher involved in the study. "There's a lot of engagement of the brain's cognitive resources at the very beginning, but as you get better at the task, those resources aren't needed anymore and the brain is freed up."

Rao and UW collaborators Jeffrey Ojemann, a professor of neurological surgery, and Jeremiah Wander, a doctoral student in bioengineering, published their results online June 10 in the Proceedings of the National Academy of Sciences.

In this study, seven people with severe epilepsy were hospitalized for a monitoring procedure that tries to identify where in the brain seizures originate. Physicians cut through the scalp, drilled into the skull and placed a thin sheet of electrodes directly on top of the brain. While the physicians watched for seizure signals, the researchers ran their experiment.

The patients were asked to move a mouse cursor on a computer screen by using only their thoughts to control the cursor's movement. Electrodes on their brains picked up the signals directing the cursor to move, sending them to an amplifier and then a laptop to be analyzed. Within 40 milliseconds, the computer calculated the intentions transmitted through the signal and updated the movement of the cursor on the screen.
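A decoding loop of this kind — one short window of electrode signals in, one cursor update out — can be sketched roughly as follows. This is a hypothetical simplification for illustration, not the UW group's actual pipeline; the choice of feature (high-gamma band power) and the linear read-out weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def band_power(signal, fs=1000, lo=70, hi=100):
    """Crude band power of one channel via the FFT (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def decode_step(window, weights, bias=0.0):
    """Map one window of multi-channel signals to a cursor velocity."""
    features = np.array([band_power(ch) for ch in window])
    return float(features @ weights + bias)

# One 40 ms update: 8 electrodes, 40 samples each (40 ms at 1 kHz).
window = rng.normal(size=(8, 40))
weights = rng.normal(size=8) * 1e-4      # would be fit per patient in practice
velocity = decode_step(window, weights)
```

In a real interface this step would repeat continuously, with the weights calibrated to each patient's recorded activity.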

Researchers found that when patients started the task, a lot of brain activity was centered in the prefrontal cortex, an area associated with learning a new skill. But often after as little as 10 minutes, frontal brain activity lessened, and the brain signals transitioned to patterns similar to those seen during more automatic actions.

"Now we have a brain marker that shows a patient has actually learned a task," Ojemann said. "Once the signal has turned off, you can assume the person has learned it."

While researchers have demonstrated success in using brain-computer interfaces in monkeys and humans, this is the first study that clearly maps the neurological signals throughout the brain. The researchers were surprised at how many parts of the brain were involved.

"We now have a larger-scale view of what's happening in the brain of a subject as he or she is learning a task," Rao said. "The surprising result is that even though only a very localized population of cells is used in the brain-computer interface, the brain recruits many other areas that aren't directly involved to get the job done."

Several types of brain-computer interfaces are being developed and tested. The least invasive is a device placed on a person's head that can detect weak electrical signatures of brain activity. Basic commercial gaming products are on the market, but this technology isn't very reliable yet because signals from eye blinking and other muscle movements interfere too much.

A more invasive alternative is to surgically place electrodes inside the brain tissue itself to record the activity of individual neurons. Researchers at Brown University and the University of Pittsburgh have demonstrated this in humans as patients, unable to move their arms or legs, have learned to control robotic arms using the signal directly from their brain.

The UW team tested electrodes on the surface of the brain, underneath the skull. This allows researchers to record brain signals at higher frequencies and with less interference than measurements from the scalp. A future wireless device could be built to remain inside a person's head for a longer time to be able to control computer cursors or robotic limbs at home.

"This is one push as to how we can improve the devices and make them more useful to people," Wander said. "If we have an understanding of how someone learns to use these devices, we can build them to respond accordingly."

The research team, along with the National Science Foundation's Engineering Research Center for Sensorimotor Neural Engineering headquartered at the UW, will continue developing these technologies.

This research was funded by the National Institutes of Health, the NSF, the Army Research Office and the Keck Foundation. 



Friday, May 24, 2013

IQ Predicted by Ability to Filter Visual Motion


A brief visual task can predict IQ, according to a new study. This surprisingly simple exercise measures the brain's unconscious ability to filter out visual movement. The study shows that individuals whose brains are better at automatically suppressing background motion perform better on standard measures of intelligence. The test is the first purely sensory assessment to be strongly correlated with IQ and may provide a non-verbal and culturally unbiased tool for scientists seeking to understand neural processes associated with general intelligence.
Intelligence is closely linked to a person's ability to filter out background movement, according to a new cognitive science study from the University of Rochester. (Credit: J. Adam Fenster, University of Rochester)

"Because intelligence is such a broad construct, you can't really track it back to one part of the brain," says Duje Tadin, a senior author on the study and an assistant professor of brain and cognitive sciences at the University of Rochester. "But since this task is so simple and so closely linked to IQ, it may give us clues about what makes a brain more efficient, and, consequently, more intelligent."

The unexpected link between IQ and motion filtering was reported online in the Cell Press journal Current Biology on May 23 by a research team led by Tadin and Michael Melnick, a doctoral candidate in brain and cognitive sciences at the University of Rochester.

In the study, individuals watched brief video clips of black and white bars moving across a computer screen. Their sole task was to identify which direction the bars drifted: to the right or to the left. The bars were presented in three sizes, with the smallest version restricted to the central circle where human motion perception is known to be optimal, an area roughly the width of the thumb when the hand is extended. Participants also took a standardized intelligence test.
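The moving-bar stimulus described above is what vision scientists call a drifting grating. As a minimal sketch of how such a stimulus can be generated — the size, spatial frequency, and frame count below are illustrative, not the study's actual parameters:

```python
import numpy as np

def drifting_grating(size=64, cycles=4, n_frames=10, direction=+1):
    """Return n_frames of a sinusoidal grating of vertical bars.
    direction (+1 or -1) flips the sign of the per-frame phase step,
    reversing the apparent drift. Pixel values lie in [-1, 1]."""
    x = np.linspace(0, 2 * np.pi * cycles, size)
    frames = []
    for f in range(n_frames):
        phase = direction * 2 * np.pi * f / n_frames
        row = np.sin(x + phase)              # one horizontal slice
        frames.append(np.tile(row, (size, 1)))  # bars are uniform vertically
    return np.stack(frames)

movie = drifting_grating(direction=-1)
```

Varying `size` would mimic the study's manipulation of small versus large images.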

As expected, people with higher IQ scores were faster at catching the movement of the bars when observing the smallest image. The results support prior research showing that individuals with higher IQs make simple perceptual judgments more swiftly and have faster reflexes. "Being 'quick witted' and 'quick on the draw' generally go hand in hand," says Melnick.

But the tables turned when participants were shown the larger images. The higher a person's IQ, the slower they were at detecting movement. "From previous research, we expected that all participants would be worse at detecting the movement of large images, but high IQ individuals were much, much worse," says Melnick. That counter-intuitive inability to perceive large moving images is a perceptual marker for the brain's ability to suppress background motion, the authors explain. In most scenarios, background movement is less important than small moving objects in the foreground. Think about driving in a car, walking down a hall, or even just moving your eyes across the room. The background is constantly in motion.

The key discovery in this study is how closely this natural filtering ability is linked to IQ. The first experiment found a 64 percent correlation between motion suppression and IQ scores, a much stronger relationship than other sensory measures to date. For example, research on the relationship between intelligence and color discrimination, sensitivity to pitch, and reaction times has found only a 20 to 40 percent correlation. "In our first experiment, the effect for motion was so strong," recalls Tadin, "that I really thought this was a fluke."

So the group tried to disprove the findings from the initial 12-participant study, conducted while Tadin was at Vanderbilt University working with co-author Sohee Park, a professor of psychology. They reran the experiment at the University of Rochester on a new cohort of 53 subjects, administering the full IQ test instead of an abbreviated version. The results were even stronger: the correlation rose to 71 percent. The authors also tested for other possible explanations for their findings.

For example, did the surprising link to IQ simply reflect a person's willful decision to focus on small moving images? To rule out the effect of attention, the second round of experiments randomly ordered the different image sizes and tested other types of large images that have been shown not to elicit suppression. High IQ individuals continued to be quicker on all tasks, except the ones that isolated motion suppression. The authors concluded that high IQ is associated with automatic filtering of background motion.

"We know from prior research which parts of the brain are involved in visual suppression of background motion. This new link to intelligence provides a good target for looking at what is different about the neural processing, what's different about the neurochemistry, what's different about the neurotransmitters of people with different IQs," says Tadin.

The relationship between IQ and motion suppression points to the fundamental cognitive processes that underlie intelligence, the authors write. The brain is bombarded by an overwhelming amount of sensory information, and its efficiency is built not only on how quickly our neural networks process these signals, but also on how good they are at suppressing less meaningful information. "Rapid processing is of little utility unless it is restricted to the most relevant information," the authors conclude.

The researchers point out that this vision test could remove some of the limitations associated with standard IQ tests, which have been criticized for cultural bias. "Because the test is simple and non-verbal, it will also help researchers better understand neural processing in individuals with intellectual and developmental disabilities," says co-author Loisa Bennetto, an associate professor of psychology at the University of Rochester.

Bryan Harrison, a doctoral candidate in clinical and social psychology at the University of Rochester, is also an author on the paper. The research was supported by grants from the National Institutes of Health.

Wednesday, April 17, 2013

Bad Decisions Arise from Faulty Information, Not Faulty Brain Circuits


Making decisions involves a gradual accumulation of facts that support one choice or another. A person choosing a college might weigh factors such as course selection, institutional reputation and the quality of future job prospects.

But if the wrong choice is made, Princeton University researchers have found that it might be the information rather than the brain's decision-making process that is to blame. The researchers report in the journal Science that erroneous decisions tend to arise from errors, or "noise," in the information coming into the brain rather than errors in how the brain accumulates information.

These findings address a fundamental question among neuroscientists about whether bad decisions result from noise in the external information -- or sensory input -- or because the brain made mistakes when tallying that information. In the example of choosing a college, the question might be whether a person made a poor choice because of misleading or confusing course descriptions, or because the brain failed to remember which college had the best ratings.

Previous measurements of brain neurons have indicated that brain functions are inherently noisy. The Princeton research, however, separated sensory inputs from the internal mental process to show that the former can be noisy while the latter is remarkably reliable, said senior investigator Carlos Brody, a Princeton associate professor of molecular biology and the Princeton Neuroscience Institute (PNI), and a Howard Hughes Medical Institute Investigator.

"To our great surprise, the internal mental process was perfectly noiseless. All of the imperfections came from noise in the sensory processes," Brody said. Brody worked with first author Bingni Brunton, now a postdoctoral research associate in the departments of biology and applied mathematics at the University of Washington; and Matthew Botvinick, a Princeton associate professor of psychology and PNI.

The research subjects -- four college-age volunteers and 19 laboratory rats -- listened to streams of randomly timed clicks coming into both the left ear and the right ear. After listening to a stream, the subjects had to choose the side from which more clicks originated. The rats had been trained to turn their noses in the direction from which more clicks originated.

The test subjects mostly chose the correct side but occasionally made errors. By comparing various patterns of clicks with the volunteers' responses, researchers found that all of the errors arose when two clicks overlapped, and not from any observable noise in the brain system that tallied the clicks. This was true in experiment after experiment utilizing different click patterns, in humans and rats.
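The logic of the experiment — two streams of randomly timed clicks, a running tally, and errors arising only when clicks overlap — can be sketched as a toy simulation. This is an illustrative model, not the authors' published one; the click rates, trial duration, and merge window below are invented parameters.

```python
import random

def simulate_trial(left_rate, right_rate, duration=1.0,
                   merge_window=0.01, seed=0):
    """Tally clicks from two Poisson streams. Clicks arriving within
    merge_window of the previous click are lost (the 'sensory noise');
    the tally itself is noiseless, matching the study's conclusion."""
    rng = random.Random(seed)

    def poisson_times(rate):
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t > duration:
                return times
            times.append(t)

    events = sorted([(t, +1) for t in poisson_times(right_rate)] +
                    [(t, -1) for t in poisson_times(left_rate)])

    tally, last_t = 0, float("-inf")
    for t, side in events:
        if t - last_t < merge_window:   # overlapping clicks merge: evidence lost
            last_t = t
            continue
        tally += side
        last_t = t
    return "right" if tally > 0 else "left"

choice = simulate_trial(left_rate=20, right_rate=40)
```

Run over many trials, such a model mostly answers correctly, with its occasional errors traceable entirely to merged clicks rather than to the tallying step.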

The researchers used the timing of the clicks and the decision-making behavior of the test subjects to create computer models that can be used to indicate what happens in the brain during decision-making. The models provide a clear window into the brain during the "mulling over" period of decision-making, the time when a person is accumulating information but has yet to choose, Brody said.

"Before we conducted this study, we did not have a way of looking at this process without inserting electrodes into the brain," Brody said. "Now thanks to our model, we have an estimation of what is going on at each moment in time during the formation of the decision."

The study suggests that information represented and processed in the brain's neurons must be robust to noise, Brody said. "In other words, the 'neural code' may have a mechanism for inherent error correction," he said.

"The new work from the Brody lab is important for a few reasons," said Anne Churchland, an assistant professor of biological sciences at Cold Spring Harbor Laboratory who studies decision-making and was not involved in the study. "First, the work was very innovative because the researchers were able to study carefully controlled decision-making behavior in rodents. This is surprising in that one might have guessed rodents were incapable of producing stable, reliable decisions that are based on complex sensory stimuli.

"This work exposed some unexpected features of why animals, including humans, sometimes make incorrect decisions," Churchland said. "Specifically, the researchers found that errors are mostly driven by the inability to accurately encode sensory information. Alternative possibilities, which the authors ruled out, included noise associated with holding the stimulus in mind, or memory noise, and noise associated with a bias toward one alternative or the other."

The work was funded by the Howard Hughes Medical Institute, Princeton University and National Institutes of Health training grants.

Wednesday, July 18, 2012

Visual Searches: Human Brain Beats Computers


You're headed out the door and you realize you don't have your car keys. After a few minutes of rifling through pockets, checking the seat cushions and scanning the coffee table, you find the familiar key ring and off you go. Easy enough, right? What you might not know is that the task that took you a couple of seconds to complete is a task that computers -- despite decades of advancement and intricate calculations -- still can't perform as efficiently as humans: the visual search.

Part of the research team in front of the Magnetic Resonance Imaging (MRI) device at the UCSB Brain Imaging Center. From left to right: Researcher Tim Preston; Associate Professor of Psychological & Brain Sciences Barry Giesbrecht; and Professor of Psychological & Brain Sciences Miguel P. Eckstein. Not pictured: Koel Das, now a faculty member at the Indian Institute of Science in Bangalore, Karnataka, India; and lead author Fei Guo, now in the software industry. (Credit: Image courtesy of University of California - Santa Barbara)


"Our daily lives are composed of little searches that are constantly changing, depending on what we need to do," said Miguel Eckstein, UC Santa Barbara professor of psychological and brain sciences and co-author of the recently released paper "Feature-Independent Neural Coding of Target Detection during Search of Natural Scenes," published in the Journal of Neuroscience. "So the idea is, where does that take place in the brain?"

A large part of the human brain is dedicated to vision, with different parts involved in processing the many visual properties of the world. Some parts are stimulated by color, others by motion, yet others by shape.

However, those parts of the brain tell only a part of the story. What Eckstein and co-authors wanted to determine was how we decide whether the target object we are looking for is actually in the scene, how difficult the search is, and how we know we've found what we wanted.

They found their answers in the dorsal frontoparietal network, a region of the brain that roughly corresponds to the top of one's head, and is also associated with properties such as attention and eye movements. In the parts of the human brain used earlier in the processing stream, regions stimulated by specific features like color, motion, and direction are a major part of the search. However, in the dorsal frontoparietal network, activity is not confined to any specific features of the object.

"It's flexible," said Eckstein. Using 18 observers, an MRI machine, and hundreds of photos of scenes flashed before the observers with instructions to look for certain items, the scientists monitored their subjects' brain activity. By watching the intraparietal sulcus (IPS), located within the dorsal frontoparietal network, the researchers were able to note not only whether their subjects found the objects, but also how confident they were in their finds.

The IPS region would be stimulated even if the object was not there, said Eckstein, but the pattern of activity would not be the same as it would had the object actually existed in the scene. The pattern of activity was consistent, even though the 368 different objects the subjects searched for were defined by very different visual features. This, Eckstein said, indicates that IPS did not rely on the presence of any fixed feature to determine the presence or absence of various objects. Other visual regions did not show this consistent pattern of activity across objects.
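Distinguishing "target present" from "target absent" by the pattern of activity, rather than by any single feature response, is the kind of result obtained with pattern classification of imaging data. The toy sketch below uses a nearest-centroid classifier on synthetic activity vectors; it is not the study's analysis method or data, and every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "activity patterns": 40 target-absent and 40 target-present
# trials over 50 voxels; present trials add a shared pattern shift.
n_vox = 50
template = rng.normal(size=n_vox)            # hypothetical present-vs-absent pattern
absent   = rng.normal(size=(40, n_vox))
present  = rng.normal(size=(40, n_vox)) + 0.8 * template

def nearest_centroid(train_absent, train_present, x):
    """Label a held-out trial by its distance to each class mean."""
    d_a = np.linalg.norm(x - train_absent.mean(axis=0))
    d_p = np.linalg.norm(x - train_present.mean(axis=0))
    return "present" if d_p < d_a else "absent"

# Leave-one-out check over the present trials
correct = sum(
    nearest_centroid(absent, np.delete(present, i, axis=0), present[i]) == "present"
    for i in range(len(present))
)
```

The point of the sketch is that the classifier needs no fixed feature, only a consistent pattern shift, mirroring the feature-independence the researchers observed in the IPS.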

"As you go further up in processing, the neurons are less interested in a specific feature, but they're more interested in whatever is behaviorally relevant to you at the moment," said Eckstein. Thus, a search for an apple, for instance, would make red, green, and rounded shapes relevant. If the search were for your car keys, the intraparietal sulcus would now be interested in gold, silver, and key-type shapes and not interested in green, red, and rounded shapes.

"For visual search to be efficient, we want those visual features related to what we are looking for to elicit strong responses in our brain and not others that are not related to our search, and are distracting," Eckstein added. "Our results suggest that this is what is achieved in the intraparietal sulcus, and allows for efficient visual search."

For Eckstein and colleagues, these findings are just the tip of the iceberg. Future research will dig more deeply into the seemingly simple yet essential ability of humans to do a visual search and how they can use the layout of a scene to guide their search.

"What we're trying to really understand is what other mechanisms or strategies the brain has to make searches efficient and easy," said Eckstein. "What part of the brain is doing that?"

Research on this study was also conducted by Tim Preston, Koel Das, Barry Giesbrecht, and first author Fei Guo, all from UC Santa Barbara.

Saturday, July 7, 2012

Diabetes Drug Makes Brain Cells Grow


The widely used diabetes drug metformin comes with a rather unexpected and alluring side effect: it encourages the growth of new neurons in the brain. The study, reported in the July 6th issue of Cell Stem Cell, a Cell Press publication, also finds that these neural effects of the drug make mice smarter.

New research finds that the widely used diabetes drug metformin comes with a rather unexpected and alluring side effect: it encourages the growth of new neurons in the brain. (Credit: iStockphoto/Guido Vrola)
The discovery is an important step toward therapies that aim to repair the brain not by introducing new stem cells but rather by spurring those that are already present into action, says the study's lead author Freda Miller of the University of Toronto-affiliated Hospital for Sick Children. That the drug is so widely used and so safe makes the news all the better.

Earlier work by Miller's team highlighted a pathway known as aPKC-CBP for its essential role in telling neural stem cells where and when to differentiate into mature neurons. As it happened, others had found before them that the same pathway is important for the metabolic effects of the drug metformin, but in liver cells.

"We put two and two together," Miller says. If metformin activates the CBP pathway in the liver, they thought, maybe it could also do that in neural stem cells of the brain to encourage brain repair.

The new evidence lends support to that promising idea in both mouse brains and human cells. Mice taking metformin not only showed an increase in the birth of new neurons, but they were also better able to learn the location of a hidden platform in a standard maze test of spatial learning.

While it remains to be seen whether the very popular diabetes drug might already be serving as a brain booster for those who are now taking it, there are already some early hints that it may have cognitive benefits for people with Alzheimer's disease. It had been thought those improvements were the result of better diabetes control, Miller says, but it now appears that metformin may improve Alzheimer's symptoms by enhancing brain repair.

Miller says they now hope to test whether metformin might help repair the brains of those who have suffered brain injury due to trauma or radiation therapies for cancer.

Sunday, July 10, 2011

A Change of Heart: Researchers Reprogram Brain Cells to Become Heart Cells


For the past decade, researchers have tried to reprogram the identity of all kinds of cell types. Heart cells are one of the most sought-after cells in regenerative medicine because researchers anticipate that they may help to repair injured hearts by replacing lost tissue. Now, researchers at the Perelman School of Medicine at the University of Pennsylvania are the first to demonstrate the direct conversion of a non-heart cell type into a heart cell by RNA transfer.
Cardiomyocyte (center), showing protein distribution (green and red colors) indicative of a young cardiomyocyte. (Credit: Tae Kyung Kim, PhD, Perelman School of Medicine, University of Pennsylvania)

Working on the idea that the signature of a cell is defined by molecules called messenger RNAs (mRNAs), which contain the chemical blueprint for how to make a protein, the investigators changed two different cell types, an astrocyte (a star-shaped brain cell) and a fibroblast (a skin cell), into a heart cell, using mRNAs.

James Eberwine, PhD, the Elmer Holmes Bobst Professor of Pharmacology, Tae Kyung Kim, PhD, post-doctoral fellow, and colleagues report their findings online in the Proceedings of the National Academy of Sciences. This approach offers the possibility for cell-based therapy for cardiovascular diseases.

"What's new about this approach for heart-cell generation is that we directly converted one cell type to another using RNA, without an intermediate step," explains Eberwine. The scientists put an excess of heart cell mRNAs into either astrocytes or fibroblasts using lipid-mediated transfection, and the host cell does the rest. These RNA populations (through translation or by modulation of the expression of other RNAs) direct DNA in the host nucleus to change the cell's RNA populations to that of the destination cell type (heart cell, or tCardiomyocyte), which in turn changes the phenotype of the host cell into the destination cell.



The method the group used, called Transcriptome Induced Phenotype Remodeling, or TIPeR, is distinct from the induced pluripotent stem cell (iPS) approach used by many labs in that host cells do not have to be dedifferentiated to a pluripotent state and then redifferentiated with growth factors to the destination cell type. TIPeR is more similar to prior nuclear-transfer work, in which the nucleus of one cell is transferred into another cell, whereupon the transferred nucleus directs the cell to change its phenotype based upon the RNAs that are made. The tCardiomyocyte work follows directly from earlier work from the Eberwine lab, where neurons were converted into tAstrocytes using the TIPeR process.

The team first extracted mRNA from a heart cell, then put it into host cells. Because there are now so many more heart-cell mRNAs versus astrocyte or fibroblast mRNAs, they take over the indigenous RNA population. The heart-cell mRNAs are translated into heart-cell proteins in the cell cytoplasm. These heart-cell proteins then influence gene expression in the host nucleus so that heart-cell genes are turned on and heart-cell-enriched proteins are made.

To track the change from an astrocyte to a heart cell, the team looked at the new cells' RNA profile using single-cell microarray analysis; cell shape; and immunological and electrical properties. While TIPeR-generated tCardiomyocytes are of significant use in fundamental science, it is easy to envision their potential use to screen for heart-cell therapeutics, say the study authors. What's more, creation of tCardiomyocytes from patients would permit personalized screening for efficacy of drug treatments; screening of new drugs; and potentially use as a cellular therapeutic.

These studies were enabled through the collaboration of a number of investigators spanning multiple disciplines including Vickas Patel, MD and Nataliya Peternko from the Division of Cardiovascular Medicine, Miler Lee, PhD and Junhyong Kim, PhD from the Department of Biology and Jai-Yoon Sul, PhD and Jae Hee Lee, PhD also from the Department of Pharmacology, all from Penn. This work was funded by grants from the W. M. Keck Foundation, the National Institutes of Health Director's Office, and the Commonwealth of Pennsylvania.

Saturday, November 27, 2010

Do Brain's 'Traffic Lights' Direct Our Actions?


In every waking minute, we have to make decisions -- sometimes within a split second. Neuroscientists at the Bernstein Center Freiburg have now discovered a possible explanation of how the brain chooses between alternative options. The key lies in extremely fast changes in the communication between single nerve cells.
The timing of exciting (red curve) and inhibiting (blue curve) signals could be a way to control the "traffic flow" of activity in the brain. (Credit: Illustration courtesy of Bernstein Center Freiburg)

The traffic light changes from green to orange -- should I push down the accelerator a little bit further or rather hit the brakes? Our daily lives present a long series of decisions we have to make, and sometimes we only have a split second at our disposal. Often the problem of decision-making entails selecting one set of brain processes over multiple others seeking access to the same resources. Several mechanisms have been suggested for how the brain might solve this problem. However, until now it has been a mystery what exactly happens during a rapid choice between two options.

In the current issue of the Journal of Neuroscience, Jens Kremkow, Arvind Kumar, and Ad Aertsen from the Bernstein Center Freiburg propose a mechanism by which the brain can choose between possible actions -- already at the level of single nerve cells.

As the structure and activity of the brain are just too complex to answer this question through a simple biological experiment, the scientists constructed a network of neurons in the computer. An important aspect of the model in this context is the property of nerve cells to influence the activity of other nerve cells, either in an excitatory or inhibitory manner. In the constructed network, two groups of neurons acted as the senders of two different signals. Further downstream in the network, another group of neurons, the "gate" neurons, were to control which of the signals would be transmitted onward.

As the cells within the network were connected to both excitatory and inhibitory neurons, the signals reached the gate as excitatory and, after a short delay, inhibitory activity. In their simulations, the scientists found that the key to the gate neurons' "decision" in favour of one signal over the other was the time delay of the inhibitory signal relative to the excitatory signal. If the delay was set to be very small, the activity of the cells in the gate was quenched too quickly for the signal to be propagated.

Conversely, a larger delay caused the gate to open for the signal. Results from neurophysiological experiments have already shown that a change in delay properties is possible in real neurons. These findings therefore support the hypothesis of Kremkow and colleagues that such temporal gating can form the basis for selecting one of several alternative options in our brain.
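The gating principle can be illustrated with a toy simulation, far simpler than the authors' network model: a gate unit integrates an excitatory pulse against an identical inhibitory pulse arriving after a delay. With a short delay the net drive never reaches threshold and nothing propagates; with a longer delay the gate opens. All constants below are hypothetical.

```python
# Toy simulation of temporal gating (illustrative only): a gate unit
# receives an excitatory pulse, then an identical inhibitory pulse after
# a delay. The gate passes a signal only if net excitation accumulates
# past a threshold before inhibition cancels it.

def gate_output(delay_ms, pulse_ms=5.0, threshold=3.0, dt=0.1):
    """Integrate net drive over time; return True if the gate opens."""
    accumulated = 0.0
    t = 0.0
    while t < pulse_ms + delay_ms:
        excitation = 1.0 if t < pulse_ms else 0.0
        inhibition = 1.0 if delay_ms <= t < delay_ms + pulse_ms else 0.0
        accumulated += (excitation - inhibition) * dt
        if accumulated >= threshold:
            return True  # signal propagates onward
        t += dt
    return False  # inhibition quenched the signal too quickly

print(gate_output(delay_ms=1.0))  # short delay: gate stays closed
print(gate_output(delay_ms=4.0))  # longer delay: gate opens
```

Changing nothing but the excitation-to-inhibition delay flips the gate between blocking and transmitting, which is the core of the proposed selection mechanism.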

Admin's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Thursday, October 21, 2010

Human Brain Can 'See' Shapes With Sound: See No Shape, Touch No Shape, Hear a Shape? New Way of 'Seeing' the World


Scientists at The Montreal Neurological Institute and Hospital -- The Neuro, McGill University have discovered that our brains have the ability to determine the shape of an object simply by processing specially-coded sounds, without any visual or tactile input. Not only does this new research tell us about the plasticity of the brain and how it perceives the world around us, it also provides important new possibilities for aiding those who are blind or with impaired vision.
New research shows that the human brain is able to determine the shape of an object simply by processing specially-coded sounds, without any visual or tactile input. (Credit: iStockphoto/Sergey Chushkin)

Shape is an inherent property of objects existing in both vision and touch but not sound. Researchers at The Neuro posed the question 'can shape be represented by sound artificially?' "The fact that a property of sound such as frequency can be used to convey shape information suggests that as long as the spatial relation is coded in a systematic way, shape can be preserved and made accessible -- even if the medium via which space is coded is not spatial in its physical nature," says Jung-Kyong Kim, PhD student in Dr. Robert Zatorre's lab at The Neuro and lead investigator in the study.

In other words, similar to our ocean-dwelling dolphin cousins who use echolocation to explore their surroundings, our brains can be trained to recognize shapes represented by sound and the hope is that those with impaired vision could be trained to use this as a tool. In the study, blindfolded sighted participants were trained to recognize tactile spatial information using sounds mapped from abstract shapes. Following training, the individuals were able to match auditory input to tactually discerned shapes and showed generalization to new auditory-tactile or sound-touch pairings.
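One way such a sound code could work, sketched below, is to scan a shape left to right and map each column's contour height to the frequency of a pure tone, so pitch over time traces the outline. This is a hypothetical encoding for illustration; the study's actual mapping is not described here, and all parameters are invented.

```python
# Hypothetical sonification of a 2-D shape: scan left to right, mapping
# each column's normalized height to a tone frequency, so pitch over time
# traces the contour. All parameters are illustrative.

import math

def shape_to_frequencies(heights, f_min=200.0, f_max=2000.0):
    """Map normalized contour heights (0..1) to frequencies in Hz."""
    return [f_min + h * (f_max - f_min) for h in heights]

def contour_to_tone(heights, duration_s=1.0, sample_rate=8000):
    """Render the frequency sweep as raw audio samples (one tone per column)."""
    freqs = shape_to_frequencies(heights)
    samples_per_col = int(duration_s * sample_rate / len(freqs))
    samples = []
    for f in freqs:
        for n in range(samples_per_col):
            samples.append(math.sin(2 * math.pi * f * n / sample_rate))
    return samples

# A triangle's outline rises then falls; a listener would hear the
# pitch sweep up, then back down.
triangle = [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
tone = contour_to_tone(triangle)
print(len(tone))
```

The point of the sketch is Kim's observation: as long as spatial relations are coded systematically (here, height as frequency and horizontal position as time), shape information survives the trip through a non-spatial medium.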

"We live in a world where we perceive objects using information available from multiple sensory inputs," says Dr. Zatorre, neuroscientist at The Neuro and co-director of the International Laboratory for Brain Music and Sound Research. "On one hand, this organization leads to unique sense-specific percepts, such as colour in vision or pitch in hearing. On the other hand our perceptual system can integrate information present across different senses and generate a unified representation of an object. We can perceive a multisensory object as a single entity because we can detect equivalent attributes or patterns across different senses." Neuroimaging studies have identified brain areas that integrate information coming from different senses -- combining input from across the senses to create a complete and comprehensive picture.

The results from The Neuro study strengthen the hypothesis that our perception of a coherent object or event ultimately occurs at an abstract level beyond the sensory input modes in which it is presented. This research provides important new insight into how our brains process the world as well as new possibilities for those with impaired senses.

The study was published in the journal Experimental Brain Research. The research was supported by grants from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada.

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Wednesday, August 11, 2010

Brain's Wiring: More Like the Internet Than a Pyramid?


The brain has been mapped to the smallest fold for at least a century, but still no one knows how all the parts talk to each other.
New research suggests that the distributed network of the Internet may be a better model for the human brain than a top-down hierarchy. (Credit: iStockphoto/Henrik Jonsson)

A study in Proceedings of the National Academy of Sciences answers that question for a small area of the rat brain and in so doing takes a big step toward revealing the brain's wiring.

The network of brain connections was thought too complex to describe, but molecular biology and computing methods have improved to the point that the National Institutes of Health have announced a $30 million plan to map the human "connectome."

The study shows the power of a new method for tracing brain circuits.

USC College neuroscientists Richard H. Thompson and Larry W. Swanson used the method to trace circuits running through a "hedonic hot spot" related to food enjoyment.

The circuits showed up as patterns of circular loops, suggesting that at least in this part of the rat brain, the wiring diagram looks like a distributed network.

Neuroscientists are split between a traditional view that the brain is organized as a hierarchy, with most regions feeding into the "higher" centers of conscious thought, and a more recent model of the brain as a flat network similar to the Internet.

"We started in one place and looked at the connections. It led into a very complicated series of loops and circuits. It's not an organizational chart. There's no top and bottom to it," said Swanson, a member of the National Academy of Sciences and the Milo Don and Lucille Appleman Professor of Biological Sciences at USC College.

The circuit tracing method allows the study of incoming and outgoing signals from any two brain centers. It was invented and refined by Thompson over eight years. Thompson is a research assistant professor of biological sciences at the College.

Most other tracing studies at present focus only on one signal, in one direction, at one location.

"[We] can look at up to four links in a circuit, in the same animal at the same time. That was our technical innovation," Swanson said.

The Internet model would explain the brain's ability to overcome much local damage, Swanson said.

"You can knock out almost any single part of the Internet and the rest of it works."

Likewise, Swanson said, "There are usually alternate pathways through the nervous system. It's very hard to say that any one part is absolutely essential."
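Swanson's Internet analogy can be made concrete with a toy graph experiment: knock out one node and check whether the survivors can still reach each other. In the sketch below (the topologies are invented examples, not measured brain circuits), a hub hierarchy fragments when its hub is removed, while a ring of loops with an alternate pathway stays connected.

```python
# Illustrative contrast between a hub hierarchy and a distributed network
# of loops: remove one node and test whether the remaining nodes still
# form a single connected piece. Topologies are toy examples.

from collections import deque

def still_connected(edges, removed):
    """BFS over the surviving nodes; True if they form one connected piece."""
    nodes = {n for e in edges for n in e if n != removed}
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if removed not in (a, b):
            adj[a].add(b)
            adj[b].add(a)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nb in adj[queue.popleft()]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen == nodes

# Hierarchy: everything routes through a single hub "top".
hierarchy = [("top", "a"), ("top", "b"), ("top", "c")]
# Distributed: a ring of loops with an alternate pathway.
ring = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]

print(still_connected(hierarchy, removed="top"))  # False: hub loss fragments it
print(still_connected(ring, removed="a"))         # True: alternate paths remain
```

This is the robustness property Swanson points to: in a looped architecture, no single node is a bottleneck for the rest of the system.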

Swanson first argued for the distributed model of the brain in his acclaimed book Brain Architecture: Understanding the Basic Plan (Oxford University Press, 2003).

The PNAS study appears to support his view.

"There is an alternate model. It's not proven, but let's rethink the traditional way of regarding how the brain works," he said.

"The part of the brain you think with, the cortex, is very important, but it's certainly not the only part of the nervous system that determines our behavior."

The research described in the PNAS study was supported by the National Institute of Neurological Disorders and Stroke in the National Institutes of Health.

Friday, March 12, 2010

Computer Algorithm Able to 'Read' Memories


Computer programs have been able to predict which of three short films a person is thinking about, just by looking at their brain activity. The research, conducted by scientists at the Wellcome Trust Centre for Neuroimaging at UCL (University College London), provides further insight into how our memories are recorded.
To explore how memories are recorded, researchers showed volunteers three short films and asked them to memorize what they saw. The films were very simple, sharing a number of similar features -- all included a woman carrying out an everyday task in a typical urban street, and each film was the same length, seven seconds long. For example, one film showed a woman posting a letter. (Credit: Wellcome Trust Centre for Neuroimaging at UCL)

Professor Eleanor Maguire led this Wellcome Trust-funded study, an extension of work published last year which showed how spatial memories -- in that case, where a volunteer was standing in a virtual reality room -- are recorded in regular patterns of activity in the hippocampus, the area of the brain responsible for learning and memory.

"In our previous experiment, we were looking at basic memories, at someone's location in an environment," says Professor Maguire. "What is more interesting is to look at 'episodic' memories -- the complex, everyday memories that include much more information on where we are, what we are doing and how we feel."

To explore how such memories are recorded, the researchers showed ten volunteers three short films and asked them to memorise what they saw. The films were very simple, sharing a number of similar features -- all included a woman carrying out an everyday task in a typical urban street, and each film was the same length, seven seconds long. For example, one film showed a woman drinking coffee from a paper cup in the street before discarding the cup in a litter bin; another film showed a (different) woman posting a letter.

The volunteers were then asked to recall each of the films in turn whilst inside an fMRI scanner, which records brain activity by measuring changes in blood flow within the brain.

A computer algorithm then studied the patterns and had to identify which film the volunteer was recalling purely by looking at the pattern of their brain activity. The results are published in the journal Current Biology.

"The algorithm was able to predict correctly which of the three films the volunteer was recalling significantly above what would be expected by chance," explains Martin Chadwick, lead author of the study. "This suggests that our memories are recorded in a regular pattern."
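The logic of this kind of decoding can be sketched with a minimal classifier on simulated data. In the sketch below (the authors' actual algorithm and data are not reproduced here; the film names, voxel counts, and noise levels are invented), each film evokes a characteristic activity pattern, and a nearest-centroid rule guesses which film a noisy trial came from, well above the one-in-three chance level.

```python
# Sketch of pattern classification in the spirit of the study: each film
# evokes a characteristic "voxel" pattern; a nearest-centroid classifier
# guesses which film a noisy trial came from. Data are simulated.

import random

random.seed(0)
N_VOXELS = 50

# One underlying template pattern per film (names are invented)
templates = {film: [random.gauss(0, 1) for _ in range(N_VOXELS)]
             for film in ("letter", "coffee", "bicycle")}

def noisy_trial(film, noise=1.0):
    """A single recall trial: the film's template plus measurement noise."""
    return [v + random.gauss(0, noise) for v in templates[film]]

def classify(pattern):
    """Assign the pattern to the film whose template is nearest (Euclidean)."""
    def dist(film):
        return sum((a - b) ** 2 for a, b in zip(pattern, templates[film]))
    return min(templates, key=dist)

trials = [(film, noisy_trial(film)) for film in templates for _ in range(20)]
accuracy = sum(classify(p) == film for film, p in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.33)")
```

The classifier succeeds only because each film's pattern is reproducible across trials, which is exactly the "regular pattern" the study's above-chance decoding implies.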

Although a whole network of brain areas support memory, the researchers focused their study on the medial temporal lobe, an area deep within the brain believed to be most heavily involved in episodic memory. It includes the hippocampus -- an area which Professor Maguire and colleagues have studied extensively in the past.

They found that the key areas involved in recording the memories were the hippocampus and its immediate neighbours. However, the computer algorithm performed best when analysing activity in the hippocampus itself, suggesting that this is the most important region for recording episodic memories. In particular, three areas of the hippocampus -- the rear right and the front left and front right areas -- seemed to be involved consistently across all participants. The rear right area had been implicated in the earlier study, further reinforcing the idea that this is where spatial information is recorded. However, it is still not clear what role the front two regions play.

"Now that we are developing a clearer picture of how our memories are stored, we hope to examine how they are affected by time, the ageing process and by brain injury," says Professor Maguire.

Friday, February 26, 2010

Surprise! Neural Mechanism May Underlie an Enhanced Memory for the Unexpected


The human brain excels at using past experiences to make predictions about the future. However, the world around us is constantly changing, and new events often violate our logical expectations.
The element of surprise appears to have a big effect on our ability to remember. Researchers have discovered that unexpected stimuli enhanced an early and a late electrical potential in the hippocampus and the late signal was associated with a memory for the unexpected picture. (Credit: iStockphoto/Rosemarie Gearhart)

"We know these unexpected events are more likely to be remembered than predictable events, but the underlying neural mechanisms for these effects remain unclear," says lead researcher, Dr. Nikolai Axmacher, from the University of Bonn in Germany.

Dr. Axmacher and colleagues, whose new study is published by Cell Press in the February 25 issue of the journal Neuron, investigated the relationship between novelty processing and memory formation in two key brain structures, the hippocampus, and the nucleus accumbens. The hippocampus plays a key role in memory formation while the nucleus accumbens is involved in processing rewards and novel information. Previous work had suggested that information transfer between these structures may be associated with enhanced memory for unexpected items or events.

Obtaining direct information on the electrical activity of these structures deep in the brain is usually impossible in humans. However, the researchers used the opportunity to record from two groups of patients with electrodes implanted in these regions: Epilepsy patients awaiting surgical treatment of severe epilepsy, and patients with treatment-resistant depression undergoing deep-brain stimulation. Both groups of participants studied pictures of faces and houses in grayscale that were usually presented on a red or green background, respectively. Occasionally, a picture would have an "unexpected" configuration, such as a face on a green background. Subjects were subsequently tested for their memory of the expected and unexpected items.

The researchers discovered that unexpected stimuli enhanced an early and a late electrical potential in the hippocampus and the late signal was associated with a memory for the unexpected picture. In the nucleus accumbens, there was only a late potential which was larger during exposure to unexpected items. "Our findings support the idea that hippocampal activity may initially signal the occurrence of an unexpected event and that the nucleus accumbens may influence subsequent processing which serves to promote memory encoding," explains Dr. Axmacher.

The authors are careful to point out that one limitation of their study is that the recordings from the hippocampus and nucleus accumbens came from two separate groups of subjects, so their data provide an indirect measure of the functional connectivity between these two brain areas. However, their findings do provide fascinating new insight into this complex brain circuit. "Taken together, these are the first results that speak to the relative timing of expectation effects in different regions of the human brain, and they support models of accumbens-hippocampus interactions during encoding of unexpected events," concludes Dr. Axmacher.

The researchers include Nikolai Axmacher, University of Bonn, Bonn, Germany, University of California, Davis, Davis, CA; Michael X. Cohen, University of Amsterdam, Amsterdam, The Netherlands, University of Arizona, Tucson, AZ; Juergen Fell, University of Bonn, Bonn, Germany; Sven Haupt, University of Bonn, Bonn, Germany; Matthias Dumpelmann, Epilepsy Center, University Hospital Freiburg, Freiburg, Germany; Christian E. Elger, University of Bonn, Bonn, Germany, University of California, Davis, Davis, CA; Thomas E. Schlaepfer, University of Bonn, Bonn, Germany, The Johns Hopkins University, Baltimore, MD; Doris Lenartz, University of Cologne, Koln, Germany; Volker Sturm, University of Cologne, Koln, Germany; and Charan Ranganath, University of California, Davis, Davis, CA.