Tuesday, September 27, 2011

Deep Brain Stimulation Studies Show How Brain Buys Time for Tough Choices


Take your time. Hold your horses. Sleep on it. When people must decide between arguably equal choices, they need time to deliberate. In people undergoing deep brain stimulation (DBS) for Parkinson's disease, that process sometimes doesn't kick in: some patients behave impulsively, making quick, often bad, decisions.

Red is for reflection. The hotter the color, especially in the circled area, the more likely the brain was to take its time making difficult decisions. Parkinson's patients whose deep brain stimulators were on (right) were more impulsive -- a cooler blue. (Credit: Frank Lab/Brown University)

New research into why that happens has led scientists to a detailed explanation of how the brain devotes time to reflect on tough choices.

Michael Frank, professor of cognitive, linguistic, and psychological sciences at Brown University, studied the impulsive behavior of Parkinson's patients when he was at the University of Arizona several years ago. His goal was to model the brain's decision-making mechanics. He had begun working with Parkinson's patients because DBS, a treatment that suppresses their tremor symptoms, delivers pulses of electrical current to the subthalamic nucleus (STN), a part of the brain that Frank hypothesized had an important role in decisions. Could the STN be what slams the brakes on impulses, giving the medial prefrontal cortex (mPFC) time to think?

When the medial prefrontal cortex needs time to deliberate, it recruits help in warding off impulsive urges from elsewhere in the brain. "We didn't have any direct evidence of that," said Frank, who is affiliated with the Brown Institute for Brain Science. "To test that theory for how areas of the brain interact to prevent you from making impulsive decisions and how that could be changed by DBS, you have to do experiments where you record brain activity in both parts of the network that we think are involved. Then you also have to manipulate the system to see how the relationship between recorded activity in one area and decision making changes as a function of stimulating the other area."

Frank and his team at Brown and Arizona did exactly that. They describe their findings in a study published online in the journal Nature Neuroscience.

The researchers' measurements from two experiments and analysis with a computer model support the theory that when the mPFC is faced with a tough decision, it recruits the STN to ward off more impulsive urges coming from the striatum, a third part of the brain. That allows it time to make its decision.

For their first experiment, the researchers designed a computerized decision-making task. They asked 65 healthy subjects and 14 subjects with Parkinson's disease to choose between pairs of generic line art images while their mPFC brain activity was recorded. Each image was associated with a level of reward. Over time the subjects learned which ones carried a greater reward.

Sometimes, however, the subjects would be confronted with images of almost equal reward -- a relatively tough choice. That's when scalp electrodes detected elevated activity in the mPFC in certain low frequency bands. Lead author and postdoctoral scholar James Cavanagh found that when mPFC activity was larger, healthy participants and Parkinson's participants whose stimulators were off would take proportionally longer to decide. But when deep brain stimulators were turned on to alter STN function, the relationship between mPFC activity and decision making was reversed, leading to decision making that was quicker and less accurate.



The Parkinson's patients whose stimulators were on still showed the same elevated level of activity in the mPFC. The cortex wanted to deliberate, Cavanagh said, but the link to the brakes had been cut.

"Parkinson's patients on DBS had the same signals," he said. "It just didn't relate to behavior. We had knocked out the network."

In the second experiment, the researchers presented eight patients with the same decision-making game while they were on the operating table in Arizona receiving their DBS implant. The researchers used the electrode to record activity directly in the STN and found a pattern of brain activity closely associated with the patterns they observed in the mPFC.

"The STN has greater activity with greater [decision] conflict," he said. "It is responsive to the circumstances that the signals on top of the scalp are responsive to, and in highly similar frequency bands and time ranges."

A mathematical model for analyzing the measurements of accuracy and response time confirmed that the elevated neural activity and the extra time people took to decide were indeed evidence of effortful deliberation.
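The article doesn't spell out which model was used, but accuracy and response-time data of this kind are commonly analyzed with drift-diffusion models, in which noisy evidence accumulates toward a decision threshold; raising the threshold buys deliberation time at the cost of speed. The sketch below is illustrative only (assumed parameters, not the authors' actual model):

```python
import random

def simulate_ddm(drift, threshold, noise=1.0, dt=0.002, max_t=5.0, seed=None):
    """Simulate one trial of a simple drift-diffusion decision.

    Evidence starts at 0 and accumulates until it hits +threshold
    (correct choice) or -threshold (error). Returns (correct, time).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return x >= threshold, t

def summarize(drift, threshold, n=500):
    """Mean accuracy and response time over n simulated trials."""
    results = [simulate_ddm(drift, threshold, seed=i) for i in range(n)]
    acc = sum(c for c, _ in results) / n
    mean_rt = sum(t for _, t in results) / n
    return acc, mean_rt

# A higher decision threshold (the "hold your horses" signal) trades
# speed for accuracy: decisions get slower but more often correct.
low = summarize(drift=0.5, threshold=0.5)
high = summarize(drift=0.5, threshold=1.5)
```

Running `summarize` at the higher threshold yields slower but more accurate choices -- the same speed-accuracy trade-off the STN "brake" is proposed to implement.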

"It was not that they were waiting without doing anything," said graduate student Thomas Wiecki, the paper's second author. "They were slower because they were taking the time to make a more informed decision. They were processing it more thoroughly."

The results have led the researchers to think that perhaps the different brain regions communicate by virtue of these low-frequency signals. Maybe the impulsivity side effect of DBS could be mitigated if those bands could remain unhindered by the stimulator's signal. Alternatively, Wiecki said, a more sophisticated DBS system could sense that decision conflict is underway in the mPFC and either temporarily suspend its operation until the decision is made, or stimulate the STN in a more dynamic way to better mimic intact STN function.

These are not trivial ideas to foist upon DBS engineers, but by understanding the mechanics underlying the side effect -- and in healthy unhindered decision making -- the researchers say they now have a target to consider.

In addition to Frank, Cavanagh, and Wiecki, another Brown author is Christina Figueroa. Arizona authors include Michael Cohen, Johan Samanta, and Scott Sherman.

The Michael J. Fox Foundation funded the research.


From Your Heart to Your iPhone


A new app gets data from an implanted device and can share it with the patient, doctors, and family.

A smart-phone app under development for heart-failure patients allows them to keep track of the pressure inside their heart as measured by an implanted sensor. That data could help patients adjust their medication to maintain a healthy pressure, much as diabetics do with insulin and blood sugar readings.

Called PAM+ (for "patient advisory module"), the app is being developed by researchers at the University of Southern California in collaboration with medical device maker St. Jude Medical. The researchers hope it will help patients better manage their health and reduce hospitalizations, which are responsible for much of the $40 million in health-care costs linked to heart failure.

In congestive heart failure, pressure builds up in the circulatory system and the heart fails to pump blood adequately to the rest of the body. Fluid pressure changes by the day, and monitoring those fluctuations continuously is essential to treating heart failure effectively. A number of implanted devices are now under development to monitor this pressure, giving patients and doctors real time data.

The PAM+ app works in conjunction with an external device—developed by St. Jude and currently in clinical tests—that is placed over the heart, where it charges the implanted sensor and downloads data from it.

The data is forwarded to a server at St. Jude that analyzes it and returns, via the app, the latest readings and information about ongoing trends. A patient who has regularly monitored his or her heart pressure over a week will see a graph of pressure readings along with the message "Your heart thanks you." Users can easily share their data with their health-care team and family.



"We want patients to be able to access data but also to be rewarded and encouraged on a daily basis, so they don't feel like their whole life is a diet," says Leslie Saxon, a cardiologist and director of the Center for Body Computing at USC, who helped develop the device.

Previous research conducted by Saxon showed that remote monitoring can improve the health of heart-failure patients and lower health-care costs. She unveiled a prototype of the app at the Body Computing conference in Los Angeles today.

Users get points for monitoring their pressure—points that might eventually be tied to iTunes or Amazon credit. "Even a traditional payer would love to reward this type of behavior," says Saxon.

She believes an app like this can also change the nature of doctors' visits. Rather than a physician giving a patient the latest test results, taken at a few points in time, the patient can show the doctor measurements of heart pressure over weeks and months, and together they can discuss the trends these reveal.

Scientists discover an organizing principle for our sense of smell


The fact that certain smells cause us pleasure or disgust would seem to be a matter of personal taste. But new research at the Weizmann Institute shows that odors can be rated on a scale of pleasantness, and this turns out to be an organizing principle for the way we experience smell. The findings, which appeared today in Nature Neuroscience, reveal a correlation between the response of certain nerves to particular scents and the pleasantness of those scents. Based on this correlation, the researchers could tell by measuring the nerve responses whether a subject found a smell pleasant or unpleasant.

Our various sensory organs have evolved patterns of organization that reflect the type of input they receive. Thus the receptors in the retina, in the back of the eye, are arranged spatially for efficiently mapping out visual coordinates. The structure of the inner ear, on the other hand, is set up according to a tonal scale. But the organizational principle for our sense of smell has remained a mystery: Scientists have not even been sure if there is a scale that determines the organization of our smell organ, much less how the arrangement of smell receptors on the membranes in our nasal passages might reflect such a scale.

A team headed by Prof. Noam Sobel of the Weizmann Institute's Neurobiology Department set out to search for the principle of organization for smell. Hints that the answer could be tied to pleasantness had been seen in research labs around the world, including that of Sobel, who had previously found a connection between the chemical structure of an odor molecule and its place on a pleasantness scale. Sobel and his team thought that smell receptors in the nose – of which there are some 400 subtypes – could be arranged on the nasal membrane according to this scale. This hypothesis goes against the conventional view, which claims that the various smell receptors are mixed -- distributed evenly, but randomly, around the membrane.



In the experiment, the researchers inserted electrodes into the nasal passages of volunteers and measured the nerves' responses to different smells in various sites. Each measurement actually captured the response of thousands of smell receptors, as these are densely packed on the membrane. The scientists found that the strength of the nerve signal varies from place to place on the membrane. It appeared that the receptors are not evenly distributed, but rather, that they are grouped into distinct sites, each engaging most strongly with a particular type of scent. Further investigation showed that the intensity of a reaction was linked to the odor's place on the pleasantness scale. A site where the nerves reacted strongly to a certain agreeable scent also showed strong reactions to other pleasing smells and vice versa: The nerves in an area with a high response to an unpleasant odor reacted similarly to other disagreeable smells. The implication is that a pleasantness scale is, indeed, an organizing principle for our smell organ.

But does our sense of smell really work according to this simple principle? Natural odors are composed of a large number of molecules – roses, for instance, release 172 different odor molecules. Nonetheless, says Sobel, the most dominant of those determine which sites on the membrane will react the most strongly, while the other substances make secondary contributions to the scent.

"We uncovered a clear correlation between the pattern of nerve reaction to various smells and the pleasantness of those smells. As in sight and hearing, the receptors for our sense of smell are spatially organized in a way that reflects the nature of the sensory experience," says Sobel. In addition, the findings confirm the idea that our experience of smells as nice or nasty is hardwired into our physiology, and not purely the result of individual preference. Sobel doesn't discount the idea that individuals may experience smells differently. He theorizes that cultural context and personal experience may cause a certain amount of reorganization in smell perception over a person's lifetime.

More information: DOI: 10.1038/nn.2926

Sunday, September 25, 2011

Bio-inspired coating resists liquids


After a rain, the cupped leaf of a pitcher plant becomes a virtually frictionless surface. Sweet-smelling and elegant, the carnivore attracts ants, spiders, and even little frogs. One by one, they slide to their doom.

This is an illustration showing a schematic of the slippery surface and its characteristic of repelling many fluids present on Earth (as symbolized by the Earth reflected in the liquid drop). Credit: Courtesy of James C. Weaver and Peter Allen.

Adopting the plant's slick strategy, a group of applied scientists at Harvard have created a material that repels just about any type of liquid, including blood and oil, and does so even under harsh conditions like high pressure and freezing temperatures.

The bio-inspired liquid repellence technology, described in the September 22 issue of Nature, should find applications in biomedical fluid handling, fuel transport, and anti-fouling and anti-icing technologies. It could even lead to self-cleaning windows and improved optical devices.


"Inspired by the pitcher plant, we developed a new coating that outperforms its natural and synthetic counterparts and provides a simple and versatile solution for liquid and solid repellency," says lead author Joanna Aizenberg, Amy Smith Berylson Professor of Materials Science at the Harvard School of Engineering and Applied Sciences (SEAS), Director of the Kavli Institute for Bionano Science and Technology at Harvard, and a Core Faculty member at the Wyss Institute for Biologically Inspired Engineering at Harvard.

By contrast, current state-of-the-art liquid repellent surfaces have taken cues from a different member of the plant world. The leaves of the lotus resist water due to the tiny microtextures on the surface; droplets balance on the cushion of air on the tips of the surface and bead up.
This is an example of the self-cleaning quality of Slippery Liquid-Infused Porous Surface (SLIPS). Credit: Courtesy of the laboratory of Joanna Aizenberg, Harvard School of Engineering and Applied Sciences

The so-called lotus effect, however, does not work well for organic or complex liquids. Moreover, if the surface is damaged (e.g., scratched) or subject to extreme conditions, liquid drops tend to stick to or sink into the textures rather than roll away. Finally, it has proven costly and difficult to manufacture surfaces based on the lotus strategy.

The pitcher plant takes a fundamentally different approach. Instead of using burr-like, air-filled nanostructures to repel water, the plant locks in a water layer, creating a slick coating on the top. In short, the fluid itself becomes the repellent surface. "The effect is similar to when a car hydroplanes, the tires literally gliding on the water rather than the road," says lead author Tak-Sing Wong, a postdoctoral fellow in the Aizenberg lab. "In the case of the unlucky ants, the oil on the bottom of their feet will not stick to the slippery coating on the plant. It's like oil floating on the surface of a puddle."

This is a schematic showing the manufacturing of the Slippery Liquid-Infused Porous Surface (SLIPS). Credit: Courtesy of Peter Allen and James C. Weaver.

Inspired by the pitcher plant's elegant solution, the scientists designed a strategy for creating slippery surfaces by infusing a nano/microstructured porous material with a lubricating fluid. They are calling the resulting bio-inspired surfaces "SLIPS" (Slippery Liquid-Infused Porous Surfaces).



"Like the pitcher plant, SLIPS are slippery for insects, but they are now designed to do much more: they repel a wide variety of liquids and solids," says Aizenberg. SLIPS show virtually no retention, as very little tilt is needed to coax the liquid or solid into sliding down and off the surface.

"The repellent fluid surface offers additional benefits, as it is intrinsically smooth and free of defects," says Wong. "Even after we damage a sample by scraping it with a knife or blade, the surface repairs itself almost instantaneously and the repellent qualities remain, making SLIPS self-healing." Unlike the lotus, the SLIPS can be made optically transparent, and therefore ideal for optical applications and self-cleaning, clear surfaces.

In addition, the near frictionless effect persists under extreme conditions: high pressures (as much as 675 atmospheres, equivalent to seven kilometers under the sea) and humidity, and in colder temperatures. The team conducted studies outside after a snowstorm; SLIPS withstood the freezing temperatures and even repelled ice.
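The quoted pressure/depth equivalence checks out against the hydrostatic relation P = ρgh. A quick check, assuming a typical seawater density of about 1025 kg/m³:

```python
# Check the "675 atmospheres = roughly seven kilometers of seawater"
# equivalence from the text, using P = rho * g * h.
atm_pa = 101_325.0        # one standard atmosphere, in pascals
rho_seawater = 1025.0     # assumed seawater density, kg/m^3
g = 9.81                  # gravitational acceleration, m/s^2

depth_m = 675 * atm_pa / (rho_seawater * g)
print(round(depth_m))     # about 6800 m, i.e. roughly seven kilometers
```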

"Not only is our bio-inspired surface able to work in a variety of conditions, but it is also simple and cheap to manufacture," says co-author Sung Hoon Kang, a Ph.D. candidate in the Aizenberg lab. "It is easily scalable because you can choose just about any porous material and a variety of liquids."

To see if the surface was truly up to nature's high standards, they even did a few experiments with ants. In tests, the insects slid off the artificial surface or retreated to safer ground after only a few timorous steps.

The researchers anticipate that the pitcher plant-inspired technology, for which they are seeking a patent, could one day be used for fuel- and water-transport pipes, and medical tubing (such as catheters and blood transfusion systems), which are sensitive to drag and pressure and are compromised by unwanted liquid-surface interactions. Other potential applications include self-cleaning windows and surfaces that resist bacteria and other types of fouling (such as the buildup that forms on ship hulls). The advance may also find applications in ice-resistant materials and may lead to anti-sticking surfaces that repel fingerprints or graffiti.

"The versatility of SLIPS, their robustness and unique ability to self-heal makes it possible to design these surfaces for use almost anywhere, even under extreme temperature and pressure conditions," says Aizenberg. "It potentially opens up applications in harsh environments, such as polar or deep sea exploration, where no satisfactory solutions exist at present. Everything SLIPS!"

Provided by Harvard University

Roll over Einstein: Law of physics challenged


One of the very pillars of physics and Einstein's theory of relativity - that nothing can go faster than the speed of light - was rocked Thursday by new findings from one of the world's foremost laboratories.
This undated file photo shows famed physicist Albert Einstein. Scientists at the European Organization for Nuclear Research, or CERN, the world's largest physics lab, say they have clocked subatomic particles, called neutrinos, traveling faster than light, a feat that, if true, would break a fundamental pillar of science, the idea that nothing is supposed to move faster than light, at least according to Einstein's special theory of relativity: The famous E (equals) mc2 equation. That stands for energy equals mass times the speed of light squared. The readings have so astounded researchers that they are asking others to independently verify the measurements before claiming an actual discovery. (AP Photo)

European researchers said they clocked an oddball type of subatomic particle called a neutrino going faster than the 186,282 miles per second that has long been considered the cosmic speed limit.

The claim was met with skepticism, with one outside physicist calling it the equivalent of saying you have a flying carpet. In fact, the researchers themselves are not ready to proclaim a discovery and are asking other physicists to independently try to verify their findings.

"The feeling that most people have is this can't be right, this can't be real," said James Gillies, a spokesman for the European Organization for Nuclear Research, or CERN, which provided the particle accelerator that sent neutrinos on their breakneck 454-mile trip underground from Geneva to Italy.

Going faster than light is something that is just not supposed to happen according to Einstein's 1905 special theory of relativity - the one made famous by the equation E equals mc2. But no one is rushing out to rewrite the science books just yet.

It is "a revolutionary discovery if confirmed," said Indiana University theoretical physicist Alan Kostelecky, who has worked on this concept for a quarter of a century.

Stephen Parke, who is head theoretician at the Fermilab near Chicago and was not part of the research, said: "It's a shock. It's going to cause us problems, no doubt about that - if it's true."

Even if these results are confirmed, they won't change the way we live or the way the world works at all. After all, these particles have presumably been speed demons for billions of years. But the finding would fundamentally alter our understanding of how the universe operates, physicists said.

Einstein's special relativity theory, which says that energy equals mass times the speed of light squared, underlies "pretty much everything in modern physics," said John Ellis, a theoretical physicist at CERN who was not involved in the experiment. "It has worked perfectly up until now."

France's National Institute for Nuclear and Particle Physics Research collaborated with Italy's Gran Sasso National Laboratory on the experiment at CERN. CERN reported that a neutrino beam fired from a particle accelerator near Geneva to a lab 454 miles (730 kilometers) away in Italy traveled 60 nanoseconds faster than the speed of light. Scientists calculated the margin of error at just 10 nanoseconds. (A nanosecond is one-billionth of a second.)
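The reported figures are easy to sanity-check with simple arithmetic: over a 730-kilometer baseline, light needs about 2.4 milliseconds, so a 60-nanosecond early arrival corresponds to a fractional speed excess of roughly 2.5 × 10⁻⁵ (all values taken from this article):

```python
# Sanity-check the reported neutrino timing figures.
c = 299_792_458.0          # speed of light, m/s
distance = 730_000.0       # Geneva-to-Gran-Sasso baseline, ~730 km

light_time = distance / c  # time light needs for the trip
early = 60e-9              # neutrinos reportedly arrived 60 ns early
margin = 10e-9             # stated margin of error, 10 ns

fractional_excess = early / light_time
print(light_time)          # ~0.0024 s
print(fractional_excess)   # ~2.5e-5, a tiny but (if real) decisive excess
print(early / margin)      # the effect is six times the stated error
```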

Given the enormous implications of the find, the researchers spent months checking and rechecking their results to make sure there were no flaws in the experiment.

A team at Fermilab had similar faster-than-light results in 2007, but a large margin of error undercut its scientific significance.

If anything is going to throw a cosmic twist into Einstein's theories, it's not surprising that it's the strange particles known as neutrinos. These are odd slivers of an atom that have confounded physicists for about 80 years.

The neutrino has almost no mass, comes in three different "flavors," may have its own antiparticle and has been seen shifting from one flavor to another while shooting out from our sun, said physicist Phillip Schewe, communications director at the Joint Quantum Institute in Maryland.

Columbia University physicist Brian Greene, author of the book "Fabric of the Cosmos," said neutrinos theoretically can travel at different speeds depending on how much energy they have. And some mysterious particles whose existence is still only theorized could be similarly speedy, he said.



Fermilab team spokeswoman Jenny Thomas, a physics professor at University College London, said there must be a "more mundane explanation" for the European findings. She said Fermilab's experience showed how hard it is to measure accurately the distance, time and angles required for such a claim.

Nevertheless, Fermilab, which shoots neutrinos from Chicago to Minnesota, has already begun working to try to verify or knock down the new findings.

And that's exactly what the team in Geneva wants.

Gillies told The Associated Press that the readings have so astounded researchers that "they are inviting the broader physics community to look at what they've done and really scrutinize it in great detail, and ideally for someone elsewhere in the world to repeat the measurements."

Only two labs elsewhere in the world can try to replicate the work: Fermilab and a Japanese installation that has been slowed by the tsunami and earthquake. And Fermilab's measuring systems aren't nearly as precise as the Europeans' and won't be upgraded for a while, said Fermilab scientist Rob Plunkett.

Drew Baden, chairman of the physics department at the University of Maryland, said it is far more likely that the CERN findings are the result of measurement errors or some kind of fluke. Tracking neutrinos is very difficult, he said.

"This is ridiculous what they're putting out," Baden said. "Until this is verified by another group, it's flying carpets. It's cool, but ..."

So if the neutrinos are pulling this fast one on Einstein, how can it happen?

Parke said there could be a cosmic shortcut through another dimension - physics theory is full of unseen dimensions - that allows the neutrinos to beat the speed of light.

Indiana's Kostelecky theorizes that there are situations when the background is different in the universe, not perfectly symmetrical as Einstein says. Those changes in background may alter both the speed of light and the speed of neutrinos.

But that doesn't mean Einstein's theory is ready for the trash heap, he said.

"I don't think you're going to ever kill Einstein's theory. You can't. It works," Kostelecky said. There are just times when an additional explanation is needed, he said.

If the European findings are correct, "this would change the idea of how the universe is put together," Columbia's Greene said. But he added: "I would bet just about everything I hold dear that this won't hold up to scrutiny."


More information: The results are pre-published on ArXiv: http://arxiv.org/abs/1109.4897

Saturday, September 24, 2011

Taking Touch beyond the Touch Screen: A prototype tablet can sense gestures and objects placed next to it.


A tablet computer developed collaboratively by researchers at Intel, Microsoft, and the University of Washington can be controlled not only by swiping and pinching at the screen, but by touching any surface on which it is placed.
In touch: The spacecraft on this tablet's screen can be controlled by maneuvering the toy on the table next to it. Credit: Intel

Finding new ways to interact with computers has become an important area of research among computer scientists, especially now that touch-screen smart phones and tablets have grown so popular. The project that produced the new device, called Portico, could eventually result in smart phones or tablets that take touch beyond the physical confines of the device.

"The idea is to allow the interactive space to go beyond the display space or screen space," says Jacob Wobbrock, an assistant professor at the University of Washington's Information School, in Seattle, who helped develop the system. This is achieved with two foldout cameras that sit above the display on either side, detecting and tracking motion around the screen. The system detects the height of objects and determines whether they are touching the surrounding surface by comparing the two views captured by the cameras. The approach makes it possible to detect hand gestures as well as physical objects so that they can interact with the display, says Wobbrock.
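The article says only that Portico compares the two camera views to judge an object's height; the standard way to do that is stereo triangulation, in which depth equals focal length times camera baseline divided by the disparity between the two views. The function names, parameters, and threshold below are illustrative assumptions, not Portico's actual algorithm:

```python
def height_from_stereo(disparity_px, focal_px, baseline_m, surface_depth_m):
    """Estimate an object's height above the table from stereo disparity.

    Standard triangulation: depth = focal * baseline / disparity.
    With downward-facing cameras, height above the surface is the
    surface's depth minus the object's depth. All names here are
    illustrative -- the article does not describe Portico's code.
    """
    depth_m = focal_px * baseline_m / disparity_px
    return surface_depth_m - depth_m

def is_touching(disparity_px, focal_px, baseline_m, surface_depth_m,
                tolerance_m=0.005):
    """Treat anything within a small tolerance of the surface as a touch."""
    h = height_from_stereo(disparity_px, focal_px, baseline_m, surface_depth_m)
    return h <= tolerance_m
```

With an assumed 700 px focal length, 10 cm baseline, and a table 0.5 m below the cameras, a fingertip producing the surface's own disparity (140 px) registers as a touch, while a larger disparity (closer to the cameras) registers as a hover.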

In one demonstration, software tracks a small ball as it moves across the surface the tablet sits on. As the ball strikes the side of the tablet, a virtual ball appears on-screen following the same trajectory, as if the physical ball had entered the device. In this way the ball can be used to score on-screen goals. In another demonstration, the angle of a toy spaceship placed on the table next to the tablet controls the angle of a virtual spaceship onscreen, allowing the user to shoot down "asteroids."



Wobbrock says the same approach would work on smart phones and other pocket-sized devices. "As devices continue to shrink, they compromise the screen space. But with Portico you can reclaim the surrounding area for interactivity," he says.

With the tablet, Portico increases the usable area sixfold, says Daniel Avrahami, a senior researcher at Intel Labs Seattle, who came up with the idea for Portico and led its development with help from Shahram Izadi at Microsoft Research in Cambridge, UK. For a 12-inch tablet, "that's the equivalent of a 26-inch screen," says Avrahami, who will present the work in October at the ACM User Interface, Software and Technology Symposium in Santa Barbara, California.

Eventually, says Wobbrock, it may be more practical, especially from a commercial standpoint, to use clip-on cameras instead of foldout ones, which tend to break more easily. But he also notes that the entire display might be replaced with a fold-up frame containing both cameras and a pico projector to produce the image on the surface below.

Eva Hornecker, a lecturer specializing in human-computer interaction at the University of Strathclyde, in Glasgow, Scotland, says there is growing interest among researchers in using cameras to detect hand gestures and objects.

"The problem with touch screens is you can't detect anything that's happening over the surface," Hornecker says. However, she notes that allowing interaction beyond the screen could introduce new challenges such as how to provide feedback so the user knows where the interactive area starts and ends.

Friday, September 23, 2011

All-access genome: New study explores packaging of DNA


While efforts to unlock the subtleties of DNA have produced remarkable insights into the code of life, researchers still grapple with fundamental questions. For example, the underlying mechanisms by which human genes are turned on and off -- generating essential proteins, determining our physical traits, and sometimes causing disease -- remain poorly understood.

Fluorescence resonance energy transfer (FRET): experimental design. Nucleosomes are constructed having a fluorescent donor (cyan) attached to one end of the DNA, and a fluorescent acceptor (magenta) attached nearby on the histone protein core. In the middle diagram, spontaneous partial unwrapping of the DNA thread exposes a hidden DNA target site (hatched area), which is site-specific for the DNA binding protein LexA. When LexA is added in sufficient concentration, nucleosomes are temporarily trapped in their unwrapped state. The distance between the two fluorescent molecules changes as the DNA unwraps and rewraps, allowing the process to be precisely measured. Credit: Reprinted from: Journal of Molecular Biology, volume 411(2), Tims HS, Gurunathan K, Levitus M, Widom J, Dynamics of nucleosome invasion by DNA binding proteins, pp. 430-448, with permission from Elsevier.

Biophysicists Marcia Levitus and Kaushik Gurunathan at the Biodesign Institute at Arizona State University, along with their colleagues Hannah S. Tims and Jonathan Widom of Northwestern University in Evanston, Illinois, have been preoccupied with tiny, spool-like entities known as nucleosomes. Their latest insights into how these structures wrap and unwrap, permitting regulatory proteins to access, bind with and act on regions of DNA, recently appeared in the Journal of Molecular Biology.

Nucleosomes, Levitus explains, are essential components of the genome, acting to regulate access to DNA and protect it from harm. Nucleosome structure permits the entire strand of human DNA, roughly 6 feet in length, to be densely packed into the nucleus of every cell, a space just 10 microns in diameter. This occurs after nucleosomes assemble and fold into higher order structures, culminating in the formation of chromosomes.

Each nucleosome (there are roughly 30 million per cell) consists of a 147 base pair segment of DNA. This length of DNA thread is wound 1.67 times around the spool-like protein units, known as histones. The histone complex, together with its windings of DNA, forms the nucleosome core particle.
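The compaction numbers above are easy to sanity-check with a quick back-of-the-envelope calculation. This sketch assumes a ~0.34 nm rise per base pair of B-form DNA and a typical ~50 bp linker between nucleosome cores (both assumptions, not figures from the article):

```python
# Back-of-the-envelope check of the packing numbers quoted above.
BP_PER_NUCLEOSOME_CORE = 147      # from the article
LINKER_BP = 50                    # assumed typical linker length
NUCLEOSOMES_PER_CELL = 30_000_000 # from the article
NM_PER_BP = 0.34                  # standard rise per base pair, B-form DNA

total_bp = NUCLEOSOMES_PER_CELL * (BP_PER_NUCLEOSOME_CORE + LINKER_BP)
total_length_m = total_bp * NM_PER_BP * 1e-9

print(f"{total_bp / 1e9:.1f} billion bp, ~{total_length_m:.1f} m of DNA")
# ~5.9 billion bp and ~2 m, consistent with the "roughly 6 feet" above
```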

A multitude of proteins must act on regions of the DNA strand, by binding with appropriate target sites. Essential functions rely on these operations, including gene expression, replication and repair of damaged regions of the DNA molecule. But in eukaryotic cells like those of humans, some 75-80 percent of the DNA strand is curled up and hidden in the nucleosomes—inaccessible to protein binding interactions.

In earlier work, the group was able to show that nucleosomes are dynamic structures, quite different from the static pictures produced by X-ray crystallography. Lengths of DNA make themselves available for protein interaction by unwrapping and rewrapping around the histone core. When nucleosomes unwrap, proteins present in sufficient concentration can find their DNA targets and bind with them.

In order to observe and characterize the dynamic behavior of nucleosomes, the team relied on a versatile imaging method known as Fluorescence Resonance Energy Transfer or FRET. The technique allows researchers to look at a pair of fluorescent molecules or fluorophores, one of which is attached to the end of the exposed DNA strand, the other to one of the histones around which the DNA is coiled (see Figure 1).

As Levitus explains, spontaneous unwrapping and rewrapping of DNA changes the distance between fluorophores, signaling that the process has occurred and allowing the group to quantify the frequency and rate of DNA exposure and concealment. "Although FRET has been used for decades to measure molecular distances in biological systems, dynamic biomolecules such as nucleosomes present particular challenges," notes Levitus. Traditionally, FRET experiments are performed with protein solutions containing many billions of particles. In the case of nucleosomes however, the dynamic behavior of each particle is crucial and bulk measurements using FRET are not effective.
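FRET reports distance because transfer efficiency falls off steeply with donor-acceptor separation: E = 1/(1 + (r/R0)^6), where R0 is the Förster radius of the dye pair. A minimal sketch, using an assumed, typical R0 of about 5 nm rather than the study's actual dye pair, shows why wrapping and unwrapping produce such a clear signal:

```python
# Transfer efficiency vs. donor-acceptor distance for a FRET pair.
# R0 below is an assumed, typical Forster radius, not the study's value.
R0_NM = 5.0

def fret_efficiency(r_nm, r0_nm=R0_NM):
    """Energy-transfer efficiency for donor-acceptor distance r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Wrapped DNA keeps the dyes close (high E); unwrapping separates them.
for r in (3.0, 5.0, 8.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r):.2f}")
```

Because E drops from near 1 to near 0 over a few nanometers, each unwrapping event flips the signal, letting single-particle measurements count events directly.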
This graphic shows the time elapsed during DNA unwrapping (a) and re-wrapping (b) as measured by FRET analysis. FRET works by measuring the distance between a pair of fluorescent molecules, or fluorophores -- one attached to the end of the DNA and the other attached to the histone protein spool around which the DNA "thread" winds and unwinds. Credit: Reprinted from: Journal of Molecular Biology, volume 411(2), Tims HS, Gurunathan K, Levitus M, Widom J, Dynamics of nucleosome invasion by DNA binding proteins, pgs 430-48, with permission from Elsevier.




"In simple terms, if one wanted to understand how humans clap, it would be useless to listen to the whole planet clapping at once. Instead, one would listen to a few individuals, and that is exactly what we did with nucleosomes," Levitus says.

The results of initial studies were revealing. For base pair sequences along the nucleosomes' outer rind, spontaneous DNA unwrapping occurs at a rapid rate— about 4 times per second. This corresponds to a period of only 250 milliseconds during which this region of DNA remains fully wrapped and occluded by the histone complex. Once unwrapped, the DNA remains exposed for 10-50 milliseconds.
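Those rates imply how often a site near the DNA ends is actually available to a binding protein. A rough occupancy estimate, built from the quoted numbers as an illustration rather than taken from the paper:

```python
# Rough site-accessibility estimate from the rates quoted above:
# unwrapping ~4 times per second means the DNA stays fully wrapped
# ~250 ms on average, then stays exposed 10-50 ms before rewrapping.
T_WRAPPED_S = 0.250   # mean wrapped interval = 1 / (4 unwrappings per s)

def accessible_fraction(t_open_s, t_wrapped_s=T_WRAPPED_S):
    """Fraction of time a site near the DNA end is exposed."""
    return t_open_s / (t_open_s + t_wrapped_s)

for t_open in (0.010, 0.050):
    print(f"open {t_open * 1e3:.0f} ms -> accessible "
          f"{accessible_fraction(t_open):.1%} of the time")
# -> a site near the ends is reachable roughly 4-17% of the time
```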

These findings present a plausible mechanism to allow protein binding with unwrapped DNA in vivo, so long as the binding sites occur near the ends of wrapped nucleosomal DNA.

The new study also examines, for the first time, the condition of DNA sequences occurring further along the wound length of nucleosomal DNA, that is, closer to the nucleosome's center. Here, rates of DNA unwrapping decreased by orders of magnitude (see Figure 2).

To examine this phenomenon, the group used a site-specific binding protein of Escherichia coli (known as LexA) to identify binding site exposure caused by nucleosome unwrapping. The nucleosomes were labeled with a FRET dye, which allowed the binding process of LexA and its target to be visualized. In successive experiments, the team shifted the binding sites in 10 base pair increments from the end of the nucleosome toward the middle.

The changes in unwrapping rate observed as the binding site was successively moved further inside the nucleosome were dramatic. In one case, a change in position of just 10 base pairs could produce a 250-fold decrease in unwrapping rate of the binding region.
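If a single 10-base-pair shift can slow unwrapping 250-fold, the rate falls roughly exponentially with depth into the nucleosome. A toy model under that assumption (the exponential form and the uniform 250-fold factor are illustrative, not the paper's fit):

```python
# Illustrative position-dependence of the unwrapping rate: start from
# the ~4 per second rate near the DNA ends and apply the steepest
# reported drop (250-fold per 10 bp) uniformly with depth.
K_END = 4.0            # s^-1, unwrapping rate near the DNA end
FOLD_PER_10BP = 250    # steepest drop reported for one 10-bp step

def unwrap_rate(depth_bp):
    """Unwrapping rate for a site depth_bp base pairs inside the wrap."""
    return K_END / FOLD_PER_10BP ** (depth_bp / 10)

for d in (0, 10, 20):
    print(f"{d:2d} bp in: {unwrap_rate(d):.3g} /s")
```

Even this crude model makes the puzzle vivid: a site 20 bp inside would unwrap spontaneously only a few times per day, far too rarely for timely protein binding on its own.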

These results prompt the question of how DNA binding sites more deeply wound within the nucleosome are able to successfully interact with their respective protein binders in vivo. The team proposes several possible mechanisms that would permit rapid access to hidden DNA binding regions, even where intrinsic rates of nucleosome unwrapping are low.

One hypothesis is that two or more proteins with target sites on the same nucleosome can act cooperatively, with one protein holding the momentarily unwrapped DNA open as the other enters the nucleosome and invades more inward regions of the DNA sequence, in what the authors describe as a ratcheting process.

Jonathan Widom, Dr. Levitus' collaborator and a co-author of the new study, was responsible for much of the pathbreaking research into nucleosome complexity. Dr. Widom died unexpectedly this past month. He was honored for his generosity, prolific research and outstanding contributions to biology in the August 25th issue of the journal Nature.

"I consider myself tremendously fortunate to have had the chance to collaborate with Jon Widom," Levitus says. "Jon has been, and will continue to be, an incredible role model. His generosity, humility, and scientific genius have touched my life in many ways, and his death will leave a void that will be felt for many years to come."

Ongoing research into the subtleties of nucleosome behavior promises to yield rich dividends for genomic science in general and provide a deeper appreciation for foundational issues of health and disease.

Provided by Arizona State University


Microwave Ovens a Key to Energy Production from Wasted Heat


More than 60 percent of the energy produced by cars, machines, and industry around the world is lost as waste heat -- an age-old problem -- but researchers have found a new way to make "thermoelectric" materials for use in technology that could potentially save vast amounts of energy.
Thermoelectric generation of electricity offers a way to recapture some of the enormous amounts of wasted energy lost during industrial activities. (Credit: Graphic courtesy of Oregon State University)

And it's based on a device found everywhere from kitchens to dorm rooms: a microwave oven.

Chemists at Oregon State University have discovered that simple microwave energy can be used to make a very promising group of compounds called "skutterudites," and lead to greatly improved methods of capturing wasted heat and turning it into useful electricity.

Producing these materials used to be a tedious, complex and costly process taking three or four days; it can now be done in two minutes.

Most people are aware you're not supposed to put metal foil into a microwave, because it will spark. But powdered metals are different, and OSU scientists are tapping into that basic phenomenon to heat materials to 1,800 degrees in just a few minutes -- on purpose, and with hugely useful results.

These findings, published in Materials Research Bulletin, should speed research and ultimately provide a more commercially-useful, low-cost path to a future of thermoelectric energy.

"This is really quite fascinating," said Mas Subramanian, the Milton Harris Professor of Materials Science at OSU. "It's the first time we've ever used microwave technology to produce this class of materials."

Thermoelectric power generation, researchers say, is a way to produce electricity from waste heat -- something as basic as the hot exhaust from an automobile, or the wasted heat given off by a whirring machine. It's been known for decades but never really used outside niche applications, because it's too inefficient and costly, and the materials needed are sometimes toxic. NASA has used some expensive and high-tech thermoelectric generators to produce electricity in outer space.

The problem of wasted energy is huge. A car, for instance, wastes about two-thirds of the energy it produces. Factories, machines and power plants discard enormous amounts of energy.

But the potential is also huge. A hybrid automobile that has both gasoline and electric engines, for instance, would be ideal to take advantage of thermoelectric generation to increase its efficiency. Heat that is now being wasted in the exhaust or vented by the radiator could instead be used to help power the car. Factories could become much more energy efficient, and electric utilities could recapture energy from heat that's now going up a smokestack. Minor applications might even include a wristwatch powered by body heat.



"To address this, we need materials that are low cost, non-toxic and stable, and highly efficient at converting low-grade waste heat into electricity," Subramanian said. "In material science, that's almost like being a glass and a metal at the same time. It just isn't easy. Because of these obstacles almost nothing has been done commercially in large scale thermoelectric power generation."
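The payoff of better materials can be put in rough numbers with the standard thermoelectric efficiency estimate, which depends on the material's dimensionless figure of merit ZT. The formula is textbook thermoelectrics, not from the article, and the temperatures and ZT value below are assumed purely for illustration:

```python
import math

# Standard estimate for thermoelectric conversion efficiency:
# eta = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th)
def te_efficiency(t_hot_k, t_cold_k, zt):
    """Fraction of heat flow converted to electricity."""
    carnot = 1.0 - t_cold_k / t_hot_k          # thermodynamic ceiling
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold_k / t_hot_k)

# e.g. automotive exhaust at ~500 K against ~300 K ambient, with ZT = 1
# (a value good skutterudites can approach; assumed here for illustration)
print(f"{te_efficiency(500.0, 300.0, 1.0):.1%}")
```

Single-digit-percent recovery sounds modest, but applied to the two-thirds of fuel energy a car discards, it is a meaningful gain.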

Skutterudites have some of the needed properties, researchers say, but historically have been slow and difficult to make. The new findings cut that production time from days to minutes, and should not only speed research on these compounds but ultimately provide a more affordable way to produce them on a mass commercial scale.

OSU researchers have created skutterudites using microwave technology with an indium cobalt antimonide compound, and believe others are possible. They are continuing research, and believe that ultimately a range of different compounds may be needed for different applications of thermoelectric generation.

Collaborators on this study included Krishnendu Biswas, a post-doctoral researcher, and Sean Muir, a doctoral candidate, both in the OSU Department of Chemistry. The work has been supported by both the National Science Foundation and U.S. Department of Energy.

"We were surprised this worked so well," Subramanian said. "Right now large-scale thermoelectric generation of electricity is just a good idea that we couldn't make work. In the future it could be huge." 


Thursday, September 15, 2011

Scientists successfully expand bone marrow-derived stem cells in culture


All stem cells, regardless of their source, share the remarkable capability to replenish themselves by undergoing self-renewal. Yet, so far, efforts to grow and expand scarce hematopoietic (or blood-forming) stem cells in culture for therapeutic applications have been met with limited success.

An image of fully functional hematopoietic (blood-forming) stem cells successfully proliferating among other bone marrow-derived cells in a culture dish. Credit: Dr. John Perry, Stowers Institute for Medical Research

Now, researchers at the Stowers Institute for Medical Research teased apart the molecular mechanisms enabling stem cell renewal in hematopoietic stem cells isolated from mice and successfully applied their insight to expand cultured hematopoietic stem cells a hundredfold.

Their findings, which will be published in the Sept. 15, 2011, edition of Genes & Development, demonstrate that self-renewal requires three complementary events: proliferation, active suppression of differentiation and programmed cell death during proliferation.

"Efforts so far to grow and expand scarce hematopoietic stem cells in culture for therapeutic applications have been met with limited success," says Stowers investigator Linheng Li, Ph.D., who led the study. "Being able to tap into stem cells' inherent potential for self-renewal could turn limited sources of hematopoietic stem cells such as umbilical cord blood into more widely available resources," he adds, while cautioning that their findings have yet to be replicated in human cells.

The transplantation of human hematopoietic stem cells isolated from bone marrow is used in the treatment of anemia, immune deficiencies and other diseases, including cancer. However, since bone marrow transplants require a suitable donor-recipient tissue match, the number of potential donors is limited.

Hematopoietic stem cells isolated from umbilical cord blood could be a good alternative source: Readily available and immunologically immature, they allow the donor-recipient match to be less than perfect without the risk of immune rejection of the transplant. Unfortunately, their therapeutic use is limited since umbilical cord blood contains only a small number of stem cells.

Although self-renewal is typically considered a single trait of stem cells, Li and his team wondered whether it could be pulled apart into three distinct requirements: proliferation, maintenance of the undifferentiated state, and the suppression of programmed cell death or apoptosis. "The default state of stem cells is to differentiate into specialized cell types," explains postdoctoral researcher and first author John Perry, Ph.D. "Differentiation must be blocked in order for stem cells to undergo self-renewal."




Proliferation of stem cells in an undifferentiated state, however, calls tumor suppressor genes into action. These genes help prevent cancer by inducing a process of cell death known as apoptosis. "Consequently, self-renewal of adult stem cells must also include a third event, the active suppression of apoptosis," says Perry.

To test their hypothesis, Perry and his colleagues isolated hematopoietic stem cells from mice and analyzed two key genetic pathways—the Wnt/β-catenin and PI3K/Akt pathways. Wnt proteins had been identified as "self-renewal factors," while PI3K/Akt activation had been shown to induce proliferation and promote survival by inhibiting apoptosis.

Surprisingly, activation of the Wnt/β-catenin pathway alone blocked differentiation but eventually resulted in cell death, while activation of the PI3K/Akt pathway alone increased differentiation but facilitated cell survival. Only when both pathways were activated did the pool of hematopoietic stem cells start expanding. "This demonstrated both pathways had to cooperate to promote self-renewal," says Perry.

Although altering both pathways drives self-renewal of hematopoietic stem cells, it also permanently blocks their ability to mature into fully functional blood cells. To sidestep the differentiation block and generate normal, functioning hematopoietic stem cells usable for therapy, the Stowers scientists used small molecules to reversibly activate both the Wnt/β-catenin and PI3K/Akt pathways in culture.

"We were able to expand the most primitive hematopoietic stem cells, which, when transplanted back into mice, gave rise to all blood cell types through three sequential transplantation experiments," says Li. "If similar results can be achieved using human hematopoietic stem cells from sources such as umbilical cord blood, this work is expected to have substantial clinical impact."

Provided by Stowers Institute for Medical Research

Wednesday, September 14, 2011

Printing off the paper: Pushing the boundaries of the burgeoning technology of 3-D printing


Imagine being able to "print" an entire house. Or a four-course dinner. Or a complete mechanical device such as a cuckoo clock, fully assembled and ready to run. Or a printer capable of printing ... yet another printer?
One of the 3-D printers at work in the Mediated Matter group at the MIT Media Lab. Photo: Melanie Gonick

These are no longer sci-fi flights of fancy. Rather, they are all real (though very early-stage) research projects underway at MIT, and just a few ways the Institute is pushing forward the boundaries of a technology it helped pioneer nearly two decades ago. A flurry of media stories this year have touted three-dimensional printing — or “3DP” — as the vanguard of a revolution in the way goods are produced, one that could potentially usher in a new era of “mass customization.”

One of the first practical 3-D printers, and the first to be called by that name, was patented in 1993 by MIT professors Michael Cima, now the Sumitomo Electric Industries Professor of Engineering, and Emanuel Sachs, now the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering. Unlike earlier attempts, this machine could create objects made of plastic, ceramic and metal. The MIT-inspired 3DPs are now in use “all over the world,” Cima says.

The initial motivation was to produce models for visualization — for architects and others — and help streamline the development of new products, such as medical devices. Cima explains, “The slow step in product development was prototyping. We wanted to be able to rapidly prototype surgical tools, and get them into surgeons’ hands to get feedback.”

3DP technology involves building up a shape gradually, one thin layer at a time. The device uses a “stage” — a metal platform mounted on a piston — that’s raised or lowered by a tiny increment at a time. A layer of powder is spread across this platform, and then a print head similar to those used in inkjet printers deposits a binder liquid onto the powder, binding it together. Then, the platform is lowered by another tiny increment, another thin layer of powder is applied on top of the last, and the next layer of binder is deposited.
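The spread-print-lower cycle described above is easy to capture in a few lines. A minimal simulation, with a hypothetical stand-in object for the real stage hardware (the class and method names here are illustrative, not an actual printer API):

```python
# Minimal simulation of the 3DP layer-by-layer control loop.
LAYER_HEIGHT_MM = 0.1

class Stage:
    """Hypothetical stand-in for the piston-mounted build platform."""
    def __init__(self):
        self.height_mm = 0.0
    def lower_by(self, dz_mm):
        self.height_mm -= dz_mm

def print_object(layer_masks, stage):
    """Build a part one slice at a time; returns the bound-voxel count.

    layer_masks: one 2-D boolean grid per layer; True marks where the
    print head should deposit binder (the part's cross-section).
    """
    bound = 0
    for mask in layer_masks:
        # 1) spread a fresh layer of powder across the stage
        # 2) deposit binder only where this slice's mask is True
        bound += sum(row.count(True) for row in mask)
        # 3) lower the stage to make room for the next powder layer
        stage.lower_by(LAYER_HEIGHT_MM)
    return bound

# a tiny 2-layer "part": two solid 2x2 slices
slices = [[[True, True], [True, True]]] * 2
stage = Stage()
print(print_object(slices, stage), f"stage at {stage.height_mm:.1f} mm")
```

Real slicers generate these cross-section masks from a CAD model, but the control loop itself is exactly this simple.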

Made to order

With its layers of powder, such a system can make complex shapes that earlier liquid-based 3DP systems could not produce. And different combinations of powders and binders could make a variety of materials — “anything you can make from powders: ceramics, metals, plastics,” Cima says — or even a mix of different materials in the same printed object, using different liquids in the print heads, like the different colors of ink in an inkjet printer.

In one early version, the powder was aluminum oxide, the binder was colloidal silica, and the resulting solid objects were brittle, similar to materials sometimes used as molds for metal casting. They provided, for the first time, a relatively simple way to get one’s hands on a three-dimensional version of just about any shape that could be sketched by computer-assisted design (CAD) software, before manufacturers committed to mass production at much greater cost.


Over the years, the three MIT researchers and one of the companies that licensed the MIT patent, Z Corp., added new variations, including the ability to include colors in printed objects and to use a variety of materials. The ability to print metal objects, in particular, extended the technology from just a way of visualizing new designs to a means of manufacturing metal molds used for the injection molding of plastic parts.

Samuel Allen SM ’71, PhD ’75, the POSCO Professor of Physical Metallurgy and chair of the MIT faculty, spent a decade developing the metal-printing process. In producing molds for injection molding, he says, “the plastic shapes can be quite complicated, with round surfaces and thin walls.” In addition to the shapes of the finished parts, the molds need to have channels for the plastic material to be injected, and they have to be designed so that the resulting pieces can cool uniformly without warping. The 3DP process made it possible to make “parts you could not make through conventional machining,” Allen says.

Manufacturing companies took a strong interest in this work because it enabled “doing a complete design for a tool in days, rather than months,” he adds. “That means you can afford to go through more design iterations.”

Time for a snack

3DP has since branched out in a wide array of directions, at various companies and research institutions around the world. Applications have included everything from the printing of customized prosthetic limbs to nanoprinting of tiny machinery to a project at the MIT Media Lab developing machines to print food ranging from candies to complete meals. One former Media Lab student, Peter Schmitt PhD ’11, working with Media Lab IP consultant Bob Swartz, has printed entire working clocks — with all their gears, chains, faces and hands in a single unit — ready to start ticking as soon as the surplus powder is washed away.

“Mass production is only a couple of hundred years old,” Swartz says. Now, “we’re moving into an area where things will no longer be mass produced.” With 3DP, a basic pattern can be modified to fit an individual’s size, fit and personal tastes before printing.

These clocks were primarily intended to demonstrate that complex devices could be printed as a unit — but one clock took about 100 hours of printing time to produce. “That’s completely impractical for any kind of mass production,” Swartz says, “but it’s my belief that one can get orders-of-magnitude improvements” in the production speed. “It changes the way we think about production.”



Printing better materials

Another variant underway now is a system being developed by Neri Oxman PhD ’10, the Media Lab’s Sony Corporation Career Development Assistant Professor of Media Arts and Sciences, and her graduate student Steven Keating for “printing” concrete. Their ultimate aim: printing a complete structure, even a whole building.

Why do that, instead of the tried-and-true method of casting concrete in wooden forms that dates from the heyday of the Roman Empire? In part, Oxman explains, because it opens up new possibilities in both form and function. Not only would it be possible to create fanciful, organic-looking shapes that would be difficult or impossible using molds, but the technique could also allow the properties of the concrete itself to vary continuously, producing structures that are both lighter and stronger than conventional concrete.

To illustrate this, Keating uses the example of a palm tree compared to a typical structural column. In a concrete column, the properties of the material are constant, resulting in a very heavy structure. But a palm tree’s trunk varies: denser at the outside and lighter toward the center. As part of his thesis research, he has already made sections of concrete with the same kind of variations of density.

“Nature always uses graded materials,” Keating says. Bone, for example, consists of “a hard, dense outer shell, and an interior of spongy material. It gives you a high strength-to-weight ratio. You don’t see that in man-made materials.” Not yet, at least.
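The palm-trunk idea can be made concrete with a toy radial density profile: light at the core, dense at the shell. The numbers and the quadratic grading below are illustrative assumptions, not the Media Lab team's data:

```python
# Toy radially graded density for a "palm-trunk" column.
def density(x, rho_core=800.0, rho_shell=2400.0):
    """kg/m^3 at normalized radius x in [0, 1]; quadratic grading."""
    return rho_core + (rho_shell - rho_core) * x ** 2

# Average density of the graded column, area-weighted over the circular
# cross-section (midpoint Riemann sum of density(x) * 2x dx on [0, 1]).
N = 100_000
avg = sum(density((i + 0.5) / N) * 2 * (i + 0.5) / N for i in range(N)) / N
print(f"graded: {avg:.0f} kg/m^3 vs uniform shell: 2400 kg/m^3")
```

Under these assumed numbers the graded column keeps its stiff, dense outer envelope at roughly two-thirds the mass of a uniformly dense one, which is the gain Keating's palm-tree analogy points at.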

Concrete samples made by hand to illustrate the concept of density gradient in concrete. A team from the MIT Media Lab hopes to be able to print such materials with a 3-D printer. Photo: Steven Keating, Timothy Cooke and John Fernández

Variable-density printing is not just about large-scale objects. For example, Oxman has used a similar system to produce a glove with sections that are stiff and others that are flexible, designed to help prevent the wearer from developing carpal tunnel syndrome. She has also designed a chair made of different polymers, producing stiff areas for structural support and flexible areas for comfort, all printed out as a single unit.

Peter Schmitt, now a visiting scientist at the Media Lab, is pushing the technology in an even more sci-fi direction, trying to “build machines that could build machines,” he says. So far, he’s succeeded in making machines that can make many of the parts for another machine, but there remain many obstacles in establishing connections among these — and it’s still more of an intellectual exercise than a practical system, he concedes. “There are better ways to make the parts,” he says. “But at some point, these kinds of things will happen.” 

This story is republished courtesy of MIT News (http://web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Tuesday, September 13, 2011

Have We Met Before? Direct Connections Found Between Areas of Brain Responsible for Voice and Face Recognition


Face and voice are the two main features by which we recognise other people. Researchers at the Max Planck Institute (MPI) for Human Cognitive and Brain Sciences have now discovered that there is a direct structural connection consisting of fibre pathways between voice- and face-recognition areas in the human brain. The exchange of information, which is assumed to take place between these areas via this connection, could help us to quickly identify familiar people in everyday situations and also under adverse conditions.
Direct structural connections exist between the two voice recognition areas (blue and red spheres) and the face recognition area (yellow sphere). In comparison, the connection to the area responsible for more general acoustic information (green sphere) is less strong. The connections appear to be part of larger fibre bundles (shown in grey). (Credit: MPI for Human Cognitive and Brain Sciences)

Theories differ as to what happens in the brain when we recognise familiar persons. Conventionally, it is assumed that voice and face recognition are separate processes which are only combined on a higher processing level. However, recent findings indicate that voice and face recognition are much more closely related. Katharina von Kriegstein, Leader of the Max Planck Research Group "Neural Mechanisms of Human Communication," found in previous research that areas of the brain which are responsible for the identification of faces also become active when we hear a familiar voice. These activations were accompanied by better voice recognition.

"We now assume that areas in the brain which are involved in voice and face recognition interact directly and influence each other," says Helen Blank, a member of von Kriegstein's research group. In a new study, Blank could show that a structural connection between voice and face recognition areas exists. She used diffusion-weighted magnetic resonance imaging, a method with which the course of white matter tracts in the brain can be reconstructed when combined with tractography, a mathematical modelling technique. Blank had located the areas responsible for voice and face recognition in her study participants by measuring the reactions of the brain to different voices and faces using magnetic resonance imaging.

Blank discovered a direct connection consisting of fibre pathways between the voice- and the face-recognition area. "It is particularly interesting that the face recognition area appears to be more strongly connected with the areas involved in voice identification, despite the fact that these areas are further away than areas which process information from voices on a more general level," says the researcher.



This direct connection in our brains could be used in everyday contexts to simulate the faces of our conversation partners, e.g. when we speak on the telephone to a familiar person. However, the precise nature of the information that is exchanged between the voice- and face-recognition areas remains unclear. A forthcoming study which Blank is currently preparing aims to clarify this issue.

Obtaining a more detailed understanding of how the brain works in relation to the processing of such basic tasks as person recognition could be of benefit in many different areas. "The finding is of interest for research on unusual neurological conditions, such as prosopagnosia and phonagnosia, which prevent people from being able to recognise others from their faces or voices," says Blank. The new insights could also stimulate innovations in computer technology and improve person recognition by machines.
