Thursday, January 5, 2012

Leaping Lizards and Dinosaurs Inspire Robot Design



Leaping lizards have a message for robots: Get a tail! University of California, Berkeley, biologists and engineers -- including undergraduate and graduate students -- studied how lizards manage to leap successfully even when they slip and stumble. They found that lizards swing their tails upward to prevent them from pitching head-over-heels into a rock.

An Agama lizard next to Tailbot, a toy car with an attached tail and a toy figure. Sensors detect Tailbot's orientation and swing the tail upward to keep the robot from pitching forward, similar to the way the lizard uses its tail. (Credit: Photo by Robert Full lab, UC Berkeley.)

But after the team added a tail to a robotic car named Tailbot, they discovered that counteracting the effect of a slip is not as simple as throwing your tail in the air. Instead, robots and lizards must actively adjust the angle of their tails just right to remain upright.

"We showed for the first time that lizards swing their tail up or down to counteract the rotation of their body, keeping them stable," said team leader Robert J. Full, UC Berkeley professor of integrative biology. "Inspiration from lizard tails will likely lead to far more agile search-and-rescue robots, as well as ones having greater capability to more rapidly detect chemical, biological or nuclear hazards."

Agile theropod dinosaurs like the velociraptor depicted in the movie Jurassic Park may also have used their tails as stabilizers to prevent forward pitch, Full said. Their tail movement is illustrated in a prescient chase sequence from the 1993 movie in which the animated animal leaps from a balcony onto a T. rex skeleton.

"Muscles willing, the dinosaur could be even more effective with a swing of its tail in controlling body attitude than the lizards," Full said.

Student involvement crucial to research

Full and his laboratory colleagues, including both engineering and biology students, will report their discoveries online on Jan. 5 in advance of publication in the Jan. 12 print edition of the journal Nature. The paper's first author, mechanical engineering graduate student Thomas Libby, also will report the results on Jan. 7 at the annual meeting of the Society for Integrative and Comparative Biology in Charleston, S.C.

Full is enthusiastic about the interplay fostered at UC Berkeley between biologists and engineers in the Center for Interdisciplinary Bio-inspiration in Education and Research (CiBER) lab. There he offers a research-based teaching lab that gives dozens of undergraduate students the opportunity to conduct cutting-edge research in teams with graduate students, so that each team experiences how biologists and engineers approach a problem.

"Learning in the context of original discovery, finding out something that no one has ever know before, really motivated me," said former UC Berkeley integrative biology undergraduate Talia Moore, now a graduate student in the Department of Organismic and Evolutionary Biology at Harvard University. "This research-based lab course … showed me how biologists and engineers can work together to benefit both fields."

"This paper shows that research-based teaching leads to better learning and simultaneously can lead to cutting-edge research," added Full, who last year briefed the U.S. House of Representative's Science, Technology, Engineering and Mathematics (STEM) Education Caucus on this topic. "It also shows the competitive advantage of interdisciplinary approaches and how involvement of undergraduates in research can lead to innovation."

From gecko toe hairs to tails

Full's research over the past 20 years has revealed how the toe hairs of geckos assist them in climbing smooth vertical surfaces and, more recently, how their tails help to keep them from falling when they slip and to right themselves in mid-air.

The new research tested a 40-year-old hypothesis that the two-legged theropod dinosaurs -- the ancestors of birds -- used their tails as stabilizers while running or dodging obstacles or predators. In Full's teaching laboratory, students noticed a lizard's recovery after slipping during a leap and thought a study of stumbling would be a perfect way to test the value of a tail.

In the CiBER lab, Full and six of his students used high-speed videography and motion capture to record how a red-headed African Agama lizard handled leaps from a platform with different degrees of traction, from slippery to easily gripped.

They coaxed the lizards to run down a track, vault off a low platform and land on a vertical surface with a shelter on top. When the friction on the platform was reduced, lizards slipped, causing their bodies to potentially spin out of control.

When the researchers saw how the lizard used its tail to counteract the spin, they created a mathematical model as well as Tailbot -- a toy car equipped with a tail and small gyroscope to sense body position -- to better understand the animal's skills. With a tail but no feedback from sensors about body position, Tailbot took a nose dive when driven off a ramp, mimicking a lizard's take-off. When body position was sensed and fed back to the tail motor, however, Tailbot was able to stabilize its body in midair. The actively controlled tail effectively redirected the angular momentum of the body into the tail's swing, as happens with leaping lizards, Full said.
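For readers who want to see the control idea in code, here is a minimal sketch of the kind of feedback loop described above: a gyroscope reading of body pitch drives a motor torque on the tail, and the equal-and-opposite reaction torque rights the body. The two-body model, the PD gains and the inertia values are illustrative assumptions, not parameters from the Berkeley study.

```python
# Minimal sketch of tail-based attitude control, in the spirit of Tailbot.
# Two rigid bodies (body + tail) are coupled by a motor: torque applied to
# the tail produces an equal and opposite reaction on the body, so total
# angular momentum is conserved. All parameter values are assumptions.
import math

I_BODY, I_TAIL = 2e-3, 5e-4   # moments of inertia, kg*m^2 (assumed)
KP, KD = 0.8, 0.05            # PD gains on body pitch (assumed)
DT = 1e-3                     # integration step, s

theta_b, omega_b = 0.0, -6.0  # body pitch (rad) and pitch rate after a slip
theta_t, omega_t = 0.0, 0.0   # tail angle and rate

for _ in range(500):          # simulate 0.5 s of free flight
    # Gyroscope feedback: torque the tail so its reaction rights the body.
    tau = KP * theta_b + KD * omega_b
    omega_t += (tau / I_TAIL) * DT
    omega_b += (-tau / I_BODY) * DT   # reaction torque on the body
    theta_t += omega_t * DT
    theta_b += omega_b * DT

print(f"final body pitch: {math.degrees(theta_b):+.1f} deg")
print(f"tail has swung through: {math.degrees(theta_t):+.1f} deg")
```

Because the motor torque is internal, the body's angular momentum doesn't vanish; it is transferred into the tail's swing, which is exactly the behavior the article attributes to the lizard.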

Inertial assisted robotics

Tailbot's design pushed the boundaries of control in robotics in an area researchers call inertial assisted robotics, and it was an attention-grabber at last October's International Conference on Intelligent Robots and Systems. The UC Berkeley researchers' paper, presented by Libby and fellow mechanical engineering graduate student Evan Chang-Siu, was one of five finalists among more than 2,000 robot studies.

"Engineers quickly understood the value of a tail," Libby said, noting that when he dropped Tailbot nose-down, it was able to right itself before it had dropped a foot. "Robots are not nearly as agile as animals, so anything that can make a robot more stable is an advancement, which is why this work is so exciting."

Full and his students are now investigating the role of the tail in controlling pitch, roll and yaw while running.

UC Berkeley coauthors include Full and students Moore, Libby and Chang-Siu, along with Department of Integrative Biology undergraduate Deborah Li and graduate students Ardian Jusufi in the Department of Integrative Biology and Daniel Cohen in the Department of Bioengineering.

The work was funded by the National Science Foundation, including the NSF's Integrative Graduate Education and Research Traineeship (IGERT) program, and the Micro Autonomous Systems Technologies (MAST) consortium, a large group of researchers funded in part by the U.S. Army Research Laboratory that is focused on creating autonomous sensing robots.

Monday, December 26, 2011

Chemists Solve an 84-Year-Old Theory On How Molecules Move Energy After Light Absorption



The same principle that causes figure skaters to spin faster as they draw their arms into their bodies has now been used by Michigan State University researchers to understand how molecules move energy around following the absorption of light.

MSU chemist Jim McCusker and postdoctoral researcher Dong Guo proved an 84-year-old theory. (Credit: Photo courtesy of MSU.)

Conservation of angular momentum is a fundamental property of nature, one that astronomers use to detect the presence of satellites circling distant planets. In 1927, it was proposed that this principle should apply to chemical reactions, but a clear demonstration has never been achieved.

In the current issue of Science, MSU chemist Jim McCusker demonstrates for the first time that the effect is real and also suggests how scientists could use it to control and predict chemical reaction pathways in general.

"The idea has floated around for decades and has been implicitly invoked in a variety of contexts, but no one had ever come up with a chemical system that could demonstrate whether or not the underlying concept was valid," McCusker said. "Our result not only validates the idea, but it really allows us to start thinking about chemical reactions from an entirely different perspective."

The experiment involved the preparation of two closely related molecules that were specifically designed to undergo a chemical reaction known as fluorescence resonance energy transfer, or FRET. Upon absorption of light, the system is predisposed to transfer that energy from one part of the molecule to another.

McCusker's team changed the identity of one of the atoms in the molecule from chromium to cobalt. This altered the molecule's properties and shut down the reaction. The absence of any detectable energy transfer in the cobalt-containing compound confirmed the hypothesis.

"What we have successfully conducted is a proof-of-principle experiment," McCusker said. "One can easily imagine employing these ideas to other chemical processes, and we're actually exploring some of these avenues in my group right now."

The researchers believe their results could impact a variety of fields including molecular electronics, biology and energy science through the development of new types of chemical reactions.

Dong Guo, a postdoctoral researcher, and Troy Knight, former graduate student and now research scientist at Dow Chemical, were part of McCusker's team. Funding was provided by the National Science Foundation.

Wednesday, October 26, 2011

Design Rules Will Enable Scientists to Use DNA to Build Nanomaterials With Desired Properties


Nature is a master builder. Using a bottom-up approach, nature takes tiny atoms and, through chemical bonding, makes crystalline materials, like diamonds, silicon and even table salt. In all of them, the properties of the crystals depend upon the type and arrangement of atoms within the crystalline lattice.

Abstract rendering of a DNA strand. (Credit: iStockphoto/Johan Swanepoel)

Now, a team of Northwestern University scientists has learned how to top nature by building crystalline materials from nanoparticles and DNA, the same material that defines the genetic code for all living organisms.

Using nanoparticles as "atoms" and DNA as "bonds," the scientists have learned how to create crystals with the particles arranged in the same types of atomic lattice configurations as some found in nature, but they also have built completely new structures that have no naturally occurring mineral counterpart.

The basic design rules the Northwestern scientists have established for this approach to nanoparticle assembly promise the possibility of creating a variety of new materials that could be useful in catalysis, electronics, optics, biomedicine and energy generation, storage and conversion technologies.

The new method and design rules for making crystalline materials from nanostructures and DNA will be published Oct. 14 by the journal Science.

"We are building a new periodic table of sorts," said Professor Chad A. Mirkin, who led the research. "Using these new design rules and nanoparticles as 'artificial atoms,' we have developed modes of controlled crystallization that are, in many respects, more powerful than the way nature and chemists make crystalline materials from atoms. By controlling the size, shape, type and location of nanoparticles within a given lattice, we can make completely new materials and arrangements of particles, not just what nature dictates."

Mirkin is the George B. Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences and professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering and director of Northwestern's International Institute for Nanotechnology (IIN).

"Once we have a certain type of lattice," Mirkin said, "the particles can be moved closer together or farther apart by changing the length of the interconnecting DNA, thereby providing near-infinite tunability."

"This work resulted from an interdisciplinary collaboration that coupled synthetic chemistry with theoretical model building," said coauthor George C. Schatz, a theoretician and the Charles E. and Emma H. Morrison Professor of Chemistry at Northwestern. "It was the back and forth between synthesis and theory that was crucial to the development of the design rules. Collaboration is a special aspect of research at Northwestern, and it worked very effectively for this project."



In the study, the researchers start with two solutions of nanoparticles coated with single-stranded DNA. They then add DNA strands that bind to these DNA-functionalized particles and present a large number of DNA "sticky ends" at a controlled distance from the particle surface; these sticky ends then bind to the sticky ends of adjacent particles, forming a macroscopic arrangement of nanoparticles.

Different crystal structures are achieved by using different combinations of nanoparticles (with varying sizes) and DNA linker strands (with controllable lengths). After a process of mixing and heating, the assembled particles transition from an initially disordered state to one where every particle is precisely located according to a crystal lattice structure. The process is analogous to how ordered atomic crystals are formed.

The researchers report six design rules that can be used to predict the relative stability of different structures for a given set of nanoparticle sizes and DNA lengths. In the paper, they use these rules to prepare 41 different crystal structures with nine distinct crystal symmetries. However, the design rules outline a strategy to independently adjust each of the relevant crystallographic parameters, including particle size (varied from 5 to 60 nanometers), crystal symmetry and lattice parameters (which can range from 20 to 150 nanometers). This means that these 41 crystals are just a small example of the near infinite number of lattices that could be created using different nanoparticles and DNA strands.
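As a rough illustration of that tunability, the nearest-neighbor spacing in such a lattice can be approximated as the particle core plus two DNA linkers. The sketch below uses the standard ~0.34-nanometer rise per DNA base and hypothetical linker lengths; it is back-of-envelope geometry, not the paper's design rules.

```python
# Back-of-envelope geometry: estimated center-to-center spacing of two
# DNA-linked nanoparticles (core diameter plus a linker on each side).
# The rise per base is the standard B-DNA figure; linker lengths and the
# simple additive model are assumptions for illustration only.
RISE_PER_BASE_NM = 0.34

def neighbor_spacing(core_diameter_nm, linker_bases):
    """Particle core plus a DNA linker on each side, in nanometers."""
    return core_diameter_nm + 2 * linker_bases * RISE_PER_BASE_NM

for core in (5, 20, 60):           # particle sizes from the reported range, nm
    for bases in (20, 80, 160):    # hypothetical linker lengths, in bases
        d = neighbor_spacing(core, bases)
        print(f"{core:>2} nm core, {bases:>3}-base linker -> ~{d:5.1f} nm")
```

Even this crude estimate shows how particle size and linker length together span spacings on the order of the 20-to-150-nanometer lattice parameters reported in the paper.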

Mirkin and his team used gold nanoparticles in their work but note that their method also can be applied to nanoparticles of other chemical compositions. Both the type of nanoparticle assembled and the symmetry of the assembled structure contribute to the properties of a lattice, making this method an ideal means to create materials with predictable and controllable physical properties.

Mirkin believes that, one day soon, software will be created that allows scientists to pick the particle and DNA pairs required to make almost any structure on demand.

The Air Force Office of Scientific Research, the U.S. Department of Energy Office of Basic Energy Sciences and the National Science Foundation supported the research.

Friday, October 14, 2011

Dark Matter of the Genome Revealed




An international team of researchers has discovered the vast majority of the so-called "dark matter" in the human genome, by means of a sweeping comparison of 29 mammalian genomes. The team, led by scientists from the Broad Institute, has pinpointed the parts of the human genome that control when and where genes are turned on. This map is a critical step in interpreting the thousands of genetic changes that have been linked to human disease.

Rendering of DNA. Researchers have discovered the vast majority of the so-called "dark matter" in the human genome, by means of a sweeping comparison of 29 mammalian genomes. (Credit: iStockphoto/Martin McCarthy)

Their findings appear online October 12 in the journal Nature.

Early comparison studies of the human and mouse genomes led to the surprising discovery that the regulatory information that controls genes dwarfs the information in the genes themselves. But, these studies were indirect: they could infer the existence of these regulatory sequences, but could find only a small fraction of them. These mysterious sequences have been referred to as the dark matter of the genome, analogous to the unseen matter and energy that make up most of the universe.

This new study enlisted a menagerie of mammals -- including rabbit, bat, elephant, and more -- to reveal these mysterious genomic elements.

Over the last five years, the Broad Institute, the Genome Institute at Washington University, and the Baylor College of Medicine Human Genome Sequencing Center have sequenced the genomes of 29 placental mammals. The research team compared all of these genomes, 20 of which are first reported in this paper, looking for regions that remained largely unchanged across species.
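The core computational idea -- scan an alignment of many species and flag positions that stay unchanged -- can be shown in toy form. The sequences below are invented and the scoring is deliberately naive; real comparative-genomics pipelines use phylogeny-aware substitution-rate models.

```python
# Toy version of conservation scanning across an alignment of species.
# Sequences are made up; this just counts agreement with the human base,
# whereas real pipelines weight each species by its evolutionary distance.
alignment = {
    "human":    "ATGCGTACGTTAGC",
    "mouse":    "ATGCGTACGTAAGC",
    "dog":      "ATGCGAACGTTAGC",
    "elephant": "ATGCGTACGTTAGC",
    "bat":      "ATGCTTACGTTTGC",
}

human = alignment["human"]
others = [seq for name, seq in alignment.items() if name != "human"]

for i, base in enumerate(human):
    matches = sum(seq[i] == base for seq in others)
    flag = "conserved" if matches == len(others) else ""
    print(f"pos {i:2d}  {base}  {matches}/{len(others)}  {flag}")
```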

"With just a few species, we didn't have the power to pinpoint individual regions of regulatory control," said Manolis Kellis, last author of the study and associate professor of computer science at MIT. "This new map reveals almost 3 million previously undetectable elements in non-coding regions that have been carefully preserved across all mammals, and whose disruptions appear to be associated with human disease."

These findings could yield a deeper understanding of disease-focused studies, which look for genetic variants closely tied to disease.

"Most of the genetic variants associated with common diseases occur in non-protein coding regions of the genome. In these regions, it is often difficult to find the causal mutation," said first author Kerstin Lindblad-Toh, scientific director of vertebrate genome biology at the Broad and a professor in comparative genomics at Uppsala University, Sweden. "This catalog will make it easier to decipher the function of disease-related variation in the human genome."

This new map helps pinpoint those mutations that are likely responsible for disease, as they have been preserved across millions of years of evolution, but are commonly disrupted in individuals that suffer from a given disease. Knowing the causal mutations and their likely functions can then help uncover the underlying disease mechanisms and reveal potential drug targets.

The scientists were able to suggest possible functions for more than half of the 360 million DNA letters contained in the conserved elements, revealing the hidden meaning behind the As, Cs, Ts, and Gs. These elements included:
  • Almost 4,000 previously undetected exons, or segments of DNA that code for protein
  • 10,000 highly conserved elements that may be involved in how proteins are made
  • More than 1,000 new families of RNA secondary structures with diverse roles in gene regulation
  • 2.7 million predicted targets of transcription factors, proteins that control gene expression

"We can use this treasure trove of new elements to revisit disease association studies, focusing on those that disrupt conserved elements and trying to discern their likely functions," said Kellis. "Using a single genome, the language of DNA seems cryptic. When studied through the lens of evolution, words light up and gain meaning."

The researchers were also able to harness this collection of genomes to look back in time, across more than 100 million years of evolution, to uncover the fundamental changes that shaped mammalian adaptation to different environments and lifestyles. The researchers revealed specific proteins under rapid evolution, including some related to the immune system, taste perception, and cell division. They also uncovered hundreds of protein domains within genes that are evolving rapidly, some of which are related to bone remodeling and retinal functions.

"The comparison of mammalian genomes reveals the regulatory controls that are common across all mammals," said Eric Lander, director of the Broad Institute and the third corresponding author of the paper. "These evolutionary innovations were devised more than 100 million years ago and are still at work in the human population today."

In addition to finding the DNA controls that are common across all mammals, the comparison highlighted areas that have been changing rapidly only in the human and primate genomes. Researchers had previously uncovered two hundred of these regions, some of which are linked to brain and limb development. The expanded list -- which now includes more than 1,000 regions -- will give scientists new starting points for understanding human evolution.

The comparison of many complete genomes is beginning to offer a clear view of once indiscernible genomic regions, and with additional genomes, that resolution will only increase. "The power of this resource is that it continues to improve with the inclusion of more species," said Lindblad-Toh. "It's a very systematic and unbiased approach that will only become more powerful with the inclusion of additional genomes."

Other Broad researchers who contributed to this work include Manuel Garber, Or Zuk, Michael F. Lin, Pouya Kheradpour, Jason Ernst, Evan Mauceli, Lucas D. Ward, Michele Clamp, Sante Gnerre, Jessica Alföldi, Jean Chang, Federica Di Palma, Mitchell Guttman, David B. Jaffe, Irwin Jungreis, Marcia Lara, Jim Robinson, Xiaohui Xie, Michael C. Zody, and members of the Broad Institute Sequencing Platform and Whole Genome Assembly Team.

This project was supported by the National Human Genome Research Institute, the National Institute of General Medical Sciences, the European Science Foundation, the National Science Foundation, the Sloan Foundation, an Erwin Schrödinger Fellowship, the Gates Cambridge Trust, the Novo Nordisk Foundation, the University of Copenhagen, the David and Lucile Packard Foundation, the Danish Council for Independent Research Medical Sciences, and The Lundbeck Foundation.

Monday, October 3, 2011

Measuring Global Photosynthesis Rate: Earth's Plant Life 'Recycles' Carbon Dioxide Faster Than Previously Estimated


A research team led by the Scripps Institution of Oceanography at UC San Diego followed the path of oxygen atoms on carbon dioxide molecules during photosynthesis to create a new way of measuring the efficiency of the world's plant life.


Researchers followed the path of oxygen atoms on carbon dioxide molecules during photosynthesis to create a new way of measuring the efficiency of Earth's plant life. (Credit: © Dmitrijs Dmitrijevs / Fotolia)

A team led by postdoctoral researcher Lisa Welp considered the oxygen atoms contained in the carbon dioxide taken up by plants during photosynthesis. The ratio of two oxygen isotopes in carbon dioxide told researchers how long the CO2 had been in the atmosphere and how fast it had passed through plants. From this, they estimated that the global rate of photosynthesis is about 25 percent faster than thought.

"It's really hard to measure rates of photosynthesis for forests, let alone the entire globe. For a single leaf it's not so hard, you just put it in an instrument chamber and measure the CO2 decreasing in the chamber air," said Welp. "But you can't do that for an entire forest. What we have done is to use a naturally occurring marker in atmospheric CO2 that let us track how often it ended up inside a plant leaf, and from that we estimated the mean global rate of photosynthesis over the last few decades."

The authors of the study, published in the journal Nature, said the new estimate of the rate of global photosynthesis enabled by their method will in turn help guide other estimates of plant activity such as the capacity of forests and crops to grow. Understanding such variables is becoming increasingly important to scientists and policymakers attempting to understand the potential changes to ecosystems that can be expected from global warming.

"It speaks to the question, how alive is the Earth? We answer that it is a little more alive than previously believed," said study co-author and director of the Scripps CO2 Research Group, Ralph Keeling.

The key to this new approach was establishing a means of linking the changes in oxygen isotopes to El Niño, the global climate phenomenon that is associated with a variety of unusual weather patterns including low amounts of rainfall in tropical regions of Asia and South America. The naturally occurring forms of oxygen known as 18O and 16O are present in different proportions to each other in water inside leaves during dry periods in the tropics. This signal in leaf waters is passed along to CO2 when CO2 mingles with the water inside leaves. This exchange of oxygen between CO2 and plant water also occurs in regions outside of the tropics that aren't as affected by El Niño and eventually returns this 18O/16O ratio to its norm. Welp's team used the time it took for this return to normal to infer the speed at which photosynthesis is taking place. They discovered that the ratio returned to normal faster than previously expected.




From this, the team revised the rate of global photosynthesis upward. The rate is expressed in terms of how much carbon is processed by plants in a year. From the previous estimate of 120 petagrams of carbon a year, the team set the annual rate between 150 and 175 petagrams. One petagram equals one trillion kilograms.
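A quick check shows how the quoted figures hang together (illustrative arithmetic only):

```python
# Illustrative arithmetic only, checking the figures quoted above.
old_rate = 120.0                  # previous global estimate, Pg C per year
new_low, new_high = 150.0, 175.0  # revised range, Pg C per year

print(f"low end:  {100 * (new_low / old_rate - 1):.0f}% faster")   # 25%
print(f"high end: {100 * (new_high / old_rate - 1):.0f}% faster")  # ~46%
print("1 petagram = 1e15 g = 1e12 kg (one trillion kilograms)")
```

The low end of the revised range, 150 petagrams, is exactly the "about 25 percent faster" figure quoted earlier in the article.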

Keeling added that part of the value of the study is its validation of the importance of long-term measurement series and of making multiple independent measurements of the same phenomena. The researchers conducted isotope analyses of air collected by the Scripps CO2 group at several locations around the world since 1977. It was only after decades of measurements that the researchers saw that several bumps in the isotope record matched the timing of El Niño events. They compared their data to samples collected by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO). The redundancy was needed to make sure the data from Scripps' own samples weren't the result of measurement errors, said Keeling, whose research group maintains the famous record of atmospheric carbon dioxide concentration known as the Keeling Curve. Keeling's father, Charles David Keeling, established the CO2 measurements in 1958.

"Supporting long-term measurements is not easy through the normal funding mechanisms, which expect to see results on time scales of typically four years or less," said Keeling. "Few science agencies are happy to commit to measuring variables over longer periods but the value of tracking changes in the atmosphere doesn't stop after four years. Decades of measurements were required to unravel the features highlighted in this paper."

Other co-authors of the report were Harro A.J. Meijer from the University of Groningen in the Netherlands; Roger Francey and Colin Allison from CSIRO; Alane Bollenbacher, Stephen Piper, and Martin Wahlen from Scripps; and Kei Yoshimura of the University of Tokyo. The National Science Foundation and the U.S. Department of Energy have provided long-term support for collection of the data used in the study.

Tuesday, September 27, 2011

Scientists discover an organizing principle for our sense of smell


The fact that certain smells cause us pleasure or disgust would seem to be a matter of personal taste. But new research at the Weizmann Institute shows that odors can be rated on a scale of pleasantness, and this turns out to be an organizing principle for the way we experience smell. The findings, which appeared today in Nature Neuroscience, reveal a correlation between the response of certain nerves to particular scents and the pleasantness of those scents. Based on this correlation, the researchers could tell by measuring the nerve responses whether a subject found a smell pleasant or unpleasant.

Our various sensory organs have evolved patterns of organization that reflect the type of input they receive. Thus the receptors in the retina, in the back of the eye, are arranged spatially for efficiently mapping out visual coordinates. The structure of the inner ear, on the other hand, is set up according to a tonal scale. But the organizational principle for our sense of smell has remained a mystery: Scientists have not even been sure if there is a scale that determines the organization of our smell organ, much less how the arrangement of smell receptors on the membranes in our nasal passages might reflect such a scale.

A team headed by Prof. Noam Sobel of the Weizmann Institute's Neurobiology Department set out to search for the principle of organization for smell. Hints that the answer could be tied to pleasantness had been seen in research labs around the world, including that of Sobel, who had previously found a connection between the chemical structure of an odor molecule and its place on a pleasantness scale. Sobel and his team thought that smell receptors in the nose -- of which there are some 400 subtypes -- could be arranged on the nasal membrane according to this scale. This hypothesis goes against the conventional view, which claims that the various smell receptors are mixed -- distributed evenly, but randomly, around the membrane.



In the experiment, the researchers inserted electrodes into the nasal passages of volunteers and measured the nerves' responses to different smells in various sites. Each measurement actually captured the response of thousands of smell receptors, as these are densely packed on the membrane. The scientists found that the strength of the nerve signal varies from place to place on the membrane. It appeared that the receptors are not evenly distributed, but rather, that they are grouped into distinct sites, each engaging most strongly with a particular type of scent. Further investigation showed that the intensity of a reaction was linked to the odor's place on the pleasantness scale. A site where the nerves reacted strongly to a certain agreeable scent also showed strong reactions to other pleasing smells and vice versa: The nerves in an area with a high response to an unpleasant odor reacted similarly to other disagreeable smells. The implication is that a pleasantness scale is, indeed, an organizing principle for our smell organ.

But does our sense of smell really work according to this simple principle? Natural odors are composed of a large number of molecules -- roses, for instance, release 172 different odor molecules. Nonetheless, says Sobel, the most dominant of those determine which sites on the membrane will react the most strongly, while the other substances make secondary contributions to the scent.

"We uncovered a clear correlation between the pattern of nerve reaction to various smells and the pleasantness of those smells. As in sight and hearing, the receptors for our sense of smell are spatially organized in a way that reflects the nature of the sensory experience," says Sobel. In addition, the findings confirm the idea that our experience of smells as nice or nasty is hardwired into our physiology, and not purely the result of individual preference. Sobel doesn't discount the idea that individuals may experience smells differently. He theorizes that cultural context and personal experience may cause a certain amount of reorganization in smell perception over a person's lifetime.

More information: DOI: 10.1038/nn.2926

Friday, September 23, 2011

Microwave Ovens a Key to Energy Production from Wasted Heat


More than 60 percent of the energy produced by cars, machines, and industry around the world is lost as waste heat -- an age-old problem -- but researchers have found a new way to make "thermoelectric" materials for use in technology that could potentially save vast amounts of energy.

Thermoelectric generation of electricity offers a way to recapture some of the enormous amounts of wasted energy lost during industrial activities. (Credit: Graphic courtesy of Oregon State University)

And it's based on a device found everywhere from kitchens to dorm rooms: a microwave oven.

Chemists at Oregon State University have discovered that simple microwave energy can be used to make a very promising group of compounds called "skutterudites," and lead to greatly improved methods of capturing wasted heat and turning it into useful electricity.

Producing these materials used to be a tedious, complex and costly process that took three or four days; now it can be done in two minutes.

Most people are aware you're not supposed to put metal foil into a microwave, because it will spark. But powdered metals are different, and OSU scientists are tapping into that basic phenomenon to heat materials to 1,800 degrees in just a few minutes -- on purpose, and with hugely useful results.

These findings, published in Materials Research Bulletin, should speed research and ultimately provide a more commercially useful, low-cost path to a future of thermoelectric energy.

"This is really quite fascinating," said Mas Subramanian, the Milton Harris Professor of Materials Science at OSU. "It's the first time we've ever used microwave technology to produce this class of materials."

Thermoelectric power generation, researchers say, is a way to produce electricity from waste heat -- something as basic as the hot exhaust from an automobile, or the wasted heat given off by a whirring machine. It's been known for decades but never really used other than in niche applications, because it's too inefficient and costly, and sometimes the materials needed are toxic. NASA has used some expensive and high-tech thermoelectric generators to produce electricity in outer space.

The problem of wasted energy is huge. A car, for instance, wastes about two-thirds of the energy it produces. Factories, machines and power plants discard enormous amounts of energy.

But the potential is also huge. A hybrid automobile that has both gasoline and electric engines, for instance, would be ideal to take advantage of thermoelectric generation to increase its efficiency. Heat that is now being wasted in the exhaust or vented by the radiator could instead be used to help power the car. Factories could become much more energy efficient, and electric utilities could recapture energy from heat that's now going up a smokestack. Minor applications might even include a wristwatch operated by body heat.



"To address this, we need materials that are low cost, non-toxic and stable, and highly efficient at converting low-grade waste heat into electricity," Subramanian said. "In material science, that's almost like being a glass and a metal at the same time. It just isn't easy. Because of these obstacles almost nothing has been done commercially in large scale thermoelectric power generation."

Skutterudites have some of the needed properties, researchers say, but historically have been slow and difficult to make. The new findings cut that production time from days to minutes, and should not only speed research on these compounds but ultimately provide a more affordable way to produce them on a mass commercial scale.

OSU researchers have created skutterudites using microwave technology with an indium cobalt antimonide compound, and believe others are possible. They are continuing research, and believe that ultimately a range of different compounds may be needed for different applications of thermoelectric generation.

Collaborators on this study included Krishnendu Biswas, a post-doctoral researcher, and Sean Muir, a doctoral candidate, both in the OSU Department of Chemistry. The work has been supported by both the National Science Foundation and U.S. Department of Energy.

"We were surprised this worked so well," Subramanian said. "Right now large-scale thermoelectric generation of electricity is just a good idea that we couldn't make work. In the future it could be huge." 

Sunday, July 31, 2011

Brain Cap Technology Turns Thought Into Motion; Mind-Machine Interface Could Lead to New Life-Changing Technologies for Millions of People


"Brain cap" technology being developed at the University of Maryland allows users to turn their thoughts into motion. Associate Professor of Kinesiology José 'Pepe' L. Contreras-Vidal and his team have created a non-invasive, sensor-lined cap with neural interface software that soon could be used to control computers, robotic prosthetic limbs, motorized wheelchairs and even digital avatars.
University of Maryland associate professor of kinesiology Jose "Pepe" Contreras-Vidal wears his Brain Cap, a noninvasive, sensor-lined cap with neural interface software that soon could be used to control computers, robotic prosthetic limbs, motorized wheelchairs and even digital avatars. (Credit: John Consoli, University of Maryland)

"We are on track to develop, test and make available to the public- within the next few years -- a safe, reliable, noninvasive brain computer interface that can bring life-changing technology to millions of people whose ability to move has been diminished due to paralysis, stroke or other injury or illness," said Contreras-Vidal of the university's School of Public Health.

The potential and rapid progression of the UMD brain cap technology can be seen in a host of recent developments, including a just published study in the Journal of Neurophysiology, new grants from the National Science Foundation (NSF) and National Institutes of Health, and a growing list of partners that includes the University of Maryland School of Medicine, the Veterans Affairs Maryland Health Care System, the Johns Hopkins University Applied Physics Laboratory, Rice University and Walter Reed Army Medical Center's Integrated Department of Orthopaedics & Rehabilitation.

"We are doing something that few previously thought was possible," said Contreras-Vidal, who is also an affiliate professor in Maryland's Fischell Department of Bioengineering and the university's Neuroscience and Cognitive Science Program. "We use EEG [electroencephalography] to non-invasively read brain waves and translate them into movement commands for computers and other devices.

Peer Reviewed

Contreras-Vidal and his team have published three major papers on their technology over the past 18 months, the latest a just released study in the Journal of Neurophysiology in which they successfully used EEG brain signals to reconstruct the complex 3-D movements of the ankle, knee and hip joints during human treadmill walking. In two earlier studies they showed (1) similar results for 3-D hand movement and (2) that subjects wearing the brain cap could control a computer cursor with their thoughts.
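The decoding step in studies like these is, at its core, a regression from brain signals to movement. The sketch below shows the general flavor on synthetic data -- ridge regression from EEG-like channel amplitudes to a joint-angle trace. It is not the team's actual decoder; the channel count, noise level and linear model are all assumptions.

```python
# Sketch: linear decoding of a joint-angle trace from EEG-like signals.
# Synthetic data stand in for real recordings; the linear generative model,
# channel count and noise level are assumptions, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 32

eeg = rng.standard_normal((n_samples, n_channels))     # fake sensor data
true_w = rng.standard_normal(n_channels) / n_channels  # hidden weights
ankle_angle = eeg @ true_w + 0.1 * rng.standard_normal(n_samples)

# Ridge regression: w = (X^T X + lambda * I)^(-1) X^T y
lam = 1.0
w = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_channels),
                    eeg.T @ ankle_angle)

r = np.corrcoef(eeg @ w, ankle_angle)[0, 1]
print(f"decoding correlation (training data): {r:.2f}")
```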

Alessandro Presacco, a second-year doctoral student in Contreras-Vidal's Neural Engineering and Smart Prosthetics Lab, Contreras-Vidal and co-authors write that their Journal of Neurophysiology study indicated "that EEG signals can be used to study the cortical dynamics of walking and to develop brain-machine interfaces aimed at restoring human gait function."

There are other brain computer interface technologies under development, but Contreras-Vidal notes that these competing technologies are either very invasive, requiring electrodes to be implanted directly in the brain, or, if noninvasive, require much more training to use than does UMD's EEG-based, brain cap technology.

Partnering to Help Sufferers of Injury and Stroke

Contreras-Vidal and his team are collaborating on a rapidly growing cadre of projects with researchers at other institutions to develop thought-controlled robotic prosthetics that can assist victims of injury and stroke. Their latest partnership is supported by a new $1.2 million NSF grant. Under this grant, Contreras-Vidal's Maryland team is embarking on a four-year project with researchers at Rice University, the University of Michigan and Drexel University to design a prosthetic arm that amputees can control directly with their brains, and which will allow users to feel what their robotic arm touches.



"There's nothing fictional about this," said Rice University co-principal investigator Marcia O'Malley, an associate professor of mechanical engineering. "The investigators on this grant have already demonstrated that much of this is possible. What remains is to bring all of it -- non-invasive neural decoding, direct brain control and [touch] sensory feedback -- together into one device."

In an NIH-supported project now underway, Contreras-Vidal and his colleagues are pairing their brain cap's EEG-based technology with a DARPA-funded next-generation robotic arm designed by researchers at the Johns Hopkins Applied Physics Laboratory to function like a normal limb. And the UMD team is developing a new collaboration with the New Zealand start-up Rexbionics, the developer of a powered lower-limb exoskeleton called Rex that could be used to restore gait after spinal cord injury.

Two of the earliest partnerships formed by Contreras-Vidal and his team are with the University of Maryland School of Medicine in Baltimore and the Veterans Affairs Medical Center in Baltimore. A particular focus of this research is the use of the brain cap technology to help stroke victims whose brain injuries affect their motor-sensory control. Originally funded by a seed grant from the University of Maryland, College Park and the University of Maryland, Baltimore, the work now also is supported by a VA merit grant (anklebot BMI) and an NIH grant (Stroke).

"There is a big push in brain science to understand what exercise does in terms of motor learning or motor retraining of the human brain," says Larry Forrester, an associate professor of physical therapy and rehabilitation science at the University of Maryland School of Medicine.

For more than a year, Forrester and the UMD team have tracked the neural activity of people on a treadmill doing precise tasks like stepping over dotted lines. The researchers are matching specific brain activity recorded in real time with exact lower-limb movements.

This data could help stroke victims in several ways, Forrester says. One is a prosthetic device, called an "anklebot," or ankle robot, that stores data from a normal human gait and assists partially paralyzed people. People who are less mobile commonly suffer from other health issues such as obesity, diabetes or cardiovascular problems, Forrester says, "so we want to get [stroke survivors] up and moving by whatever means possible."

The second use of the EEG data in stroke victims is more complex, yet offers exciting possibilities. "By decoding the motion of a normal gait," Contreras-Vidal says, "we can then try and teach stroke victims to think in certain ways and match their own EEG signals with the normal signals." This could "retrain" healthy areas of the brain in what is known as neuroplasticity.

One potential method for retraining comes from one of the Maryland research team's newest members, Steve Graff, a first-year bioengineering doctoral student. He envisions a virtual reality game that matches real EEG data with on-screen characters. "It gives us a way to train someone to think the right thoughts to generate movement from digital avatars. If they can do that, then they can generate thoughts to move a device," says Graff, who brings a unique personal perspective to the work. He has congenital muscular dystrophy and uses a motorized wheelchair. The advances he's working on could allow him to use both hands -- to put on a jacket, dial his cell phone or throw a football while operating his chair with his mind.

No Surgery Required

During the past two decades a great deal of progress has been made in the study of direct brain to computer interfaces, most of it through studies using monkeys with electrodes implanted in their brains. However, for use in humans such an invasive approach poses many problems, not the least of which is that most people don't want holes in their heads and wires attached to their brains. "EEG monitoring of the brain, which has a long, safe history for other applications, has been largely ignored by those working on brain-machine interfaces, because it was thought that the human skull blocked too much of the detailed information on brain activity needed to read thoughts about movement and turn those readings into movement commands for multi-functional high-degree of freedom prosthetics," said Contreras-Vidal. He is among the few who have used EEG, MEG or other sensing technologies to develop non-invasive neural interfaces, and the only one to have demonstrated decoding results comparable to those achieved by researchers using implanted electrodes.

A paper Contreras-Vidal and colleagues published in the Journal of Neuroscience in March 2010 showed the feasibility of Maryland's EEG-based technology to infer multidimensional natural movement from noninvasive measurements of brain activity. In their two latest studies, Contreras-Vidal and his team have further advanced the development of their EEG brain interface technology, and provided powerful new evidence that it can yield brain computer interface results as good as or better than those from invasive studies, while also requiring minimal training to use.

In a paper published in April in the Journal of Neural Engineering, the Maryland team demonstrated that people wearing the EEG brain cap could, after minimal training, control a computer cursor with their thoughts, achieving performance levels comparable to those of subjects using invasive implanted-electrode brain-computer interface systems. Contreras-Vidal and his co-authors write that this study also shows that, compared to studies of other noninvasive brain control interface systems, training time with their system was substantially shorter, requiring only a single 40-minute session.

Sunday, July 24, 2011

Engineering a new face after injury


Today, surgeons face many limitations when it comes to helping a patient who suffers from a severe craniofacial injury, or an injury pertaining to the skull and the face. Most often a result of cancer or war-related circumstances, the injury is both psychologically and physically damaging.

Evolution of a patient's recovery from facial injury through the use of topological optimization. Credit: Hanlon, Beckman ITG, University of Illinois

Will the patients ever recover their appearance? Or more importantly, recover their ability to speak, breathe or eat correctly again?

Rebuilding the delicate facial bone structure of an individual is a complicated procedure. The surgeon constructs a facial frame with bone from other parts of the body (called autologous tissue), in order to guarantee the functionality of the specialized organs responsible for vital roles such as breathing, seeing, communicating and eating. Since there are no analogous bone structures to a person's face, the procedure depends on experience and skill. As Glaucio Paulino, program director of the mechanics of materials program at the National Science Foundation (NSF), noted, this procedure does not always generate the desired outcome.

"The middle of the face is the most complicated part of the human skeleton," said Paulino. "What makes the reconstruction more complicated is the fact that the bones are small, delicate, highly specialized and located in a region highly susceptible to contamination by bacteria."

Facial bones are unique and using bone tissue extracted from different parts of the body, such as the bones of the forearm, isn't the most effective form of recovery.

"The patient may be improved, but still suffer from significant deformity," said Paulino.

Implementation of loads, boundary conditions and different cavity constraints to a design domain and the consequent optimized results. Credit: Glaucio H. Paulino

In contrast, topological optimization is a feasible alternative to make such a recovery possible.

Topological optimization isn't native to the surgery room--it's a mathematical method that uses given loads, the applied force on an area, and boundary conditions or spatial limits, to optimize a specific structure's layout. Imagine a building grid in which you can determine where there should be material and where there shouldn't. Moreover, you can express loads and supports that would affect certain parts of this block of material. Your final result is an optimized structure that fits your established constraints.
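In software terms, the inputs just described -- a design domain, loads, supports and cavity constraints -- can be written down as plain data. The sketch below shows only that problem setup, with made-up numbers for a hypothetical mid-face reconstruction; an actual solver would couple this domain to finite-element analysis and iteratively redistribute material.

```python
# Schematic of a topology-optimization problem setup as plain data.
# Everything here is hypothetical; a real solver couples this domain to
# finite-element analysis and iteratively redistributes material.
from dataclasses import dataclass, field

@dataclass
class DesignDomain:
    nx: int                  # grid resolution, x
    ny: int                  # grid resolution, y
    volume_fraction: float   # fraction of the domain allowed to be material
    supports: list = field(default_factory=list)  # fixed nodes, (x, y)
    loads: list = field(default_factory=list)     # point forces, (x, y, fx, fy)
    cavities: list = field(default_factory=list)  # keep-out boxes, (x, y, w, h)

# Hypothetical mid-face reconstruction: fixed along the skull base, loaded
# at the denture line, with two eye-socket cavities that must stay empty.
domain = DesignDomain(
    nx=120, ny=80, volume_fraction=0.3,
    supports=[(x, 79) for x in range(120)],
    loads=[(60, 0, 0.0, -1.0)],
    cavities=[(25, 50, 20, 15), (75, 50, 20, 15)],
)
print(f"{len(domain.supports)} support nodes, "
      f"{len(domain.loads)} load(s), {len(domain.cavities)} cavities")
```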

This mathematical method is successfully used to engineer spaceships and airplanes. The Airbus A380 wing, for example, was designed with topological optimization. Today, extensive research is underway to apply topological optimization to the engineering of future high-rise buildings. Paulino is responsible for some of the recent advances in this field.



Together with Alok Sutradhar and Michael Miller, from the Ohio State University Medical Center, and Tam Nguyen, from the department of civil and environmental engineering at the University of Illinois, Paulino is studying how to bring topological optimization to the surgery room. With the recent advances in tissue engineering, Paulino believes that the method can be used to construct patient-specific bone frames.

"The key idea is to have a technique that is tailored for the specific patient. It's not one formula that fits all. People are different, therefore, you cannot have one solution for all patients," said Paulino.

Final optimized result with denture inserted into the craniofacial skeleton. Credit: Glaucio H. Paulino

Engineering a face

In an experiment, researchers explored the creation of a three-dimensional structure for a patient with severe gunshot injury. After selecting a design domain from the craniofacial skeleton, supports, loads and cavity constraints (areas with no bone, such as eye cavities) were applied. Topological optimization generated many possible structures to fit the patient-specific requirements.


Video: the process of creating a structure for a patient with a severe gunshot injury using topological optimization.

Although the results did not necessarily resemble the natural bone structure, they would preserve the vital functions of facial organs while providing a safe platform for prosthetics and plastic surgery.

The process will "show surgeons their alternatives before going into the operating room," said Paulino.

At the moment, such structures would be built using titanium, which is light and strong. Unfortunately, titanium may cause infections because it's foreign to the body. With future advances in tissue engineering, however, molding human bone tissue into a structure is a possibility. Researchers are still investigating how to ensure that the bone structure created through this process, a living tissue, will maintain the desired shape after being implanted in the patient.

Paulino and his team of researchers hope to continue translating applicable concepts between different fields, such as engineering and medicine, to make innovative discoveries. With the development of tissue engineering and topological optimization, complete recovery from craniofacial injuries may one day be enabled by a routine procedure in the operating room.

Provided by National Science Foundation

Friday, July 22, 2011

Memories May Skew Visual Perception


Taking a trip down memory lane while you are driving could land you in a roadside ditch, new research indicates. Vanderbilt University psychologists have found that our visual perception can be contaminated by memories of what we have recently seen, impairing our ability to properly understand and act on what we are currently seeing.
Taking a trip down memory lane while you are driving could land you in a roadside ditch, new research indicates. (Credit: © yellowj / Fotolia)

"This study shows that holding the memory of a visual event in our mind for a short period of time can 'contaminate' visual perception during the time that we're remembering," Randolph Blake, study co-author and Centennial Professor of Psychology, said.

"Our study represents the first conclusive evidence for such contamination, and the results strongly suggest that remembering and perceiving engage at least some of the same brain areas."

The study, led by research associate Min-Suk Kang, was recently published in the journal Psychonomic Bulletin & Review.

"There are numerous instances where we engage in visually guided activities, such as driving, while rehashing visual events in our mind's eye. Common sense tells us that this mental replay is harmless in that it does not interfere with our ability to register and react to objects within our visual field," Kang and his co-authors wrote. "Evidently, however, that is not always true when the contents of our working memories overlap with the contents of our perceptual world."

Illusion offers clues

In this study, the researchers used a visual illusion called motion repulsion to learn whether information held in working memory affects perception. This illusion is produced when two sets of moving dots are superimposed, with dots in one set moving in a different direction from those in the other set. Under these conditions, people tend to misperceive the actual directions of motion, and perceive a larger difference between the two sets of motions than actually exists.



Ordinarily this illusion is produced by having people view both sets of motion at the same time. Kang and colleagues set out to determine if the illusion would occur when one set of motions, rather than being physically present, was held in working memory.

In the experiment, participants were shown a random pattern of dots and were asked to remember the direction in which the dots were moving. They were then shown a second pattern of moving dots and asked to report the direction of the second dots' movement.

The research subjects' reports of the second dots' movement were exaggerated and influenced by what they had previously seen. If they were first shown dots moving in one direction and later shown dots moving in a slightly counterclockwise direction relative to the first presented dots, they reported the counterclockwise movement to be more dramatic than it had actually been.

"We find that observers misperceive the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they are holding a different motion direction in visual working memory," the authors wrote.

The results provide further support for previous findings by Vanderbilt researchers Frank Tong and Stephanie Harrison that the contents of working memory may be represented in early visual areas in the brain, including the primary visual cortex, that were previously thought to play no role in higher cognitive functions such as memory.

"Our findings provide compelling evidence that visual working memory representations directly interact with the same neural mechanisms involved in processing basic sensory events," Kang and his colleagues wrote.

Kang and Blake's co-authors were research associate Sang Wook Hong and Assistant Professor of Psychology Geoffrey Woodman. Funding from the National Institutes of Health, the National Science Foundation and the World Class University Initiative of the National Research Foundation of Korea and the Ministry of Education, Science and Technology supported the research.

Wednesday, July 6, 2011

Ultimate Energy Efficiency: Computers to Use a Million Times Less Energy?


Future computers may rely on magnetic microprocessors that consume the least amount of energy allowed by the laws of physics, according to an analysis by University of California, Berkeley, electrical engineers.

Nanomagnetic computers use tiny bar magnets to store and process information. The interactions between the polarized, north-south magnetic fields of closely spaced magnets allow logic operations like those in conventional transistors. (Credit: Jeffrey Bokor lab, UC Berkeley)

Today's silicon-based microprocessor chips rely on electric currents, or moving electrons, that generate a lot of waste heat. But microprocessors employing nanometer-sized bar magnets -- like tiny refrigerator magnets -- for memory, logic and switching operations theoretically would require no moving electrons.

Such chips would dissipate only 18 millielectron volts of energy per operation at room temperature, the minimum allowed by the second law of thermodynamics and called the Landauer limit. That's 1 million times less energy per operation than consumed by today's computers.
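The 18-millielectron-volt figure follows directly from Landauer's formula: the Boltzmann constant times temperature times the natural log of 2. A few lines of code reproduce it, assuming the usual room-temperature value of 300 kelvin:

```python
# Reproducing the figure quoted above: Landauer's limit k_B * T * ln(2),
# evaluated at an assumed room temperature of 300 kelvin.
import math

K_B = 8.617e-5   # Boltzmann constant, eV per kelvin
T = 300.0        # room temperature, K (assumed)

limit_mev = K_B * T * math.log(2) * 1000.0
print(f"Landauer limit at {T:.0f} K: {limit_mev:.1f} meV")  # ~17.9, i.e. ~18
```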

"Today, computers run on electricity; by moving electrons around a circuit, you can process information," said Brian Lambson, a UC Berkeley graduate student in the Department of Electrical Engineering and Computer Sciences. "A magnetic computer, on the other hand, doesn't involve any moving electrons. You store and process information using magnets, and if you make these magnets really small, you can basically pack them very close together so that they interact with one another. This is how we are able to do computations, have memory and conduct all the functions of a computer."

Lambson is working with Jeffrey Bokor, UC Berkeley professor of electrical engineering and computer sciences, to develop magnetic computers.

"In principle, one could, I think, build real circuits that would operate right at the Landauer limit," said Bokor, who is a codirector of the Center for Energy Efficient Electronics Science (E3S), a Science and Technology Center founded last year with a $25 million grant from the National Science Foundation. "Even if we could get within one order of magnitude, a factor of 10, of the Landauer limit, it would represent a huge reduction in energy consumption for electronics. It would be absolutely revolutionary."

One of the center's goals is to build computers that operate at the Landauer limit.



Lambson, Bokor and UC Berkeley graduate student David Carlton published a paper about their analysis online in the journal Physical Review Letters.

Fifty years ago, Rolf Landauer used newly developed information theory to calculate the minimum energy a logical operation, such as an AND or OR operation, would dissipate given the limitation imposed by the second law of thermodynamics. (In a standard logic gate with two inputs and one output, an AND operation produces an output when it has two positive inputs, while an OR operation produces an output when one or both inputs are positive.) That law states that an irreversible process -- a logical operation or the erasure of a bit of information -- dissipates energy that cannot be recovered. In other words, the entropy of any closed system cannot decrease.

In today's transistors and microprocessors, this limit is far below other energy losses that generate heat, primarily through the electrical resistance of moving electrons. However, researchers such as Bokor are trying to develop computers that don't rely on moving electrons, and thus could approach the Landauer limit. Lambson decided to theoretically and experimentally test the limiting energy efficiency of a simple magnetic logic circuit and magnetic memory.

The nanomagnets that Bokor, Lambson and his lab use to build magnetic memory and logic devices are about 100 nanometers wide and about 200 nanometers long. Because they have the same north-south polarity as a bar magnet, the up-or-down orientation of the pole can be used to represent the 0 and 1 of binary computer memory. In addition, when multiple nanomagnets are brought together, their north and south poles interact via dipole-dipole forces to exhibit transistor behavior, allowing simple logic operations.

"The magnets themselves are the built-in memory," Lambson said. "The real challenge is getting the wires and transistors working."

Lambson showed through calculations and computer simulations that a simple memory operation -- erasing a magnetic bit, an operation often called "restore to one" -- can be conducted with an energy dissipation very close, if not identical to, the Landauer limit.

He subsequently analyzed a simple magnetic logical operation. The first successful demonstration of a logical operation using magnetic nanoparticles was achieved by researchers at the University of Notre Dame in 2006. In that case, they built a three-input majority logic gate using 16 coupled nanomagnets. Lambson calculated that a computation with such a circuit would also dissipate energy at the Landauer limit.
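The majority-gate logic mentioned above is simple to state in code: the output takes whichever value holds two or more of the three inputs, and pinning one input to 0 or 1 turns the same gate into AND or OR. The sketch below shows only the logic, not the nanomagnet implementation.

```python
# Majority logic in miniature: the output follows whichever value holds
# at least two of the three inputs. Logic only; says nothing about how
# coupled nanomagnets physically realize the gate.

def majority(a: int, b: int, c: int) -> int:
    return 1 if a + b + c >= 2 else 0

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print((a, b, c), "->", majority(a, b, c))

# Pinning one input turns the same gate into AND or OR:
assert all(majority(a, b, 0) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(majority(a, b, 1) == (a | b) for a in (0, 1) for b in (0, 1))
```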

Because the Landauer limit is proportional to temperature, circuits cooled to low temperatures would be even more efficient.

At the moment, electrical currents are used to generate a magnetic field to erase or flip the polarity of nanomagnets, which dissipates a lot of energy. Ideally, new materials will make electrical currents unnecessary, except perhaps for relaying information from one chip to another.

"Then you can start thinking about operating these circuits at the upper efficiency limits," Lambson said.

"We are working now with collaborators to figure out a way to put that energy in without using a magnetic field, which is very hard to do efficiently," Bokor said. "A multiferroic material, for example, may be able to control magnetism directly with a voltage rather than an external magnetic field."

Other obstacles remain as well. For example, as researchers push the power consumption down, devices become more susceptible to random fluctuations from thermal effects, stray electromagnetic fields and other kinds of noise.

"The magnetic technology we are working on looks very interesting for ultra low power uses," Bokor said. "We are trying to figure out how to make it more competitive in speed, performance and reliability. We need to guarantee that it gets the right answer every single time with a very, very, very high degree of reliability."

The work was supported by NSF and the Defense Advanced Research Projects Agency.