
Sunday, August 28, 2011

New Depiction of Light Aids Telecommunications


Physicists with the Institute of Ultrafast Spectroscopy and Lasers (IUSL) at The City College of New York have presented a new way to map spiraling light that could help harness untapped data channels in optical fibers. Increased bandwidth would ease the burden on fiber-optic telecommunications networks taxed by an ever-growing demand for audio, video and digital media. The new model, developed by graduate student Giovanni Milione, Professor Robert Alfano and colleagues, could even spur enhancements in quantum computing and other applications.

Higher Order Poincaré Sphere model developed by physicists with the Institute of Ultrafast Spectroscopy and Lasers tracks movement of complex forms of light. (Credit: Image courtesy of City College of New York)

"People now can detect (light in) the ground channel, but this gives us a way to detect and measure a higher number of channels," says Mr. Milione. With such heavy traffic funneled through a single channel, there is great interest in exploiting others that can be occupied by complex forms of light, he explains.

The team published their work in the July 25 issue of Physical Review Letters. Mr. Milione will present it at the Optical Society of America's "Frontiers in Optics 2011" conference, October 16-20 in San Jose, Calif.

Polarization is everything to a physicist tracking light in an optical fiber or laser. More than a way to cut glare with sunglasses, polarization refers to a specific direction and orientation of the light's movement and electric field -- when it isn't going every which way as it does when emanating from a light bulb, for example.

"Being able to follow polarization and other changes as light travels gives you insight into the material it travels through, " explains Milione. This helps control the light and can essentially give a fingerprint of the material being analyzed.

Detecting the polarization also lets users finely tune a laser. Such control can allow a laser to burn away one layer of material while leaving the other layers it passes through intact.

Until now, only the simplest form of light, the ground state, could be mapped and controlled. Multiple higher channels in an optical fiber, which could be occupied by more complex light, were left sitting idle.

A globe-shaped model, called the Poincaré Sphere, has long been used to map such simple light. This light has peaks and troughs, like waves on the ocean, and moves or vibrates in "plane waves." One maps how light intersects the sphere in the same way one pinpoints a location on Earth using longitude and latitude.
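
To make that latitude-and-longitude picture concrete, here is a minimal Python sketch of the standard (lowest-order) Poincaré sphere mapping for plane-wave polarization. The function name, the Jones-vector inputs and the sign convention for circular polarization are illustrative assumptions; the higher-order sphere in the new work generalizes this picture rather than being reproduced by it.

import numpy as np

def jones_to_poincare(ex, ey):
    # Map a Jones vector (complex field amplitudes Ex, Ey) to longitude/latitude
    # on the standard Poincare sphere; sign conventions for S3 vary between texts.
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2 * (ex * np.conj(ey)).real
    s3 = -2 * (ex * np.conj(ey)).imag
    lon = np.degrees(np.arctan2(s2, s1))   # twice the orientation angle of the polarization ellipse
    lat = np.degrees(np.arcsin(s3 / s0))   # twice the ellipticity angle
    return lon, lat

print(jones_to_poincare(1.0, 0.0))                         # horizontal linear light sits on the equator
print(jones_to_poincare(1 / np.sqrt(2), 1j / np.sqrt(2)))  # circular light sits at a pole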

But complex light moves with both spin and orbital angular momentum, more or less like the movement of our moon as it spins on its axis and orbits Earth.




Such light twists like a tornado as it travels through space and takes the form of what are called vector beams and vortices. To map these vortices the researchers expanded the existing sphere to develop their Higher Order Poincaré Sphere (HOPS).

The team studies even more complex patterns of light, such as star-shaped forms. Their model uses the HOPS to reduce what could be pages of mathematics to single equations. These are the mathematical tools that will harness the complex light for use in technology.

"The sphere facilitates understanding, showing phase vortices are on poles and vector beams are on the equator," explains Milione. "It organizes the relationship between these vortices of light."

"This kind of organization on the higher level Poincaré Sphere could clear the path to a number of novel physics and engineering efforts such as quantum computing and optical transitions; could greatly expand the sensitivity of spectroscopy and the complexity of computer cryptography; and might further push the boundaries what can be 'seen'," said Dr. Alfano.

The research was funded in part by Corning Inc. and the Army Research Office. 

Friday, August 26, 2011

How Many Species On Earth? About 8.7 Million, New Estimate Says


Eight million, seven hundred thousand species (give or take 1.3 million).
Distribution of species by kingdom. (Credit: CoML)

That is the new estimated total number of species on Earth -- the most precise calculation ever offered -- with 6.5 million species found on land and 2.2 million (about 25 percent of the total) dwelling in the ocean depths.

Announced today by Census of Marine Life scientists, the figure is based on an innovative, validated analytical technique that dramatically narrows the range of previous estimates. Until now, the number of species on Earth was said to fall somewhere between 3 million and 100 million.

Furthermore, the study, published by PLoS Biology, says a staggering 86% of all species on land and 91% of those in the seas have yet to be discovered, described and catalogued.

Says lead author Camilo Mora of the University of Hawaii and Dalhousie University in Halifax, Canada: "The question of how many species exist has intrigued scientists for centuries and the answer, coupled with research by others into species' distribution and abundance, is particularly important now because a host of human activities and influences are accelerating the rate of extinctions. Many species may vanish before we even know of their existence, of their unique niche and function in ecosystems, and of their potential contribution to improved human well-being."

"This work deduces the most basic number needed to describe our living biosphere," says co-author Boris Worm of Dalhousie University. "If we did not know -- even by an order of magnitude (1 million? 10 million? 100 million?) -- the number of people in a nation, how would we plan for the future?"

"It is the same with biodiversity. Humanity has committed itself to saving species from extinction, but until now we have had little real idea of even how many there are."

Dr. Worm notes that the recently-updated Red List issued by the International Union for the Conservation of Nature assessed 59,508 species, of which 19,625 are classified as threatened. This means the IUCN Red List, the most sophisticated ongoing study of its kind, monitors less than 1% of world species.

The research is published alongside a commentary by Lord Robert May of Oxford, past-president of the UK's Royal Society, who praises the researchers' "imaginative new approach."

"It is a remarkable testament to humanity's narcissism that we know the number of books in the US Library of Congress on 1 February 2011 was 22,194,656, but cannot tell you -- to within an order-of-magnitude -- how many distinct species of plants and animals we share our world with," Lord May writes.

"(W)e increasingly recognize that such knowledge is important for full understanding of the ecological and evolutionary processes which created, and which are struggling to maintain, the diverse biological riches we are heir to. Such biodiversity is much more than beauty and wonder, important though that is. It also underpins ecosystem services that -- although not counted in conventional GDP -- humanity is dependent upon."

Drawing conclusions from 253 years of taxonomy since Linnaeus

Swedish scientist Carl Linnaeus created and published in 1758 the system still used to formally name and describe species. In the 253 years since, about 1.25 million species -- roughly 1 million on land and 250,000 in the oceans -- have been described and entered into central databases (roughly 700,000 more are thought to have been described but have yet to reach the central databases).



Until now, the best approximation of Earth's species total was based on the educated guesses and opinions of experts, who variously pegged the figure in a range from 3 to 100 million -- wildly differing numbers that were questioned because there was no way to validate them.

Drs. Mora and Worm, together with Dalhousie colleagues Derek P. Tittensor, Sina Adl and Alastair G.B. Simpson, refined the estimated species total to 8.7 million by identifying numerical patterns within the taxonomic classification system (which groups forms of life in a pyramid-like hierarchy, ranked upwards from species to genus, family, order, class, phylum, kingdom and domain).

Analyzing the taxonomic clustering of the 1.2 million species currently listed in the Catalogue of Life and the World Register of Marine Species, the researchers discovered reliable numerical relationships between the more complete higher taxonomic levels and the species level.

Says Dr. Adl: "We discovered that, using numbers from the higher taxonomic groups, we can predict the number of species. The approach accurately predicted the number of species in several well-studied groups such as mammals, fishes and birds, providing confidence in the method."

When applied to all five known eukaryote* kingdoms of life on Earth, the approach predicted:
  1. ~7.77 million species of animals (of which 953,434 have been described and cataloged)
  2. ~298,000 species of plants (of which 215,644 have been described and cataloged)
  3. ~611,000 species of fungi (moulds, mushrooms) (of which 43,271 have been described and cataloged)
  4. ~36,400 species of protozoa (single-cell organisms with animal-like behavior, e.g. movement, of which 8,118 have been described and cataloged)
  5. ~27,500 species of chromista (including, e.g., brown algae, diatoms, water moulds, of which 13,033 have been described and cataloged)

Total: 8.74 million eukaryote species on Earth.

(* Notes: Organisms in the eukaryote domain have cells containing complex structures enclosed within membranes. The study looked only at forms of life accorded, or potentially accorded, the status of "species" by scientists. Not included: certain micro-organisms and virus "types," for example, which could be highly numerous.)
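
The headline figure is simply the sum of the five kingdom estimates above, which a one-line check confirms:

kingdom_estimates = {"animals": 7_770_000, "plants": 298_000, "fungi": 611_000,
                     "protozoa": 36_400, "chromista": 27_500}
print(f"{sum(kingdom_estimates.values()):,}")   # 8,742,900 -- i.e. ~8.74 million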

Within the 8.74 million total is an estimated 2.2 million (plus or minus 180,000) marine species of all kinds, about 250,000 (11%) of which have been described and catalogued. When it formally concluded in October 2010, the Census of Marine Life offered a conservative estimate of 1 million+ species in the seas.

"Like astronomers, marine scientists are using sophisticated new tools and techniques to peer into places never seen before," says Australian Ian Poiner, Chair of the Census' Scientific Steering Committee. "During the 10-year Census, hundreds of marine explorers had the unique human experience and privilege of encountering and naming animals new to science. We may clearly enjoy the Age of Discovery for many years to come."

"The immense effort entering all known species in taxonomic databases such as the Catalogue of Life and the World Register of Marine Species makes our analysis possible," says co-author Derek Tittensor, who also works with Microsoft Research and the UN Environment Programme's World Conservation Monitoring Centre. "As these databases grow and improve, our method can be refined and updated to provide an even more precise estimate."

"We have only begun to uncover the tremendous variety of life around us," says co-author Alastair Simpson. "The richest environments for prospecting new species are thought to be coral reefs, seafloor mud and moist tropical soils. But smaller life forms are not well known anywhere. Some unknown species are living in our own backyards -- literally."

"Awaiting our discovery are a half million fungi and moulds whose relatives gave humanity bread and cheese," says Jesse Ausubel, Vice-President of the Alfred P. Sloan Foundation and co-founder of the Census of Marine Life. "For species discovery, the 21st century may be a fungal century!"

Mr. Ausubel notes the enigma of why so much diversity exists, saying the answer may lie in the notions that nature fills every niche, and that rare species are poised to benefit from a change of conditions.

In his analysis, Lord May says the practical benefits of taxonomic discovery are many, citing the development in the 1970s of a new strain of rice based on a cross between conventional species and one discovered in the wild. The result: 30% more grain yield, followed by efforts ever since to protect all wild varieties of rice, "which obviously can only be done if we have the appropriate taxonomic knowledge."

"Given the looming problems of feeding a still-growing world population, the potential benefits of ramping up such exploration are clear."

Based on current costs and requirements, the study suggests that describing all remaining species using traditional approaches could require up to 1,200 years of work by more than 300,000 taxonomists at an approximate cost of $US 364 billion. Fortunately, new techniques such as DNA barcoding are radically reducing the cost and time involved in new species identification.
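
As a rough back-of-envelope reading of those figures (using only numbers quoted in this article, not the study's own accounting):

total_estimated   = 8_740_000   # estimated eukaryote species
already_described = 1_250_000   # species described since Linnaeus
total_cost        = 364e9       # approximate cost quoted, in US dollars
print(f"~${total_cost / (total_estimated - already_described):,.0f} per remaining species")  # about $48,600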

Concludes Dr. Mora: "With the clock of extinction now ticking faster for many species, I believe speeding the inventory of Earth's species merits high scientific and societal priority. Renewed interest in further exploration and taxonomy could allow us to fully answer this most basic question: What lives on Earth?"

Thursday, August 25, 2011

World-Record Pulsed Magnetic Field Achieved; Lab Moves Closer to 100-Tesla Mark


Researchers at the National High Magnetic Field Laboratory's Pulsed Field Facility at Los Alamos National Laboratory have set a new world record for the strongest magnetic field produced by a nondestructive magnet.
Yates Coulter, left, and Mike Gordon of Los Alamos National Laboratory make final preparations before successfully achieving a world record for the strongest magnetic field produced by a nondestructive magnet. Working at the National High Magnetic Field Laboratory's Pulsed Field Facility at Los Alamos, a team of researchers achieved a field of 97.4 tesla, which is nearly 100 times stronger than the magnetic field found in giant electromagnets used in metal scrap yards. (Credit: Image courtesy of DOE/Los Alamos National Laboratory)
The scientists achieved a field of 92.5 tesla on Thursday, August 18, taking back a record that had been held by a team of German scientists and then, the following day, surpassed their achievement with a whopping 97.4-tesla field. For perspective, Earth's magnetic field is 0.0004 tesla, while a junk-yard magnet is 1 tesla and a medical MRI scan has a magnetic field of 3 tesla.

The ability to create pulses of extremely high magnetic fields nondestructively (high-power magnets routinely rip themselves to pieces due to the large forces involved) provides researchers with an unprecedented tool for studying fundamental properties of materials, from metals and superconductors to semiconductors and insulators. The interaction of high magnetic fields with electrons within these materials provides valuable clues for scientists about the properties of materials. With the recent record-breaking achievement, the Pulsed Field Facility at LANL, a national user facility, will routinely provide scientists with magnetic pulses of 95 tesla, enticing the worldwide user community to Los Alamos for a chance to use this one-of-a-kind capability.

The record puts the Los Alamos team within reach of delivering a magnet capable of achieving 100 tesla, a goal long sought by researchers from around the world, including scientists working at competing magnet labs in Germany, China, France, and Japan.

Such a powerful nondestructive magnet could have a profound impact on a wide range of scientific investigations, from how to design and control material functionality to research into the microscopic behavior of phase transitions. This type of magnet allows researchers to carefully tune material parameters while perfectly reproducing the non-invasive magnetic field. Such high magnetic fields confine electrons to nanometer scale orbits, thereby helping to reveal the fundamental quantum nature of a material.



Thursday's experiment was met with as much excitement as trepidation by the group of condensed matter scientists, high-field magnet technicians, technologists, and pulsed-magnet engineers who gathered to witness the NHMFL-PFF retake the world record. Crammed into the tight confines of the Magnet Lab's control room, they gathered, lab notebooks or caffeine of choice in hand. Their conversation reflected a giddy sense of anticipation tempered with nervousness.

With Mike Gordon commanding the controls that draw power off a massive 1.4-gigawatt generator system and direct it to the magnet, all eyes and ears were keyed to video monitors showing the massive 100 tesla Multishot Magnet and the capacitor bank located in the now eerily empty Large Magnet Hall next door. The building had been emptied as a standard safety protocol.

Scientists heard a low warping hum, followed by a spine-tingling metallic screech signaling that the magnet was spiking with a precisely distributed electric current carrying more than 100 megajoules of energy. As the sound dissipated and the monitors confirmed that the magnet performed perfectly, attention turned to data acquired during the shot through two in-situ measurements -- proof positive that the magnet had achieved 92.5 tesla, thus yanking back from a team of German scientists a record that Los Alamos had previously held for five years.

The next day's even higher 97.4-tesla achievement was met with high-fives and congratulatory pats on the back. Later, researchers Charles Mielke, Neil Harrison, Susan Seestrom, and Albert Migliori certified with their signatures the data that would be sent to the Guinness Book of World Records.

The NHMFL is sponsored primarily by the National Science Foundation, Division of Materials Research, with additional support from the State of Florida and the DOE. These recent successes were enabled by long-term support from the U.S. Department of Energy's Office of Basic Energy Sciences, and the National Science Foundation's 100 Tesla Multi-Shot magnet program.

Tuesday, August 23, 2011

Etch-a-sketch with superconductors


Reporting in Nature Materials this week, researchers from the London Centre for Nanotechnology and the Physics Department of Sapienza University of Rome have discovered a technique to 'draw' superconducting shapes using an X-ray beam. This ability to create and control tiny superconducting structures has implications for a completely new generation of electronic devices.
In future, X-ray beams could be used to write superconducting circuits, such as those depicted in the image. Here, solid lines indicate electrical connections while semicircles denote superconducting junctions, whose states are indicated by red arrows. Credit: UCL Press Office

Superconductivity is a special state where a material conducts electricity with no resistance, meaning absolutely zero energy is wasted.

The research group has shown that they can manipulate regions of high temperature superconductivity, in a particular material which combines oxygen, copper and a heavier, 'rare earth' element called lanthanum. Illuminating with X-rays causes a small scale re-arrangement of the oxygen atoms in the material, resulting in high temperature superconductivity, of the type originally discovered for such materials 25 years ago by IBM scientists. The X-ray beam is then used like a pen to draw shapes in two dimensions.

As well as being able to write superconductors with dimensions much smaller than the width of a human hair, the group is able to erase those structures by applying heat treatments. They now have the tools to write and erase with high precision, using just a few simple steps and without the chemicals ordinarily used in device fabrication. This ability to re-arrange the underlying structure of a material has wider applications to similar compounds containing metal atoms and oxygen, ranging from fuel cells to catalysts.



Prof. Aeppli, Director of the London Centre for Nanotechnology and the UCL investigator on the project, said: "Our validation of a one-step, chemical-free technique to generate superconductors opens up exciting new possibilities for electronic devices, particularly in re-writing superconducting logic circuits. Of profound importance is the key to solving the notorious 'travelling salesman problem', which underlies many of the world's great computational challenges. We want to create computers on demand to solve this problem, with applications from genetics to logistics. A discovery like this means a paradigm shift in computing technology is one step closer."

Prof Bianconi, the leader of the team from Sapienza, added: "It is amazing that in a few simple steps, we can now add superconducting 'intelligence' directly to a material consisting mainly of the common elements copper and oxygen."

More information: The X-ray experiments were performed at the Elettra (Trieste) synchrotron radiation facility. The work is published in Nature Materials, 21 August 2011 (doi:10.1038/nmat3088) and follows on from the previous discovery of fractal-like structures in superconductors (doi:10.1038/nature09260).

Provided by University College London

Saturday, August 20, 2011

Biologists' Discovery May Force Revision of Biology Textbooks: Novel Chromatin Particle Halfway Between DNA and a Nucleosome


Basic biology textbooks may need a bit of revising now that biologists at UC San Diego have discovered a never-before-noticed component of our basic genetic material.
Biologists have discovered a novel chromatin particle halfway between DNA and a nucleosome. While it looks like a nucleosome, it is in fact a distinct particle of its own, researchers say. (Credit: James Kadonaga, UC San Diego)

According to the textbooks, chromatin, the natural state of DNA in the cell, is made up of nucleosomes. And nucleosomes are the basic repeating unit of chromatin.

When viewed through a high-powered microscope, nucleosomes look like beads on a string. But in the Aug. 19 issue of the journal Molecular Cell, UC San Diego biologists report their discovery of a novel chromatin particle halfway between DNA and a nucleosome. While it looks like a nucleosome, they say, it is in fact a distinct particle of its own.

"This novel particle was found as a precursor to a nucleosome," said James Kadonaga, a professor of biology at UC San Diego who headed the research team and calls the particle a "pre-nucleosome." "These findings suggest that it is necessary to reconsider what chromatin is. The pre-nucleosome is likely to be an important player in how our genetic material is duplicated and used."



The biologists say that while the pre-nucleosome may look something like a nucleosome under the microscope, biochemical tests have shown that it is in reality halfway between DNA and a nucleosome.

These pre-nucleosomes, the researchers say, are converted into nucleosomes by a motor protein that uses the energy molecule ATP.

"The discovery of pre-nucleosomes suggests that much of chromatin, which has been generally presumed to consist only of nucleosomes, may be a mixture of nucleosomes and pre-nucleosomes," said Kadonaga. "So, this discovery may be the beginning of a revolution in our understanding of what chromatin is."

"The packaging of DNA with histone proteins to form chromatin helps stabilize chromosomes and plays an important role in regulating gene activities and DNA replication," said Anthony Carter, who oversees chromatin grants at the National Institute of General Medical Sciences of the National Institutes of Health, which funded the research. "The discovery of a novel intermediate DNA-histone complex offers intriguing insights into the nature of chromatin and may help us better understand how it impacts these key cellular processes."

Thursday, August 18, 2011

Holograms Reveal Brain's Inner Workings: Microscopy Technique Used to Observe Activity of Neurons Like Never Before


As with faraway galaxies, powerful tools are required to bring the minute inner workings of neurons into focus. Borrowing a technique from materials science, a team of neurobiologists, psychiatrists, and advanced imaging specialists from Switzerland's EPFL and CHUV report in The Journal of Neuroscience how Digital Holographic Microscopy (DHM) can now be used to observe neuronal activity in real-time and in three dimensions -- with up to 50 times greater resolution than ever before. The application has immense potential for testing out new drugs to fight neurodegenerative diseases such as Alzheimer's and Parkinson's.
This is a 3-D image of a living neuron taken by DHM technology. (Credit: Courtesy of Lyncée Tec)

Neurons come in various shapes and are transparent. To observe them in a Petri dish, scientists use fluorescent dyes that change the chemical composition and can skew results. Additionally, this technique is time consuming, often damages the cells, and only allows researchers to examine a few neurons at a time. But these newly published results show how DHM can bypass the limitations of existing techniques.

"DHM is a fundamentally novel application for studying neurons with a slew of advantages over traditional microscopes," explains Pierre Magistretti of EPFL's Brain Mind Institute and a lead author of the paper. "It is non-invasive, allowing for extended observation of neural processes without the need for electrodes or dyes that damage cells."

Senior team member Pierre Marquet adds, "DHM gives precious information not only about the shape of neurons, but also about their dynamics and activity, and the technique creates 3D navigable images and increases the precision from 500 nanometers in traditional microscopes to a scale of 10 nanometers."

A good way to understand how DHM works is to imagine a large rock in an ocean of perfectly regular waves. As the waves deform around the rock and come out the other side, they carry information about the rock's shape. This information can be extracted by comparing it to waves that did not smash up against the rock, and an image of the rock can be reconstructed. DHM does this with a laser beam by pointing a single wavelength at an object, collecting the distorted wave on the other side, and comparing it to a reference beam. A computer then numerically reconstructs a 3D image of the object -- in this case neurons -- using an algorithm developed by the authors. In addition, the laser beam travels through the transparent cells and important information about their internal composition is obtained.
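
For readers curious how "comparing the distorted wave to a reference beam" becomes a 3D image, here is a generic numerical-refocusing sketch in Python using the angular-spectrum method. It is a textbook illustration of digital holography, not the authors' own reconstruction algorithm, and the wavelength, pixel pitch and array names are assumed values.

import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    # Numerically refocus a sampled complex optical field by a distance z (metres);
    # dx is the pixel pitch of the (square) sampled field.
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    kz_squared = k ** 2 - (2 * np.pi * fxx) ** 2 - (2 * np.pi * fyy) ** 2
    kz = np.sqrt(np.maximum(kz_squared, 0.0))   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Sketch of use, with hypothetical recorded arrays:
#   object_wave = hologram * np.conj(reference_wave)   # compare against the reference beam
#   refocused   = angular_spectrum_propagate(object_wave, 532e-9, 0.2e-6, 5e-6)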



DHM is normally applied to detect minute defects in materials, but Magistretti, together with DHM pioneer Christian Depeursinge, an EPFL professor in the Advanced Photonics Laboratory, decided to use it for neurobiological applications. In the study, their group induced an electric charge in a culture of neurons using glutamate, the main neurotransmitter in the brain. This charge transfer carries water inside the neurons and changes their optical properties in a way that can be detected only by DHM. Thus, the technique accurately visualizes the electrical activities of hundreds of neurons simultaneously, in real-time, without damaging them with electrodes, which can only record activity from a few neurons at a time.

A major advance for pharmaceutical research

Without the need to introduce dyes or electrodes, DHM can be applied to High Content Screening -- the screening of thousands of new pharmacological molecules. This advance has important ramifications for the discovery of new drugs that combat or prevent neurodegenerative diseases such as Parkinson's and Alzheimer's, since new molecules can be tested more quickly and in greater numbers.

"Due to the technique's precision, speed, and lack of invasiveness, it is possible to track minute changes in neuron properties in relation to an applied test drug and allow for a better understanding of what is happening, especially in predicting neuronal death," Magistretti says. "What normally would take 12 hours in the lab can now be done in 15 to 30 minutes, greatly decreasing the time it takes for researchers to know if a drug is effective or not."

The promise of this technique for High Content Screening has already resulted in a start-up company at EPFL called LynceeTec (www.lynceetec.com).

Speaking and Understanding Speech Share the Same Parts of the Brain


The brain has two big tasks related to speech: making it and understanding it. Psychologists and others who study the brain have debated whether these are really two separate tasks or whether they both use the same regions of the brain. Now, a new study, published in the August issue of Psychological Science, a journal of the Association for Psychological Science, finds that speaking and understanding speech share the same parts of the brain, with one difference: we don't need the brain regions that control the movements of lips, teeth, and so on to understand speech.

New research finds that speaking and understanding speech share the same parts of the brain. (Credit: © Artsem Martysiuk / Fotolia)

Most studies of how speech works in the brain focus on comprehension. That's mostly because it's easier to image the brains of people who are listening quietly; talking makes the head move, which is a problem when you're measuring the brain. But now the Donders Institute at Radboud University Nijmegen, where the study was conducted, has developed technology that allows recording from a moving brain.

Laura Menenti, a Postdoctoral Research Associate at the University of Glasgow, co-wrote the paper along with Peter Hagoort of Radboud University Nijmegen and the Max Planck Institute for Psycholinguistics, Sarah Gierhan and Katrien Segaert. Menenti was initially interested in how the brain produces grammatical sentences and wanted to track the process of producing a sentence in its entirety; looking not only at its grammatical structure but also at its meaning. "What made this particularly exciting to us was that no one had managed to perform such a study before, meaning that we could explore an almost completely new topic," says Menenti.




The authors used functional MRI technology to measure brain activity in people who were either listening to sentences or speaking sentences. The other problem with measuring brain activity in people who are speaking is that you have to get them to say the right kind of sentence. The authors accomplished this with a picture of an action -- a man strangling a woman, say -- with one person colored green and one colored red to indicate their order in the sentence. This prompted people to say either "The man is strangling the woman" or "The woman is strangled by the man." (The experiments were all carried out in Dutch.)

From this, the researchers were able to tell where in the brain three different speech tasks (computing meaning, coming up with the words, and building a grammatical sentence) were taking place. They found that the same areas were activated for each of these tasks in people who were speaking and people who were listening to sentences. However, although some studies have suggested that while people are listening to speech, they silently articulate the words in order to understand them, the authors found no involvement of motor regions when people were listening.

According to Menenti, though the study was largely designed to answer a specific theoretical question, it also points towards some useful avenues for treatment of people with language-related problems. It suggests that while it sometimes seems that people with comprehension problems may have intact production, and vice versa, this may not necessarily be the case. According to Menenti, "Our data suggest that these problems would be expected to always at least partly coincide. On the other hand, our data confirm the idea that many different processes in the language system, such as understanding meaning or grammar, can at least partly be damaged independently of each other."


Wednesday, August 17, 2011

Seeing eye to eye is key to copying, say scientists


Imitation may be the sincerest form of flattery but how do our brains decide when and who we should copy? Researchers from The University of Nottingham have found that the key may lie in an unspoken invitation communicated through eye contact.



In a study published this week in the Journal of Neuroscience, a team of scientists from the University's School of Psychology show that eye contact seems to act as an invitation for mimicry, triggering mechanisms in the frontal region of the brain that control imitation.

The results could be the first clues to understanding why some people, such as children with autism, struggle to grasp when they are expected to copy the actions of others in social situations.

Dr Antonia Hamilton, who led the research, said: "Many studies have looked at copying and imitation in terms of 'mirror neurons', which are believed to be specialised parts of the human brain that implement imitation. However, we also know that imitation is carefully controlled — people don't imitate everything they see, and only copy what's important.

"Our previous research has shown that when somebody makes eye contact with you, you are more likely to copy them. So eye contact seems to act as a message that says "Copy me now". This recent study aimed to see what happens to that signal in the brain."




The team of psychologists, which also included doctoral student Yin Wang and Dr Richard Ramsey, used functional magnetic resonance imaging (fMRI) to scan the brains of volunteers while they watched videos of an actress who sometimes would make eye contact with them while opening or closing her hand. The participant was told they should open their own hand whenever they saw the actress move her hand, so in some trials the participant was copying the actress and in other trials they were not.

Because previous behavioural measurements such as response time revealed that the participant unconsciously copied the actress faster when the actress made eye contact, the scientists analysed the brain imaging data to find which brain areas controlled the decision to copy. The analysis used a new mathematical method called dynamic causal modelling to compute the information processing in the brain, which had never been applied to imitation before.

The data showed that mirror neuron brain regions do play a role in the copying task. More importantly though, it revealed that these regions are controlled by the medial prefrontal cortex, an area of the brain associated with planning complex cognitive behaviours, expressing personality, decision-making and responding to social situations.

Dr Hamilton added: "Previous studies have shown that this medial prefrontal brain region is active in many social situations but responds less in people with autism, which explains why children on the autistic spectrum might not copy at the right time.

"Understanding the control of imitation has implications for many other areas of psychology too. For example, are teenagers whose prefrontal cortex is less developed more easily led to copy risky, dangerous or illegal behaviour such as imitating rioters? Could increasing the amount of eye contact between children and teachers lead to better learning by imitation? Would better control of imitation help children with autism to more effectively learn and interact? We plan further research to address these questions."

Provided by University of Nottingham


Tuesday, August 16, 2011

Scientists Have New Help Finding Their Way Around Brain's Nooks and Crannies


Like explorers mapping a new planet, scientists probing the brain need every type of landmark they can get. Each mountain, river or forest helps scientists find their way through the intricacies of the human brain.
Scientists have found a way to use MRI scanning data to map myelin, a white sheath that covers some brain cell branches. Such maps, previously only available via dissection, help scientists determine precisely where they are in the brain. Red and yellow indicate regions with high myelin levels; blue, purple and black areas have low myelin levels. (Credit: David Van Essen)

Researchers at Washington University School of Medicine in St. Louis have developed a new technique that provides rapid access to brain landmarks formerly only available at autopsy. Better brain maps will result, speeding efforts to understand how the healthy brain works and potentially aiding in future diagnosis and treatment of brain disorders, the researchers report in the Journal of Neuroscience Aug. 10.

The technique makes it possible for scientists to map myelination, or the degree to which branches of brain cells are covered by a white sheath known as myelin in order to speed up long-distance signaling. It was developed in part through the Human Connectome Project, a $30 million, five-year effort to map the brain's wiring. That project is headed by Washington University in St. Louis and the University of Minnesota.

"The brain is among the most complex structures known, with approximately 90 billion neurons transmitting information across 150 trillion connections," says David Van Essen, PhD, Edison Professor and head of the Department of Anatomy and Neurobiology at Washington University. "New perspectives are very helpful for understanding this complexity, and myelin maps will give us important insights into where certain parts of the brain end and others begin."

Easy access to detailed maps of myelination in humans and animals also will aid efforts to understand how the brain evolved and how it works, according to Van Essen.

Neuroscientists have known for more than a century that myelination levels differ throughout the cerebral cortex, the gray outer layer of the brain where most higher mental functions take place. Until now, though, the only way they could map these differences in detail was to remove the brain after death, slice it and stain it for myelin.

Washington University graduate student Matthew Glasser developed the new technique, which combines data from two types of magnetic resonance imaging (MRI) scans that have been available for years.



"These are standard ways of imaging brain anatomy that scientists and clinicians have used for a long time," Glasser says. "After developing the new technique, we applied it in a detailed analysis of archived brain scans from healthy adults."

As in prior studies, Glasser's results show highest myelination levels in areas involved with early processing of information from the eyes and other sensory organs and control of movement. Many brain cells are packed into these regions, but the connections among the cells are less complex. Scientists suspect that these brain regions rely heavily on what computer scientists call parallel processing: Instead of every cell in the region working together on a single complex problem, multiple separate teams of cells work simultaneously on different parts of the problem.

Areas with less myelin include brain regions linked to speech, reasoning and use of tools. These regions have brain cells that are packed less densely, because individual cells are larger and have more complex connections with neighboring cells.

"It's been widely hypothesized that each chunk of the cerebral cortex is made up of very uniform information-processing machinery," Van Essen says. "But we're now adding to a picture of striking regional differences that are important for understanding how the brain works."

According to Van Essen, the technique will make it possible for the Connectome project to rapidly map myelination in many different research participants. Data on many subjects, acquired through many different analytical techniques including myelination mapping, will help the resulting maps cover the range of anatomic variation present in humans.

"Our colleagues are clamoring to make use of this approach because it's so helpful for figuring out where you are in the cortex, and the data are either already there or can be obtained in less than 10 minutes of MRI scanning," Glasser says.

This research was funded by the National Institutes of Health (NIH).

Searching for Spin Liquids: Much-Sought Exotic Quantum State of Matter Can Exist


The world economy is becoming ever more reliant on high-tech electronics such as computers featuring fingernail-sized microprocessors crammed with billions of transistors. For progress to continue -- for Moore's Law, according to which the number of components packed onto microchips doubles every two years even as their size and cost halve, to keep holding -- new materials and new phenomena need to be discovered.
Diagram depicting anti-ferromagnetic order (upper) compared to a spin liquid phase (lower). In an anti-ferromagnet, the spins are anti-aligned. A spin liquid has no order and the spins can be viewed as bobbing about like water molecules in liquid water. (Credit: E. Edwards)

Furthermore, as the sizes of electronic components shrink, soon down to the size of single atoms or molecules, quantum interactions become ever more important. Consequently, enhanced knowledge and exploitation of quantum effects is essential. Researchers at the Joint Quantum Institute (JQI) in College Park, Maryland, operated by the University of Maryland and the National Institute of Standards and Technology (NIST), and at Georgetown University have uncovered evidence for a long-sought-after quantum state of matter, a spin liquid.

The research was performed by JQI postdoctoral scientists Christopher Varney and Kai Sun, JQI Fellow Victor Galitski, and Marcos Rigol of Georgetown University. The results appear in an editor-recommended article in the 12 August issue of the journal Physical Review Letters.

You can't pour a spin liquid into a glass. It's not a material at all, at least not a material you can touch. It is more like a kind of magnetic disorder within an ordered array of atoms. Nevertheless, it has many physicists excited.

To understand this exotic state of matter, first consider the concept of spin, which is at the heart of all magnetic phenomena. For instance, a refrigerator magnet, at the microscopic level, consists of trillions of trillions of iron atoms all lined up. Each of these atoms can be thought of loosely as a tiny spinning ball. The orientation of that spin is what makes the atom into a tiny magnet. The refrigerator magnet is an example of a ferromagnet, the ferro part coming from the Latin word for iron. In a ferromagnet, all the atomic spins are lined up in the same way, producing a large cooperative magnetic effect.

Important though they may be, ferromagnets aren't the only kind of material where magnetic interactions between spins are critical. In anti-ferromagnets, for instance, the neighboring spins are driven to be anti-aligned. That is, the orientations of the spins alternate up and down (see top picture in figure). The cumulative magnetic effect of all these up and down spins is that the material has no net magnetism. The high-temperature superconducting materials discovered in the 1980s are an important example of an anti-ferromagnetic structure.

More complicated and potentially interesting magnetic arrangements are possible, which may lead to a quantum spin liquid. Imagine an equilateral triangle, with an atom (spin) at each corner. Anti-ferromagnetism in such a geometry would meet with difficulties. Suppose that one spin points up while a second spin points down. So far, so good. But what spin orientation can the third atom take? It can't simultaneously anti-align with both of the other atoms in the triangle. Physicists employ the word "frustration" to describe this baffling condition where all demands cannot be satisfied.

In everyday life frustration is, well, frustrating, and actually this condition is found throughout nature, from magnetism to neural networks. Furthermore, understanding the different manifestations of a collection of magnetically interacting spins might help in designing new types of electronic circuitry.

One compromise that a frustrated spin system makes is to simultaneously exist in many spin orientations. In a quantum system, this simultaneous existence, or superposition, is allowed.

Here's where the JQI researchers have tried something new. They have studied what happens when frustration occurs in materials with a hexagonal (six sided) unit cell lattice.

What these atoms do is interact via their respective spins. The strength of the interaction between nearest neighbor (NN) atoms is denoted by the parameter J1. Similarly, the force between next nearest neighbors (NNN) -- that is, pairs of atoms that have at least one intervening atom between them -- is denoted by J2. Letting this batch of atoms interact among themselves, even on a pretend lattice as small as this, entails an immense calculation. Varney and his colleagues have calculated what happens in an array of hexagons consisting of 30 sites where the spins are free to swing about in a two-dimensional plane (this kind of approach is called an XY model).
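
The kind of model being solved can be sketched in miniature. The Python toy below exactly diagonalizes a spin-1/2 XY Hamiltonian with J1 and J2 couplings on a single six-site hexagonal plaquette -- a far smaller cluster than the 30-site lattice in the study, so it will not reproduce the paper's phase diagram; it only shows the structure of the calculation.

import numpy as np
from functools import reduce

sx = np.array([[0.0, 0.5], [0.5, 0.0]])      # spin-1/2 operators
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
iden = np.eye(2)

def site_op(op, site, n_sites):
    # Embed a single-site operator at the given site of an n_sites cluster.
    mats = [iden] * n_sites
    mats[site] = op
    return reduce(np.kron, mats)

n = 6                                                # one hexagonal plaquette
nn_bonds = [(i, (i + 1) % n) for i in range(n)]      # nearest neighbours (J1)
nnn_bonds = [(i, (i + 2) % n) for i in range(n)]     # next-nearest neighbours (J2)

def xy_hamiltonian(j1, j2):
    ham = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for bonds, j in ((nn_bonds, j1), (nnn_bonds, j2)):
        for a, b in bonds:
            ham += j * (site_op(sx, a, n) @ site_op(sx, b, n)
                        + site_op(sy, a, n) @ site_op(sy, b, n))
    return ham

for ratio in (0.10, 0.25, 0.40):   # the reported spin-liquid window is roughly J2/J1 = 0.21-0.36
    ground_energy = np.linalg.eigvalsh(xy_hamiltonian(1.0, ratio))[0]
    print(f"J2/J1 = {ratio:.2f}: ground-state energy = {ground_energy:.4f}")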



Christopher Varney, who has appointments at Maryland and Georgetown, said that the interactions of atoms can be represented by a matrix (essentially a two-dimensional spreadsheet) with 155 million entries on each side. This huge number corresponds to the different spin configurations that can occur on this honeycomb-structured material.
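
That 155 million is consistent with a simple count: the number of ways to arrange 15 "up" and 15 "down" spins across the 30 lattice sites (the restriction to equal numbers of up and down spins is our assumption about how the counting was done, not something stated in the article):

from math import comb
print(f"{comb(30, 15):,}")   # 155,117,520 spin configurations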

What the researchers found were a "kaleidoscope" of phases, which represent the lowest-energy states that are allowed given the magnetic interactions. Just as water can exist in different phases -- steam, liquid, and ice -- as the temperature is changed, so here a change in the strengths of the interactions among the spins (the J1 and J2 parameters) results in different phases. For example, one simple solution is an antiferromagnet (upper picture in figure).

But one phase turns out to be a true quantum spin liquid having no order at all. When J2 is between about 21% and 36% of the value of J1, frustration coaxes the spins into disorder; the entire sample co-exists in millions of quantum states simultaneously.

It's difficult for the human mind to picture a tiny two-dimensional material in so many states at the same time. JQI Fellow Victor Galitski suggests that one shouldn't think of the spins as residing at the original atomic sites but rather as free-ranging particle-like entities dubbed "spinons." These spinons bob about, just as water molecules bob about in liquid water (see lower picture in figure). Hence the name quantum spin liquid.

Another reason for using the word liquid, Galitski says, is that this 'bobbing about' is analogous to what happens inside a metal. There, the outer electrons of most atoms tend to leave their home atoms and drift through the metal sample as if they constituted a fluid, called a "Fermi liquid."

Electrons in a metal are able to drift since it takes only an infinitesimal amount of energy to put them into motion. The same is true for the fluctuating spins in the hexagonal model studied by the JQI scientists. Indeed, their spin model assumes a temperature of absolute zero, where quantum effects abound.

Writing in an essay that accompanied the article in Physical Review Letters, Tameem Albash and Stephan Haas, scientists at the University of Southern California, say that the JQI/Georgetown team "present a convincing example" of the new spin liquid state.

How can this new frustration calculation be tested? The experimental verification of the spin liquid state in a two-dimensional hexagonal lattice, Albash and Haas suggest, "will probably be tested using cold atoms trapped in optical lattices. In the past few years, this technology has become a reliable tool to emulate quantum many body lattice systems with tunable interactions." Indeed the authors propose such an experiment.

What would such a spin liquid material be good for? It's too early to tell. But some speculations include the idea that these materials could support some exotic kind of superconductivity or would organize particle-like entities that possessed fractional electric charge.

"Kaleidoscope of Exotic Quantum Phases in a Frustrated XY Model" by Christopher N. Varney, Kai Sun, Victor Galitski, and Marcos Rigol, Physical Review Letters, 107, 077201, (12 August 2011).

Saturday, August 13, 2011

Supergene is key to copycat butterflies


Since Charles Darwin, biologists have pondered the mystery of "mimicry butterflies", which survive by copying the wing patterns of other butterflies that taste horrible to their predators, birds.
This undated handout photo released by the CNRS shows butterflies, Melinaea mneme (top) and Heliconius numata. The mystery of how a butterfly has changed its wing patterns to mimic neighbouring species and avoid being eaten by birds has been solved by a team of European scientists.

The answer, according to a study released on Friday, lies in an astonishing cluster of about 30 genes in a single chromosome.

"We were blown away by what we found," said Mathieu Joron of France's National Museum of Natural History, who led the probe into what is being called a "supergene".

"These butterflies are the 'transformers' of the insect world," said Joron.

"But instead of being able to turn from a car into a robot with the flick of a switch, a single genetic switch allows these insects to morph into several different mimetic forms.

"It is amazing, and the stuff of science fiction. Now we are starting to understand how this switch can have such a pervasive effect."



The trick, known as Muellerian mimicry, was investigated by French and British scientists, who focussed on a species of Amazonian rainforest butterfly, Heliconius numata.

It is able to copy the colour patterns of several species of the Melinaea butterfly which are unpalatable to birds.

The "supergene" comprises a tightly packed region of genes on a single chromosome which control different elements of the wing pattern.

"By changing just one gene, the butterfly is able to fool its predators," explained Richard ffrench-Constant of the University of Exeter, southwestern England.

Even more astonishing is that three versions of the chromosome exist within this species, with each version controlling distinct wing-pattern forms.

Even though the butterflies look quite different from each other, they have the same DNA.

The supergene apparently is transmitted in a block from generation to generation, rather than going through recombination -- the mingling of genes from both parents.

The "supergene" also appears important in other species, say the authors.

One such species, the peppered moth, developed black wings in 19th-century Britain as a means of gaining camouflage in the sooty industrial environment.

"It's a gene that really packs an evolutionary punch," said ffrench-Constant. The paper is published online by the British science journal Nature.

Friday, August 12, 2011

Inexpensive catalyst that makes hydrogen gas 10 times faster than natural enzyme


Looking to nature for their muse, researchers have used a common protein to guide the design of a material that can make energy-storing hydrogen gas. The synthetic material works 10 times faster than the original protein found in water-dwelling microbes, the researchers report in the August 12 issue of the journal Science, clocking in at 100,000 molecules of hydrogen gas every second.
The part of the catalyst that cranks out 100,000 molecules of hydrogen gas a second packs electrons into chemical bonds between hydrogen atoms, possibly hijacked from water. Credit: PNNL

This step is just one part of a series of reactions to split water and make hydrogen gas, but the researchers say the result shows they can learn from nature how to control those reactions to make durable synthetic catalysts for energy storage, such as in fuel cells.

In addition, the natural protein, an enzyme, uses inexpensive, abundant metals in its design, which the team copied. Currently, these materials -- called catalysts, because they spur reactions along -- rely on expensive metals such as platinum.

"This nickel-based catalyst is really very fast," said coauthor Morris Bullock of the Department of Energy's Pacific Northwest National Laboratory. "It's about a hundred times faster than the previous catalyst record holder. And from nature, we knew it could be done with abundant and inexpensive nickel or iron."

Stuffing Bonds

Electrical energy is nothing more than electrons. These same electrons are what tie atoms together when they are chemically bound to each other in molecules such as hydrogen gas. Stuffing electrons into chemical bonds is one way to store electrical energy, which is particularly important for renewable, sustainable energy sources like solar or wind power. Converting the chemical bonds back into flowing electricity when the sun isn't shining or the wind isn't blowing allows the use of the stored energy, such as in a fuel cell that runs on hydrogen.

Electrons are often stored in batteries, but Bullock and his colleagues want to take advantage of the closer packing available in chemicals.

"We want to store energy as densely as possible. Chemical bonds can store a huge amount of energy in a small amount of physical space," said Bullock, director of the Center for Molecular Electrocatalysis at PNNL, one of DOE's Energy Frontier Research Centers. The team also included visiting researcher Monte Helm from Fort Lewis College in Durango, Colo.

Biology stores energy densely all the time. Plants use photosynthesis to store the sun's energy in chemical bonds, which people use when they eat food. And a common microbe stores energy in the bonds of hydrogen gas with the help of a protein called a hydrogenase.

Because the hydrogenases found in nature don't last as long as ones that are built out of tougher chemicals (think paper versus plastic), the researchers wanted to pull out the active portion of the biological hydrogenase and redesign it with a stable chemical backbone.

Two Plus Two Equals One

In this study, the researchers looked at only one small part of splitting water into hydrogen gas, like fast-forwarding to the end of a movie. Of the many steps, there's a part at the end when the catalyst has a hold of two hydrogen atoms that it has stolen from water and snaps the two together.

The catalyst does this by completely dismantling some hydrogen atoms from a source such as water and moving the pieces around. Due to the simplicity of hydrogen atoms, those pieces are positively charged protons and negatively charged electrons. The catalyst arranges those pieces into just the right position so they can be put together correctly. "Two protons plus two electrons equals one molecule of hydrogen gas," says Bullock.
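
In ordinary chemical notation, that final step is simply:

    2 H⁺ + 2 e⁻ → H₂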



In real life, the protons would come from water, but since the team only examined a portion of the reaction, the researchers used water stand-ins such as acids to test their catalyst.

"We looked at the hydrogenase and asked what is the important part of this?" said Bullock. "The hydrogenase moves the protons around in what we call a proton relay. Where the protons go, the electrons will follow."

A Bauble for Energy

Based on the hydrogenase's proton relay, the experimental catalyst contained regions that dangled off the main structure and attracted protons, called "pendant amines." A pendant amine moves a proton into position on the edge of the catalyst, while a nickel atom in the middle of the catalyst offers a hydrogen atom with an extra electron (that's a proton and two electrons for those counting).

The pendant amine's proton is positive, while the nickel atom is holding on to a negatively charged hydrogen. Positioned close to each other, the opposites attract and the conglomerate solidifies into a molecule, forming hydrogen gas.

With that plan in mind, the team built potential catalysts and tested them. On their first try, they put a bunch of pendant amines around the nickel center, thinking more would be better. Testing their catalyst, they found it didn't work very fast. An analysis of how the catalyst was moving protons and electrons around suggested too many pendant amines got in the way of the perfect reaction. An overabundance of protons made for a sticky catalyst, which pinched it and slowed the hydrogen-gas-forming reaction down.

Like good gardeners, the team trimmed a few pendant amines off their catalyst, leaving only enough to make the protons stand out, ready to accept a negatively charged hydrogen atom.

Fastest Cat in the West

Testing the trimmed catalyst, the team found it performed much better than anticipated. At first they used conditions in which no water was present (remember, they used water stand-ins), and the catalyst could create hydrogen gas at a rate of about 33,000 molecules per second. That's much faster than their natural inspiration, which clocks in at around 10,000 per second.

However, most real-life applications will have water around, so they added water to the reaction to see how it would perform. The catalyst ran three times as fast, creating more than 100,000 hydrogen molecules every second. The researchers think the water might help by moving protons to a more advantageous spot on the pendant amine, but they are still studying the details.

Their catalyst has a drawback, however. It's fast, but it's not efficient. The catalyst runs on electricity -- after all, it needs those electrons to stuff into the chemical bonds -- but it requires more electricity than practical, a characteristic called the overpotential.

Bullock says the team has some ideas on how to reduce the inefficiency. Also, future work will require assembling a catalyst that splits water in addition to making hydrogen gas. Even with a high overpotential, the researchers see high potential for this catalyst.

More information: Monte L. Helm, Michael P. Stewart, R. Morris Bullock, M. Rakowski DuBois, Daniel L. DuBois, A Synthetic Nickel Electrocatalyst With a Turnover Frequency Above 100,000 s-1 for H2 Production, Science, August 12, 2011, DOI:10.1126/science.1205864

Provided by Pacific Northwest National Laboratory

Thursday, August 11, 2011

Engineers Reverse E. Coli Metabolism for Quick Production of Fuels, Chemicals


In a biotechnological tour de force, Rice University engineering researchers this week unveiled a new method for rapidly converting simple glucose into biofuels and petrochemical substitutes. In a paper published online in Nature, Rice's team described how it reversed one of the most efficient of all metabolic pathways -- the beta oxidation cycle -- to engineer bacteria that produce biofuel at a breakneck pace.


Rice University engineering researchers Ramon Gonzalez (left) and Clementina Dellomonaco reversed one of the most efficient of all metabolic pathways -- the beta oxidation cycle -- to engineer bacteria that make biofuels at a breakneck pace. (Credit: Jeff Fitlow/Rice University)


Just how fast are Rice's single-celled chemical factories? On a cell-per-cell basis, the bacteria produced butanol, a biofuel that can be substituted for gasoline in most engines, about 10 times faster than any previously reported organism.


"That's really not even a fair comparison because the other organisms used an expensive, enriched feedstock, and we used the cheapest thing you can imagine, just glucose and mineral salts," said Ramon Gonzalez, associate professor of chemical and biomolecular engineering at Rice and lead co-author of the Nature study.


Gonzalez's laboratory is in a race with hundreds of labs around the world to find green methods for producing chemicals like butanol that have historically come from petroleum.


"We call these 'drop-in' fuels and chemicals, because their structure and properties are very similar, sometimes identical, to petroleum-based products," he said. "That means they can be 'dropped in,' or substituted, for products that are produced today by the petrochemical industry."




Butanol is a relatively short molecule, with a backbone of just four carbon atoms. Molecules with longer carbon chains have been even more troublesome for biotech producers to make, particularly molecules with chains of 10 or more carbon atoms. Gonzalez said that's partly because researchers have focused on ramping up the natural metabolic processes that cells use to build long-chain fatty acids. Gonzalez and students Clementina Dellomonaco, James Clomburg and Elliot Miller took a completely different approach.


"Rather than going with the process nature uses to build fatty acids, we reversed the process that it uses to break them apart," Gonzalez said. "It's definitely unconventional, but it makes sense because the routes nature has selected to build fatty acids are very inefficient compared with the reversal of the route it uses to break them apart."


The beta oxidation process is one of biology's most fundamental, Gonzalez said. Species ranging from single-celled bacteria to human beings use beta oxidation to break down fatty acids and generate energy.


In the Nature study, Gonzalez's team reversed the beta oxidation cycle by selectively manipulating about a dozen genes in the bacterium Escherichia coli. They also showed that selective manipulations of particular genes could be used to produce fatty acids of particular lengths, including long-chain molecules like stearic acid and palmitic acid, which have chains of more than a dozen carbon atoms.


"This is not a one-trick pony," Gonzalez said. "We can make many kinds of specialized molecules for many different markets. We can also do this in any organism. Some producers prefer to use industrial organisms other than E. coli, like algae or yeast. That's another advantage of using reverse-beta oxidation, because the pathway is present in almost every organism."


The research was funded by Rice University.