
Saturday, August 29, 2009

Small Fluctuations In Solar Activity, Large Influence On Climate


Subtle connections between the 11-year solar cycle, the stratosphere, and the tropical Pacific Ocean work in sync to generate periodic weather patterns that affect much of the globe, according to research appearing this week in the journal Science. The study can help scientists get an edge on eventually predicting the intensity of certain climate phenomena, such as the Indian monsoon and tropical Pacific rainfall, years in advance.


Recently published research shows how newly discovered interactions between the Sun and the Earth affect our climate. (Credit: UCAR)

An international team of scientists led by the National Center for Atmospheric Research (NCAR) used more than a century of weather observations and three powerful computer models to tackle one of the more difficult questions in meteorology: if the total energy that reaches Earth from the Sun varies by only 0.1 percent across the approximately 11-year solar cycle, how can such a small variation drive major changes in weather patterns on Earth?
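
For a rough sense of scale (using standard textbook constants rather than figures from the study), a 0.1 percent swing on a total solar irradiance of about 1361 watts per square meter amounts to roughly 1.4 W/m² at the top of the atmosphere, and only a few tenths of a watt per square meter once averaged over the globe and corrected for reflected sunlight:

```python
# Back-of-the-envelope scale of the 11-year solar-cycle forcing.
# Illustrative textbook constants (not taken from the study):
TSI = 1361.0             # total solar irradiance, W/m^2
CYCLE_FRACTION = 0.001   # ~0.1 percent peak-to-trough variation over the cycle
ALBEDO = 0.30            # planetary albedo (fraction of sunlight reflected)

delta_tsi = TSI * CYCLE_FRACTION               # ~1.4 W/m^2 at the top of the atmosphere
delta_forcing = delta_tsi * (1 - ALBEDO) / 4   # averaged over the whole sphere, day and night

print(f"TSI swing over the cycle : {delta_tsi:.2f} W/m^2")
print(f"Globally averaged forcing: {delta_forcing:.2f} W/m^2")   # ~0.24 W/m^2
```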


The answer, according to the new study, has to do with the Sun's impact on two seemingly unrelated regions. Chemicals in the stratosphere and sea surface temperatures in the Pacific Ocean respond during solar maximum in a way that amplifies the Sun's influence on some aspects of air movement. This can intensify winds and rainfall, change sea surface temperatures and cloud cover over certain tropical and subtropical regions, and ultimately influence global weather.


"The Sun, the stratosphere, and the oceans are connected in ways that can influence events such as winter rainfall in North America," says NCAR scientist Gerald Meehl, the lead author. "Understanding the role of the solar cycle can provide added insight as scientists work toward predicting regional weather patterns for the next couple of decades."


The study was funded by the National Science Foundation, NCAR's sponsor, and by the Department of Energy. It builds on several recent papers by Meehl and colleagues exploring the link between the peaks in the solar cycle and events on Earth that resemble some aspects of La Nina events, but are distinct from them. The larger amplitude La Nina and El Nino patterns are associated with changes in surface pressure that together are known as the Southern Oscillation.


The connection between peaks in solar energy and cooler water in the equatorial Pacific was first discovered by Harry Van Loon of NCAR and Colorado Research Associates, who is a co-author of the new paper.


Top down and bottom up


The new contribution by Meehl and his colleagues establishes how two mechanisms that physically connect changes in solar output to fluctuations in the Earth's climate can work together to amplify the response in the tropical Pacific.


The team first confirmed a theory that the slight increase in solar energy during the peak production of sunspots is absorbed by stratospheric ozone. The energy warms the air in the stratosphere over the tropics, where sunlight is most intense, while also stimulating the production of additional ozone there that absorbs even more solar energy. Since the stratosphere warms unevenly, with the most pronounced warming occurring at lower latitudes, stratospheric winds are altered and, through a chain of interconnected processes, end up strengthening tropical precipitation.


At the same time, the increased sunlight at solar maximum causes a slight warming of ocean surface waters across the subtropical Pacific, where Sun-blocking clouds are normally scarce. That small amount of extra heat leads to more evaporation, producing additional water vapor. In turn, the moisture is carried by trade winds to the normally rainy areas of the western tropical Pacific, fueling heavier rains and reinforcing the effects of the stratospheric mechanism.


The top-down influence of the stratosphere and the bottom-up influence of the ocean work together to intensify this loop and strengthen the trade winds. As more sunshine hits drier areas, these changes reinforce each other, leading to fewer clouds in the subtropics, which allows even more sunlight to reach the surface and produces a positive feedback loop that further magnifies the climate response.


These stratospheric and ocean responses during solar maximum keep the equatorial eastern Pacific even cooler and drier than usual, producing conditions similar to a La Nina event. However, the cooling of about 1-2 degrees Fahrenheit is focused farther east than in a typical La Nina, is only about half as strong, and is associated with different wind patterns in the stratosphere.


Earth's response to the solar cycle continues for a year or two following peak sunspot activity. The La Nina-like pattern triggered by the solar maximum tends to evolve into a pattern similar to El Nino as slow-moving currents replace the cool water over the eastern tropical Pacific with warmer water. The ocean response is only about half as strong as with El Nino and the lagged warmth is not as consistent as the La Nina-like pattern that occurs during peaks in the solar cycle.


Enhancing ocean cooling


Solar maximum could potentially enhance a true La Nina event or dampen a true El Nino event. The La Nina of 1988-89 occurred near the peak of solar maximum. That La Nina became unusually strong and was associated with significant changes in weather patterns, such as an unusually mild and dry winter in the southwestern United States.


The Indian monsoon, Pacific sea surface temperatures and precipitation, and other regional climate patterns are largely driven by rising and sinking air in Earth's tropics and subtropics. Therefore the new study could help scientists use solar-cycle predictions to estimate how that circulation, and the regional climate patterns related to it, might vary over the next decade or two.


Three views, one answer


To tease out the elusive mechanisms that connect the Sun and Earth, the study team needed three computer models that provided overlapping views of the climate system.


One model, which analyzed the interactions between sea surface temperatures and the lower atmosphere, produced a small cooling in the equatorial Pacific during solar maximum years. The second model, which simulated the stratospheric ozone response mechanism, produced some increases in tropical precipitation, but on a much smaller scale than the observed patterns.


The third model contained ocean-atmosphere interactions as well as ozone. It showed, for the first time, that the two combined to produce a response in the tropical Pacific during peak solar years that was close to actual observations.


"With the help of increased computing power and improved models, as well as observational discoveries, we are uncovering more of how the mechanisms combine to connect solar variability to our weather and climate," Meehl says.


The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation.




Friday, August 28, 2009

'Plasmobot': Scientists To Design First Robot Using Mould


Scientists at the University of the West of England are to design the first ever biological robot using mould.


Plasmodium used in the research.
(Credit: Image courtesy of University of the West of England)

Researchers have received a Leverhulme Trust grant worth £228,000 to develop the amorphous non-silicon biological robot, plasmobot, using plasmodium, the vegetative stage of the slime mould Physarum polycephalum, a commonly occurring mould which lives in forests, gardens and most damp places in the UK. The Leverhulme Trust-funded research project aims to design the first ever fully biological (no silicon components) amorphous massively-parallel robot.


This project is at the forefront of research into unconventional computing. Professor Andy Adamatzky, who is leading the project, says their previous research has already demonstrated the mould's computational abilities.


Professor Adamatzky explains, “Most people’s idea of a computer is a piece of hardware with software designed to carry out specific tasks. This mould, or plasmodium, is a naturally occurring substance with its own embedded intelligence. It propagates and searches for sources of nutrients and when it finds such sources it branches out in a series of veins of protoplasm. The plasmodium is capable of solving complex computational tasks, such as the shortest path between points and other logical calculations. Through previous experiments we have already demonstrated the ability of this mould to transport objects. By feeding it oat flakes, it grows tubes which oscillate and make it move in a certain direction carrying objects with it. We can also use light or chemical stimuli to make it grow in a certain direction.


“This new plasmodium robot, called plasmobot, will sense objects, span them in the shortest and best way possible, and transport tiny objects along pre-programmed directions. The robots will have parallel inputs and outputs, a network of sensors and the number crunching power of super computers. The plasmobot will be controlled by spatial gradients of light, electro-magnetic fields and the characteristics of the substrate on which it is placed. It will be a fully controllable and programmable amorphous intelligent robot with an embedded massively parallel computer.”
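
For comparison, the shortest-path task Professor Adamatzky refers to is the kind of problem a conventional computer solves with a textbook graph algorithm. A minimal sketch is below; the maze graph and corridor lengths are invented purely for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Classic shortest-path search over a weighted graph given as
    {node: [(neighbour, weight), ...]} -- the conventional counterpart
    to the path-finding the plasmodium performs physically."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Toy maze: nodes are junctions, weights are corridor lengths (arbitrary units).
maze = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(dijkstra(maze, "A"))   # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```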


This research will lay the groundwork for further investigations into the ways in which this mould can be harnessed for its powerful computational abilities.


Professor Adamatzky says that there are long term potential benefits from harnessing this power, “We are at the very early stages of our understanding of how the potential of the plasmodium can be applied, but in years to come we may be able to use the ability of the mould for example to deliver a small quantity of a chemical substance to a target, using light to help to propel it, or the movement could be used to help assemble micro-components of machines. In the very distant future we may be able to harness the power of plasmodia within the human body, for example to enable drugs to be delivered to certain parts of the human body. It might also be possible for thousands of tiny computers made of plasmodia to live on our skin and carry out routine tasks freeing up our brain for other things. Many scientists see this as a potential development of amorphous computing, but it is purely theoretical at the moment.”


Professor Adamatzky recently edited ‘Artificial Life Models in Hardware’, published by Springer and aimed at students and researchers of robotics. The book focuses on the design and real-world implementation of artificial life robotic devices and covers a range of hopping, climbing and swimming robots, neural networks, and slime mould and chemical brains.




Thursday, August 20, 2009

Satellites Unlock Secret To Northern India's Vanishing Water


Using satellite data, UC Irvine and NASA hydrologists have found that groundwater beneath northern India has been receding by as much as 1 foot per year over the past decade – and they believe human consumption is almost entirely to blame.


The map shows groundwater changes in India during 2002-08, with losses in red and gains in blue, based on GRACE satellite observations. The estimated rate of depletion of groundwater in northwestern India is 4.0 centimeters of water per year, equivalent to a water table decline of 33 centimeters per year. Increases in groundwater in southern India are due to recent above-average rainfall, whereas rain in northwestern India was close to normal during the study period. (Credit: I. Velicogna/UC Irvine)

More than 109 cubic kilometers (26 cubic miles) of groundwater disappeared from the region's aquifers between 2002 and 2008 – double the capacity of India's largest surface-water reservoir, the Upper Wainganga, and triple that of Lake Mead, the largest manmade reservoir in the U.S.
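
As a rough consistency check of these figures (the regional area and the aquifer's specific yield below are assumed, illustrative values, not numbers quoted in the article), the 4.0-centimeter-per-year equivalent water loss translates into roughly the quoted volume and water-table decline:

```python
# Rough consistency check of the numbers quoted above.
# Assumed, illustrative values (not stated in the article):
AREA_KM2       = 440_000   # approximate area of the depleted region, km^2
SPECIFIC_YIELD = 0.12      # drainable fraction of the aquifer's volume
YEARS          = 6         # the 2002-2008 study period

loss_cm_per_yr  = 4.0                                  # equivalent water thickness, cm/yr
loss_km3_per_yr = (loss_cm_per_yr / 1e5) * AREA_KM2    # convert cm to km, multiply by area
total_loss_km3  = loss_km3_per_yr * YEARS
water_table_drop_cm = loss_cm_per_yr / SPECIFIC_YIELD  # the same water spread through porous rock

print(f"loss rate           : {loss_km3_per_yr:.1f} km^3 per year")
print(f"total over 6 years  : {total_loss_km3:.0f} km^3")       # ~106 km^3, close to the ~109 quoted
print(f"water-table decline : {water_table_drop_cm:.0f} cm/yr") # ~33 cm/yr, matching the caption
```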


People are pumping northern India's underground water, mostly to irrigate cropland, faster than natural processes can replenish it, said Jay Famiglietti and Isabella Velicogna, UCI Earth system scientists, and Matt Rodell of NASA's Goddard Space Flight Center.


"If measures are not soon taken to ensure sustainable groundwater usage, consequences for the 114 million residents of the region may include a collapse of agricultural output, severe shortages of potable water, conflict and suffering," said Rodell, lead author of the study and former doctoral student of Famiglietti's at the University of Texas at Austin.


Study results will be published online Aug. 12 in the journal Nature.


Groundwater comes from the percolation of precipitation and other surface waters down through Earth's soil and rock, accumulating in aquifers – cavities and layers of porous rock, gravel, sand or clay. In some subterranean reservoirs, the water may be thousands to millions of years old; in others, water levels decline and rise again naturally each year.


Groundwater levels do not respond to changes in weather as rapidly as lakes, streams and rivers do. So when groundwater is pumped for irrigation or other uses, restoration of original levels can take months or years.


"Groundwater mining – that is when withdrawals exceed replenishment rates – is a rapidly growing problem in many of the world's large aquifers," Famiglietti said. "Since groundwater provides nearly 80 percent of the water required for irrigated agriculture, diminishing groundwater reserves pose a serious threat to global food security."


Data provided by India's Ministry of Water Resources had suggested that groundwater use across the nation was exceeding natural replenishment, but the regional rate of depletion had been unknown.


In the new study, the hydrologists analyzed six years of monthly data for northern India from twin satellites called GRACE – NASA's Gravity Recovery and Climate Experiment – to produce a chronology of underground water storage changes.


GRACE detects differences in gravity brought about by fluctuations in water mass, including water below the Earth's surface. As the satellites orbit 300 miles above Earth, their positions change – relative to each other – in response to variations in the pull of gravity. They fly about 137 miles apart, and microwave ranging systems measure every microscopic variance in the distance between the two.


"With GRACE, we can monitor water storage changes everywhere in the world from our desk," said Velicogna, also with NASA's Jet Propulsion Laboratory. "The satellites allow us to observe how water storage evolves from one month to the next in critical areas of the world."


Groundwater loss in northern India is particularly alarming because there were no unusual trends in rainfall – in fact, it was slightly above normal during the study period. The researchers also examined data on soil moisture, lake and surface reservoir storage, vegetation and glaciers in the nearby Himalayas to confirm that the apparent groundwater trend was real. The only influence they couldn't rule out was human.


"For the first time, we can observe water use on land with no additional ground-based data collection," Famiglietti said. "This is critical because in many developing countries, where hydrological data are both sparse and hard to access, space-based methods provide perhaps the only opportunity to assess changes in freshwater availability across large regions."


About GRACE: The Gravity Recovery and Climate Experiment is a partnership between NASA and the German Aerospace Center. The University of Texas Center for Space Research, Austin, has overall mission responsibility. NASA's Jet Propulsion Laboratory developed the twin satellites. The German Aerospace Center provided the launch, and GeoForschungsZentrum Potsdam, Germany, operates GRACE.



Tuesday, August 18, 2009

New Nanolaser Key To Future Optical Computers And Technologies


Researchers have created the tiniest laser since its invention nearly 50 years ago, paving the way for a host of innovations, including superfast computers that use light instead of electrons to process information, advanced sensors and imaging.


Researchers have created the tiniest laser since its invention nearly 50 years ago. Because the new device, called a "spaser," is the first of its kind to emit visible light, it represents a critical component for possible future technologies based on "nanophotonic" circuitry. The color diagram (a) shows the nanolaser's design: a gold core surrounded by a glasslike shell filled with green dye. Scanning electron microscope images (b and c) show that the gold core and the thickness of the silica shell were about 14 nanometers and 15 nanometers, respectively. A simulation of the SPASER (d) shows the device emitting visible light with a wavelength of 525 nanometers. (Credit: Birck Nanotechnology Center, Purdue University)

Because the new device, called a "spaser," is the first of its kind to emit visible light, it represents a critical component for possible future technologies based on "nanophotonic" circuitry, said Vladimir Shalaev, the Robert and Anne Burnett Professor of Electrical and Computer Engineering at Purdue University.


Such circuits will require a laser-light source, but current lasers can't be made small enough to integrate them into electronic chips. Now researchers have overcome this obstacle, harnessing clouds of electrons called "surface plasmons," instead of the photons that make up light, to create the tiny spasers.


Findings are detailed in a paper appearing online in the journal Nature, reporting on work conducted by researchers at Purdue, Norfolk State University and Cornell University.


Nanophotonics may usher in a host of radical advances, including powerful "hyperlenses" resulting in sensors and microscopes 10 times more powerful than today's and able to see objects as small as DNA; computers and consumer electronics that use light instead of electronic signals to process information; and more efficient solar collectors.


"Here, we have demonstrated the feasibility of the most critical component - the nanolaser - essential for nanophotonics to become a practical technology," Shalaev said.


The "spaser-based nanolasers" created in the research were spheres 44 nanometers, or billionths of a meter, in diameter - more than 1 million could fit inside a red blood cell. The spheres were fabricated at Cornell, with Norfolk State and Purdue performing the optical characterization needed to determine whether the devices behave as lasers.


The findings confirm work by physicists David Bergman at Tel Aviv University and Mark Stockman at Georgia State University, who first proposed the spaser concept in 2003.


"This work represents an important milestone that may prove to be the start of a revolution in nanophotonics, with applications in imaging and sensing at a scale that is much smaller than the wavelength of visible light," said Timothy D. Sands, the Mary Jo and Robert L. Kirk Director of the Birck Nanotechnology Center in Purdue's Discovery Park.


The spasers contain a gold core surrounded by a glasslike shell filled with green dye. When light was shined on the spheres, plasmons generated by the gold core were amplified by the dye. The plasmons were then converted to photons of visible light, which were emitted as laser light.


Spaser stands for surface plasmon amplification by stimulated emission of radiation. To act like lasers, they require a "feedback system" that causes the surface plasmons to oscillate back and forth so that they gain power and can be emitted as light. Conventional lasers are limited in how small they can be made because this feedback component for photons, called an optical resonator, must be at least half the size of the wavelength of laser light.


The researchers, however, have overcome this hurdle by using not photons but surface plasmons, which enabled them to create a resonator 44 nanometers in diameter, or less than one-tenth the size of the 530-nanometer wavelength emitted by the spaser.
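
With nothing assumed beyond the figures already quoted, the size comparison works out as follows:

```python
# Quick size check using only the figures quoted in the article.
wavelength_nm = 530.0                       # spaser emission wavelength
half_wavelength_nm = wavelength_nm / 2      # conventional lower bound on an optical resonator
spaser_diameter_nm = 44.0                   # diameter of the plasmonic resonator

print(f"half-wavelength limit: {half_wavelength_nm:.0f} nm")
print(f"spaser resonator     : {spaser_diameter_nm:.0f} nm, "
      f"{spaser_diameter_nm / wavelength_nm:.2f} of the emission wavelength")
# 265 nm versus 44 nm, i.e. less than one-tenth of the 530 nm wavelength
```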


"It's fitting that we have realized a breakthrough in laser technology as we are getting ready to celebrate the 50th anniversary of the invention of the laser," Shalaev said.


The first working laser was demonstrated in 1960.


The research was conducted by Norfolk State researchers Mikhail A. Noginov, Guohua Zhu and Akeisha M. Belgrave; Purdue researchers Reuben M. Bakker, Shalaev and Evgenii E. Narimanov; and Cornell researchers Samantha Stout, Erik Herz, Teeraporn Suteewong and Ulrich B. Wiesner.


Future work may involve creating a spaser-based nanolaser that uses an electrical source instead of a light source, which would make them more practical for computer and electronics applications.


The work was funded by the National Science Foundation and U.S. Army Research Office and is affiliated with the Birck Nanotechnology Center, the Center for Materials Research at Norfolk State, and Cornell's Materials Science and Engineering Department.




Saturday, August 15, 2009

'Hidden Portal' Concept Described: First Tunable Electromagnetic Gateway


While the researchers can't promise delivery to a parallel universe or a school for wizards, the portals described in books like Philip Pullman's His Dark Materials and J.K. Rowling's Harry Potter series are a step closer to reality now that researchers in China have created the first tunable electromagnetic gateway.

Entrance to platform nine and three-quarters at King's Cross Station, used by Harry Potter on his way to school. New research describes the concept of a gateway that can block electromagnetic waves but that allows the passage of other entities, like a 'hidden portal'. (Credit: iStockphoto/Guy Erwood)

The work is a further advance in the study of metamaterials, published in New Journal of Physics (co-owned by the Institute of Physics and German Physical Society).


In the research paper, the researchers from the Hong Kong University of Science and Technology and Fudan University in Shanghai describe the concept of "a gateway that can block electromagnetic waves but that allows the passage of other entities," like a "'hidden portal' as mentioned in fictions."


The gateway, which is now much closer to reality, uses transformation optics and an amplified scattering effect from an arrangement of ferrite materials called single-crystal yttrium-iron-garnet that force light and other forms of electromagnetic radiation in complicated directions to create a hidden portal.


Previous attempts at an electromagnetic gateway were hindered by their narrow bandwidth, only capturing a small range of visible light or other forms of electromagnetic radiation. This new configuration of metamaterials however can be manipulated to have optimum permittivity and permeability – able to insulate the electromagnetic field that encounters it with an appropriate magnetic reaction.


Because of the arrangement's response to magnetic fields it also has the added advantage of being tunable and can therefore be switched on and off remotely.


Dr Huanyang Chen from the Physics Department at Hong Kong University of Science and Technology has commented, "In the frequency range in which the metamaterial possesses a negative refraction index, people standing outside the gateway would see something like a mirror. Whether it can block all visible light depends on whether one can make a metamaterial that has a negative refractive index from 300 to 800 nanometres."


Metamaterials, the area of physics research behind the possible creation of a real Harry Potter-style invisibility cloak, are exotic composite materials constructed at the atomic (rather than the usual chemical) level to produce materials with properties beyond those which appear naturally.




World Record In Packing Puzzle Set In Tetrahedra Jam: Better Understanding Of Matter Itself?


Finding the best way to pack the greatest quantity of a specifically shaped object into a confined space may sound simple, yet it consistently has led to deep mathematical concepts and practical applications, such as improved computer security codes.


Princeton researchers have beaten the present world record for packing the most tetrahedra into a volume. Research into these so-called packing problems has produced deep mathematical ideas and led to practical applications as well.
(Credit: Princeton University/Torquato Lab)


When mathematicians solved a famed sphere-packing problem in 2005, one that first had been posed by renowned mathematician and astronomer Johannes Kepler in 1611, it made worldwide headlines.


Now, two Princeton University researchers have made a major advance in addressing a twist in the packing problem, jamming more tetrahedra -- solid figures with four triangular faces -- and other polyhedral solid objects than ever before into a space. The work could result in better ways to store data on compact discs as well as a better understanding of matter itself.


In the cover story of the Aug. 13 issue of Nature, Salvatore Torquato, a professor in the Department of Chemistry and the Princeton Institute for the Science and Technology of Materials, and Yang Jiao, a graduate student in the Department of Mechanical and Aerospace Engineering, report that they have bested the world record, set last year by Elizabeth Chen, a graduate student at the University of Michigan.


Using computer simulations, Torquato and Jiao were able to fill a volume to 78.2 percent of capacity with tetrahedra. Chen, before them, had filled 77.8 percent of the space. The previous world record was set in 2006 by Torquato and John Conway, a Princeton professor of mathematics. They succeeded in filling the space to 72 percent of capacity.


Beyond making a new world record, Torquato and Jiao have devised an approach that involves placing pairs of tetrahedra face-to-face, forming a "kissing" pattern that, viewed from the outside of the container, looks strangely jumbled and irregular.


"We wanted to know this: What's the densest way to pack space?" said Torquato, who is also a senior faculty fellow at the Princeton Center for Theoretical Science. "It's a notoriously difficult problem to solve, and it involves complex objects that, at the time, we simply did not know how to handle."


Henry Cohn, a mathematician with Microsoft Research New England in Cambridge, Mass., said, "What's exciting about Torquato and Jiao's paper is that they give compelling evidence for what happens in more complicated cases than just spheres." The Princeton researchers, he said, employ solid figures as a "wonderful test case for understanding the effects of corners and edges on the packing problem."


Studying shapes and how they fit together is not just an academic exercise. The world is filled with such solids, whether they are spherical oranges or polyhedral grains of sand, and it often matters how they are organized. Real-life specks of matter resembling these solids arise at ultra-low temperatures when materials, especially complex molecular compounds, pass through various chemical phases. How atoms clump can determine their most fundamental properties.


"From a scientific perspective, to know about the packing problem is to know something about the low-temperature phases of matter itself," said Torquato, whose interests are interdisciplinary, spanning physics, applied and computational mathematics, chemistry, chemical engineering, materials science, and mechanical and aerospace engineering.


And the whole topic of the efficient packing of solids is a key part of the mathematics that lies behind the error-detecting and error-correcting codes that are widely used to store information on compact discs and to compress information for efficient transmission around the world.


Beyond solving the practical aspects of the packing problem, the work contributes insight to a field that has fascinated mathematicians and thinkers for thousands of years. The Greek philosopher Plato theorized that the classical elements -- earth, air, fire and water -- were constructed from polyhedra. Models of them have been found among carved stone balls created by the late Neolithic people of Scotland.


The tetrahedron, which is part of the family of geometric objects known as the Platonic solids, must be packed in the face-to-face fashion for maximum effect. But, for significant mathematical reasons, all other members of the Platonic solids, the researchers found, must be packed as lattices to cram in the largest quantity, much the way a grocer stacks oranges in staggered rows, with successive layers nestled in the dimples formed by lower levels. Lattices have great regularity because they are composed of single units that repeat themselves in exactly the same way.


Mathematicians define the five shapes composing the Platonic solids as convex polyhedra that are regular. For non-mathematicians, this simply means that these solids have many flat faces, which are plane figures such as triangles, squares or pentagons. Because the figures are regular, all of their angles and edge lengths are equal. The group includes the tetrahedron (with four faces), the cube (six faces), the octahedron (eight faces), the dodecahedron (12 faces) and the icosahedron (20 faces).


There's a good reason why tetrahedra must be packed differently from other Platonic solids, according to the authors. Tetrahedra lack a quality known as central symmetry. To possess this quality, an object must have a center point through which every point on its surface can be reflected onto another point on the surface; such a center bisects every line connecting those paired points. The researchers also found this trait absent in 12 of the 13 members of an even more complex family of shapes known as the Archimedean solids.
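
A quick way to see what central symmetry means in practice: for a solid centered at the origin, every vertex v must have its mirror image -v among the vertices. The sketch below checks this for a regular tetrahedron and a cube, using standard textbook coordinates rather than anything from the paper:

```python
def is_centrally_symmetric(vertices, tol=1e-9):
    """True if, for every vertex v of a solid centred at the origin,
    the reflected point -v is also a vertex."""
    def close(p, q):
        return all(abs(a - b) < tol for a, b in zip(p, q))
    return all(any(close((-x, -y, -z), w) for w in vertices)
               for (x, y, z) in vertices)

# Regular tetrahedron centred at the origin (alternate corners of a cube).
tetrahedron = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
# Cube centred at the origin.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

print(is_centrally_symmetric(tetrahedron))  # False -- no centre pairs up every vertex
print(is_centrally_symmetric(cube))         # True
```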


The conclusions of the Princeton scientists are not at all obvious, and it took the development of a complex computer program and theoretical analysis to achieve their groundbreaking results. Previous computer simulations had taken virtual piles of polyhedra and stuffed them in a virtual box and allowed them to "grow."


The algorithm designed by Torquato and Jiao, called "an adaptive shrinking cell optimization technique," did it the other way. It placed virtual polyhedra of a fixed size in a "box" and caused the box to shrink and change shape.


There are tremendous advantages to controlling the size of the box instead of blowing up polyhedra, Torquato said. "When you 'grow' the particles, it's easy for them to get stuck, so you have to wiggle them around to improve the density," he said. "Such programs get bogged down easily; there are all kinds of subtleties. It's much easier and productive, we found, thinking about it in the opposite way."
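
The sketch below is not Torquato and Jiao's adaptive shrinking cell algorithm (their method handles polyhedra and deforms the shape of the box as well as its size); it only illustrates the inverted "shrink the container instead of growing the particles" idea in the simplest possible setting, hard disks in a periodic two-dimensional box:

```python
import random, math

# Toy illustration of the "shrink the container" idea with hard disks in a
# periodic 2-D box. NOT the adaptive shrinking cell technique of Torquato and
# Jiao; it only shows the inverted approach: fixed particle size, optimized box.

N, RADIUS = 20, 0.5
L = 16.0                                   # initial (dilute) box edge length

def overlaps(points, box):
    """True if any pair of disks overlaps, using the minimum-image convention."""
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            dx -= box * round(dx / box)
            dy -= box * round(dy / box)
            if dx * dx + dy * dy < (2 * RADIUS) ** 2:
                return True
    return False

# Start from a random non-overlapping configuration.
pts = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
while overlaps(pts, L):
    pts = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]

for step in range(60_000):
    if random.random() < 0.9:              # move a single disk a little
        i = random.randrange(N)
        old = pts[i]
        pts[i] = ((old[0] + random.uniform(-0.1, 0.1)) % L,
                  (old[1] + random.uniform(-0.1, 0.1)) % L)
        if overlaps(pts, L):
            pts[i] = old                   # reject moves that create overlap
    else:                                  # try to shrink the box by 0.1 percent
        new_L = 0.999 * L
        scaled = [(x * new_L / L, y * new_L / L) for x, y in pts]
        if not overlaps(scaled, new_L):
            pts, L = scaled, new_L         # accept the denser configuration

packing_fraction = N * math.pi * RADIUS ** 2 / L ** 2
print(f"final box edge {L:.2f}, packing fraction {packing_fraction:.3f}")
```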


Cohn, of Microsoft, called the results remarkable. It took four centuries, he noted, for mathematician Tom Hales to prove Kepler's conjecture that the best way to pack spheres is to stack them like cannonballs in a war memorial. Now, the Princeton researchers, he said, have thrown out a new challenge to the math world. "Their results could be considered a 21st Century analogue of Kepler's conjecture about spheres," Cohn said. "And, as with that conjecture, I'm sure their work will inspire many future advances."


Many researchers have pointed to various assemblies of densely packed objects and described them as optimal. The difference with this work, Torquato said, is that the algorithm and analysis developed by the Princeton team most probably shows, in the case of the centrally symmetric Platonic and Archimedean solids, "the best packings, period."


Their simulation results are also supported by theoretical arguments that the densest packings of these objects are likely to be their best lattice arrangements. "This is now a strong conjecture that people can try to prove," Torquato said.




Wednesday, August 12, 2009

Hundreds Of New Species Discovered In Eastern Himalayas


Over 350 new species, including the world’s smallest deer, a “flying frog” and a 100-million-year-old gecko, have been discovered in the Eastern Himalayas, a biological treasure trove now threatened by climate change.

Flying frog (Rhacophorus suffry). The bright green, red-footed tree frog was described in 2007. It is a 'flying frog' because long webbed feet allow the species to glide when falling. (Credit: Copyright Totul Bortamuli / WWF Nepal)

A decade of research carried out by scientists in remote mountain areas endangered by rising global temperatures brought exciting discoveries such as a bright green frog that uses its red and long webbed feet to glide in the air.


One of the most significant findings was not exactly “new” in the classic sense. A 100-million-year-old gecko, the oldest fossil gecko species known to science, was discovered in an amber mine in the Hukawng Valley in northern Myanmar.


The WWF report The Eastern Himalayas – Where Worlds Collide details discoveries made by scientists from various organizations between 1998 and 2008 in a region reaching across Bhutan and north-east India to the far north of Myanmar, as well as Nepal and southern parts of the Tibet Autonomous Region (China).


“The good news of this explosion in species discoveries is tempered by the increasing threats to the Himalayas’ cultural and biological diversity,” said Jon Miceler, Director of WWF’s Eastern Himalayas Program. “This rugged and remarkable landscape is already seeing direct, measurable impacts from climate change and risks being lost forever.”


In December world leaders will gather in Copenhagen to reach an agreement on a new climate deal, which will replace the existing Kyoto Protocol.


The Eastern Himalayas – Where Worlds Collide describes more than 350 new species discovered – including 244 plants, 16 amphibians, 16 reptiles, 14 fish, 2 birds, 2 mammals and at least 60 new invertebrates.


The report mentions the miniature muntjac, also called the “leaf deer,” which is the world’s oldest and smallest deer species. Scientists initially believed the small creature found in the world’s largest mountain range was a juvenile of another species but DNA tests confirmed the light brown animal with innocent dark eyes was a distinct and new species.


The Eastern Himalayas harbor a staggering 10,000 plant species, 300 mammal species, 977 bird species, 176 reptiles, 105 amphibians and 269 types of freshwater fish. The region also has the highest density of Bengal tigers in the world and is the last bastion of the charismatic greater one-horned rhino.


WWF is working to conserve the habitat of endangered species such as snow leopards, Bengal tigers, Asian elephants, red pandas, takin, golden langurs, Gangetic dolphins and one-horned rhinos.


Historically, the rugged and largely inaccessible landscape of the Eastern Himalayas has made biological surveys in the region extremely difficult. As a result, wildlife has remained poorly surveyed and there are large areas that are still biologically unexplored.


Today further species continue to be unearthed and many more species of amphibians, reptiles and fish are currently in the process of being officially named by scientists.




Saturday, August 8, 2009

Scientists Find Universal Rules For Food-web Stability


New findings, published in the journal Science, conclude that food-web stability is enhanced when many diverse predator-prey links connect high and intermediate trophic levels. The computations also reveal that small ecosystems follow different rules from large ones: differences in the strength of predator-prey links increase the stability of small webs, but destabilize larger webs.


A Juvenile African Bush Viper (Atheris chlorechis) with a small frog, at night.
Researchers found that food-web stability is enhanced when many diverse predator-prey links connect high and intermediate trophic levels. (Credit: iStockphoto/Mark Kostich)


Natural ecosystems consist of interwoven food chains, in which individual animal or plant species function as predator or prey. Potential food webs not only differ by their species composition, but also vary in their stability. Observable food webs are stable food webs, with the relationships between their species remaining constant over relatively long periods of time.


Understanding complex systems such as food webs presents major challenges to science. They can either be examined by observing natural environments, or by computer simulations. To enable computer simulations of such systems, scientists often have to make simplifying assumptions, keeping the number of model parameters as low as possible. Yet the computational demands of such simulations are high and their relevance is often limited.


Innovative methodology


Scientists from the Max Planck Institute for the Physics of Complex Systems (MPIPKS) in Dresden, Germany, have developed a new method that allows them to efficiently analyze the impact of innumerable parameters on complex systems.


"By using a method called generalized modeling, we examine whether a given food web can, in principle, be stable, i.e., whether its species can coexist in the long term," says Thilo Gross from MPIPKS. Complex ecosystems can thus be simulated and analyzed under almost any conditions. "In this way we can estimate which parameters will keep ecosystems stable and which will upset their balance."


The method can also be used for examining other complex systems, such as human metabolism or gene regulation.


Generalists stabilize, specialists destabilize


Applying this innovative modeling approach together with colleagues at the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria, and Princeton University, USA, the scientists have succeeded in discovering not just one, but several universal rules in the dynamics of ecosystems.


"Food-web stability is enhanced when species at high trophic levels feed on multiple prey species and species at intermediate trophic levels are fed upon by multiple predator species," says Ulf Dieckmann of IIASA.


The scientists have also identified additional stabilizing and destabilizing factors within ecosystems. Ecosystems with high densities of predator-prey links are less likely to be stable, while a strong dependence of predation on predator density destabilizes the system. On the other hand, a strong dependence of predation on prey density has a stabilizing impact on food webs.


Differences between small and large systems


A further important finding is that food webs consisting of only a few species behave qualitatively differently from webs consisting of many species.


"Small ecosystems apparently follow different rules than large ecosystems," says Ulf Dieckmann. "Systems with fewer species are more stable if there are strong interactions between some species, but only weak interactions between others. For food webs with many species, exactly the opposite is true. Extremely strong or weak predator-prey links in nature should therefore be the rarer the more species a food web contains," he concludes.




Friday, August 7, 2009

Nanoscale Origami From DNA


Scientists at the Technische Universitaet Muenchen (TUM) and Harvard University have thrown the lid off a new toolbox for building nanoscale structures out of DNA, with complex twisting and curving shapes. In the August 7 issue of the journal Science, they report a series of experiments in which they folded DNA, origami-like, into three dimensional objects including a beachball-shaped wireframe capsule just 50 nanometers in diameter.


Scientists at the Technische Universitaet Muenchen and Harvard University have thrown the lid off a new toolbox for building nanoscale structures out of DNA, with complex twisting and curving shapes. They report a series of experiments in which they folded DNA, origami-like, into 3-D objects including a beach ball-shaped wireframe capsule just 50 nanometers in diameter. (Credit: Used by permission of H. Dietz, TUM Dept. of Physics, all rights reserved.)

"Our goal was to find out whether we could program DNA to assemble into shapes that exhibit custom curvature or twist, with features just a few nanometers wide," says biophysicist Hendrik Dietz, a professor at the Technische Universitaet Muenchen. Dietz's collaborators in these experiments were Professor William Shih and Dr. Shawn Douglas of Harvard University. "It worked," he says, "and we can now build a diversity of three-dimensional nanoscale machine parts, such as round gears or curved tubes or capsules. Assembling those parts into bigger, more complex and functional devices should be possible."


As a medium for nanoscale engineering, DNA has the dual advantages of being a smart material – not only tough and flexible but also programmable – and being very well characterized by decades of study. Basic tools that Dietz, Douglas, and Shih employ are programmable self-assembly – directing DNA strands to form custom-shaped bundles of cross-linked double helices – and targeted insertions or deletions of base pairs that can give such bundles a desired twist or curve. Right-handed or left-handed twisting can be specified. They report achieving precise, quantitative control of these shapes, with a radius of curvature as tight as 6 nanometers.
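
The twist mechanism can be illustrated with rough textbook numbers (the 7-base-pair crossover spacing below is an assumed design value for illustration, not one quoted here): B-form DNA completes a full turn about every 10.5 base pairs, so if crossovers pin down the total rotation of a segment, deleting base pairs forces the remaining ones to overwind, and the bundle relieves that torsional strain by twisting or bending globally.

```python
# Illustrative arithmetic for the insertion/deletion twist mechanism.
# Assumed values (typical textbook / design numbers, not quoted in the article):
BP_PER_TURN = 10.5   # B-form DNA, base pairs per full helical turn
SEGMENT_BP  = 7      # base pairs between successive crossovers (assumed lattice spacing)

segment_rotation = 360.0 * SEGMENT_BP / BP_PER_TURN   # rotation the crossovers "lock in", degrees
natural_twist = 360.0 / BP_PER_TURN                   # ~34.3 degrees per base pair

for deleted in (0, 1, 2):
    remaining = SEGMENT_BP - deleted
    forced_twist = segment_rotation / remaining        # each remaining bp must cover more rotation
    print(f"{deleted} bp deleted: {forced_twist:.1f} deg/bp "
          f"(overwound by {forced_twist - natural_twist:+.1f} deg/bp)")
```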


The toolbox they have developed includes a graphical software program that helps to translate specific design concepts into the DNA programming required to realize them. Three-dimensional shapes are produced by "tuning" the number, arrangement, and lengths of helices.


In their current paper, the researchers present a wide variety of nanoscale structures and describe in detail how they designed, formed, and verified them. "Many advanced macroscopic machines require curiously shaped parts in order to function," Dietz says, "and we have the tools to make them. But we currently cannot build something intricate such as an ant's leg or, much smaller, a ten-nanometer-small chemical plant such as a protein enzyme. We expect many benefits if only we could build super-miniaturized devices on the nanoscale using materials that work robustly in the cells of our bodies – biomolecules such as DNA."




Monday, August 3, 2009

Why We Learn More From Our Successes Than Our Failures


If you've ever felt doomed to repeat your mistakes, researchers at MIT's Picower Institute for Learning and Memory may have explained why: Brain cells may only learn from experience when we do something right and not when we fail.

Given different images as cues, monkeys were trained to look right or left for rewards. MIT neuroscientists found that neurons responded differently following correct and incorrect responses, with correct responses setting up the brain for additional successes. (Credit: Courtesy / Earl Miller)


In the July 30 issue of the journal Neuron, Earl K. Miller, the Picower Professor of Neuroscience, and MIT colleagues Mark Histed and Anitha Pasupathy have created for the first time a unique snapshot of the learning process that shows how single cells change their responses in real time as a result of information about what is the right action and what is the wrong one.


"We have shown that brain cells keep track of whether recent behaviors were successful or not," Miller said. Furthermore, when a behavior was successful, cells became more finely tuned to what the animal was learning. After a failure, there was little or no change in the brain — nor was there any improvement in behavior.


The study sheds light on the neural mechanisms linking environmental feedback to neural plasticity — the brain's ability to change in response to experience. It has implications for understanding how we learn, and understanding and treating learning disorders.


Rewarding success


Monkeys were given the task of looking at two alternating images on a computer screen. For one picture, the animal was rewarded when it shifted its gaze to the right; for another picture it was supposed to look left. The monkeys used trial and error to figure out which images cued which movements.


The researchers found that whether the animals' answers were right or wrong, signals within certain parts of their brains "resonated" with the repercussions of their answers for several seconds. The neural activity following a correct answer and a reward helped the monkeys do better on the trial that popped up a few seconds later.


"If the monkey just got a correct answer, a signal lingered in its brain that said, 'You did the right thing.' Right after a correct answer, neurons processed information more sharply and effectively, and the monkey was more likely to get the next answer correct as well," Miller said, "But after an error there was no improvement. In other words, only after successes, not failures, did brain processing and the monkeys' behavior improve."


Split-second influence


The prefrontal cortex orchestrates thoughts and actions in accordance with internal goals while the basal ganglia are associated with motor control, cognition and emotions. This work shows that these two brain areas, long suspected to play key roles in learning and memory, have full information available to them to do all the neural computations necessary for learning.


The prefrontal cortex and basal ganglia, extensively connected with each other and with the rest of the brain, are thought to help us learn abstract associations by generating brief neural signals when a response is correct or incorrect. But researchers never understood how this transient activity, which fades in less than a second, influenced actions that occurred later.


In this study, the researchers found that activity in many neurons within both brain regions reflecting the delivery or withholding of a reward lasted for several seconds, until the next trial. Single neurons in both areas conveyed strong, sustained outcome information for four to six seconds, spanning the entire time frame between trials.


Response selectivity was stronger on a given trial if the previous trial had been rewarded and weaker if the previous trial was an error. This occurred whether the animal was just learning the association or was already good at it.


After a correct response, the electrical impulses coming from neurons in each of the brain areas were more robust and conveyed more information. "The signal-to-noise ratio improved in both brain regions," Miller said. "The heightened response led to them being more likely to get the next trial correct, too. This explains on a neural level why we seem to learn more from our successes than our failures."


In addition to Miller, authors include former MIT graduate student Mark H. Histed, now a postdoctoral fellow at Harvard Medical School, and former postdoctoral fellow Anitha Pasupathy, now an assistant professor at the University of Washington.


This work is supported by the National Institute of Neurological Disorders and Stroke and the Tourette's Syndrome Association.

