Thursday, September 30, 2010

A Robot Chair Makes You Sit Up Straight

After suffering back problems, a professor taught his chair to correct his posture.


John Morrell is your typical professor of mechanical engineering. He puts in long hours at his desk, slouched in front of a computer, gradually killing the spongy intervertebral discs that keep the spine flexible. Even after he went to a physical therapist for back pain, he couldn't remember to sit up straight in his chair, he told the Yale Daily News.

Being an engineer, he eventually came around to the notion that it would be easier to change his environment than to change himself. So, along with his student Ying Zheng, he wired up an Aeron chair (street value: $850) with $70 worth of electronics, including six "force-sensitive resistors, or tactors."

The chair's operation is straightforward: if you deviate from an upright stance in which your spine is in the neutral position, as recommended by the National Institute for Occupational Safety and Health, whatever pressure-sensitive sensor you aren't leaning on starts to vibrate insistently.
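That rule is simple enough to sketch in a few lines. Everything below (sensor names, the threshold value, the readings) is a hypothetical illustration of the idea, not the actual chair's firmware:

```python
# Hypothetical sketch of the chair's feedback rule.  Sensor names, layout,
# and the threshold are illustrative assumptions, not from the article.

NEUTRAL_THRESHOLD = 0.2  # minimum normalized pressure expected on each tactor

def posture_feedback(pressures):
    """Return the tactors that should vibrate: the pressure-sensitive
    sensors the sitter is *not* leaning on."""
    return [name for name, p in pressures.items() if p < NEUTRAL_THRESHOLD]

# A slouched sitter puts almost no pressure on the lumbar sensor...
readings = {"seat_left": 0.80, "seat_right": 0.75, "lumbar": 0.05,
            "back_left": 0.40, "back_right": 0.45, "front_edge": 0.60}
print(posture_feedback(readings))  # ...so the lumbar tactor buzzes
```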

If this sounds like, as the blog TechYum put it, "robot Sister Mary Katherine SitUpStraight, complete with vibrating ruler," then perhaps you haven't quite got the right idea. The chair is meant to enhance your brief time on this earth, not turn your working hours into a Kafkaesque nightmare of unpredictable negative feedback.

"We are presently working on ways to make the chair less bitchy and more like a trusted yoga coach," Morrell told FastCoDesign, and no I am not making that up. "Good posture at the expense of productivity or happiness is not where we want to stop."

Making Music on a Microscopic Scale


Strings a fraction of the thickness of a human hair, with microscopic weights to pluck them: Researchers and students from the MESA+ Institute for Nanotechnology of the University of Twente in The Netherlands have succeeded in constructing the first musical instrument with dimensions measured in mere micrometres -- a 'micronium' -- that produces audible tones. A composition has been specially written for the instrument.
Image of the chip containing six mass-spring systems (i.e. six tones). (Credit: Image courtesy of University of Twente)

Earlier musical instruments with these minimal dimensions only produced tones that are inaudible to humans. But thanks to ingenious construction techniques, students from the University of Twente have succeeded in producing scales that are audible when amplified. To do so, they made use of the possibilities offered by micromechanics: the construction of moving structures with dimensions measured in micrometres (a micrometre is a thousandth of a millimetre). These minuscule devices can be built thanks to the ultra-clean conditions in a 'clean room', and the advanced etching techniques that are possible there.

"You can see comparable technology used in the Wii games computer for detecting movement, or in sensors for airbags," says PhD student Johan Engelen, who devised and led the student project.

Tuning

The tiny musical instrument is made up of springs that are only a tenth of the thickness of a human hair, and vary in length from a half to a whole millimetre. A mass of a few dozen micrograms is hung from these springs. The mass is set in motion by so-called 'comb drives': miniature combs that fit together precisely and shift in relation to each other, so 'plucking' the springs and creating sounds. The mass vibrates with a maximum deflection of just a few micrometres. This minimal movement can be accurately measured, and produces a tone. Each tone has its own mass-spring system, and six tones fit on a microchip. By combining a number of chips, a wider range of tones can be achieved.

"The tuning process turned out to be the greatest challenge," says Engelen. "We can learn a lot from this project for the construction of other moving structures. Above all, this is a great project for introducing students to micromechanics and clean room techniques."
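To see why structures this small can still produce audible tones, one can estimate the resonant frequency of such a mass-spring system, f = √(k/m)/2π. A back-of-the-envelope sketch, where the stiffness values are illustrative guesses rather than figures from the paper:

```python
import math

# Rough sanity check (illustrative numbers, not from the paper): each tone is
# the resonant frequency of a mass-spring system, f = sqrt(k/m) / (2*pi).

def tone_hz(stiffness_n_per_m, mass_kg):
    """Resonant frequency of a mass on a spring, in hertz."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

mass = 30e-9  # "a few dozen micrograms", taken here as 30 ug = 30e-9 kg
for k in (0.5, 1.0, 2.0):  # plausible stiffnesses for MEMS springs, in N/m
    print(f"k = {k} N/m -> {tone_hz(k, mass):6.0f} Hz")
```

With a mass of a few dozen micrograms, spring constants of order 1 N/m put the resonances in the hundreds of hertz to low kilohertz, squarely in the audible range once amplified.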

The micronium played a leading role at the opening of a two-day scientific conference on micromechanics in the Atak music venue in Enschede on September 27 and 28. A composition has been specially written for the instrument: 'Impromptu No. 1 for Micronium' by Arvid Jense, who is studying MediaMusic at the conservatorium in Enschede.

A scientific paper -- 'A musical instrument in MEMS' -- has also been devoted to the instrument, and this will be presented to the conference by Johan Engelen. The project was carried out by the Transducers Science and Technology group led by Professor Miko Elwenspoek. The group forms a part of the MESA+ Institute for Nanotechnology of the University of Twente.

Wednesday, September 29, 2010

Sole Electron Reader Opens Path for Quantum Computation


A team led by engineers and physicists at the University of New South Wales (UNSW) in Sydney, Australia, has developed one of the key building blocks needed to make a quantum computer using silicon: a "single electron reader."
Artist's impression of a phosphorus atom (red sphere surrounded by a blue electron cloud, with spin) coupled to a silicon single-electron transistor, to achieve single-shot readout of the phosphorus electron spin. (Credit: William Algar-Chuklin, College of Fine Arts, The University of New South Wales)

Their work was published in the journal Nature.

Quantum computers promise exponential increases in processing speed over today's computers through their use of the "spin," or magnetic orientation, of single electrons to represent data in their calculations.

In order to employ electron spin, the quantum computer needs both a way of changing the spin state (write) and of measuring that change (read) to form a qubit -- the equivalent of the bits in a conventional computer.
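As a toy model of that read/write pair (not the UNSW device, whose readout senses a single phosphorus electron with a nearby transistor), a qubit can be sketched as a two-amplitude spin state: writing prepares the amplitudes, and a single-shot read collapses the state to spin-up or spin-down with probability given by the amplitude squared:

```python
import math
import random

# Toy illustration only -- not the UNSW device.  A qubit is a superposition
# a|up> + b|down> with a^2 + b^2 = 1.  "Writing" prepares the amplitudes;
# a single-shot "read" collapses the state to up or down.

def write(theta):
    """Prepare the spin state (cos(theta/2), sin(theta/2))."""
    return (math.cos(theta / 2), math.sin(theta / 2))

def read(state, rng):
    """Single-shot readout: 'up' with probability a^2, else 'down'."""
    a, _ = state
    return "up" if rng.random() < a * a else "down"

rng = random.Random(1)
state = write(math.pi / 2)                    # equal superposition
shots = [read(state, rng) for _ in range(10000)]
print(shots.count("up") / 10000)              # close to 0.5
```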

In creating the single electron reader, a team of engineers and physicists led by Dr Andrea Morello and Professor Andrew Dzurak, of the School of Electrical Engineering and Telecommunications at UNSW, has for the first time made possible the measurement of the spin of one electron in silicon in a single shot experiment. The team also includes researchers from the University of Melbourne and Aalto University in Finland.

"Our device detects the spin state of a single electron in a single phosphorus atom implanted in a block of silicon. The spin state of the electron controls the flow of electrons in a nearby circuit," said Dr Morello, the lead author of the paper, Single-shot readout of an electron spin in silicon.

"Until this experiment, no-one had actually measured the spin of a single electron in silicon in a single-shot experiment."

By using silicon -- the foundation material of conventional computers -- rather than light or the esoteric materials and approaches being pursued by other researchers, the device opens the way to constructing a simpler quantum computer, scalable and amenable to mass-production.

The team has built on a body of research that has put Australia at the forefront of the race to construct a working quantum computer. In 1998 Bruce Kane, then at UNSW, outlined in Nature the concept for a silicon-based quantum computer, in which the qubits are defined by single phosphorus atoms in an otherwise ultra-pure silicon chip. The new device brings his vision closer.

"We expect quantum computers will be able to perform certain tasks much faster than normal computers, such as searching databases, modelling complex molecules or developing new drugs," says co-author Prof Andrew Dzurak. "They could also crack most modern forms of encryption."

"After a decade of work trying to build this type of single atom qubit device, this is a very special moment."

Now that the team has created a single electron reader, they are working quickly to complete a single electron writer and combine the two. Then they will combine pairs of these devices to create a 2-bit logic gate -- the basic processing unit of a quantum computer.

The research team is part of the Australian Research Council (ARC) Centre of Excellence for Quantum Computer Technology, which is headquartered at UNSW. The team is led by Professor Dzurak and Dr Morello, with Mr Jarryd Pla and Dr Floris Zwanenburg as key supporting experimentalists. The paper's co-authors include Prof David Jamieson from the University of Melbourne; Dr Bob Clark, Australia's Chief Defence Scientist, and 10 other researchers from UNSW, The University of Melbourne, and Finland's Aalto University.

The research was funded by: the Australian, US, and NSW governments; UNSW; and the University of Melbourne.

Solar Cells Thinner Than Wavelengths of Light Hold Huge Power Potential


Ultra-thin solar cells can absorb sunlight more efficiently than the thicker, more expensive-to-make silicon cells used today, because light behaves differently at scales around a nanometer (a billionth of a meter), say Stanford engineers. They calculate that by properly configuring the thicknesses of several thin layers of films, an organic polymer thin film could absorb as much as 10 times more energy from sunlight than was thought possible.
This schematic diagram of a thin film organic solar cell shows the top layer, a patterned, roughened scattering layer, in green. The organic thin film layer, shown in red, is where light is trapped and electrical current is generated. The film is sandwiched between two layers that help keep light contained within the thin film. (Credit: Reproduced with permission from Proceedings of the National Academy of Sciences USA)

In the smooth, white, bunny-suited clean-room world of silicon wafers and solar cells, it turns out that a little roughness may go a long way, perhaps all the way to making solar power an affordable energy source, say Stanford engineers.

Their research shows that light ricocheting around inside the polymer film of a solar cell behaves differently when the film is ultra thin. A film that's nanoscale-thin and has been roughed up a bit can absorb more than 10 times the energy predicted by conventional theory.

The key to overcoming the theoretical limit lies in keeping sunlight in the grip of the solar cell long enough to squeeze the maximum amount of energy from it, using a technique called "light trapping." It's the same as if you were using hamsters running on little wheels to generate your electricity -- you'd want each hamster to log as many miles as possible before it jumped off and ran away.

"The longer a photon of light is in the solar cell, the better chance the photon can get absorbed," said Shanhui Fan, associate professor of electrical engineering. The efficiency with which a given material absorbs sunlight is critically important in determining the overall efficiency of solar energy conversion. Fan is senior author of a paper describing the work published online by Proceedings of the National Academy of Sciences.

Light trapping has been used for several decades with silicon solar cells and is done by roughening the surface of the silicon to cause incoming light to bounce around inside the cell for a while after it penetrates, rather than reflecting right back out as it does off a mirror. But over the years, no matter how much researchers tinkered with the technique, they couldn't boost the efficiency of typical "macroscale" silicon cells beyond a certain amount.

Eventually the scientists realized that there was a physical limit related to the speed at which light travels within a given material.
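That conventional limit can be stated concretely: in the ray-optics picture, texturing a cell's surface boosts absorption by at most a factor of 4n², where n is the refractive index that sets how fast light travels inside the material. A short sketch, with typical textbook index values assumed for illustration:

```python
# Ray-optics light-trapping bound (illustrative): a roughened cell can
# enhance absorption by at most 4*n^2, where n is the refractive index.
# The index values below are typical textbook figures, not from the paper.

def max_enhancement(n):
    """Conventional (macroscale) limit on absorption enhancement."""
    return 4 * n ** 2

for material, n in (("crystalline silicon", 3.5),
                    ("typical organic polymer", 1.8)):
    print(f"{material} (n = {n}): at most {max_enhancement(n):.0f}x")
```

With these numbers, a high-index material like silicon is capped near 49x, while a low-index organic film is conventionally capped near 13x, which is why enhancements far beyond that in nanoscale films are so striking.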

But light has a dual nature, sometimes behaving as a solid particle (a photon) and other times as a wave of energy, and Fan and postdoctoral researcher Zongfu Yu decided to explore whether the conventional limit on light trapping held true in a nanoscale setting. Yu is the lead author of the PNAS paper.

"We all used to think of light as going in a straight line," Fan said. "For example, a ray of light hits a mirror, it bounces and you see another light ray. That is the typical way we think about light in the macroscopic world.

"But if you go down to the nanoscales that we are interested in, hundreds of millionths of a millimeter in scale, it turns out the wave characteristic really becomes important."

Visible light has wavelengths around 400 to 700 nanometers (billionths of a meter), but even at that small scale, Fan said, many of the structures that Yu analyzed had a theoretical limit comparable to the conventional limit proven by experiment.

"One of the surprises with this work was discovering just how robust the conventional limit is," Fan said.

It was only when Yu began investigating the behavior of light inside a material of deep subwavelength-scale -- substantially smaller than the wavelength of the light -- that it became evident to him that light could be confined for a longer time, increasing energy absorption beyond the conventional limit at the macroscale.

"The amount of benefit of nanoscale confinement we have shown here really is surprising," said Yu. "Overcoming the conventional limit opens a new door to designing highly efficient solar cells."

Yu determined through numerical simulations that the most effective structure for capitalizing on the benefits of nanoscale confinement was a combination of several different types of layers around an organic thin film.

He sandwiched the organic thin film between two layers of material -- called "cladding" layers -- that acted as confining layers once the light passed through the upper one into the thin film. Atop the upper cladding layer, he placed a patterned rough-surfaced layer designed to send the incoming light off in different directions as it entered the thin film.

By varying the parameters of the different layers, he was able to achieve a 12-fold increase in the absorption of light within the thin film, compared to the macroscale limit.

Nanoscale solar cells offer savings in material costs, as the organic polymer thin films and other materials used are less expensive than silicon and, being nanoscale, the quantities required for the cells are much smaller.

The organic materials also have the advantage of being manufactured in chemical reactions in solution, rather than needing high-temperature or vacuum processing, as is required for silicon manufacture.

"Most of the research these days is looking into many different kinds of materials for solar cells," Fan said. "Where this will have a larger impact is in some of the emerging technologies; for example, in organic cells."

"If you do it right, there is enormous potential associated with it," Fan said.

Aaswath Raman, a graduate student in applied physics, also worked on the research and is a coauthor of the paper.

The project was supported by funding from the King Abdullah University of Science and Technology, which supports the Center for Advanced Molecular Photovoltaics at Stanford, and by the U.S. Department of Energy.

Right or Left? Brain Stimulation Can Change Which Hand You Favor


Each time we perform a simple task, like pushing an elevator button or reaching for a cup of coffee, the brain races to decide whether the left or right hand will do the job. But the left hand is more likely to win if a certain region of the brain receives magnetic stimulation, according to new research from the University of California, Berkeley.
When the left posterior parietal cortex of the brain received magnetic stimulation, right-handed volunteers were more likely to use their left hand to perform simple one-handed tasks, UC Berkeley research shows. (Credit: Image courtesy of Flavio Oliveira)

UC Berkeley researchers applied transcranial magnetic stimulation (TMS) to the posterior parietal cortex region of the brain in 33 right-handed volunteers and found that stimulating the left side spurred an increase in their use of the left hand.

The left hemisphere of the brain controls the motor skills of the right side of the body and vice versa. By stimulating the parietal cortex, which plays a key role in processing spatial relationships and planning movement, the neurons that govern motor skills were disrupted.

"You're handicapping the right hand in this competition, and giving the left hand a better chance of winning," said Flavio Oliveira, a UC Berkeley postdoctoral researcher in psychology and neuroscience and lead author of the study, published in the journal Proceedings of the National Academy of Sciences.

The study's findings challenge previous assumptions about how we make decisions, revealing a competitive process, at least in the case of manual tasks. Moreover, it shows that TMS can manipulate the brain to change plans for which hand to use, paving the way for clinical advances in the rehabilitation of victims of stroke and other brain injuries.

"By understanding this process, we hope to be able to develop methods to overcome learned limb disuse," said Richard Ivry, UC Berkeley professor of psychology and neuroscience and co-author of the study.

At least 80 percent of the people in the world are right-handed, but most people are ambidextrous when it comes to performing one-handed tasks that do not require fine motor skills.

"Alien hand syndrome," a neurological disorder in which victims report the involuntary use of their hands, inspired researchers to investigate whether the brain initiates several action plans, setting in motion a competitive process before arriving at a decision.

While the study does not offer an explanation for why there is a competition involved in this type of decision making, researchers say it makes sense that we adjust which hand we use based on changing situations. "In the middle of the decision process, things can change, so we need to change track," Oliveira said.

In TMS, magnetic pulses alter electrical activity in the brain, disrupting the neurons in the underlying brain tissue. While the current findings are limited to hand choice, TMS could, in theory, influence other decisions, such as whether to choose an apple or an orange, or even which movie to see, Ivry said.

With sensors on their fingertips, the study's participants were instructed to reach for various targets on a virtual tabletop while a 3-D motion-tracking system followed the movements of their hands. When the left posterior parietal cortex was stimulated, and the target was located in a spot where they could use either hand, there was a significant increase of the use of the left hand, Oliveira said.

Other coauthors of the study are Jörn Diedrichsen from University College London, Timothy Verstynen from the University of Pittsburgh and Julie Duque from the Université Catholique de Louvain in Belgium.

The study was funded by the Natural Sciences and Engineering Research Council of Canada, the Canadian Institutes of Health Research, the National Institutes of Health, the National Science Foundation and the Belgian American Educational Foundation.

A Shot to the Heart: Nanoneedle Delivers Quantum Dots to Cell Nucleus


Getting an inside look at the center of a cell can be as easy as a needle prick, thanks to University of Illinois researchers who have developed a tiny needle to deliver a shot right to a cell's nucleus.
University of Illinois researchers developed a nanoneedle that releases quantum dots directly into the nucleus of a living cell when a small electrical charge is applied. The quantum dots are tracked to gain information about conditions inside the nucleus. (Credit: Image courtesy Min-Feng Yu)

Understanding the processes inside the nucleus of a cell, which houses DNA and is the site for transcribing genes, could lead to greater comprehension of genetics and the factors that regulate expression. Scientists have used proteins or dyes to track activity in the nucleus, but those can be large and tend to be sensitive to light, making them hard to use with simple microscopy techniques.

Researchers have been exploring a class of nanoparticles called quantum dots, tiny specks of semiconductor material only a few molecules big that can be used to monitor microscopic processes and cellular conditions. Quantum dots offer the advantages of small size, bright fluorescence for easy tracking, and excellent stability in light.

"Lots of people rely on quantum dots to monitor biological processes and gain information about the cellular environment. But getting quantum dots into a cell for advanced applications is a problem," said Min-Feng Yu, a professor of mechanical science and engineering.

Getting any type of molecule into the nucleus is even trickier, because it's surrounded by an additional membrane that prevents most molecules in the cell from entering.

Yu worked with fellow mechanical science and engineering professor Ning Wang and postdoctoral researcher Kyungsuk Yum to develop a nanoneedle that also served as an electrode that could deliver quantum dots directly into the nucleus of a cell -- specifically to a pinpointed location within the nucleus. The researchers can then learn a lot about the physical conditions inside the nucleus by monitoring the quantum dots with a standard fluorescent microscope.

"This technique allows us to physically access the internal environment inside a cell," Yu said. "It's almost like a surgical tool that allows us to 'operate' inside the cell."

The group coated a single nanotube, only 50 nanometers wide, with a very thin layer of gold, creating a nanoscale electrode probe. They then loaded the needle with quantum dots. A small electrical charge releases the quantum dots from the needle. This provides a level of control not achievable by other molecular delivery methods, which involve gradual diffusion throughout the cell and into the nucleus.

"Now we can use electrical potential to control the release of the molecules attached on the probe," Yu said. "We can insert the nanoneedle in a specific location and wait for a specific point in a biologic process, and then release the quantum dots. Previous techniques cannot do that."

Because the needle is so small, it can pierce a cell with minimal disruption, while other injection techniques can be very damaging to a cell. Researchers also can use this technique to accurately deliver the quantum dots to a very specific target to study activity in certain regions of the nucleus, or potentially other cellular organelles.

"Location is very important in cellular functions," Wang said. "Using the nanoneedle approach you can get to a very specific location within the nucleus. That's a key advantage of this method." The new technique opens up new avenues for study. The team hopes to continue to refine the nanoneedle, both as an electrode and as a molecular delivery system.

They hope to explore using the needle to deliver other types of molecules as well -- DNA fragments, proteins, enzymes and others -- that could be used to study a myriad of cellular processes.

"It's an all-in-one tool," Wang said. "There are three main types of processes in the cell: chemical, electrical, and mechanical. This has all three: It's a mechanical probe, an electrode, and a chemical delivery system."

The team's findings will appear in the Oct. 4 edition of the journal Small. The National Institutes of Health and the National Science Foundation supported this work.

Tuesday, September 28, 2010

Semiconductor Could Turn Heat Into Computing Power


Computers might one day recycle part of their own waste heat, using a material being studied by researchers at Ohio State University.

The material is a semiconductor called gallium manganese arsenide. In the early online edition of Nature Materials, researchers describe the detection of an effect that converts heat into a quantum mechanical phenomenon -- known as spin -- in a semiconductor.

Once developed, the effect could enable integrated circuits that run on heat, rather than electricity.

This research merges two cutting-edge technologies: thermo-electricity and spintronics, explained team leaders Joseph Heremans, Ohio Eminent Scholar in Nanotechnology, and Roberto Myers, assistant professor of materials science and electrical engineering at Ohio State University.

Researchers around the world are working to develop electronics that utilize the spin of electrons to read and write data. So-called “spintronics” are desirable because in principle they could store more data in less space, process data faster, and consume less power.

Myers and Heremans are trying to combine spintronics with thermo-electronics -- that is, devices that convert heat to electricity.

The hybrid technology, “thermo-spintronics,” would convert heat to electron spin.

In so doing, thermo-spintronics could solve two problems for the computing industry: how to remove waste heat, and how to boost computing power without creating more heat.

“Spintronics is considered as a possible basis for new computers in part because the technology is claimed to produce no heat. Our measurements shed light on the thermodynamics of spintronics, and may help address the validity of this claim,” Heremans said.

In fact, as the electronics industry tries to build smaller, denser computer circuits, a main limiting factor is the heat those circuits produce, said Myers.

“All of the computers we have now could actually run much faster than they do, but they’re not allowed to -- because if they did, they would fail after a short time,” Myers said. “So a huge amount of money in the semiconductor industry is put toward thermal management.”

In one possible use of thermo-spintronics, a device could sit atop a traditional microprocessor, and siphon waste heat away to run additional memory or computation. Myers noted that such applications are still a long way off.

The researchers studied how heat can be converted to spin polarization -- an effect called the spin-Seebeck effect. It was first identified by researchers at Tohoku University and reported in a 2008 paper in the journal Nature. Those researchers detected the effect in a piece of metal, rather than a semiconductor.

The new measurements, carried out by team member Christopher Jaworski, doctoral student of mechanical engineering at Ohio State, provide the first independent verification of the effect in a semiconductor material called gallium manganese arsenide.

While gallium arsenide is a semiconductor used in cell phones today, the addition of the element manganese endows the material with magnetic properties.

Samples of this material were carefully prepared into thin single-crystal films by collaborators Shawn Mack and Professor David Awschalom at the University of California at Santa Barbara, who also assisted with interpretation of the results. Jing Yang, doctoral student of materials science and engineering at Ohio State, then processed the samples for the experiment.

In this type of material, the spins of the charges line up parallel with the orientation of the sample’s overall magnetic field. So when the Ohio State researchers were trying to detect the spins of the electrons, they were really measuring whether the electrons in any particular area of the material were oriented as “spin-up” or “spin-down.”

In the experiment, they heated one side of the sample, and then measured the orientations of spins on the hot side and the cool side. On the hot side, the electrons were oriented in the spin-up direction, and on the cool side, they were spin-down.

The researchers also discovered, to their own surprise, that two pieces of the material do not need to be physically connected for the effect to propagate from one to the other.

They scraped away a portion of the sample with a file, to create two pieces of material separated by a tiny gap. If the spin effect were caused by electrical conduction -- that is, electrons flowing from one part of the material to the other -- then the gap would block the effect from spreading. Again, they applied heat to one side.

The effect persisted.

“We figured that each piece would have its own distribution of spin-up and spin-down electrons,” said Myers. “Instead, one side of the first piece was spin up, and the far side of the second piece was spin down. The effect somehow crossed the gap.”

“The original spin-Seebeck detection by the Tohoku group baffled all theoreticians,” Heremans added. “In this study, we’ve independently confirmed those measurements on a completely different material. We’ve proven we can get the same results as the Tohoku group, even when we take the measurements on a sample that’s been separated into two pieces, so that electrons couldn’t possibly pass between them.”

Despite these new experiments, the origin of the spin-Seebeck effect remains a mystery.

Quarks 'Swing' to the Tones of Random Numbers


At the Large Hadron Collider at CERN, protons crash into each other at incredibly high energies in order to 'smash' the protons and to study the elementary particles of nature -- including quarks. Quarks are found in each proton and are bound together by forces so strong that all other known forces of nature fade in comparison. Understanding the effects of these strong forces between the quarks is one of the greatest challenges in modern particle physics. New theoretical results from the Niels Bohr Institute show that enormous quantities of random numbers can describe the way in which quarks 'swing' inside the protons.
A matrix is a rectangular array of numbers. A random matrix can be compared to a Sudoku filled with random numbers. Matrices are part of the equations governing the movements of the particles. In a random matrix there are numbers that are entered randomly, while there are still certain symmetries, for example, you can require that the numbers in the lower left should be a copy of the numbers above the diagonal. This is called a symmetrical matrix. (Credit: Kim Splittorff, Associate Professor, Niels Bohr Institute, University of Copenhagen)

The results have been published in arXiv and will be published in the journal Physical Review Letters.

Just as we must subject ourselves, for example, to the laws of gravity and not just float around weightless, quarks in protons are also subject to the laws of physics. Quarks are among the universe's smallest known building blocks. Each proton inside the atomic nucleus is made up of three quarks, and the forces between the quarks are so strong that they can never -- under normal circumstances -- escape the proton.

Left- and right-handed quarks

The quarks' combined charges give the proton its charge. But if you add up the masses of the quarks you do not get the mass of the proton. Instead, the mass of the proton depends on how the quarks swing. The oscillations of the quarks are also central to a variety of physical phenomena. That is why researchers have worked for years to find a theoretical method for describing the oscillations of quarks.

The two lightest quarks, 'up' and 'down' quarks, are so light that they can be regarded as massless in practice. There are two types of such massless quarks, which might be called left-handed and right-handed. The mathematical equation governing the quarks' movements shows that the left-handed quarks swing independently of the right-handed. But in spite of the equation being correct, the left-handed quarks love to 'swing' with the right-handed.

Spontaneous symmetry breaking

"Even though this sounds like a contradiction, it is actually a cornerstone of theoretical physics. The phenomenon is called spontaneous symmetry breaking and it is quite easy to illustrate," explains Kim Splittorff, Associate Professor and theoretical particle physicist at the Niels Bohr Institute, and gives an example:

A dance floor is filled with people dancing to rhythmic music. The male dancers represent the left-handed quarks and the female dancers the right-handed quarks. All dance without dance partners and therefore all can dance around freely. Now the DJ puts on a slow dance and the dancers pair off. Suddenly, they cannot spin around freely by themselves. The male (left-handed) and female (right-handed) dancers can only spin around in pairs by agreeing on it. We say that the symmetry 'each person swings around, independent of all others' is broken into a different symmetry 'a pair can swing around, independent of other pairs'.

For quarks, similarly, the simple solution is that the left-handed quarks do not swing with the right-handed ones. But the more stable solution is for them to hold onto each other. This is spontaneous symmetry breaking.

Dance to random tones

"Over several years it became increasingly clear that the way in which the left-handed and right-handed quarks come together can be described using massive quantities of random numbers. These random numbers are elements in a matrix, which one may think of as a Sudoku grid filled in at random. In technical jargon these are called Random Matrices," explains Kim Splittorff, who has developed the new theory together with Poul Henrik Damgaard, Niels Bohr International Academy and Discovery Center, and Jac Verbaarschot, Stony Brook, New York.

Even though random numbers are involved, what comes out is not entirely random. You could say that the equation that determines the oscillations of the quarks gives rise to a dance determined by random notes. This description of quarks has proven to be extremely useful for researchers who are looking for a precise numerical description of the quarks inside a proton.
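The random-matrix idea can be illustrated with a toy sketch (not the researchers' actual calculation). The Sudoku-like grid of random numbers couples left-handed to right-handed modes, so it sits in the off-diagonal blocks of a larger matrix; this "chiral" block structure forces the eigenvalues to come in plus/minus pairs, which is the sense in which something non-random emerges from random entries. The matrix size and distribution below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # toy size; real lattice calculations use vastly larger matrices

# W is the random "Sudoku" coupling left-handed to right-handed modes.
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# The chiral block structure: W only appears off the diagonal.
D = np.block([[np.zeros((n, n)), W],
              [W.conj().T, np.zeros((n, n))]])

eigenvalues = np.sort(np.linalg.eigvalsh(D))

# Despite the random entries, the spectrum is exactly symmetric about zero.
print(np.allclose(eigenvalues, -eigenvalues[::-1]))  # prints True
```

The structure, not the individual entries, is what carries the physics: any random W produces the same paired spectrum.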

It requires some of the most advanced supercomputers in the world to make calculations about the quarks in a proton. The central question that the supercomputers are chewing on is how closely the left-handed and right-handed quarks 'dance' together. These calculations can also show why the quarks remain inside the protons.

One problem up until now has been that these numerical descriptions have to use an approximation to the 'real' equation for the quarks. Now the three researchers have shown how to correct for this so that the quarks in the numerical calculations also 'swing' correctly to random numbers.

New understanding of the data

"Using our results we can now describe the numerical calculations from large research groups at CERN and leading universities very accurately," says Kim Splittorff.

"What is new about our work is that not only the exact equation for quarks, but also the approximation, which researchers who work numerically have to use, can be described using random matrices. It is already extremely surprising that the exact equation shows that the quarks swing by random numbers. It is even more exciting that the approximation used for the equation has a completely analogous description. Having an accurate analytical description available for the numerical simulations is a powerful tool that provides an entirely new understanding of the numerical data. In particular, we can now measure very precisely how closely the right-handed and left-handed quarks are dancing," he says about the new perspectives in the world of particle physics.

Monday, September 27, 2010

Genetic "Light Switches" Control Muscle Movement The technique will improve research on neuromuscular disorders and could one day help paralyzed patients.


Using light-sensitive proteins from a single-celled alga and a tiny LED "cuff" placed on a nerve, researchers have triggered the leg muscles of mice to contract in response to millisecond pulses of light.
Light movement: This image shows a cross-section of a mouse sciatic nerve genetically engineered to produce a light-sensitive protein (shown in green). Stanford researchers used this protein to trigger muscle movements in the animal’s leg.
Credit: Nature

The study, published in the journal Nature Medicine, marks the first use of the nascent technology known as optogenetics to control muscle movements. Developed by study coauthor Karl Deisseroth, an associate professor of bioengineering and of psychiatry and behavioral science at Stanford University, optogenetics makes it possible to stimulate neurons with light by inserting the gene for a protein called channelrhodopsin-2, from a green alga. When a modified neuron is exposed to blue light, the protein initiates electrical activity inside the cell that then spreads from neuron to neuron. By controlling which neurons make the protein, as well as which cells are exposed to light, scientists can control neural activity in living animals with unprecedented precision. The paper's other senior author, Scott Delp, a professor of bioengineering, mechanical engineering, and orthopedic surgery at Stanford, says that the optical control method provides "fantastic advantages over electrical stimulation" for his study of muscles and the biomechanics of human movement.

Members of Deisseroth's lab had engineered mice to produce channelrhodopsin-2 in both the central and the peripheral nervous systems. Michael Llewellyn, a former graduate student in Delp's lab, developed a tiny, implantable LED cuff to apply light to the nerve evenly. He placed the cuff on the sciatic nerves of anesthetized mice and triggered millisecond pulses of light. This caused the leg muscles of the mice to contract. When Llewellyn compared the muscle contractions stimulated by light to those generated using a similar electrical cuff, he found that the light-triggered contractions were much more similar to normal muscle activity.

Muscles are made up of two different types of fibers: small, slow, fatigue-resistant fibers that are typically used for tasks requiring fine motor control over longer periods, and larger, faster fibers that can produce higher forces but are more fatigue-prone. In the body, the small, slow fibers are activated first, with the large, fast fibers reserved for quick bursts of power or speed. When muscles are stimulated with electrical pulses, the fast fibers activate first. With the optogenetic switch, however, the fibers were recruited in the normal, physiological order: slow fibers first, fast fibers second. By altering the intensity of the light, Llewellyn found that he could even trigger only the slow fibers--a feat not possible with electrical stimulation.
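The recruitment-order difference can be captured in a minimal toy model (the threshold numbers below are invented for illustration, not taken from the study): each fiber fires once the stimulus exceeds its threshold, and electrical stimulation gives the large, fast fibers the *lower* threshold, reversing the natural order:

```python
# Toy sketch of recruitment order; thresholds are hypothetical values.
fibers = [
    {"type": "slow", "optical_threshold": 1.0, "electrical_threshold": 3.0},
    {"type": "fast", "optical_threshold": 2.0, "electrical_threshold": 1.5},
]

def recruited(stimulus, kind):
    """Return the fiber types active at a given stimulus intensity."""
    key = f"{kind}_threshold"
    return [f["type"] for f in fibers if stimulus >= f[key]]

print(recruited(1.2, "optical"))     # ['slow'] -- slow fibers alone
print(recruited(1.8, "electrical"))  # ['fast'] -- fast fibers recruited first
```

In this sketch, a low-intensity optical stimulus activates only the slow fibers, mirroring the graded control Llewellyn reported, while an electrical stimulus of modest strength reaches the fast fibers first.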

In the near term, Delp says, the technology will improve the studies that his lab and others do on muscle activity in animal models of stroke, palsies, ALS, and other neuromuscular disorders. He also hopes that in time--a long time, he concedes--such optical switches could be used to help patients with physical disabilities caused by nerve damage such as stroke, spinal cord injury, or cerebral palsy. One possibility, he says, would be to use optical stimulation in place of functional electrical stimulation (FES), in which electrical current is applied to specific nerves or muscles to trigger muscle contractions. The U.S. Food and Drug Administration has already approved FES devices that can restore hand function and bladder control to some paralyzed people. However, FES can quickly lead to muscle fatigue. Delp hopes that, particularly with grasping functions, using optical stimulation might result in better fatigue resistance and perhaps finer muscle control.

"This is a brilliant study, really beautiful science," says Robert Kirsch, a bioengineer at Case Western Reserve University and associate director of the Cleveland Functional Electrical Stimulation Center; he was not involved in the research. "I think there are many [clinical implications]," he says, although, like Delp and Llewellyn, he notes that many high hurdles must be cleared--not least of which is developing a safe, effective way to deliver the channelrhodopsin-2 gene to nerve cells in humans. Otherwise, Kirsch says, "my one objection would be their implication that they've solved the fatigue problem with FES. I'm pretty sure that hasn't happened." Instead, Kirsch believes that most of the fatigue seen in FES patients is due to muscle atrophy and weakness that develop in the chronically paralyzed.

C.J. Heckman, a professor of physiology at Northwestern University's Feinberg School of Medicine, agrees: "It is true that a lot of the fatigue seen in FES patients is due to chronic muscle atrophy." But, he says, "if you could stimulate the muscles in the correct recruitment order repeatedly over time, you could potentially recover a lot of muscle function." This could help paralysis patients preserve their slow muscle fibers, "which would be a huge deal," Heckman says. This is because those fibers do a huge percentage of the work muscles do--everything from maintaining posture to typing on a keyboard.

Delp also thinks that stimulation-based exercise could be an important application for optical muscle control, as could helping wheelchair-bound people stand to reach for books or plates in a cabinet. "I'm not super-high on controlling locomotion"--that is, walking--"with either electrical or optical stimulation, though," Delp says. "It's an incredibly complicated command-and-control scheme that's really hard to coordinate."

In the meantime, Delp and Llewellyn have begun an effort to use a different light-sensitive protein, halorhodopsin, to inhibit motor nerves in mice, with the idea of treating or even curing muscle spasticity, often a serious side effect of brain or spinal injury. Current treatments are far from ideal; doctors may inject botulinum toxin into the affected muscles every few months to paralyze them, use oral medications such as Valium that affect the whole body instead of just the affected muscle, or, in the most severe cases, cut the nerves or tendons of the spastic muscle--a permanent treatment that leaves the patient with no control over that muscle. Delp hopes that genetically engineering the nerves with halorhodopsin might enable people to use light to reversibly relax muscles affected with spasticity.

"I think that's a great idea for treating spasticity," says Jerry Silver, a neuroscientist at Case Western. There may be some difficulties along the way, though, he says. Working with Case colleagues, Silver has started a company called LucCell to develop clinical applications of optogenetics. In one company project, scientists are trying to use halorhodopsin and other inhibitory opsins in animal models to turn off the muscle that controls the bladder sphincter; their ultimate goal is to restore bladder function to paralyzed people. Though they have seen some physiological changes in how the sphincter muscle behaves, they haven't been able to get it to relax enough. "We're learning it's easier to turn things on than turn things off," he says. Still, the team is persisting, looking for better ways to deliver the gene to nerve cells and for ways to increase production of the protein on the cell's surfaces.

"It all depends on the ability to get the transgene in the right place in the person's genome without causing problems," agrees Llewellyn. "It's the main obstacle."

Sunday, September 26, 2010

Dust Models Paint Alien's View of the Solar System


New supercomputer simulations tracking the interactions of thousands of dust grains show what the solar system might look like to alien astronomers searching for planets. The models also provide a glimpse of how this view might have changed as our planetary system matured.
These images, produced by computer models that track the movement of icy grains, represent infrared snapshots of Kuiper Belt dust as seen by a distant observer. For the first time, the models include the effects of collisions among grains. By ramping up the collision rate, the simulations show how the distant view of the solar system might have changed over its history. (Credit: NASA/Goddard/Marc Kuchner and Christopher Stark)

"The planets may be too dim to detect directly, but aliens studying the solar system could easily determine the presence of Neptune -- its gravity carves a little gap in the dust," said Marc Kuchner, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Md., who led the study. "We're hoping our models will help us spot Neptune-sized worlds around other stars."

The dust originates in the Kuiper Belt, a cold-storage zone beyond Neptune where millions of icy bodies -- including Pluto -- orbit the sun. Scientists believe the region is an older, leaner version of the debris disks they've seen around stars like Vega and Fomalhaut.

"Our new simulations also allow us to see how dust from the Kuiper Belt might have looked when the solar system was much younger," said Christopher Stark, who worked with Kuchner at NASA Goddard and is now at the Carnegie Institution for Science in Washington, D.C. "In effect, we can go back in time and see how the distant view of the solar system may have changed."

Kuiper Belt objects occasionally crash into each other, and this relentless bump-and-grind produces a flurry of icy grains. But tracking how this dust travels through the solar system isn't easy because small particles are subject to a variety of forces in addition to the gravitational pull of the sun and planets.

The grains are affected by the solar wind, which works to bring dust closer to the sun, and sunlight, which can either pull dust inward or push it outward. Exactly what happens depends on the size of the grain.
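The size dependence can be seen in a standard back-of-the-envelope estimate (the classic formula of Burns, Lamy and Soter, not a result from this study): the ratio of radiation pressure to solar gravity, beta, scales inversely with grain radius. Grains with beta above roughly 0.5 are pushed out of bound orbits, while millimeter-sized grains barely feel the light. The sketch assumes an icy density of 1 g/cm^3 and a radiation-pressure efficiency near 1:

```python
def beta(radius_um, density=1.0, q_pr=1.0):
    """Ratio of solar radiation pressure to solar gravity on a grain.

    Standard estimate: beta ~ 0.57 * Q_pr / (density[g/cm^3] * radius[um]).
    """
    return 0.57 * q_pr / (density * radius_um)

# Smoke-sized grains feel a strong outward push; large grains almost none.
for radius in (0.5, 5.0, 600.0):  # grain radii in micrometres
    print(f"{radius:7.1f} um grain: beta = {beta(radius):.4f}")
```

Because beta, gravity, and drag all pull in different directions depending on size, grains of different sizes trace out very different paths from the same source region.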

The particles also run into each other, and these collisions can destroy the fragile grains. A paper on the new models, which are the first to include collisions among grains, appeared in the Sept. 7 edition of The Astronomical Journal.

"People felt that the collision calculation couldn't be done because there are just too many of these tiny grains to keep track of," Kuchner said. "We found a way to do it, and that has opened up a whole new landscape."

With the help of NASA's Discover supercomputer, the researchers kept tabs on 75,000 dust particles as they interacted with the outer planets, sunlight, the solar wind -- and each other.

The size of the model dust ranged from about the width of a needle's eye (0.05 inch or 1.2 millimeters) to more than a thousand times smaller, similar in size to the particles in smoke. During the simulation, the grains were placed into one of three types of orbits found in today's Kuiper Belt at a rate based on current ideas of how quickly dust is produced.

From the resulting data, the researchers created synthetic images representing infrared views of the solar system seen from afar.

Through gravitational effects called resonances, Neptune wrangles nearby particles into preferred orbits. This is what creates the clear zone near the planet as well as dust enhancements that precede and follow it around the sun.
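Where those resonances sit follows directly from Kepler's third law (the figures below are textbook values, not outputs of the study's supercomputer models): a body in a p:q resonance completes q orbits in the time Neptune completes p, so its orbital period is p/q times Neptune's, and its semi-major axis scales as that ratio to the 2/3 power:

```python
A_NEPTUNE = 30.1  # Neptune's semi-major axis in AU

def resonance_au(p, q):
    """Semi-major axis where a body completes q orbits per p Neptune orbits."""
    return A_NEPTUNE * (p / q) ** (2 / 3)

for p, q in ((3, 2), (2, 1)):
    print(f"{p}:{q} resonance at {resonance_au(p, q):.1f} AU")
# The 3:2 resonance (~39.4 AU) is where Pluto and the "Plutinos" orbit.
```

Dust trapped in these preferred orbits piles up ahead of and behind Neptune, producing the enhancements the simulated images show.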

"One thing we've learned is that, even in the present-day solar system, collisions play an important role in the Kuiper Belt's structure," Stark explained. That's because collisions tend to destroy large particles before they can drift too far from where they're made. This results in a relatively dense dust ring that straddles Neptune's orbit.

To get a sense of what younger, heftier versions of the Kuiper Belt might have looked like, the team sped up the dust production rate. In the past, the Kuiper Belt contained many more objects that crashed together more frequently, generating dust at a faster pace. With more dust particles came more frequent grain collisions.

Using separate models that employed progressively higher collision rates, the team produced images roughly corresponding to dust generation that was 10, 100 and 1,000 times more intense than in the original model. The scientists estimate the increased dust reflects conditions when the Kuiper Belt was, respectively, 700 million, 100 million and 15 million years old.

"We were just astounded by what we saw," Kuchner said.

As collisions become increasingly important, the likelihood that large dust grains will survive to drift out of the Kuiper Belt drops sharply. Stepping back through time, today's broad dusty disk collapses into a dense, bright ring that bears more than a passing resemblance to rings seen around other stars, especially Fomalhaut.

"The amazing thing is that we've already seen these narrow rings around other stars," Stark said. "One of our next steps will be to simulate the debris disks around Fomalhaut and other stars to see what the dust distribution tells us about the presence of planets."

The researchers also plan to develop a more complete picture of the solar system's dusty disk by modeling additional sources closer to the sun, including the main asteroid belt and the thousands of so-called Trojan asteroids corralled by Jupiter's gravity.