
Wednesday, October 27, 2010

Stable Way to Store the Sun's Heat: Storing Thermal Energy in a Chemical Could Lead to Advances in Storage and Portability


Researchers at MIT have revealed exactly how a molecule called fulvalene diruthenium, which was discovered in 1996, works to store and release heat on demand. This understanding, reported in a paper published on Oct. 20 in the journal Angewandte Chemie, should make it possible to find similar chemicals based on more abundant, less expensive materials than ruthenium, and this could form the basis of a rechargeable battery to store heat rather than electricity.
A molecule of fulvalene diruthenium, seen in diagram, changes its configuration when it absorbs heat, and later releases heat when it snaps back to its original shape. (Credit: Jeffrey Grossman)

The molecule undergoes a structural transformation when it absorbs sunlight, putting it into a higher-energy state where it can remain stable indefinitely. Then, triggered by a small addition of heat or a catalyst, it snaps back to its original shape, releasing heat in the process. But the team found that the process is a bit more complicated than that.

"It turns out there's an intermediate step that plays a major role," said Jeffrey Grossman, the Carl Richard Soderberg Associate Professor of Power Engineering in the Department of Materials Science and Engineering. In this intermediate step, the molecule forms a semi-stable configuration partway between the two previously known states. "That was unexpected," he said. The two-step process helps explain why the molecule is so stable, why the process is easily reversible and also why substituting other elements for ruthenium has not worked so far.

In effect, explained Grossman, this process makes it possible to produce a "rechargeable heat battery" that can repeatedly store and release heat gathered from sunlight or other sources. In principle, Grossman said, a fuel made from fulvalene diruthenium, when its stored heat is released, "can get as hot as 200 degrees C, plenty hot enough to heat your home, or even to run an engine to produce electricity."
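
The "rechargeable heat battery" idea above — charge the fuel in sunlight, store it indefinitely, release heat on demand — can be sketched as a minimal two-state model. The energy density and class design here are hypothetical placeholders, not measured properties of fulvalene diruthenium:

```python
# Toy model of a thermochemical "heat battery": sunlight converts the fuel
# into a metastable high-energy isomer; a trigger later snaps it back,
# releasing the stored enthalpy as heat. DELTA_H is an illustrative
# placeholder, not a measured value for fulvalene diruthenium.

DELTA_H = 120_000.0  # J/kg stored per full charge (hypothetical figure)

class HeatBattery:
    def __init__(self, mass_kg: float):
        self.mass_kg = mass_kg
        self.charged_fraction = 0.0  # fraction of fuel in the high-energy state

    def charge(self, fraction: float) -> None:
        """Expose the fuel to sunlight; photoisomerize part of it."""
        self.charged_fraction = min(1.0, self.charged_fraction + fraction)

    def discharge(self) -> float:
        """Trigger the reverse reaction; return heat released in joules."""
        heat = self.charged_fraction * self.mass_kg * DELTA_H
        self.charged_fraction = 0.0  # fuel is back in its ground state
        return heat

battery = HeatBattery(mass_kg=2.0)
battery.charge(1.0)          # fully charge in sunlight
print(battery.discharge())   # 240000.0 J released on demand
```

The key property the article emphasizes is captured in `charged_fraction`: unlike a hot-water tank, the stored energy does not leak away over time, and the same fuel can be recharged indefinitely.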

Compared to other approaches to solar energy, he said, "it takes many of the advantages of solar-thermal energy, but stores the heat in the form of a fuel. It's reversible, and it's stable over a long term. You can use it where you want, on demand. You could put the fuel in the sun, charge it up, then use the heat, and place the same fuel back in the sun to recharge."

In addition to Grossman, the work was carried out by Yosuke Kanai of Lawrence Livermore National Laboratory, Varadharajan Srinivasan of MIT's Department of Materials Science and Engineering, and Steven Meier and Peter Vollhardt of the University of California, Berkeley.

The problem of ruthenium's rarity and cost still remains as "a dealbreaker," Grossman said, but now that the fundamental mechanism of how the molecule works is understood, it should be easier to find other materials that exhibit the same behavior. This molecule "is the wrong material, but it shows it can be done," he said.

The next step, he said, is to use a combination of simulation, chemical intuition, and databases of tens of millions of known molecules to look for other candidates that have structural similarities and might exhibit the same behavior. "It's my firm belief that as we understand what makes this material tick, we'll find that there will be other materials" that will work the same way, Grossman said.

Grossman plans to collaborate with Daniel Nocera, the Henry Dreyfus Professor of Energy and Professor of Chemistry, to tackle such questions, applying the principles learned from this analysis in order to design new, inexpensive materials that exhibit this same reversible process. The tight coupling between computational materials design and experimental synthesis and validation, he said, should further accelerate the discovery of promising new candidate solar thermal fuels.

Funding: The National Science Foundation and an MIT Energy Initiative seed grant.

Robotic Gripper Runs on Coffee ... and Balloons


The human hand is an amazing machine that can pick up, move and place objects easily, but for a robot, this "gripping" mechanism is a vexing challenge. Opting for simple elegance, researchers from Cornell University, University of Chicago and iRobot have bypassed traditional designs based around the human hand and fingers, and created a versatile gripper using everyday ground coffee and a latex party balloon.
Graduate student John Amend, left, and associate professor Hod Lipson with the universal robotic gripper. (Credit: Robert Barker/University Photography)

They call it a universal gripper, as it conforms to the object it's grabbing rather than being designed for particular objects, said Hod Lipson, Cornell associate professor of mechanical engineering and computer science. The research is a collaboration between the groups of Lipson, Heinrich Jaeger at the University of Chicago, and Chris Jones at iRobot Corp. It is published Oct. 25 online in Proceedings of the National Academy of Sciences.

"This is one of the closest things we've ever done that could be on the market tomorrow," Lipson said. He noted that the universality of the gripper makes future applications seemingly limitless: the military could use it to dismantle explosive devices or move potentially dangerous objects, and it could serve on robotic arms in factories, on the feet of a wall-climbing robot, or on prosthetic limbs.

Here's how it works: An everyday party balloon filled with ground coffee -- any variety will do -- is attached to a robotic arm. The coffee-filled balloon presses down and deforms around the desired object, and then a vacuum sucks the air out of the balloon, solidifying its grip. When the vacuum is released, the balloon becomes soft again, and the gripper lets go.

Jaeger said coffee is an example of a particulate material, which is characterized by large aggregates of individually solid particles. Particulate materials have a so-called jamming transition, which turns their behavior from fluid-like to solid-like when the particles can no longer slide past each other.

This phenomenon is familiar to coffee drinkers familiar with vacuum-packed coffee, which is hard as a brick until the package is unsealed.

"The ground coffee grains are like lots of small gears," Lipson said. "When they are not pressed together they can roll over each other and flow. When they are pressed together just a little bit, the teeth interlock, and they become solid."
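
The fluid-to-solid switch Lipson describes can be sketched as a threshold on packing fraction. The critical value used below is the random-close-packing figure for ideal spheres; real coffee grounds differ, so treat the numbers as illustrative:

```python
# Minimal sketch of the jamming transition behind the gripper: a granular
# packing flows while its packing fraction is below a critical value
# phi_J, and locks into a solid-like state once compressed past it.
# phi_J ~ 0.64 is the random-close-packing value for ideal spheres;
# ground coffee is irregular, so these numbers are illustrative only.

PHI_JAMMING = 0.64

def gripper_state(grain_volume: float, balloon_volume: float) -> str:
    """Return 'fluid' or 'jammed' for a given balloon volume."""
    phi = grain_volume / balloon_volume
    return "jammed" if phi >= PHI_JAMMING else "fluid"

grains = 0.55  # fixed volume of coffee grounds (arbitrary units)

# Balloon at rest: loose packing, the grip conforms around the object.
print(gripper_state(grains, balloon_volume=1.00))  # fluid

# Vacuum applied: the balloon shrinks, density crosses phi_J, grip locks.
print(gripper_state(grains, balloon_volume=0.80))  # jammed
```

Sucking out the air only shrinks the balloon slightly, but because the grain volume is fixed, even a small volume change is enough to push the packing over the threshold — which is why the grip solidifies so quickly.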

Jaeger explains that the concept of a "jamming transition" provides a unified framework for understanding and predicting behavior in a wide range of disordered, amorphous materials. All of these materials can be driven into a 'glassy' state where they respond like a solid yet structurally resemble a liquid, and this includes many liquids, colloids, emulsions or foams, as well as particulate matter consisting of macroscopic grains.

"What is particularly neat with the gripper is that here we have a case where a new concept in basic science provided a fresh perspective in a very different area -- robotics -- and then opened the door to applications none of us had originally thought about," Jaeger said.

Eric Brown, a postdoctoral researcher, and Nick Rodenberg, a physics undergraduate, worked with Jaeger on characterizing the basic mechanisms that enable the gripping action. Prototypes of the gripper were built and tested by Lipson and Cornell graduate student John Amend as well as at iRobot.

As for the right particulate material, anything that can jam will do in principle, and early prototypes involved rice, couscous and even ground-up tires. They settled on coffee because it's light but also jams well, Amend said. Sand did better on jamming but was prohibitively heavy. What sets the jamming-based gripper apart is its good performance with almost any object, including a raw egg or a coin -- both notoriously difficult for traditional robotic grippers.

The project was supported by the Defense Advanced Research Projects Agency.

Monday, October 25, 2010

Brain Regions Can Switch Functions in Young


A new paper from MIT neuroscientists, in collaboration with Alvaro Pascual-Leone at Beth Israel Deaconess Medical Center, offers evidence that it is easier to rewire the brain early in life. The researchers found that a small part of the brain's visual cortex that processes motion became reorganized only in the brains of subjects who had been born blind, not those who became blind later in life.
Scientists offer evidence that it is easier to rewire the brain early in life: a small part of the brain's visual cortex that processes motion became reorganized only in subjects who had been born blind, not those who became blind later in life. (Credit: iStockphoto/Vasiliy Yakobchuk)

The new findings, described in the Oct. 14 issue of the journal Current Biology, shed light on how the brain wires itself during the first few years of life, and could help scientists understand how to optimize the brain's ability to be rewired later in life. That could become increasingly important as medical advances make it possible for congenitally blind people to have their sight restored, said MIT postdoctoral associate Marina Bedny, lead author of the paper.

In the 1950s and '60s, scientists began to think that certain brain functions develop normally only if an individual is exposed to relevant information, such as language or visual information, within a specific time period early in life. After that, they theorized, the brain loses the ability to change in response to new input.

Animal studies supported this theory. For example, cats blindfolded during the first months of life are unable to see normally after the blindfolds are removed. Similar periods of blindfolding in adulthood have no effect on vision.

However, there have been indications in recent years that there is more wiggle room than previously thought, said Bedny, who works in the laboratory of MIT assistant professor Rebecca Saxe, also an author of the Current Biology paper. Many neuroscientists now support the idea of a period early in life after which it is difficult, but not impossible, to rewire the brain.

Bedny, Saxe and their colleagues wanted to determine if a part of the brain known as the middle temporal complex (MT/MST) can be rewired at any time or only early in life. They chose to study MT/MST in part because it is one of the most studied visual areas. In sighted people, the MT region is specialized for motion vision.

In the few rare cases where patients have lost MT function in both hemispheres of the brain, they were unable to sense motion in a visual scene. For example, if someone poured water into a glass, they would see only a standing, frozen stream of water.

Previous studies have shown that in blind people, MT is taken over by sound processing, but those studies didn't distinguish between people who became blind early and late in life.

In the new MIT study, the researchers studied three groups of subjects -- sighted, congenitally blind, and those who became blind later in life (age nine or older). Using functional magnetic resonance imaging (fMRI), they tested whether MT in these subjects responded to moving sounds -- for example, approaching footsteps.

The results were clear, said Bedny. MT reacted to moving sounds in congenitally blind people, but not in sighted people or people who became blind at a later age.

This suggests that in late-blind individuals, the visual input they received in early years allowed the MT complex to develop its typical visual function, and it couldn't be remade to process sound after the person lost sight. Congenitally blind people never received any visual input, so the region was taken over by auditory input after birth.

"We need to think of early life as a window of opportunity to shape how the brain works," said Bedny. "That's not to say that later experience can't alter things, but it's easier to get organized early on."

Bedny believes that by better understanding how the brain is wired early in life, scientists may be able to learn how to rewire it later in life. There are now very few cases of sight restoration, but if it becomes more common, scientists will need to figure out how to retrain the patient's brain so it can process the new visual input.

"The unresolved question is whether the brain can relearn, and how that learning differs in an adult brain versus a child's brain," said Bedny.

Bedny hopes to study the behavioral consequences of the MT switch in future studies. Those would include whether blind people have an advantage over sighted people in auditory motion processing, and if they have a disadvantage if sight is restored.

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Friday, October 22, 2010

New Mothers Grow Bigger Brains Within Months of Giving Birth: Warmer Feelings Toward Babies Linked to Bigger Mid-Brains


Motherhood may actually cause the brain to grow, not turn it into mush, as some have claimed. Exploratory research published by the American Psychological Association found that the brains of new mothers bulked up in areas linked to motivation and behavior, and that mothers who gushed the most about their babies showed the greatest growth in key parts of the mid-brain.
Mother holding newborn baby. (Credit: iStockphoto/Kati Molin)

Led by neuroscientist Pilyoung Kim, PhD, now with the National Institute of Mental Health, the authors speculated that hormonal changes right after birth, including increases in estrogen, oxytocin and prolactin, may help make mothers' brains susceptible to reshaping in response to the baby. Their findings were published in the October issue of Behavioral Neuroscience.

The motivation to take care of a baby, and the hallmark traits of motherhood, might be less of an instinctive response and more of a result of active brain building, neuroscientists Craig Kinsley, PhD, and Elizabeth Meyer, PhD, wrote in a special commentary in the same journal issue.

The researchers performed baseline and follow-up high-resolution magnetic-resonance imaging on the brains of 19 women who gave birth at Yale-New Haven Hospital, 10 to boys and nine to girls. A comparison of images taken two to four weeks and three to four months after the women gave birth showed that gray matter volume increased by a small but significant amount in various parts of the brain. In adults, gray matter volume doesn't ordinarily change over a few months without significant learning, brain injury or illness, or major environmental change.

The areas affected support maternal motivation (hypothalamus), reward and emotion processing (substantia nigra and amygdala), sensory integration (parietal lobe), and reasoning and judgment (prefrontal cortex).

In particular, the mothers who most enthusiastically rated their babies as special, beautiful, ideal, perfect and so on were significantly more likely than the less awestruck mothers to develop bigger mid-brains in key areas linked to maternal motivation, rewards and the regulation of emotions.

The mothers averaged just over 33 years in age and 18 years of school. All were breastfeeding, nearly half had other children and none had serious postpartum depression.

Although these early findings require replication with a larger and more representative sample, they raise intriguing questions about the interaction between mother and child (or parent and child, since fathers are also the focus of study). The intense sensory-tactile stimulation of a baby may trigger the adult brain to grow in key areas, allowing mothers, in this case, to "orchestrate a new and increased repertoire of complex interactive behaviors with infants," the authors wrote. Expansion in the brain's "motivation" area in particular could lead to more nurturing, which would help babies survive and thrive physically, emotionally and cognitively.

Further study using adoptive mothers could help "tease out effects of postpartum hormones versus mother-infant interactions," said Kim, and help resolve the question of whether the brain changes behavior or behavior changes the brain -- or both.

The authors said that postpartum depression may involve reductions in the same brain areas that grew in mothers who were not depressed. "The abnormal changes may be associated with difficulties in learning the rewarding value of infant stimuli and in regulating emotions during the postpartum period," they said. Further study is expected to clarify what happens in the brains of mothers at risk, which may lead to improved interventions.

In their "Theoretical Comment," Kinsley and Meyer, of the University of Richmond, connected this research on human mothers to similar basic research findings in laboratory animals. All the scientists agreed that further research may show whether increased brain volumes are due to growth in nerve cells themselves, longer and more complex connections (dendrites and dendritic spines) between them, or bushier branching in nerve-cell networks.

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Thursday, October 21, 2010

Human Brain Can 'See' Shapes With Sound: See No Shape, Touch No Shape, Hear a Shape? New Way of 'Seeing' the World


Scientists at The Montreal Neurological Institute and Hospital -- The Neuro, McGill University have discovered that our brains have the ability to determine the shape of an object simply by processing specially-coded sounds, without any visual or tactile input. Not only does this new research tell us about the plasticity of the brain and how it perceives the world around us, it also provides important new possibilities for aiding those who are blind or with impaired vision.
New research shows that the human brain is able to determine the shape of an object simply by processing specially-coded sounds, without any visual or tactile input. (Credit: iStockphoto/Sergey Chushkin)

Shape is an inherent property of objects existing in both vision and touch but not sound. Researchers at The Neuro posed the question 'can shape be represented by sound artificially?' "The fact that a property of sound such as frequency can be used to convey shape information suggests that as long as the spatial relation is coded in a systematic way, shape can be preserved and made accessible -- even if the medium via which space is coded is not spatial in its physical nature," says Jung-Kyong Kim, PhD student in Dr. Robert Zatorre's lab at The Neuro and lead investigator in the study.

In other words, similar to our ocean-dwelling dolphin cousins who use echolocation to explore their surroundings, our brains can be trained to recognize shapes represented by sound and the hope is that those with impaired vision could be trained to use this as a tool. In the study, blindfolded sighted participants were trained to recognize tactile spatial information using sounds mapped from abstract shapes. Following training, the individuals were able to match auditory input to tactually discerned shapes and showed generalization to new auditory-tactile or sound-touch pairings.
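
The article does not describe The Neuro's actual sound-to-shape encoding, but sensory-substitution devices commonly scan an image column by column (mapping horizontal position to time) and map each filled pixel's height to pitch. A hypothetical sketch of that convention, with assumed base frequency and pitch step:

```python
# Hypothetical sketch of mapping a 2-D shape to sound, in the spirit of
# sensory-substitution devices: scan the image column by column (time),
# and play a frequency for each filled pixel, higher rows -> higher pitch.
# BASE_FREQ and STEP are assumed values; the study's actual coding scheme
# is not specified in the article.

BASE_FREQ = 220.0   # Hz assigned to the bottom row
STEP = 1.5          # frequency ratio between adjacent rows (assumed)

def sonify(shape: list[str]) -> list[list[float]]:
    """Return, for each column (time step), the frequencies to play."""
    rows = len(shape)
    cols = len(shape[0])
    timeline = []
    for c in range(cols):
        chord = [BASE_FREQ * STEP ** (rows - 1 - r)
                 for r in range(rows) if shape[r][c] == "#"]
        timeline.append(chord)
    return timeline

# A rising diagonal: the pitch climbs column by column.
diagonal = [
    "..#",
    ".#.",
    "#..",
]
print(sonify(diagonal))  # [[220.0], [330.0], [495.0]]
```

The point the researchers make is that any systematic spatial-to-acoustic mapping like this preserves shape information, even though sound itself carries no spatial extent.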

"We live in a world where we perceive objects using information available from multiple sensory inputs," says Dr. Zatorre, neuroscientist at The Neuro and co-director of the International Laboratory for Brain Music and Sound Research. "On one hand, this organization leads to unique sense-specific percepts, such as colour in vision or pitch in hearing. On the other hand our perceptual system can integrate information present across different senses and generate a unified representation of an object. We can perceive a multisensory object as a single entity because we can detect equivalent attributes or patterns across different senses." Neuroimaging studies have identified brain areas that integrate information coming from different senses -- combining input from across the senses to create a complete and comprehensive picture.

The results from The Neuro study strengthen the hypothesis that our perception of a coherent object or event ultimately occurs at an abstract level beyond the sensory input modes in which it is presented. This research provides important new insight into how our brains process the world as well as new possibilities for those with impaired senses.

The study was published in the journal Experimental Brain Research. The research was supported by grants from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada.

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Sunday, October 17, 2010

Mysterious Pulsar With Hidden Powers Discovered


Dramatic flares and bursts of energy -- activity previously thought reserved for only the most strongly magnetized pulsars -- have been observed emanating from a weakly magnetized, slowly rotating pulsar. The international team of astrophysicists who made the discovery believe that the source of the pulsar's power may be hidden deep beneath its surface.
An artistic impression of a magnetar with a very complicated magnetic field in its interior and a simple small dipolar field outside. (Credit: ESA / Christophe Carreau)

Pulsars are rotating neutron stars, the collapsed remains of massive stars. Although they are on average only about 30 km in diameter, they have hugely powerful surface magnetic fields, billions of times stronger than the Sun's.

The most extreme kind of pulsars have a surface magnetic field 50-1000 times stronger than normal and emit powerful flares of gamma rays and X-rays. Named magnetars (which stands for "magnetic stars") by astronomers, their huge magnetic fields are thought to be the ultimate source of power for the bursts of gamma rays.

Theoretical studies indicate that in magnetars the internal field is actually stronger than the surface field, a property which can deform the crust and propagate outwards. The decay of the magnetic field leads to the production of steady and bursting X-ray emission through the heating of the neutron star crust or the acceleration of particles.

Now, research recently published in Science suggests that the same power source can also work for weaker, non-magnetar pulsars. The observations of the neutron star SGR 0418, made by NASA's Chandra and Swift X-ray observatories, may indicate the presence of a huge internal magnetic field in these seemingly less powerful pulsars, one not matched by their surface magnetic field.

"We have now discovered bursts and flares, i.e. magnetar-like activity, from a new pulsar whose magnetic field is very low," said Dr Silvia Zane, from UCL's (University College London) Mullard Space Science Laboratory, and an author of the research.

Pulsars are highly magnetized, and as they rotate, winds of high-energy particles carry energy away from the star, gradually slowing its rotation. What sets SGR 0418 apart from similar neutron stars is that careful monitoring over a span of 490 days has revealed no evidence that its rotation is decreasing.
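
This is why the lack of measurable spin-down matters: a pulsar's surface field is usually inferred from its period P and spin-down rate Pdot via the standard magnetic-dipole braking estimate, B ≈ 3.2×10¹⁹ √(P·Pdot) gauss. With no detected slow-down, only an upper limit follows. The numbers below are illustrative, close to the published period of about 9.1 s and a Pdot limit of order 6×10⁻¹⁵:

```python
# Standard magnetic-dipole spin-down estimate of a pulsar's surface field:
# B ~ 3.2e19 * sqrt(P * Pdot) gauss. For SGR 0418 no spin-down was
# detected, so Pdot (and hence B) has only an upper limit. The values
# used here are illustrative, close to the published figures.

import math

def dipole_field_gauss(period_s: float, pdot: float) -> float:
    """Characteristic surface dipole field inferred from spin-down."""
    return 3.2e19 * math.sqrt(period_s * pdot)

P = 9.1             # spin period of SGR 0418 in seconds
PDOT_LIMIT = 6e-15  # upper limit on the spin-down rate (dimensionless)

b_limit = dipole_field_gauss(P, PDOT_LIMIT)
print(f"B < {b_limit:.1e} G")  # of order 7e12 G -- far below typical magnetars
```

A field of order 10¹² to 10¹³ gauss is normal-pulsar territory, which is exactly why magnetar-like bursts from this object are so puzzling: the surface field inferred this way seems too weak to power them.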

"It is the very first time this has been observed and the discovery poses the question of where the powering mechanism is in this case. At this point, we are also interested in how many of the other normal, low field neutron stars that populate the galaxy can at some point wake up and manifest themselves as a flaring source," said Dr Zane.

A crucial question is how large an imbalance can be maintained between the surface and interior magnetic fields. SGR 0418 represents an important test case.

"If further observations by Chandra and other satellites push the surface magnetic field limit lower, then theorists may have to dig deeper for an explanation of this enigmatic object," said Dr Nanda Rea, Institut de Ciencies de l'Espai (ICE-CSIC, IEEC) in Barcelona, who led the discovery.

This work was part funded by the Science and Technologies Research Council, UK. NASA's Marshall Space Flight Center manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations.

Saturday, October 16, 2010

Carbon Dioxide Controls Earth's Temperature, New Modeling Study Shows


Water vapor and clouds are the major contributors to Earth's greenhouse effect, but a new atmosphere-ocean climate modeling study shows that the planet's temperature ultimately depends on the atmospheric level of carbon dioxide.
Various atmospheric components differ in their contributions to the greenhouse effect, some through feedbacks and some through forcings. Without carbon dioxide and other non-condensing greenhouse gases, water vapor and clouds would be unable to provide the feedback mechanisms that amplify the greenhouse effect. (Credit: NASA GISS)

The study, conducted by Andrew Lacis and colleagues at NASA's Goddard Institute for Space Studies (GISS) in New York, examined the nature of Earth's greenhouse effect and clarified the role that greenhouse gases and clouds play in absorbing outgoing infrared radiation. Notably, the team identified non-condensing greenhouse gases -- such as carbon dioxide, methane, nitrous oxide, ozone, and chlorofluorocarbons -- as providing the core support for the terrestrial greenhouse effect.

Without non-condensing greenhouse gases, water vapor and clouds would be unable to provide the feedback mechanisms that amplify the greenhouse effect. The study's results are published Oct. 15 in Science.

A companion study led by GISS co-author Gavin Schmidt that has been accepted for publication in the Journal of Geophysical Research shows that carbon dioxide accounts for about 20 percent of the greenhouse effect, water vapor and clouds together account for 75 percent, and minor gases and aerosols make up the remaining five percent. However, it is the 25 percent non-condensing greenhouse gas component, which includes carbon dioxide, that is the key factor in sustaining Earth's greenhouse effect. By this accounting, carbon dioxide is responsible for 80 percent of the radiative forcing that sustains the Earth's greenhouse effect.
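
The accounting in the companion study is easy to verify with simple arithmetic: carbon dioxide is 20 percent of the total greenhouse effect, but 80 percent of the 25 percent non-condensing share that sustains it:

```python
# Check the greenhouse-effect attribution quoted from the Schmidt et al.
# companion study: CO2 is 20% of the total effect, but 80% of the
# non-condensing (CO2 + minor gases) share that sustains the whole.

co2 = 20.0              # percent of total greenhouse effect
minor_gases = 5.0       # minor gases and aerosols
water_and_clouds = 75.0 # condensing components (feedbacks, not forcings)

non_condensing = co2 + minor_gases
print(non_condensing)              # 25.0 -- the sustaining "forcing" share
print(100 * co2 / non_condensing)  # 80.0 -- CO2's fraction of that share
print(co2 + minor_gases + water_and_clouds)  # 100.0 -- the shares close
```

The distinction the model experiment turns on is visible in the variable names: the condensing 75 percent responds to temperature (feedback), while the non-condensing 25 percent sets it (forcing).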

The climate forcing experiment described in Science was simple in design and concept -- all of the non-condensing greenhouse gases and aerosols were zeroed out, and the global climate model was run forward in time to see what would happen to the greenhouse effect.

Without the sustaining support by the non-condensing greenhouse gases, Earth's greenhouse effect collapsed as water vapor quickly precipitated from the atmosphere, plunging the model Earth into an icebound state -- a clear demonstration that water vapor, although contributing 50 percent of the total greenhouse warming, acts as a feedback process, and as such, cannot by itself uphold the Earth's greenhouse effect.

"Our climate modeling simulation should be viewed as an experiment in atmospheric physics, illustrating a cause and effect problem which allowed us to gain a better understanding of the working mechanics of Earth's greenhouse effect, and enabled us to demonstrate the direct relationship that exists between rising atmospheric carbon dioxide and rising global temperature," Lacis said.

The study ties in to the geologic record in which carbon dioxide levels have oscillated between approximately 180 parts per million during ice ages, and about 280 parts per million during warmer interglacial periods. To provide perspective to the nearly 1 C (1.8 F) increase in global temperature over the past century, it is estimated that the global mean temperature difference between the extremes of the ice age and interglacial periods is only about 5 C (9 F).

"When carbon dioxide increases, more water vapor returns to the atmosphere. This is what helped to melt the glaciers that once covered New York City," said co-author David Rind, of NASA's Goddard Institute for Space Studies. "Today we are in uncharted territory as carbon dioxide approaches 390 parts per million in what has been referred to as the 'superinterglacial.'"

"The bottom line is that atmospheric carbon dioxide acts as a thermostat in regulating the temperature of Earth," Lacis said. "The Intergovernmental Panel on Climate Change has fully documented the fact that industrial activity is responsible for the rapidly increasing levels of atmospheric carbon dioxide and other greenhouse gases. It is not surprising then that global warming can be linked directly to the observed increase in atmospheric carbon dioxide and to human industrial activity in general."

Friday, October 15, 2010

Feelings of Love: Effective Pain Relief


Intense, passionate feelings of love can provide amazingly effective pain relief, similar to painkillers or such illicit drugs as cocaine, according to a new Stanford University School of Medicine study.
Love-induced pain relief was associated with the activation of primitive brain structures that control rewarding experiences, such as the nucleus accumbens – shown here in color. (Credit: Courtesy of Sean Mackey and Jarred Younger)

"When people are in this passionate, all-consuming phase of love, there are significant alterations in their mood that are impacting their experience of pain," said Sean Mackey, MD, PhD, chief of the Division of Pain Management, associate professor of anesthesia and senior author of the study, which will be published online Oct. 13 in PLoS ONE. "We're beginning to tease apart some of these reward systems in the brain and how they influence pain. These are very deep, old systems in our brain that involve dopamine -- a primary neurotransmitter that influences mood, reward and motivation."

Scientists aren't yet ready to tell patients with chronic pain to throw out the painkillers and replace them with a passionate love affair; rather, the hope is that a better understanding of the neural reward pathways triggered by love could lead to new methods for producing pain relief.

"It turns out that the areas of the brain activated by intense love are the same areas that drugs use to reduce pain," said Arthur Aron, PhD, a professor of psychology at State University of New York at Stony Brook and one of the study's authors. Aron has been studying love for 30 years. "When thinking about your beloved, there is intense activation in the reward area of the brain -- the same area that lights up when you take cocaine, the same area that lights up when you win a lot of money."

The concept for the study was sparked several years ago at a neuroscience conference when Aron, an expert in the study of love, met up with Mackey, an expert in the research of pain, and they began talking.

"Art was talking about love," Mackey said. "I was talking about pain. He was talking about the brain systems involved with love. I was talking about the brain systems involved with pain. We realized there was this tremendous overlapping system. We started wondering, 'Is it possible that the two modulate each other?'"

After the conference, Mackey returned to Stanford and collaborated with postdoctoral scholar Jarred Younger, PhD, now an assistant professor of anesthesia, who was also intrigued with the idea. Together the three set up a study that would entail examining the brain images of undergraduates who claimed to be "in that first phase of intense love."

"We posted fliers around Stanford University and within hours we had undergrads banging on our door," Mackey said. The fliers asked for couples who were in the first nine months of a romantic relationship.

"It was clearly the easiest study the pain center at Stanford has ever recruited for," Mackey said. "When you're in love you want to tell everybody about it.

"We intentionally focused on this early phase of passionate love," he added. "We specifically were not looking for longer-lasting, more mature phases of the relationship. We wanted subjects who were feeling euphoric, energetic, obsessively thinking about their beloved, craving their presence.

"When passionate love is described like this, it in some ways sounds like an addiction. We thought, 'Maybe this does involve similar brain systems as those involved in addictions which are heavily dopamine-related.' Dopamine is the neurotransmitter in our brain that is intimately involved with feeling good."

Researchers recruited 15 undergraduates (eight women and seven men) for the study. Each was asked to bring in photos of their beloved and photos of an equally attractive acquaintance. The researchers then successively flashed the pictures before the subjects, while heating up a computer-controlled thermal stimulator placed in the palm of their hand to cause mild pain. At the same time, their brains were scanned in a functional magnetic resonance imaging machine.

The undergraduates were also tested for levels of pain relief while being distracted with word-association tasks such as: "Think of sports that don't involve balls." Scientific evidence has shown in the past that distraction causes pain relief, and researchers wanted to make sure that love was not just working as a distraction from pain.

Results showed that love and distraction reduced pain equally, and far more than concentrating on the photo of the attractive acquaintance did; interestingly, though, the two methods of pain reduction used very different brain pathways.

"With the distraction test, the brain pathways leading to pain relief were mostly cognitive," Younger said. "The reduction of pain was associated with higher, cortical parts of the brain. Love-induced analgesia is much more associated with the reward centers. It appears to involve more primitive aspects of the brain, activating deep structures that may block pain at a spinal level -- similar to how opioid analgesics work.

"One of the key sites for love-induced analgesia is the nucleus accumbens, a key reward addiction center for opioids, cocaine and other drugs of abuse. The region tells the brain that you really need to keep doing this," Younger said.

"This tells us that you don't have to just rely on drugs for pain relief," Aron said. "People are feeling intense rewards without the side effects of drugs."

Other Stanford contributors include research assistants Sara Parke and Neil Chatterjee.

Funding for the study was received from the Chris Redlich Pain Research Fund.

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Microsoft's 3-D Strategy: Microsoft's Craig Mundie describes how the company's vision of 3-D gaming could extend to all computer interactions.


Microsoft has joined the wave of companies betting that 3-D is the next big thing for computing. At a recent talk at MIT, chief research and strategy officer Craig Mundie said he sees the technology as an innovation that "will get people out of treating a computer as a tool" and into treating the device as a natural extension of how they interact with the world around them. Microsoft plans to introduce consumers to the change through its gaming products, but Mundie outlined a vision that would eventually have people shopping and searching in 3-D as well.
The future of 3-D: During a talk at MIT last week, Craig Mundie, Microsoft's chief research and strategy officer, showed how a natural 3-D interface could let users manipulate and examine products--like the disassembled motorcycle in the background.
Credit: Microsoft/Technology Review

The combination of better chips, better displays, and better sensors, Mundie said, is finally making it possible to move computing from today's graphical user interfaces to the "natural user interface," by allowing people to interact with 3-D content through the gestures they normally use. Today's interfaces require users to learn about menu bars and double-clicks, but Mundie believes natural user interfaces, which work through gesture and voice, will be faster and easier to learn, and will prove more flexible in the long run.

Mundie also argued that natural user interfaces would reduce the mental effort required for people to operate software. Even people who are good at using controllers, keyboards, and mice might find that a natural interface frees up attention and concentration so that they can focus better on the task at hand, he said. He believes that natural interfaces will make it easier to introduce software to people unfamiliar with computers, as well as make software generally easier to use, and therefore more attractive to consumers.

He also noted that today many programs come with what is essentially "an application-specific prosthetic"--for example, some driving games come with a steering-wheel device. Natural user interfaces may require some peripherals, such as depth-sensing cameras that can detect users' movements, but Mundie sees these as ultimately having broader purpose than most of today's devices.

The first step in this strategy, Mundie said, is Microsoft's release next month of the Kinect sensor for the Xbox 360 gaming console; Kinect incorporates a depth-sensing camera and voice recognition and will cost about $150. It will allow users to play games by gesturing, without the need for a controller or additional equipment. This opens the way to 3-D interaction with games that Mundie hopes will lead to broader use of 3-D displays.

Mundie demonstrated how Kinect would allow a user to interact with 3-D game content through hand gestures, virtually picking up clues to examine them or show them to friends. "We're trying to create a genre of games where you don't have to think about how what you would do naturally would map to the controls," Mundie said.

He also showed a concept video for a real-time 3-D multiplayer game called "The Spy from the 2080s" that included a TV show and a game that players could interact with using multiple devices. For example, they might watch an episode in 3-D on TV, then log in through a gaming console to work with friends to solve clues from the show. Mobile devices might provide additional updates. In the video, the outcome of gameplay even influenced the course of the TV show.

But while the company may plan to start with gaming, Mundie envisions 3-D eventually becoming a key part of many computer interfaces and online content. In one example, he demonstrated shopping using a 3-D natural interface; his hand gestures spun a 3-D image of a product, displaying it from a variety of angles, and opened it up to look at the parts inside.

He acknowledged, however, that there are challenges that need to be solved before 3-D can become ubiquitous. "We need a lot more computer than we currently have," Mundie said, noting that processing high-definition, 3-D video in real time would strain the capabilities of most home computers today. He also admitted that companies still need to refine how users would interact with computers through gesture and voice--for example, distinguishing between when a gamer is issuing commands to the computer and when the same user is conversing with another player.

Microsoft is wise to focus on games initially, says Norbert Hildebrand, business development manager for Insight Media, a marketing research firm that covers emerging display technologies. With 3-D technologies, providing enough content is a huge issue today, he explains. Games are already created in 3-D and then rendered to work on a 2-D screen, which makes it easy to convert them for 3-D displays and other types of interfaces.

For other types of 3-D interaction, such as shopping or advertising, Hildebrand says content creators will need to be persuaded to invest the necessary money and resources. As for Mundie's vision of 3-D shopping, Hildebrand says, "at this point, it's marketing talk only." He points out that today's 3-D displays don't display text well, so marketers would have to come up with a hybrid approach to display both product images and information.

For now, Hildebrand believes that the average person views 3-D technology as something used on special occasions, not as a day-to-day technology. Some people have interpreted sales figures for 3-D-enabled televisions as a sign that consumers are adopting the technology, he adds, but these can be misleading, since most high-end televisions today have 3-D capabilities. It's much harder to determine whether people are actually using 3-D and how often, he says.

3-D is on the way, Hildebrand says, but before Mundie's vision of day-to-day 3-D becomes viable, "a lot of things have to come together." This includes more 3-D content, better bandwidth for delivering it to users, faster processors to render it, and particularly, he believes, the next generation of display technology--one that doesn't require special glasses.

Thursday, October 14, 2010

Apple patents 'anti-sexting' technology


Apple has patented technology that could be used by parents to prevent their kids from sending sexually explicit text messages -- or "sexting."
An Apple patent shows how an anti-sexting
application might block messages on the iPhone.

The technology, which has not been commercialized, would let a phone's administrator block an iPhone from sending or receiving texts with certain words.

Messages containing blocked material either would not be received or would have the objectionable content redacted. Unlike other text blockers, Apple's version would also be able to filter content based on a child's grade level and, Apple claims, would catch abbreviated words that may be missed by other programs.

The patent, awarded Tuesday, does not address the sending or receiving of explicit images.

The U.S. patent, which Apple filed for in January 2008, could also turn these filters into educational tools, according to the patent document.

Kids who are studying Spanish, for example, could be required to send a certain number of messages per month in that language, according to the document. If they did not meet the foreign-language quota, their texting privileges could be automatically revoked until they sent more Spanish-language text messages.

Grammarians may cheer this innovation. The texting interface also could prod kids toward better grammar, requiring them to identify and fix spelling, punctuation and grammar mistakes before sending a message.

So maybe the Apple texting tool will be the end of LOL-speak.

Apple says old methods of monitoring and controlling text communications on phones have largely failed. Allowing kids to communicate only with a pre-set list of phone numbers or e-mail addresses is limiting, the patent document says, and does not address the content of the mobile phone communications, which Apple says is more important.

Other methods of filtering only block certain expletives, Apple says, instead of trying to recognize the overall offensiveness of a message and comparing that to a kid's age and learning level.
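The filtering behavior described in the patent — blocking or redacting messages containing objectionable words, including texting abbreviations that simpler filters miss — can be sketched roughly as follows. This is purely an illustration, not Apple's implementation; the word lists, the `filter_message` function, and the placeholder terms are all invented for the example.

```python
# Hypothetical sketch of the kind of filter the patent describes: expand
# texting shorthand, check against a blocked-word list, then either drop
# the message or redact the offending words. All word lists are invented.
import re

BLOCKED = {"badword", "xxx"}            # placeholder objectionable terms
ABBREVIATIONS = {"bdwd": "badword"}     # shorthand mapped to full words

def filter_message(text, redact=True):
    words = re.findall(r"[a-z']+", text.lower())
    expanded = [ABBREVIATIONS.get(w, w) for w in words]
    if not any(w in BLOCKED for w in expanded):
        return text                      # nothing objectionable found
    if not redact:
        return None                      # block the message entirely
    # Redact offending words (and their abbreviations) in the original text
    pattern = re.compile(
        "|".join(re.escape(w) for w in BLOCKED | set(ABBREVIATIONS)),
        re.IGNORECASE)
    return pattern.sub("???", text)

print(filter_message("meet me later bdwd"))   # abbreviation is caught too
```

A real system, per the patent, would also weight the check by the child's grade level — for instance, by swapping in stricter word lists for younger users.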

The blog TechCrunch asks if the patent will be the end of sexting:

"Yes and no," Alexia Tsotsis writes on that blog, "as those interesting in 'sexting' will probably find some clever workaround to express how much they want to bang, screw, hit it or a myriad of other words that don't immediately set off the censorship sensors."

The Daily Mail in the UK writes that this anti-sexting news "will be music to the ears of Tiger Woods. Or Ashley Cole, or Vernon Kay for that matter," referring to sexting scandals involving those celebrities.

It's unclear exactly how this technology would be incorporated into Apple's iPhone products, but it would appear to work through the phone's built-in text-messaging application. Other texting apps aim to prevent texting while driving and let iPhone users send text messages without incurring charges from AT&T, the mobile carrier that has exclusive rights to the iPhone in the U.S.

Do you think this kind of technology will bring about the end of sexting and SMS slang? Let us know what you think in the comments below.

Large Study Shows: Females Are Equal to Males in Math Skills


The mathematical skills of boys and girls, as well as men and women, are substantially equal, according to a new examination of existing studies in the current online edition of the journal Psychological Bulletin.
Young women studying mathematics. The mathematical skills of boys and girls, as well as men and women, are substantially equal, according to a new examination of existing studies.

One portion of the new study looked systematically at 242 articles that assessed the math skills of 1,286,350 people, says chief author Janet Hyde, a professor of psychology and women's studies at the University of Wisconsin-Madison.

These studies, all published in English between 1990 and 2007, looked at people from grade school to college and beyond. A second portion of the new study examined the results of several large, long-term scientific studies, including the National Assessment of Educational Progress.

In both cases, Hyde says, the difference between the two sexes was so close as to be meaningless.
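The core of a meta-analysis like this can be sketched as combining per-study effect sizes (such as Cohen's d, the standardized male–female difference in mean scores) into a sample-size-weighted average. The sketch below is only illustrative: the `weighted_mean_effect` function and the study data are invented, and real meta-analyses use more refined weighting (e.g., by inverse variance).

```python
# Illustrative sketch of the meta-analytic idea: average per-study effect
# sizes, weighting each study by its sample size. Data are invented.
def weighted_mean_effect(studies):
    """studies: list of (effect_size_d, sample_size) tuples."""
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n

# Four hypothetical studies; positive d favors males, negative favors females
studies = [(0.05, 1200), (-0.02, 800), (0.01, 2500), (-0.04, 600)]
d_bar = weighted_mean_effect(studies)
print(round(d_bar, 3))  # a pooled value near zero means no meaningful gap
```

An aggregate effect size close to zero across 242 studies and 1.3 million people is what underlies Hyde's statement that the difference is "so close as to be meaningless."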

Sara Lindberg, now a postdoctoral fellow in women's health at the UW-Madison School of Medicine and Public Health, was the primary author of the meta-analysis in Psychological Bulletin.

The idea that both genders have equal math abilities is widely accepted among social scientists, Hyde adds, but word has been slow to reach teachers and parents, who can play a negative role by guiding girls away from math-heavy sciences and engineering. "One reason I am still spending time on this is because parents and teachers continue to hold stereotypes that boys are better in math, and that can have a tremendous impact on individual girls who are told to stay away from engineering or the physical sciences because 'Girls can't do the math.'"

Scientists now know that stereotypes affect performance, Hyde adds. "There is lots of evidence that what we call 'stereotype threat' can hold women back in math. If, before a test, you imply that the women should expect to do a little worse than the men, that hurts performance. It's a self-fulfilling prophecy.

"Parents and teachers give little implicit messages about how good they expect kids to be at different subjects," Hyde adds, "and that powerfully affects their self-concept of their ability. When you are deciding about a major in physics, this can become a huge factor."

Hyde hopes the new results will slow the trend toward single-sex schools, which are sometimes justified on the basis of differential math skills. It may also affect standardized tests, which gained clout with the passage of No Child Left Behind, and tend to emphasize lower-level math skills such as multiplication, Hyde says. "High-stakes testing really needs to include higher-level problem-solving, which tends to be more important in jobs that require math skills. But because many teachers teach to the test, they will not teach higher reasoning unless the tests start to include it."

The new findings reinforce a recent study that ranked gender dead last among nine factors, including parental education, family income, and school effectiveness, in influencing the math performance of 10-year-olds.

Hyde acknowledges that women have made significant advances in technical fields. Half of medical school students are female, as are 48 percent of undergraduate math majors. "If women can't do math, how are they getting these majors?" she asks.

Because progress in physics and engineering is much slower, "we have lots of work to do," Hyde says. "This persistent stereotyping disadvantages girls. My message to parents is that they should have confidence in their daughter's math performance. They need to realize that women can do math just as well as men. These changes will encourage women to pursue occupations that require lots of math."

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Wednesday, October 13, 2010

A Touch Screen with Texture: Electrovibration could make for a better sensory experience on a smooth touch surface.


Touch screens are ubiquitous today. But a common complaint is that the smooth surface just doesn't feel as good to use as a physical keypad. While some touch-screen devices use mechanical vibrations to enhance users' experiences of virtual keypads, the approach isn't widely used, mainly because mechanical vibrations are difficult to implement well, and they often make the entire device buzz in your hand, instead of just a particular spot on the screen.
Subtle sensation: In this TeslaTouch demonstration, one finger is stationary while the other experiences the sensation of friction as it moves.
Credit: Disney Research

Now, engineers from three different groups are proposing a type of tactile feedback that they believe will be more popular than mechanical buzzing. Called electrovibration, the technique uses electrical charges to simulate the feeling of localized vibration and friction, providing touch-screen textures that are impossible to simulate using mechanical actuators.

One of these groups, composed of researchers from Disney Research in Pittsburgh, Carnegie Mellon University, and the University of Paris Sud, presented a paper earlier this month at the User Interface Software and Technology (UIST) symposium in New York City. In the paper, they described their approach to electrovibration, called TeslaTouch, in which they modified a commercial touch panel from 3M that uses capacitive sensing -- the approach used in most mobile phones and in the iPad.

The touch panel is made of transparent electrodes on a glass plate coated with an insulating layer. By applying a periodic voltage to the electrodes via connections used for sensing a finger's position on the screen, the researchers were able to effectively induce a charge in a finger dragged along the surface. By changing the amplitude and frequency of the applied voltage, the surface can be made to feel as though it is bumpy, rough, sticky, or vibrating. The major difference from an unmodified panel is the specially designed control circuit that produces the sensations.
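The mapping from desired texture to drive signal can be sketched as choosing an amplitude and frequency for the periodic voltage, then synthesizing that waveform. This is not the TeslaTouch implementation — the texture presets, parameter values, and the `drive_samples` function are invented to illustrate the amplitude/frequency idea described above.

```python
# Illustrative sketch: each texture maps to (amplitude, frequency) of a
# periodic drive voltage; a burst of waveform samples is generated for it.
# Presets and values are invented, not TeslaTouch's actual parameters.
import math

TEXTURES = {            # (normalized amplitude 0..1, frequency in Hz)
    "smooth": (0.0, 0.0),
    "sticky": (0.8, 40.0),
    "rough":  (0.6, 120.0),
    "bumpy":  (1.0, 400.0),
}

def drive_samples(texture, duration_s=0.01, sample_rate=8000):
    """Generate one burst of the periodic drive waveform for a texture."""
    amp, freq = TEXTURES[texture]
    n = int(duration_s * sample_rate)
    return [amp * math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n)]

samples = drive_samples("rough")
print(len(samples))  # 80 samples for a 10 ms burst at 8 kHz
```

In a real device this signal would be scaled to the panel's operating voltage and gated by the finger's sensed position, so that the sensation tracks the touch.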

It's a challenge, says Ivan Poupyrev of Disney Research, to vibrate a screen in a way that makes sense for a user. When an entire device buzzes, it can be more annoying than helpful. There are also technical hurdles and extra costs in making a touch screen mechanically move. The goal, then, was to create a tactile sensation without using any mechanical motion. "It sounds crazy," Poupyrev says, "but that's what we've done with TeslaTouch."

Electrovibration was first proposed for touch screens in the 1950s, but the approach didn't see widespread use because the screens didn't achieve commercial success until recently. Now, with many researchers looking for ways to improve the now-popular screens, other groups have also rediscovered electrovibration. Nokia recently announced a smartphone prototype that uses the approach. And a Finnish company called Senseg has also implemented electrovibration in touch screens, having closed deals with three companies to incorporate the technology into products that could be available in 2011.

All three groups have filed patents for electrovibration; each outlines a different approach. Currently, the Disney demonstration only provides the feeling of texture when a finger is moving, although the group is working on a way to give feedback to a still finger. Senseg's technology, however, already provides localized feedback to a nonmoving finger, says Ville Mäkinen, founder of the company.

Another limitation of the Disney prototype is that it provides only a single sensation at a time. It may be possible, however, to split the screen so that different regions generate different sensations, though the design of such a screen would most likely depend on the specific application.

Nokia is exploring ways to use the tactile feedback as a way to augment communication with another person, says Tapani Ryhänen, Nokia lab director in Cambridge, UK. "There's a possibility to use this as a type of communication," he says, "so if I do something on my screen, then you can feel it on your screen."

While electrovibration can provide a different feel for touch screens, the type of interaction is somewhat limited, says Bic Schediwy, director of research at a touch-screen company called Synaptics. Since some systems only work when a finger is moving, those systems couldn't simulate a button click, one of the biggest complaints with touch screens. Additionally, he says, in demonstrations of electrovibration systems, it appears that people have varying responses to the induced current, possibly because of varying skin thickness.

At the UIST symposium, the Disney researchers showed a range of demos to illustrate TeslaTouch, including a simulated ice-covered window that changes friction as virtual ice is removed and a racetrack that provides different sensation as a finger traverses varying terrain. On hand to test the system was Patrick Baudisch, professor of computer science at the Hasso Plattner Institute in Potsdam, Germany. While the demos were simple, he says, they were "very convincing." TeslaTouch may not provide "the basis for getting rid of keyboards or such," Baudisch says, but "it really enriches the interaction on touch devices."

Disney's Poupyrev isn't sure about what his company plans to do with the technology, but the applications that are most obvious involve honing electrovibration so it could be used to more easily draw and paint on a smooth touch surface. Poupyrev also thinks electrovibration, since it is so easily implemented, could find a home in more unusual applications, such as large surfaces like wallpaper, and conformable materials like cloth.

Tiny Creatures: Big Role in Global Carbon Cycle


Two separate research groups are reporting groundbreaking measurements of the fluid flow that surrounds freely swimming microorganisms. Experiments involving two common types of microbes reveal the ways that one creature's motion can affect its neighbors, which in turn can lead to collective motions of microorganism swarms. In addition, the research is helping to clarify how the motions of microscopic swimmers produce large-scale stirring that distributes nutrients, oxygen and chemicals in lakes and oceans.
Researchers have mapped the flow field around a swimming Volvox carteri microbe by tracking the movements of tiny tracer particles. The spherical Volvox is swimming towards the top of the image. Streamlines appear as red curves, and the color map corresponds to the fluid velocity. (Credit: K. Drescher, R. E. Goldstein, N. Michel, M. Polin, and I. Tuval, University of Cambridge)

A pair of papers describing the experiments will appear in the Oct. 11 issue of the APS journal Physical Review Letters.

In order to observe the flow that microorganisms produce, researchers at the University of Cambridge tracked the motion of tiny tracer beads suspended in the fluid surrounding the tiny swimmers. They used the technique to study the fluid around two very different types of creatures: a small green alga called Chlamydomonas reinhardtii that swims by paddling with a pair of whip-like flagella, and the larger, spherical alga Volvox carteri that propels itself with thousands of flagella covering its surface.

The tracer beads showed that the two types of organisms generate distinctly different flow patterns, both of which are much more complex than previously assumed. In a related study performed at Haverford College in Pennsylvania, researchers used a high-speed camera to track the flow of tracer particles around Chlamydomonas in a thin, two-dimensional film of fluid over the course of a single stroke of its flagella.
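The particle-tracking idea behind these measurements can be sketched minimally: match each tracer bead to its nearest neighbor in the next video frame and convert the displacement into a velocity vector. The `velocities` function and positions below are invented for illustration; real particle-tracking velocimetry must also handle ambiguous matches, lost particles, and noise.

```python
# Minimal sketch of particle-tracking velocimetry: nearest-neighbor matching
# of tracer positions between two frames yields a sparse velocity field.
# Positions are invented; real PTV handles ambiguous matches and noise.
def velocities(frame_a, frame_b, dt):
    """frame_a, frame_b: lists of (x, y) tracer positions; dt: frame gap."""
    field = []
    for (xa, ya) in frame_a:
        # Match each bead to its closest candidate in the next frame
        xb, yb = min(frame_b,
                     key=lambda p: (p[0] - xa) ** 2 + (p[1] - ya) ** 2)
        field.append(((xa, ya), ((xb - xa) / dt, (yb - ya) / dt)))
    return field

frame1 = [(0.0, 0.0), (1.0, 1.0)]
frame2 = [(0.1, 0.0), (1.0, 1.2)]
for pos, vel in velocities(frame1, frame2, dt=0.1):
    print(pos, vel)
```

Interpolating many such vectors over space is what produces flow maps like the Volvox streamline image credited above.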

The studies should help scientists develop new models to predict the fluid motions associated with aquatic microorganisms. The models will provide clearer pictures of the ways microbes mix bodies of water, and potentially offer insights into the role plankton plays in the carbon cycle as it stirs the world's oceans.

David Saintillan (University of Illinois at Urbana-Champaign) gives an overview of the microorganism swimming research in a Viewpoint article in the October 11 edition of APS Physics.