Showing posts with label Perception.

Sunday, June 30, 2013

Imagination Can Change What We Hear and See


A study from Karolinska Institutet in Sweden shows that our imagination may affect how we experience the world more than we perhaps think. What we imagine hearing or seeing "in our head" can change our actual perception. The study, which is published in the scientific journal Current Biology, sheds new light on a classic question in psychology and neuroscience: how our brains combine information from the different senses.

Illusion of colliding objects. (Credit: Image courtesy of Karolinska Institutet)

"We often think about the things we imagine and the things we perceive as being clearly dissociable," says Christopher Berger, doctoral student at the Department of Neuroscience and lead author of the study. "However, what this study shows is that our imagination of a sound or a shape changes how we perceive the world around us in the same way actually hearing that sound or seeing that shape does. Specifically, we found that what we imagine hearing can change what we actually see, and what we imagine seeing can change what we actually hear."

The study consists of a series of experiments that make use of illusions in which sensory information from one sense changes or distorts one's perception of another sense. Ninety-six healthy volunteers participated in total.

In the first experiment, participants experienced the illusion that two passing objects collided rather than passed by one another when they imagined a sound at the moment the two objects met. In a second experiment, the participants' spatial perception of a sound was biased towards a location where they imagined seeing the brief appearance of a white circle. In the third experiment, the participants' perception of what a person was saying was changed by their imagination of a particular sound.

According to the scientists, the results of the current study may be useful in understanding the mechanisms by which the brain fails to distinguish between thought and reality in certain psychiatric disorders such as schizophrenia. Another area of use could be research on brain computer interfaces, where paralyzed individuals' imagination is used to control virtual and artificial devices.

"This is the first set of experiments to definitively establish that the sensory signals generated by one's imagination are strong enough to change one's real-world perception of a different sensory modality" says Professor Henrik Ehrsson, the principle investigator behind the study.

Saturday, June 29, 2013

A Telescope for Your Eye: New Contact Lens Design May Improve Sight of Patients With Macular Degeneration


Contact lenses correct many people's eyesight but do nothing to improve the blurry vision of those suffering from age-related macular degeneration (AMD), the leading cause of blindness among older adults in the western world. That's because simply correcting the eye's focus cannot restore the central vision lost from a retina damaged by AMD. Now a team of researchers from the United States and Switzerland led by University of California San Diego Professor Joseph Ford has created a slim, telescopic contact lens that can switch between normal and magnified vision. With refinements, the system could offer AMD patients a relatively unobtrusive way to enhance their vision.

This image shows five views of the switchable telescopic contact lens. a) From front. b) From back. c) On the mechanical model eye. d) With liquid crystal glasses. Here, the glasses block the unmagnified central portion of the lens. e) With liquid crystal glasses. Here, the central portion is not blocked. (Credit: Optics Express)

The team reports its work in the Optical Society's (OSA) open-access journal Optics Express.

Visual aids that magnify incoming light help AMD patients see by spreading light around to undamaged parts of the retina. These optical magnifiers can assist patients with a variety of important everyday tasks such as reading, identification of faces, and self-care. But these aids have not gained widespread acceptance because they either use bulky spectacle-mounted telescopes that interfere with social interactions, or micro-telescopes that require surgery to implant into the patient's eye.

"For a visual aid to be accepted it needs to be highly convenient and unobtrusive," says co-author Eric Tremblay of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. A contact lens is an "attractive compromise" between the head-mounted telescopes and surgically implanted micro-telescopes, Tremblay says.

The new lens system developed by Ford's team uses tightly fitting mirror surfaces to make a telescope that has been integrated into a contact lens just over a millimeter thick. The lens has a dual modality: the center of the lens provides unmagnified vision, while the ring-shaped telescope located at the periphery of the regular contact lens magnifies the view 2.8 times.

To switch back and forth between the magnified view and normal vision, users would wear a pair of liquid crystal glasses originally made for viewing 3-D televisions. These glasses selectively block either the magnifying portion of the contact lens or its unmagnified center. The liquid crystals in the glasses electrically change the orientation of polarized light, allowing light with one orientation or the other to pass through the glasses to the contact lens.

The team tested their design both with computer modeling and by fabricating the lens. They also created a life-sized model eye that they used to capture images through their contact lens-eyeglasses system. In constructing the lens, researchers relied on a robust material commonly used in early contact lenses called polymethyl methacrylate (PMMA). The team needed that robustness because they had to place tiny grooves in the lens to correct for aberrant color caused by the lens' shape, which is designed to conform to the human eye.

Tests showed that the magnified image quality through the contact lens was clear and provided a much larger field of view than other magnification approaches, but refinements are necessary before this proof-of-concept system could be used by consumers. The researchers report that the grooves used to correct color had the side effect of degrading image quality and contrast. These grooves also make the lens unwearable unless it is surrounded by a smooth, soft "skirt," something commonly used with rigid contact lenses today. Finally, the robust material they used, PMMA, is not ideal for contact lenses because it is gas-impermeable and limits wear to short periods of time.

The team is currently pursuing a similar design that will still be switchable from normal to telescopic vision, but that will use gas-permeable materials and will correct aberrant color without the need for grooves to bend the light. They say they hope their design will offer improved performance and better sight for people with macular degeneration, at least until a more permanent remedy for AMD is available.

"In the future, it will hopefully be possible to go after the core of the problem with effective treatments or retinal prosthetics," Tremblay says. "The ideal is really for magnifiers to become unnecessary. Until we get there, however, contact lenses may provide a way to make AMD a little less debilitating."

Friday, May 24, 2013

IQ Predicted by Ability to Filter Visual Motion


A brief visual task can predict IQ, according to a new study. This surprisingly simple exercise measures the brain's unconscious ability to filter out visual movement. The study shows that individuals whose brains are better at automatically suppressing background motion perform better on standard measures of intelligence. The test is the first purely sensory assessment to be strongly correlated with IQ and may provide a non-verbal and culturally unbiased tool for scientists seeking to understand neural processes associated with general intelligence.
Intelligence is closely linked to a person's ability to filter out background movement, according to a new cognitive science study from the University of Rochester. (Credit: J. Adam Fenster, University of Rochester)

"Because intelligence is such a broad construct, you can't really track it back to one part of the brain," says Duje Tadin, a senior author on the study and an assistant professor of brain and cognitive sciences at the University of Rochester. "But since this task is so simple and so closely linked to IQ, it may give us clues about what makes a brain more efficient, and, consequently, more intelligent."

The unexpected link between IQ and motion filtering was reported online in the Cell Press journal Current Biology on May 23 by a research team led by Tadin and Michael Melnick, a doctoral candidate in brain and cognitive sciences at the University of Rochester.

In the study, individuals watched brief video clips of black and white bars moving across a computer screen. Their sole task was to identify which direction the bars drifted: to the right or to the left. The bars were presented in three sizes, with the smallest version restricted to the central circle where human motion perception is known to be optimal, an area roughly the width of the thumb when the hand is extended. Participants also took a standardized intelligence test.

As expected, people with higher IQ scores were faster at catching the movement of the bars when observing the smallest image. The results support prior research showing that individuals with higher IQs make simple perceptual judgments swifter and have faster reflexes. "Being 'quick witted' and 'quick on the draw' generally go hand in hand," says Melnick.

But the tables turned when participants were presented with the larger images. The higher a person's IQ, the slower they were at detecting movement. "From previous research, we expected that all participants would be worse at detecting the movement of large images, but high IQ individuals were much, much worse," says Melnick. That counter-intuitive inability to perceive large moving images is a perceptual marker for the brain's ability to suppress background motion, the authors explain. In most scenarios, background movement is less important than small moving objects in the foreground. Think about driving in a car, walking down a hall, or even just moving your eyes across the room. The background is constantly in motion.

The key discovery in this study is how closely this natural filtering ability is linked to IQ. The first experiment found a 64 percent correlation between motion suppression and IQ scores, a much stronger relationship than other sensory measures have shown to date. For example, research on the relationship between intelligence and color discrimination, sensitivity to pitch, and reaction times has found only a 20 to 40 percent correlation. "In our first experiment, the effect for motion was so strong," recalls Tadin, "that I really thought this was a fluke."
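To make that statistic concrete, here is a minimal sketch of how such a suppression index and its correlation with IQ might be computed. The numbers, variable names, and the exact form of the index are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical data: duration thresholds (ms) for judging the motion direction of
# small (foveal) and large drifting gratings, plus IQ scores, for 12 observers.
rng = np.random.default_rng(0)
iq = rng.normal(110, 12, size=12)
thresh_small = rng.normal(40, 8, size=12)          # small images: everyone is quick
thresh_large = thresh_small + 60 + 2.0 * (iq - iq.mean()) + rng.normal(0, 10, size=12)

# One plausible suppression index: how much worse (slower) an observer is for
# large stimuli than for small ones, in log units.
suppression_index = np.log10(thresh_large / thresh_small)

# Pearson correlation between suppression and IQ; the article's "64 percent" and
# "71 percent" figures correspond to r of about 0.64 and 0.71.
r, p = stats.pearsonr(suppression_index, iq)
print(f"r = {r:.2f}, p = {p:.3f}")
```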

So the group tried to disprove the findings from the initial 12-participant study, conducted while Tadin was at Vanderbilt University working with co-author Sohee Park, a professor of psychology. They reran the experiment at the University of Rochester on a new cohort of 53 subjects, administering the full IQ test instead of an abbreviated version. The results were even stronger: the correlation rose to 71 percent. The authors also tested for other possible explanations for their findings.

For example, did the surprising link to IQ simply reflect a person's willful decision to focus on small moving images? To rule out the effect of attention, the second round of experiments randomly ordered the different image sizes and tested other types of large images that have been shown not to elicit suppression. High IQ individuals continued to be quicker on all tasks, except the ones that isolated motion suppression. The authors concluded that high IQ is associated with automatic filtering of background motion.

"We know from prior research which parts of the brain are involved in visual suppression of background motion. This new link to intelligence provides a good target for looking at what is different about the neural processing, what's different about the neurochemistry, what's different about the neurotransmitters of people with different IQs," says Tadin.

The relationship between IQ and motion suppression points to the fundamental cognitive processes that underlie intelligence, the authors write. The brain is bombarded by an overwhelming amount of sensory information, and its efficiency is built not only on how quickly our neural networks process these signals, but also on how good they are at suppressing less meaningful information. "Rapid processing is of little utility unless it is restricted to the most relevant information," the authors conclude.

The researchers point out that this vision test could remove some of the limitations associated with standard IQ tests, which have been criticized for cultural bias. "Because the test is simple and non-verbal, it will also help researchers better understand neural processing in individuals with intellectual and developmental disabilities," says co-author Loisa Bennetto, an associate professor of psychology at the University of Rochester.

Bryan Harrison, a doctoral candidate in clinical and social psychology at the University of Rochester, is also an author on the paper. The research was supported by grants from the National Institutes of Health.

Monday, April 15, 2013

What Happens in the Brain to Make Music Rewarding?


A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. The study, conducted at the Montreal Neurological Institute and Hospital -- The Neuro, McGill University and published in the journal Science on April 12, pinpoints the specific brain activity that makes new music rewarding and predicts the decision to purchase music.
A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. (Credit: © Warren Goldswain / Fotolia)

Participants in the study listened to 60 previously unheard music excerpts while undergoing functional magnetic resonance imaging (fMRI) scanning, providing bids of how much they were willing to spend for each item in an auction paradigm. "When people listen to a piece of music they have never heard before, activity in one brain region can reliably and consistently predict whether they will like or buy it; this is the nucleus accumbens, which is involved in forming expectations that may be rewarding," says lead investigator Dr. Valorie Salimpoor, who conducted the research in Dr. Robert Zatorre's lab at The Neuro and is now at Baycrest Health Sciences' Rotman Research Institute. "What makes music so emotionally powerful is the creation of expectations. Activity in the nucleus accumbens is an indicator that expectations were met or surpassed, and in our study we found that the more activity we see in this brain area while people are listening to music, the more money they are willing to spend."
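As a rough illustration of the kind of brain-behaviour relationship described here (not the study's actual analysis pipeline, and with entirely fabricated numbers), the link between activity in a single region and willingness to pay can be sketched as follows:

```python
import numpy as np
from scipy import stats

# Fabricated example: mean nucleus-accumbens (NAcc) response per excerpt for one
# listener, and the bid placed for that excerpt in an iTunes-like auction.
nacc_response = np.array([0.1, 0.8, 0.3, 1.2, 0.05, 0.9, 1.5, 0.2, 0.6, 1.1])
bids          = np.array([0.0, 1.29, 0.0, 2.0, 0.0, 0.99, 2.0, 0.0, 0.99, 1.29])

# Rank correlation: does stronger NAcc activity go with higher bids?
rho, p = stats.spearmanr(nacc_response, bids)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# A crude purchase predictor: classify "bought" (bid > 0) from NAcc activity
# by thresholding at the median response.
bought = bids > 0
predicted = nacc_response > np.median(nacc_response)
print(f"Threshold-classifier accuracy: {np.mean(predicted == bought):.0%}")
```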

The second important finding is that the nucleus accumbens doesn't work alone, but interacts with the auditory cortex, an area of the brain that stores information about the sounds and music we have been exposed to. The more rewarding a given piece was, the greater the cross-talk between these regions. Similar interactions were also seen between the nucleus accumbens and other brain areas involved in high-level sequencing and complex pattern recognition, as well as areas involved in assigning emotional and reward value to stimuli.

In other words, the brain assigns value to music through the interaction of ancient dopaminergic reward circuitry, involved in reinforcing behaviours that are absolutely necessary for our survival such as eating and sex, with some of the most evolved regions of the brain, involved in advanced cognitive processes that are unique to humans.

"This is interesting because music consists of a series of sounds that when considered alone have no inherent value, but when arranged together through patterns over time can act as a reward, says Dr. Robert Zatorre, researcher at The Neuro and co-director of the International Laboratory for Brain, Music and Sound Research. "The integrated activity of brain circuits involved in pattern recognition, prediction, and emotion allow us to experience music as an aesthetic or intellectual reward."

"The brain activity in each participant was the same when they were listening to music that they ended up purchasing, although the pieces they chose to buy were all different," adds Dr. Salimpoor. "These results help us to see why people like different music -- each person has their own uniquely shaped auditory cortex, which is formed based on all the sounds and music heard throughout our lives. Also, the sound templates we store are likely to have previous emotional associations."

An innovative aspect of this study is how closely it mimics real-life music-listening experiences. Researchers used an interface and prices similar to those of iTunes. To replicate a real-life scenario as much as possible and to assess reward value objectively, individuals could purchase music with their own money, as an indication that they wanted to hear it again. Since musical preferences are influenced by past associations, only novel music excerpts were selected (to minimize explicit predictions), using music recommendation software (such as Pandora and Last.fm) to reflect individual preferences.

The interactions between the nucleus accumbens and the auditory cortex suggest that we create expectations of how musical sounds should unfold based on what is learned and stored in our auditory cortex, and that our emotions result from the violation or fulfillment of these expectations. We are constantly making reward-related predictions to survive, and this study provides neurobiological evidence that we also make predictions when listening to an abstract stimulus, music, even if we have never heard the music before. Through pattern recognition and prediction, an otherwise simple set of stimuli, arranged together over time, can become powerful enough to make us happy or bring us to tears, and to communicate and evoke some of the most intense, complex emotions and thoughts.

Listen to the music excerpts used in the study: http://www.zlab.mcgill.ca/science2013/

Wednesday, July 18, 2012

Visual Searches: Human Brain Beats Computers


You're headed out the door and you realize you don't have your car keys. After a few minutes of rifling through pockets, checking the seat cushions and scanning the coffee table, you find the familiar key ring and off you go. Easy enough, right? What you might not know is that the task that took you a couple seconds to complete is a task that computers -- despite decades of advancement and intricate calculations -- still can't perform as efficiently as humans: the visual search.

Part of the research team in front of the Magnetic Resonance Imaging (MRI) device at the UCSB Brain Imaging Center. From left to right: Researcher Tim Preston; Associate Professor of Psychological & Brain Sciences Barry Giesbrecht; and Professor of Psychological & Brain Sciences Miguel P. Eckstein. Not pictured: Koel Das, now a faculty member at the Indian Institute of Science in Bangalore, Karnataka, India; and lead author Fei Guo, now in the software industry. (Credit: Image courtesy of University of California - Santa Barbara)


"Our daily lives are composed of little searches that are constantly changing, depending on what we need to do," said Miguel Eckstein, UC Santa Barbara professor of psychological and brain sciences and co-author of the recently released paper "Feature-Independent Neural Coding of Target Detection during Search of Natural Scenes," published in the Journal of Neuroscience. "So the idea is, where does that take place in the brain?"

A large part of the human brain is dedicated to vision, with different parts involved in processing the many visual properties of the world. Some parts are stimulated by color, others by motion, yet others by shape.

However, those parts of the brain tell only a part of the story. What Eckstein and co-authors wanted to determine was how we decide whether the target object we are looking for is actually in the scene, how difficult the search is, and how we know we've found what we wanted.

They found their answers in the dorsal frontoparietal network, a region of the brain that roughly corresponds to the top of one's head, and is also associated with properties such as attention and eye movements. In the parts of the human brain used earlier in the processing stream, regions stimulated by specific features like color, motion, and direction are a major part of the search. However, in the dorsal frontoparietal network, activity is not confined to any specific features of the object.

"It's flexible," said Eckstein. Using 18 observers, an MRI machine, and hundreds of photos of scenes flashed before the observers with instructions to look for certain items, the scientists monitored their subjects' brain activity. By watching the intraparietal sulcus (IPS), located within the dorsal frontoparietal network, the researchers were able to note not only whether their subjects found the objects, but also how confident they were in their finds.

The IPS region would be stimulated even if the object was not there, said Eckstein, but the pattern of activity would not be the same as it would have been had the object actually been in the scene. The pattern of activity was consistent, even though the 368 different objects the subjects searched for were defined by very different visual features. This, Eckstein said, indicates that the IPS did not rely on the presence of any fixed feature to determine the presence or absence of various objects. Other visual regions did not show this consistent pattern of activity across objects.
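Findings like this are typically established with multivariate pattern analysis: a classifier is trained to decode target-present versus target-absent from voxel patterns and then tested on searches for entirely different objects. The sketch below is a simulated, hypothetical illustration of that logic, not the authors' code or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels = 50

# Simulated IPS voxel patterns: a shared "target detected" signal rides on top
# of trial- and object-specific activity.
target_signal = rng.normal(0, 1, n_voxels)

def make_trials(n_trials, target_present):
    base = rng.normal(0, 1, (n_trials, n_voxels))   # object-specific activity
    return base + (0.8 * target_signal if target_present else 0.0)

# Train on searches for one set of (simulated) objects...
X_train = np.vstack([make_trials(60, True), make_trials(60, False)])
y_train = np.r_[np.ones(60), np.zeros(60)]

# ...and test on searches for entirely different (simulated) objects.
X_test = np.vstack([make_trials(30, True), make_trials(30, False)])
y_test = np.r_[np.ones(30), np.zeros(30)]

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Cross-object decoding accuracy:", clf.score(X_test, y_test))
```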

"As you go further up in processing, the neurons are less interested in a specific feature, but they're more interested in whatever is behaviorally relevant to you at the moment," said Eckstein. Thus, a search for an apple, for instance, would make red, green, and rounded shapes relevant. If the search was for your car keys, the interparietal sulcus would now be interested in gold, silver, and key-type shapes and not interested in green, red, and rounded shapes.

"For visual search to be efficient, we want those visual features related to what we are looking for to elicit strong responses in our brain and not others that are not related to our search, and are distracting," Eckstein added. "Our results suggest that this is what is achieved in the intraparietal sulcus, and allows for efficient visual search."

For Eckstein and colleagues, these findings are just the tip of the iceberg. Future research will dig more deeply into the seemingly simple yet essential ability of humans to do a visual search and how they can use the layout of a scene to guide their search.

"What we're trying to really understand is what other mechanisms or strategies the brain has to make searches efficient and easy," said Eckstein. "What part of the brain is doing that?"

Research on this study was also conducted by Tim Preston, Koel Das, Barry Giesbrecht, and first author Fei Guo, all from UC Santa Barbara.

Friday, December 10, 2010

Brains Wired So We Can Better Hear Ourselves


Like the mute button on the TV remote control, our brains filter out unwanted noise so we can focus on what we're listening to. But when it comes to following our own speech, a new brain study from the University of California, Berkeley, shows that instead of one homogenous mute button, we have a network of volume settings that can selectively silence and amplify the sounds we make and hear.
Activity in the auditory cortex when we speak and listen is amplified in some regions of the brain and muted in others. In this image, the black line represents muting activity when we speak. (Credit: Courtesy of Adeen Flinker)

Neuroscientists from UC Berkeley, UCSF and Johns Hopkins University tracked the electrical signals emitted from the brains of hospitalized epilepsy patients. They discovered that neurons in one part of the patients' hearing mechanism were dimmed when they talked, while neurons in other parts lit up.

Their findings, published Dec. 8, 2010 in the Journal of Neuroscience, offer new clues about how we hear ourselves above the noise of our surroundings and monitor what we say. Previous studies have shown a selective auditory system in monkeys that can amplify their self-produced mating, food and danger alert calls, but until this latest study, it was not clear how the human auditory system is wired.

"We used to think that the human auditory system is mostly suppressed during speech, but we found closely knit patches of cortex with very different sensitivities to our own speech that paint a more complicated picture," said Adeen Flinker, a doctoral student in neuroscience at UC Berkeley and lead author of the study.

"We found evidence of millions of neurons firing together every time you hear a sound right next to millions of neurons ignoring external sounds but firing together every time you speak," Flinker added. "Such a mosaic of responses could play an important role in how we are able to distinguish our own speech from that of others."

While the study doesn't specifically address why humans need to track their own speech so closely, Flinker theorizes that, among other things, tracking our own speech is important for language development, monitoring what we say and adjusting to various noise environments.

"Whether it's learning a new language or talking to friends in a noisy bar, we need to hear what we say and change our speech dynamically according to our needs and environment," Flinker said.

He noted that people with schizophrenia have trouble distinguishing their own internal voices from the voices of others, suggesting that they may lack this selective auditory mechanism. The findings may be helpful in better understanding some aspects of auditory hallucinations, he said.

Moreover, with the finding of sub-regions of brain cells each tasked with a different volume control job -- and located just a few millimeters apart -- the results pave the way for a more detailed mapping of the auditory cortex to guide brain surgery.

In addition to Flinker, the study's authors are Robert Knight, director of the Helen Wills Neuroscience Institute at UC Berkeley; neurosurgeons Edward Chang, Nicholas Barbaro and neurologist Heidi Kirsch of the University of California, San Francisco; and Nathan Crone, a neurologist at Johns Hopkins University in Maryland.

The auditory cortex is a region of the brain's temporal lobe that deals with sound. In hearing, the human ear converts vibrations into electrical signals that are sent to relay stations in the brain's auditory cortex where they are refined and processed. Language is mostly processed in the left hemisphere of the brain.

In the study, researchers examined the electrical activity in the healthy brain tissue of patients who were being treated for seizures. The patients had volunteered to help out in the experiment during lulls in their treatment, as electrodes had already been implanted over their auditory cortices to track the focal points of their seizures.

Researchers instructed the patients to perform such tasks as repeating words and vowels they heard, and recorded the activity. In comparing the activity of electrical signals discharged during speaking and hearing, they found that some regions of the auditory cortex showed less activity during speech, while others showed the same or higher levels.
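The electrode-by-electrode comparison described here can be pictured with a small, hypothetical sketch (simulated numbers only, not the study's recordings or analysis): for each electrode, response amplitude while listening is compared against response amplitude while speaking.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials = 40

# Simulated response power (arbitrary units) at three nearby electrodes,
# measured while the patient listens to a word and while the patient speaks it.
electrodes = {
    "suppressed_site": (rng.normal(1.0, 0.2, n_trials), rng.normal(0.6, 0.2, n_trials)),
    "unchanged_site":  (rng.normal(1.0, 0.2, n_trials), rng.normal(1.0, 0.2, n_trials)),
    "enhanced_site":   (rng.normal(1.0, 0.2, n_trials), rng.normal(1.3, 0.2, n_trials)),
}

for name, (listen, speak) in electrodes.items():
    t, p = stats.ttest_rel(listen, speak)            # paired comparison across trials
    change = 100 * (speak.mean() - listen.mean()) / listen.mean()
    print(f"{name}: {change:+.0f}% during speech (p = {p:.3g})")
```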

"This shows that our brain has a complex sensitivity to our own speech that helps us distinguish between our vocalizations and those of others, and makes sure that what we say is actually what we meant to say," Flinker said.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of Science Updates or its staff.

Thursday, October 21, 2010

Human Brain Can 'See' Shapes With Sound: See No Shape, Touch No Shape, Hear a Shape? New Way of 'Seeing' the World


Scientists at The Montreal Neurological Institute and Hospital -- The Neuro, McGill University have discovered that our brains have the ability to determine the shape of an object simply by processing specially-coded sounds, without any visual or tactile input. Not only does this new research tell us about the plasticity of the brain and how it perceives the world around us, it also provides important new possibilities for aiding those who are blind or with impaired vision.
New research shows that the human brain is able to determine the shape of an object simply by processing specially-coded sounds, without any visual or tactile input. (Credit: iStockphoto/Sergey Chushkin)

Shape is an inherent property of objects existing in both vision and touch but not sound. Researchers at The Neuro posed the question 'can shape be represented by sound artificially?' "The fact that a property of sound such as frequency can be used to convey shape information suggests that as long as the spatial relation is coded in a systematic way, shape can be preserved and made accessible -- even if the medium via which space is coded is not spatial in its physical nature," says Jung-Kyong Kim, PhD student in Dr. Robert Zatorre's lab at The Neuro and lead investigator in the study.

In other words, similar to our ocean-dwelling dolphin cousins who use echolocation to explore their surroundings, our brains can be trained to recognize shapes represented by sound and the hope is that those with impaired vision could be trained to use this as a tool. In the study, blindfolded sighted participants were trained to recognize tactile spatial information using sounds mapped from abstract shapes. Following training, the individuals were able to match auditory input to tactually discerned shapes and showed generalization to new auditory-tactile or sound-touch pairings.
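The article does not spell out the exact shape-to-sound code used in training, but a generic sensory-substitution mapping of this kind, in which vertical position becomes pitch and horizontal position becomes time, can be sketched as follows. Everything here (function name, frequency range, scan scheme) is an illustrative assumption.

```python
import numpy as np

# A minimal sensory-substitution sketch: scan a binary shape image column by
# column, left to right, and turn each lit pixel into a sine tone whose
# frequency encodes its height in the image.
def shape_to_sound(image, duration=1.0, sample_rate=22050, f_lo=300.0, f_hi=3000.0):
    n_rows, n_cols = image.shape
    col_len = int(duration * sample_rate / n_cols)
    freqs = np.geomspace(f_hi, f_lo, n_rows)          # top row -> high pitch
    segments = []
    for col in range(n_cols):
        t = np.arange(col_len) / sample_rate
        segment = np.zeros(col_len)
        for row in np.flatnonzero(image[:, col]):
            segment += np.sin(2 * np.pi * freqs[row] * t)
        segments.append(segment)
    wave = np.concatenate(segments)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave          # normalize to [-1, 1]

# Example: a diagonal line becomes a descending frequency sweep.
shape = np.eye(16, dtype=int)
audio = shape_to_sound(shape)
```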

"We live in a world where we perceive objects using information available from multiple sensory inputs," says Dr. Zatorre, neuroscientist at The Neuro and co-director of the International Laboratory for Brain Music and Sound Research. "On one hand, this organization leads to unique sense-specific percepts, such as colour in vision or pitch in hearing. On the other hand our perceptual system can integrate information present across different senses and generate a unified representation of an object. We can perceive a multisensory object as a single entity because we can detect equivalent attributes or patterns across different senses." Neuroimaging studies have identified brain areas that integrate information coming from different senses -- combining input from across the senses to create a complete and comprehensive picture.

The results from The Neuro study strengthen the hypothesis that our perception of a coherent object or event ultimately occurs at an abstract level beyond the sensory input modes in which it is presented. This research provides important new insight into how our brains process the world as well as new possibilities for those with impaired senses.

The study was published in the journal Experimental Brain Research. The research was supported by grants from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada.

Editor's Note: This article is not intended to provide medical advice, diagnosis or treatment.

Saturday, April 17, 2010

Electronic 'Nose' Can Predict Pleasantness of Novel Odors


Weizmann Institute scientists have 'trained' an electronic system to be able to predict the pleasantness of novel odors, just like a human would perceive them -- turning the popular notion that smell is completely personal and culture-specific on its head. In research published in PLoS Computational Biology, the scientists argue that the perception of an odor's pleasantness is innately hard-wired to its molecular structure, and it is only within specific contexts that personal or cultural differences are made apparent.
Scientists have 'trained' an electronic system to be able to predict the pleasantness of novel odors, just like a human would perceive them. (Credit: iStockphoto/Sharon Dominick)

These findings have important implications for automated environmental toxicity and malodor monitoring, fast odor screening in the perfume industry, and provide a critical building block for the Holy Grail of sense technology -- transmitting scent digitally.

Over the last decade, electronic devices, commonly known as electronic noses or 'eNoses,' have been developed to be able to detect and recognize odors. The main component of an eNose is an array of chemical sensors. As an odor passes through the eNose, its molecular features stimulate the sensors in such a way as to produce a unique electrical pattern -- an 'odor fingerprint' -- that characterizes that specific odor. Like a sniffer dog, an eNose first needs to be trained with odor samples so as to build a database of reference. Then the instrument can recognize new samples of those odors by comparing the odor's fingerprint to those contained in its database.

But unlike humans, if eNoses are presented with a novel odor whose fingerprint has not already been recorded in their database, they are unable to classify or recognize it.

So a team of Weizmann scientists, led by Dr. Rafi Haddad, then a graduate student of Prof. Noam Sobel of the Neurobiology Department and co-supervisor Prof. David Harel of the Computer Science and Applied Mathematics Department, together with their colleague Abebe Medhanie of the Neurobiology Department, and Dr. Yehudah Roth of the Edith Wolfson Medical Center, Holon, decided to approach this issue from a different perspective. Rather than train an eNose to recognize a particular odor, they trained it to estimate the odor along a particular perceptual axis. The axis they chose was odorant pleasantness. In other words, they trained their eNose to predict whether an odor would be perceived as pleasant or unpleasant, or anywhere in between.

To achieve this, the scientists first asked a group of native Israelis to rate the pleasantness of a selection of odors according to a 30-point scale ranging from 'very pleasant' to 'very unpleasant.' From this dataset, they developed an 'odor pleasantness' algorithm, which they then programmed into the eNose. The scientists then had the eNose predict the pleasantness of a completely new set of odors not contained in its database, and compared its predictions against the ratings provided by a completely different group of native Israelis. The scientists found that the eNose was able to generalize and rate the pleasantness of novel odors it had never smelled before, and these ratings were about 80% similar to those of naive human raters who had not participated in the eNose training phase. Moreover, if the odors were simply categorized as either 'pleasant' or 'unpleasant,' as opposed to being rated on a scale, it achieved an accuracy of 99%.
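Conceptually, the training step amounts to fitting a model that maps each odor's sensor fingerprint to its average human pleasantness rating, and then applying that model to fingerprints of odors it has never encountered. The sketch below illustrates that idea with fabricated data and an ordinary ridge regression; it is not the published algorithm.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_odors, n_sensors = 80, 16

# Fabricated illustration: each odor produces a 16-sensor "fingerprint", and
# pleasantness is assumed to follow a hidden linear relation plus rater noise.
fingerprints = rng.normal(0, 1, (n_odors, n_sensors))
true_weights = rng.normal(0, 1, n_sensors)
pleasantness = fingerprints @ true_weights + rng.normal(0, 1, n_odors)

# Train on rated odors, then predict the pleasantness of completely novel odors.
train, test = slice(0, 60), slice(60, 80)
model = Ridge(alpha=1.0).fit(fingerprints[train], pleasantness[train])
predicted = model.predict(fingerprints[test])

r, _ = stats.pearsonr(predicted, pleasantness[test])
print(f"Agreement with ratings of novel odors: r = {r:.2f}")

# Coarser pleasant/unpleasant categorisation, as in the reported 99% result.
accuracy = np.mean((predicted > 0) == (pleasantness[test] > 0))
print(f"Pleasant vs. unpleasant accuracy: {accuracy:.0%}")
```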

But these findings still don't determine whether olfactory perception is culture-specific or not. With this in mind, the scientists decided to test the eNose's predictions against a group of recent immigrants to Israel from Ethiopia. The results showed that the eNose's ability to predict the pleasantness of novel odors against the native Ethiopians' ratings was just as good, even though it was 'tuned' to the pleasantness of odors as perceived by native Israelis. In other words, even though different odors have different meanings across cultures, the eNose performed equally well across these populations. This suggests a fundamental cross-cultural similarity in odorant pleasantness.

Sobel comments: "Being able to predict whether a person who we never tested before would like a specific odorant, no matter their cultural background, provides evidence that odor pleasantness is a fundamental biological property, and that certain aspects of molecular structure are what determine whether an odor is pleasant or not." So how are cultural differences accounted for? "We believe that culture influences the perception of olfactory pleasantness mostly in particular contexts. To stress this point, many may wonder how the French can like the smell of their cheese, when most find the smell quite repulsive. We believe that it is not that the French think the smell is pleasant per se, they merely think it is a sign of good cheese. However, if the smell was presented out of context in a jar, then the French would probably rate the odor just as unpleasant as anyone else would."

The scientists' findings that odor perception is hard-wired to molecular structure and their design of an eNose that is able to classify new odors could provide new methods for odor screening and environmental monitoring, and may, in the future, allow for the digital transmission of smell to scent-enable movies, games and music to provide a more immersive and captivating experience.

Prof. Noam Sobel's research is supported by the Nella and Leon Benoziyo Center for Neurosciences; the J&R Foundation; and Regina Wachter, New York. This research was funded by an FP7 grant from the European Research Council awarded to Noam Sobel.

Thursday, March 11, 2010

'Fat' Taste Could Hold The Key To Cut Obesity


A newly discovered ability for people to taste fat could hold the key to reducing obesity, Deakin University health researchers believe.
Researchers have discovered that humans can detect a sixth taste: fat. (Credit: iStockphoto/Andrey Stepanov)

Deakin researchers Dr Russell Keast and PhD student Jessica Stewart, working with colleagues at the University of Adelaide, CSIRO, and Massey University (New Zealand), have found that humans can detect a sixth taste -- fat. They also found that people with a high sensitivity to the taste of fat tended to eat less fatty foods and were less likely to be overweight. The results of their research are published in the latest issue of the British Journal of Nutrition.

"Our findings build on previous research in the United States that used animal models to discover fat taste," Dr Keast said.

"We know that the human tongue can detect five tastes -- sweet, salt, sour, bitter and umami (a taste for identifying protein rich foods). Through our study we can conclude that humans have a sixth taste -- fat."

The research team developed a screening procedure to test the ability of people to taste a range of fatty acids commonly found in foods.

They found that people have a taste threshold for fat and that these thresholds vary from person to person; some people have a high sensitivity to the taste while others do not.
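The article does not describe the screening procedure in detail, so the following is only a guess at its general shape: a simple ascending-series, forced-choice test in which progressively higher fatty-acid concentrations are presented until the taster reliably picks out the fat-spiked sample. All concentrations and names are hypothetical.

```python
import numpy as np

# Hypothetical ascending series of fatty-acid concentrations (mM), weakest first.
concentrations = [0.02, 0.06, 1.0, 1.4, 2.8, 3.8, 5.0, 6.4, 9.8, 12.0]

def estimate_threshold(detects, n_repeats=3):
    """detects(conc) -> True if the taster picks the fat-spiked sample."""
    for conc in concentrations:
        # Threshold = first concentration detected on every repeat at that level.
        if all(detects(conc) for _ in range(n_repeats)):
            return conc
    return None  # no threshold found within the tested range

# Simulated taster whose true threshold is 2 mM, with a small lapse rate.
rng = np.random.default_rng(4)
taster = lambda conc: conc >= 2.0 and rng.random() > 0.05
print("Estimated threshold (mM):", estimate_threshold(taster))
```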

"Interestingly, we also found that those with a high sensitivity to the taste of fat consumed less fatty foods and had lower BMIs than those with lower sensitivity," Dr Keast said.

"With fats being easily accessible and commonly consumed in diets today, this suggests that our taste system may become desensitised to the taste of fat over time, leaving some people more susceptible to overeating fatty foods.

"We are now interested in understanding why some people are sensitive and others are not, which we believe will lead to ways of helping people lower their fat intakes and aide development of new low fat foods and diets."