
Sunday, June 30, 2013

Imagination Can Change What We Hear and See


A study from Karolinska Institutet in Sweden shows that our imagination may affect how we experience the world more than we perhaps think. What we imagine hearing or seeing "in our head" can change our actual perception. The study, which is published in the scientific journal Current Biology, sheds new light on a classic question in psychology and neuroscience -- about how our brains combine information from the different senses.

Illusion of colliding objects. (Credit: Image courtesy of Karolinska Institutet)

"We often think about the things we imagine and the things we perceive as being clearly dissociable," says Christopher Berger, doctoral student at the Department of Neuroscience and lead author of the study. "However, what this study shows is that our imagination of a sound or a shape changes how we perceive the world around us in the same way actually hearing that sound or seeing that shape does. Specifically, we found that what we imagine hearing can change what we actually see, and what we imagine seeing can change what we actually hear."

The study consists of a series of experiments that make use of illusions in which sensory information from one sense changes or distorts one's perception of another sense. Ninety-six healthy volunteers participated in total.

In the first experiment, participants experienced the illusion that two passing objects collided rather than passed by one another when they imagined a sound at the moment the two objects met. In the second experiment, the participants' spatial perception of a sound was biased towards a location where they imagined seeing the brief appearance of a white circle. In the third experiment, the participants' perception of what a person was saying was changed by their imagination of a particular sound.

According to the scientists, the results of the current study may be useful in understanding the mechanisms by which the brain fails to distinguish between thought and reality in certain psychiatric disorders such as schizophrenia. Another area of use could be research on brain computer interfaces, where paralyzed individuals' imagination is used to control virtual and artificial devices.

"This is the first set of experiments to definitively establish that the sensory signals generated by one's imagination are strong enough to change one's real-world perception of a different sensory modality" says Professor Henrik Ehrsson, the principle investigator behind the study.

Saturday, June 29, 2013

A Telescope for Your Eye: New Contact Lens Design May Improve Sight of Patients With Macular Degeneration


Contact lenses correct many people's eyesight but do nothing to improve the blurry vision of those suffering from age-related macular degeneration (AMD), the leading cause of blindness among older adults in the western world. That's because simply correcting the eye's focus cannot restore the central vision lost from a retina damaged by AMD. Now a team of researchers from the United States and Switzerland led by University of California San Diego Professor Joseph Ford has created a slim, telescopic contact lens that can switch between normal and magnified vision. With refinements, the system could offer AMD patients a relatively unobtrusive way to enhance their vision.

This image shows five views of the switchable telescopic contact lens. a) From front. b) From back. c) On the mechanical model eye. d) With liquid crystal glasses. Here, the glasses block the unmagnified central portion of the lens. e) With liquid crystal glasses. Here, the central portion is not blocked. (Credit: Optics Express)

The team reports its work in the Optical Society's (OSA) open-access journal Optics Express.

Visual aids that magnify incoming light help AMD patients see by spreading light around to undamaged parts of the retina. These optical magnifiers can assist patients with a variety of important everyday tasks such as reading, identification of faces, and self-care. But these aids have not gained widespread acceptance because they either use bulky spectacle-mounted telescopes that interfere with social interactions, or micro-telescopes that require surgery to implant into the patient's eye.

"For a visual aid to be accepted it needs to be highly convenient and unobtrusive," says co-author Eric Tremblay of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. A contact lens is an "attractive compromise" between the head-mounted telescopes and surgically implanted micro-telescopes, Tremblay says.

The new lens system developed by Ford's team uses tightly fitting mirror surfaces to make a telescope that has been integrated into a contact lens just over a millimeter thick. The lens has a dual modality: the center of the lens provides unmagnified vision, while the ring-shaped telescope located at the periphery of the regular contact lens magnifies the view 2.8 times.

To switch back and forth between the magnified view and normal vision, users would wear a pair of liquid crystal glasses originally made for viewing 3-D televisions. These glasses selectively block either the magnifying portion of the contact lens or its unmagnified center. The liquid crystals in the glasses electrically change the orientation of polarized light, allowing light with one orientation or the other to pass through the glasses to the contact lens.
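
As a generic illustration of that switching principle (standard polarization optics, not a description of this specific lens-and-glasses system), Malus's law gives the fraction of polarized light a polarizing element passes as a function of the relative angle; rotating the effective axis by 90 degrees takes an optical path from fully transmitting to fully blocked. A minimal Python sketch, with illustrative angles:

```python
import math

def malus_transmission(theta_degrees: float) -> float:
    """Fraction of linearly polarized light passed by a polarizing element
    oriented at theta_degrees to the light's polarization axis
    (Malus's law: I / I0 = cos^2(theta))."""
    return math.cos(math.radians(theta_degrees)) ** 2

# Rotating the effective axis between 0 and 90 degrees switches a polarized
# optical path between near-full transmission and near-full blocking.
for theta in (0, 45, 90):
    print(f"relative angle {theta:2d} deg -> transmission {malus_transmission(theta):.2f}")
```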

The team tested their design both with computer modeling and by fabricating the lens. They also created a life-sized model eye that they used to capture images through their contact lens-eyeglasses system. In constructing the lens, researchers relied on a robust material commonly used in early contact lenses called polymethyl methacrylate (PMMA). The team needed that robustness because they had to place tiny grooves in the lens to correct for aberrant color caused by the lens' shape, which is designed to conform to the human eye.

Tests showed that the magnified image quality through the contact lens was clear and provided a much larger field of view than other magnification approaches, but refinements are necessary before this proof-of-concept system could be used by consumers. The researchers report that the grooves used to correct color had the side effect of degrading image quality and contrast. The grooves also make the lens unwearable unless it is surrounded by a smooth, soft "skirt," something commonly used with rigid contact lenses today. Finally, the robust material they used, PMMA, is not ideal for contact lenses because it is gas-impermeable and limits wear to short periods of time.

The team is currently pursuing a similar design that will still be switchable from normal to telescopic vision, but that will use gas-permeable materials and will correct aberrant color without the need for grooves to bend the light. They say they hope their design will offer improved performance and better sight for people with macular degeneration, at least until a more permanent remedy for AMD is available.

"In the future, it will hopefully be possible to go after the core of the problem with effective treatments or retinal prosthetics," Tremblay says. "The ideal is really for magnifiers to become unnecessary. Until we get there, however, contact lenses may provide a way to make AMD a little less debilitating."

Friday, June 28, 2013

Breaking habits before they start


Our daily routines can become so ingrained that we perform them automatically, such as taking the same route to work every day. Some behaviors, such as smoking or biting your fingernails, become so habitual that we can't stop even if we want to.


Although breaking habits can be hard, MIT neuroscientists have now shown that they can prevent them from taking root in the first place, in rats learning to run a maze to earn a reward. The researchers first demonstrated that activity in two distinct brain regions is necessary in order for habits to crystallize. Then, they were able to block habits from forming by interfering with activity in one of the brain regions—the infralimbic (IL) cortex, which is located in the prefrontal cortex.

The MIT researchers, led by Institute Professor Ann Graybiel, used a technique called optogenetics to block activity in the IL cortex. This allowed them to control cells of the IL cortex using light. When the cells were turned off during every maze training run, the rats still learned to run the maze correctly, but when the reward was made to taste bad, they stopped, showing that a habit had not formed. If it had, they would have kept going back out of habit.

"It's usually so difficult to break a habit," Graybiel says. "It's also difficult to have a habit not form when you get a reward for what you're doing. But with this manipulation, it's absolutely easy. You just turn the light on, and bingo."

Graybiel, a member of MIT's McGovern Institute for Brain Research, is the senior author of a paper describing the findings in the June 27 issue of the journal Neuron. Kyle Smith, a former MIT postdoc who is now an assistant professor at Dartmouth College, is the paper's lead author.

Patterns of habitual behavior

Previous studies of how habits are formed and controlled have implicated the IL cortex as well as the striatum, a part of the brain involved in addiction and repetitive behavioral problems as well as in normal functions such as decision-making, planning and response to reward. It is believed that the motor patterns needed to execute a habitual behavior are stored in the striatum and its circuits.

Recent studies from Graybiel's lab have shown that disrupting activity in the IL cortex can block the expression of habits that have already been learned and stored in the striatum. Last year, Smith and Graybiel found that the IL cortex appears to decide which of two previously learned habits will be expressed.

"We have evidence that these two areas are important for habits, but they're not connected at all, and no one has much of an idea of what the cells are doing as a habit is formed, as the habit is lost, and as a new habit takes over," Smith says.

To investigate that, Smith recorded activity in cells of the IL cortex as rats learned to run a maze. He found activity patterns very similar to those that appear in the striatum during habit formation. Several years ago, Graybiel found that a distinctive "task-bracketing" pattern develops when habits are formed. This means that the cells are very active when the animal begins its run through the maze, are quiet during the run, and then fire up again when the task is finished.

This kind of pattern "chunks" habits into a large unit that the brain can simply turn on when the habitual behavior is triggered, without having to think about each individual action that goes into the habitual behavior.

The researchers found that this pattern took longer to appear in the IL cortex than in the striatum, and it was also less permanent. Unlike the pattern in the striatum, which remains stored even when a habit is broken, the IL cortex pattern appears and disappears as habits are formed and broken. This was the clue that the IL cortex, not the striatum, was tracking the development of the habit.

Multiple layers of control


The researchers' ability to optogenetically block the formation of new habits suggests that the IL cortex not only exerts real-time control over habits and compulsions, but is also needed for habits to form in the first place.

"The previous idea was that the habits were stored in the sensorimotor system and this cortical area was just selecting the habit to be expressed. Now we think it's a more fundamental contribution to habits, that the IL cortex is more actively making this happen," Smith says.

This arrangement offers multiple layers of control over habitual behavior, which could be advantageous in reining in automatic behavior, Graybiel says. It is also possible that the IL cortex is contributing specific pieces of the habitual behavior, in addition to exerting control over whether it occurs, according to the researchers. They are now trying to determine whether the IL cortex and the striatum are communicating with and influencing each other, or simply acting in parallel.

The study suggests a new way to look for abnormal activity that might cause disorders of repetitive behavior, Smith says. Now that the researchers have identified the neural signature of a normal habit, they can look for signs of habitual behavior that is learned too quickly or becomes too rigid. Finding such a signature could allow scientists to develop new ways to treat disorders of repetitive behavior by using deep brain stimulation, which uses electronic impulses delivered by a pacemaker to suppress abnormal brain activity.

Journal reference: Neuron

Provided by Massachusetts Institute of Technology

Wednesday, June 26, 2013

Video Game Tech Used to Steer Cockroaches On Autopilot


North Carolina State University researchers are using video game technology to remotely control cockroaches on autopilot, with a computer steering the cockroach through a controlled environment. The researchers are using the technology to track how roaches respond to the remote control, with the goal of developing ways that roaches on autopilot can be used to map dynamic environments -- such as collapsed buildings.

North Carolina State University researchers are using video game technology to remotely control cockroaches on autopilot, with a computer steering the cockroach through a controlled environment. (Credit: Alper Bozkurt)

The researchers have incorporated Microsoft's motion-sensing Kinect system into an electronic interface developed at NC State that can remotely control cockroaches. The researchers plug in a digitally plotted path for the roach, and use Kinect to identify and track the insect's progress. The program then uses the Kinect tracking data to automatically steer the roach along the desired path.

The program also uses Kinect to collect data on how the roaches respond to the electrical impulses from the remote-control interface. This data will help the researchers fine-tune the steering parameters needed to control the roaches more precisely.

"Our goal is to be able to guide these roaches as efficiently as possible, and our work with Kinect is helping us do that," says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work.

"We want to build on this program, incorporating mapping and radio frequency techniques that will allow us to use a small group of cockroaches to explore and map disaster sites," Bozkurt says. "The autopilot program would control the roaches, sending them on the most efficient routes to provide rescuers with a comprehensive view of the situation."

The roaches would also be equipped with sensors, such as microphones, to detect survivors in collapsed buildings or other disaster areas. "We may even be able to attach small speakers, which would allow rescuers to communicate with anyone who is trapped," Bozkurt says.

Bozkurt's team had previously developed the technology that would allow users to steer cockroaches remotely, but the use of Kinect to develop an autopilot program and track the precise response of roaches to electrical impulses is new.

The interface that controls the roach is wired to the roach's antennae and cerci. The cerci are sensory organs on the roach's abdomen, which are normally used to detect movement in the air that could indicate a predator is approaching -- causing the roach to scurry away. But the researchers use the wires attached to the cerci to spur the roach into motion. The wires attached to the antennae send small charges that trick the roach into thinking its antennae are in contact with a barrier, steering it in the opposite direction.
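
A minimal sketch of that kind of closed-loop waypoint steering is shown below. It is an illustration, not the NC State team's code: get_tracked_pose and pulse_antenna are hypothetical stand-ins for the Kinect tracking and the antenna-stimulation interface, and the gains and timings are assumed.

```python
import math
import time

# Hypothetical stand-ins for the real tracking and stimulation hardware;
# the names, signatures and constants are assumptions for illustration only.
def get_tracked_pose():
    """Return (x, y, heading_radians) of the roach from the Kinect tracker."""
    raise NotImplementedError

def pulse_antenna(side):
    """Send a brief charge down the 'left' or 'right' antenna lead, which the
    roach reads as that antenna touching a barrier, so it turns the other way."""
    raise NotImplementedError

def steer_along(waypoints, tolerance=0.05, deadband=0.2, frame_s=0.05):
    """Drive the roach through a list of (x, y) waypoints on the plotted path."""
    for wx, wy in waypoints:
        while True:
            x, y, heading = get_tracked_pose()
            if math.hypot(wx - x, wy - y) < tolerance:
                break                          # waypoint reached, go to the next
            # Signed heading error toward the waypoint, wrapped to [-pi, pi]
            error = math.atan2(wy - y, wx - x) - heading
            error = math.atan2(math.sin(error), math.cos(error))
            if error > deadband:
                pulse_antenna("right")         # "wall" on the right -> roach veers left
            elif error < -deadband:
                pulse_antenna("left")          # "wall" on the left -> roach veers right
            time.sleep(frame_s)                # roughly one tracking frame
```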

The paper, "Kinect-based System for Automated Control of Terrestrial Insect Biobots," will be presented at the Remote Controlled Insect Biobots Minisymposium at the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society July 4 in Osaka, Japan. Lead author of the paper is NC State undergraduate Eric Whitmire. Co-authors are Bozkurt and NC State graduate student Tahmid Latif. The research was supported by the National Science Foundation.

Tuesday, June 25, 2013

Action needed to help tobacco users quit across the globe


More than half of the countries that signed the 2005 WHO Framework Convention on Tobacco Control have not formed plans to help tobacco users quit.


The World Health Organization Framework Convention on Tobacco Control (WHO FCTC) is a treaty developed to tackle the global tobacco epidemic that is killing 5 million people each year. It came into force in 2005 and is legally binding in 175 countries. The FCTC requires each country to develop plans to help tobacco users in their population to stop -- plans that should be based on strong scientific evidence for what works.

Two surveys of 121 countries just published in the scientific journal Addiction reveal that more than half of those countries have yet to develop these plans.

Just 53 of the 121 countries surveyed (44%) report having treatment guidelines: 75% of high-income countries, 42% of upper-middle-income countries, 30% of lower-middle-income countries and only 11% of low-income countries.

Only one-fifth of the countries surveyed had a dedicated budget for treating tobacco dependence.

Commenting on the findings, Professor Robert West, Editor-in-Chief of Addiction, said: "Tobacco dependence treatment is a very inexpensive way of saving lives, much cheaper and more effective than many of the clinical services routinely provided by health systems worldwide. These reports map out for the first time the work that needs to be done to make this treatment accessible to those who could benefit from it. I hope they will be a spur to action."

Monday, June 24, 2013

Consider a Text for Teen Suicide Prevention and Intervention, Research Suggests


Adolescents Commonly Use Social Media to Reach Out When They are Depressed

"Obviously this is a place where adolescents are expressing their feelings. It leads me to believe that we need to think about using social media as an intervention and as a way to connect with people."

Teens and young adults are making use of social networking sites and mobile technology to express suicidal thoughts and intentions as well as to reach out for help, two studies suggest.

An analysis of about one month of public posts on MySpace revealed 64 comments in which adolescents expressed a wish to die. Researchers conducted a follow-up survey of young adults and found that text messages were the second-most common way for respondents to seek help when they felt depressed. Talking to a friend or family member ranked first.

These young adults also said they would be least likely to use suicide hotlines or online suicide support groups – the most prevalent strategies among existing suicide-prevention initiatives.

The findings of the two studies suggest that suicide prevention and intervention efforts geared at teens and young adults should employ social networking and other types of technology, researchers say.

“Obviously this is a place where adolescents are expressing their feelings,” said Scottye Cash, associate professor of social work at The Ohio State University and lead author of the studies. “It leads me to believe that we need to think about using social media as an intervention and as a way to connect with people.”

The research team is in the process of conducting a study similar to the MySpace analysis by examining young people’s Twitter messages for suicidal content. The researchers would like to analyze Facebook, but too few of the profiles are public, Cash said.

Suicide is the third leading cause of death among youths between the ages of 10 and 24 years, according to the Centers for Disease Control and Prevention (CDC).

Cash and colleagues published the MySpace research in a recent issue of the journal Cyberpsychology, Behavior and Social Networking. They presented the survey findings at a meeting of the American Academy of Child and Adolescent Psychiatry.

Cash’s interest in this phenomenon was sparked in part by media reports about teenagers using social media to express suicidal thoughts and behaviors.

“We wanted to know: Is that accurate, or are these isolated incidents? We found that in a short period of time, there were dozens of examples of teens with suicidal thoughts using MySpace to talk to their friends,” she said.

The researchers performed a content analysis of public profiles on MySpace. They downloaded profile pages of a 41,000-member sample of 13- to 24-year-olds from March 3-4, 2008, and again in December 2008, this time with comments included. By developing a list of phrases to identify potential suicidal thoughts or behaviors, the researchers narrowed 2 million downloaded comments to 1,083 that contained suggestions of suicidality, and used a manual process to eventually arrive at 64 posts that were clear discussions of suicide.
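
A minimal sketch of that kind of phrase-based pre-filter appears below. It is illustrative only: the phrase list mirrors the most common phrases reported later in this article, the sample comments are invented, and the real analysis depended on the intensive manual review the researchers describe.

```python
# Illustrative only: the phrase list mirrors the most common phrases reported
# below, and the sample comments are invented; the actual study paired this
# kind of automated pass with extensive manual coding.
SUICIDALITY_PHRASES = ("kill myself", "want to die", "suicide")

def flag_candidate_comments(comments):
    """Return the comments that contain any phrase of interest."""
    flagged = []
    for text in comments:
        lowered = text.lower()
        if any(phrase in lowered for phrase in SUICIDALITY_PHRASES):
            flagged.append(text)
    return flagged

sample = ["i just want to die tonight (quoting song lyrics)",
          "i want to die, nothing is working out"]
print(flag_candidate_comments(sample))
# Both are flagged by the automated pass; the manual review then separates
# lyrics and hyperbole from genuine expressions of suicidality.
```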

“There’s a lot of drama and angst in teenagers so in a lot of cases, they might say something ‘will kill them’ but not really mean it. Teasing out that hyperbole was an intense process,” Cash said. Song lyrics also made up a surprising number of references to suicide, she added.

The three most common phrases within the final sample were “kill myself” (51.6 percent), “want to die” (15.6 percent) and “suicide” (14.1 percent). Though in more than half of the posts the context was unknown, Cash and colleagues determined that 42 percent of the posts referred to problems with family or other relationships – including 15.6 percent that were about break-ups – and 6.3 percent were attributable to mental health problems or substance abuse.

Very few of the posts identified the method the adolescents would consider in a suicide attempt, but 3 percent mentioned guns, 1.6 percent referred to a knife and 1.6 percent combined being hit by a car and a knife.

With this information in hand, Cash and co-investigator Jeffrey Bridge of the Research Institute at Nationwide Children’s Hospital surveyed young people to learn more about how they convey their depression and suicidal thoughts. Bridge also co-authored the MySpace paper.

Collaborating with Research Now, a social marketing firm, the researchers obtained a sample of survey participants through a company that collects consumer opinions. The final sample included 1,089 participants age 18-24 with an average age of almost 21, half male and half female, and 70.6 percent white.

They were asked about their history of suicidal thoughts and attempts, general Internet and technology use, social networking activity and whether they had symptoms of depression.

More than a third reported they have had suicidal thoughts; of those, 37.5 percent had attempted suicide, resulting in a 13 percent rate of suicide attempts among the entire sample. That figure compares to the 8 percent of U.S. high-school students who reported in a 2011 CDC national survey that they had attempted suicide at least once in the previous year. According to that survey, almost 16 percent of youths had seriously considered suicide and almost 13 percent had made a suicide plan in the previous 12 months.
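
As a quick check of how those figures fit together (the share reporting suicidal thoughts is inferred here from the two reported rates, so treat it as approximate):

```python
# 37.5% of respondents with suicidal thoughts reported an attempt, and attempts
# were about 13% of the whole sample, which implies that roughly a third of the
# sample reported suicidal thoughts -- consistent with "more than a third."
attempt_rate_overall = 0.13
attempt_rate_given_thoughts = 0.375
print(f"implied share with suicidal thoughts: "
      f"{attempt_rate_overall / attempt_rate_given_thoughts:.1%}")   # ~34.7%
```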

Results of Cash’s survey showed that respondents would favor talking to a friend or family member when they were depressed, followed by sending texts, talking on the phone, using instant messaging and posting to a social networking site. Less common responses included talking to a health-care provider, posting to a blog, calling a suicide prevention hotline and posting to an online suicide support group.

Response trends suggested, though, that participants with suicidal thoughts or attempts were more willing to use technology – specifically the phone, instant messaging, texting and social networking – to reach out compared to those with no suicidal history. In light of this trend, the fact that the participants were active online consumers might have contributed to the relatively high percentage of suicide attempts among the study sample. In addition, the survey also asked about lifetime suicide history, not just recent history, Cash noted.

The survey also showed that this age group looks to the Internet for information on sensitive topics, and again suggested that young adults of both sexes with a history of suicidal thoughts or attempts consulted the Internet for information about topics that are difficult to discuss – specifically drug use, sex, depression, eating disorders or other mental health concerns. Females with past suicide attempts used social networking the most, according to the results.

“It appears that our methods of reaching out to adolescents and young adults is not actually meeting them where they are. If, as adults, we’re saying, ‘this is what we think you need’ and they tell us they’re not going to use it, should we keep pumping resources into suicide hotlines?” Cash said. “We need to find new ways to connect with them and help them with whatever they’re struggling with, or, in other words, meet them where they are in ways that make sense to them.”

A notable resource already available is www.reachout.com, a website geared toward adolescents who are struggling through a tough time. Some Internet-based resources exist that could serve as models for new suicide prevention interventions, she noted. They include teen.smokefree.gov and www.thatsnotcool.com.

The survey research was supported by an Ohio State University College of Social Work Seed Grant.

Additional co-authors of the MySpace paper include Michael Thelwall of the University of Wolverhampton in the United Kingdom, Sydney Peck of Elmira College and Jared Ferrell of the University of Akron.


Contact: Scottye Cash, Cash.33@osu.edu (Email is the best way to reach Cash.)

Written by Emily Caldwell, (614) 292-8310; Caldwell.151@osu.edu

More data storage? Here's how to fit 1,000 terabytes on a DVD


We live in a world where digital information is exploding. Some 90% of the world's data was generated in the past two years. The obvious question is: how can we store it all?

Using nanotechnology, researchers have developed a technique to increase the data storage capacity of a DVD from a measly 4.7GB to 1,000TB. (Credit: Nature Communications)
In Nature Communications today, we, along with Richard Evans from CSIRO, show how we developed a new technique to enable the data capacity of a single DVD to increase from 4.7 gigabytes up to one petabyte (1,000 terabytes). This is the equivalent of 10.6 years of compressed high-definition video or 50,000 full high-definition movies.

So how did we manage to achieve such a huge boost in data storage? First, we need to understand how data is stored on optical discs such as CDs and DVDs.

The basics of digital storage


Although optical discs are used to carry software, films, games, and private data, and have great advantages over other recording media in terms of cost, longevity and reliability, their low data storage capacity is their major limiting factor.

The operation of optical data storage is rather simple. When you burn a CD, for example, the information is transformed to strings of binary digits (0s and 1s, also called bits). Each bit is then laser "burned" into the disc, using a single beam of light, in the form of dots.

The storage capacity of optical discs is mainly limited by the physical dimensions of the dots. But as there's a limit to the size of the disc as well as the size of the dots, current optical media such as DVDs and Blu-ray discs continue to have relatively low storage density.

To get around this, we had to look at one of light's fundamental laws: the diffraction limit, known as Abbe's law. On the basis of this law, the diameter of a spot of light, obtained by focusing a light beam through a lens, cannot be smaller than half its wavelength – around 500 nanometres (500 billionths of a metre) for visible light.

And while this law plays a huge role in modern optical microscopy, it also sets a barrier to researchers' efforts to produce extremely small dots – in the nanometre region – to use as binary bits.
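
In symbols, this is Abbe's diffraction limit. Writing it with the numerical aperture NA of the focusing lens (standard notation, not used in the article itself), the smallest focusable spot diameter is

$$ d_{\min} \;\approx\; \frac{\lambda}{2\,\mathrm{NA}}, $$

where λ is the wavelength of the light. With red laser light and a practical lens aperture (NA around 0.6, as in a DVD drive), this works out to roughly the 500-nanometre figure quoted above.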

In our study, we showed how to break this fundamental limit by using a two-light-beam method, with different colours, for recording onto discs instead of the conventional single-light-beam method.

Both beams must abide by Abbe's law, so they cannot produce smaller dots individually. But we gave the two beams different functions:
  • The first beam (red) has a round shape and is used to activate the recording. We called it the writing beam.
  • The second beam (purple and donut-shaped) plays an anti-recording function, inhibiting the function of the writing beam.

The two beams were then overlapped. As the second beam cancelled out the first in its donut ring, the recording process was tightly confined to the centre of the writing beam.

This new technique produces an effective focal spot of nine nanometres – or one ten thousandth the diameter of a human hair.
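
A rough back-of-the-envelope check (not the authors' own calculation; the numbers below are illustrative assumptions): shrinking the recording spot from the diffraction-limited scale to nine nanometres multiplies the number of dots that fit on a recording plane by the square of the ratio, and reaching a full petabyte additionally relies on the multi-layer 3D recording discussed in the next section.

```python
# Back-of-the-envelope only: capacity scales with the number of dots per area,
# i.e. inversely with the square of the spot diameter. The figures below are
# illustrative; the reported 1 PB also depends on recording in many layers.
dvd_capacity_gb = 4.7     # conventional single-layer DVD
old_spot_nm = 500.0       # roughly the diffraction-limited spot for visible light
new_spot_nm = 9.0         # effective focal spot of the two-beam method

areal_gain = (old_spot_nm / new_spot_nm) ** 2
per_layer_gb = dvd_capacity_gb * areal_gain
print(f"areal gain       ~ {areal_gain:,.0f}x")
print(f"per layer        ~ {per_layer_gb / 1000:.0f} TB")
print(f"layers for ~1 PB ~ {1_000_000 / per_layer_gb:.0f}")
```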

The technique, in practical terms


Our work will greatly impact the development of super-compact devices as well as nanoscience and nanotechnology research.

The exceptional penetration feature of light beams allows for 3D recording or fabrication, which can dramatically increase the data storage – the number of dots – on a single optical device.

The technique is also cost-effective and portable, as only conventional optical and laser elements are used, and allows for the development of optical data storage with long life and low energy consumption, which could be an ideal platform for a Big Data centre.

As the rate of information generated worldwide continues to accelerate, the aim of more storage capacity in compact devices will continue. Our breakthrough has put that target within our reach.
 
 
Story from: http://phys.org/news/2013-06-storage-terabytes-dvd.html#ajTabs

Sunday, June 23, 2013

Beyond Silicon: Transistors, No Semiconductors


For decades, electronic devices have been getting smaller, and smaller, and smaller. It's now possible -- even routine -- to place millions of transistors on a single silicon chip.

Electrons flash across a series of gold quantum dots on boron nitride nanotubes. Michigan Tech scientists made the quantum-tunneling device, which acts like a transistor at room temperature, without using semiconducting materials.
(Credit: Yoke Khin Yap graphic)
But transistors based on semiconductors can only get so small. "At the rate the current technology is progressing, in 10 or 20 years, they won't be able to get any smaller," said physicist Yoke Khin Yap of Michigan Technological University. "Also, semiconductors have another disadvantage: they waste a lot of energy in the form of heat."

Scientists have experimented with different materials and designs for transistors to address these issues, always using semiconductors like silicon. Back in 2007, Yap wanted to try something different that might open the door to a new age of electronics.

"The idea was to make a transistor using a nanoscale insulator with nanoscale metals on top," he said. "In principle, you could get a piece of plastic and spread a handful of metal powders on top to make the devices, if you do it right. But we were trying to create it in nanoscale, so we chose a nanoscale insulator, boron nitride nanotubes, or BNNTs for the substrate."

Yap's team had figured out how to make virtual carpets of BNNTs, which happen to be insulators and thus highly resistant to electrical charge. Using lasers, the team then placed quantum dots (QDs) of gold as small as three nanometers across on the tops of the BNNTs, forming QDs-BNNTs. BNNTs are the perfect substrates for these quantum dots due to their small, controllable, and uniform diameters, as well as their insulating nature. BNNTs confine the size of the dots that can be deposited.

In collaboration with scientists at Oak Ridge National Laboratory (ORNL), they fired up electrodes on both ends of the QDs-BNNTs at room temperature, and something interesting happened. Electrons jumped very precisely from gold dot to gold dot, a phenomenon known as quantum tunneling.

"Imagine that the nanotubes are a river, with an electrode on each bank. Now imagine some very tiny stepping stones across the river," said Yap. "The electrons hopped between the gold stepping stones. The stones are so small, you can only get one electron on the stone at a time. Every electron is passing the same way, so the device is always stable."

Yap's team had made a transistor without a semiconductor. When sufficient voltage was applied, it switched to a conducting state. When the voltage was low or turned off, it reverted to its natural state as an insulator.

Furthermore, there was no "leakage": no electrons from the gold dots escaped into the insulating BNNTs, thus keeping the tunneling channel cool. In contrast, silicon is subject to leakage, which wastes energy in electronic devices and generates a lot of heat.

Other people have made transistors that exploit quantum tunneling, says Michigan Tech physicist John Jaszczak, who has developed the theoretical framework for Yap's experimental research. However, those tunneling devices have only worked in conditions that would discourage the typical cellphone user.

"They only operate at liquid-helium temperatures," said Jaszczak.

The secret to Yap's gold-and-nanotube device is its submicroscopic size: one micron long and about 20 nanometers wide. "The gold islands have to be on the order of nanometers across to control the electrons at room temperature," Jaszczak said. "If they are too big, too many electrons can flow." In this case, smaller is truly better: "Working with nanotubes and quantum dots gets you to the scale you want for electronic devices."
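
One generic way to see why island size matters at room temperature is a single-electron (Coulomb blockade) estimate: treat each gold dot as an isolated conducting sphere and compare its charging energy with the thermal energy. This is a textbook approximation, not the authors' model, and a dot sitting on a nanotube has a different capacitance, so the numbers are only indicative.

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m
K_B = 1.381e-23        # Boltzmann constant, J/K

def charging_energy_ev(diameter_nm):
    """Single-electron charging energy e^2 / (2C) for an isolated conducting
    sphere of the given diameter, using C = 4*pi*eps0*r. Returned in eV."""
    radius_m = diameter_nm * 1e-9 / 2
    capacitance = 4 * math.pi * EPS0 * radius_m
    return E_CHARGE ** 2 / (2 * capacitance) / E_CHARGE

thermal_ev = K_B * 300 / E_CHARGE          # ~0.026 eV at room temperature
for d in (3, 20):                          # ~3 nm dots vs. a much larger island
    ec = charging_energy_ev(d)
    print(f"{d:2d} nm island: charging energy ~ {ec:.2f} eV "
          f"(~{ec / thermal_ev:.0f}x thermal energy at 300 K)")
# The ~3 nm dot's charging energy dwarfs the thermal energy, so electrons can
# only hop on one at a time; a much larger island loses that margin.
```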

"Theoretically, these tunneling channels can be miniaturized into virtually zero dimension when the distance between electrodes is reduced to a small fraction of a micron," said Yap.

Yap has filed for a full international patent on the technology.

The Link Between Circadian Rhythms and Aging: Gene Associated With Longevity Also Regulates the Body's Circadian Clock


Human sleeping and waking patterns are largely governed by an internal circadian clock that corresponds closely with the 24-hour cycle of light and darkness. This circadian clock also controls other body functions, such as metabolism and temperature regulation.

A new study finds that a gene associated with longevity also regulates the body’s circadian clock. (Credit: iStockphoto)

Studies in animals have found that when that rhythm gets thrown off, health problems including obesity and metabolic disorders such as diabetes can arise. Studies of people who work night shifts have also revealed an increased susceptibility to diabetes.

A new study from MIT shows that a gene called SIRT1, previously shown to protect against diseases of aging, plays a key role in controlling these circadian rhythms. The researchers found that circadian function decays with aging in normal mice, and that boosting their SIRT1 levels in the brain could prevent this decay. Conversely, loss of SIRT1 function impairs circadian control in young mice, mimicking what happens in normal aging.

Since the SIRT1 protein itself was found to decline with aging in the normal mice, the findings suggest that drugs that enhance SIRT1 activity in humans could have widespread health benefits, says Leonard Guarente, the Novartis Professor of Biology at MIT and senior author of a paper describing the findings in the June 20 issue of Cell.

"If we could keep SIRT1 as active as possible as we get older, then we'd be able to retard aging in the central clock in the brain, and health benefits would radiate from that," Guarente says.

Staying on schedule

In humans and animals, circadian patterns follow a roughly 24-hour cycle, directed by the circadian control center of the brain, called the suprachiasmatic nucleus (SCN), located in the hypothalamus.

"Just about everything that takes place physiologically is really staged along the circadian cycle," Guarente says. "What's now emerging is the idea that maintaining the circadian cycle is quite important in health maintenance, and if it gets broken, there's a penalty to be paid in health and perhaps in aging."

Last year, Guarente found that a robust circadian period correlated with longer lifespan in mice. That got him wondering what role SIRT1, which has been shown to prolong lifespan in many animals, might play in that phenomenon. SIRT1, which Guarente first linked with aging more than 15 years ago, is a master regulator of cell responses to stress, coordinating a variety of hormone networks, proteins and genes to help keep cells alive and healthy.

To investigate SIRT1's role in circadian control, Guarente and his colleagues created genetically engineered mice that produce different amounts of SIRT1 in the brain. One group of mice had normal SIRT1 levels, another had no SIRT1, and two groups had extra SIRT1 -- either twice or 10 times as much as normal.

Mice lacking SIRT1 had slightly longer circadian cycles (23.9 hours) than normal mice (23.6 hours), and mice with a 10-fold increase in SIRT1 had shorter cycles (23.1 hours).

In mice with normal SIRT1 levels, the researchers confirmed previous findings that when the 12-hour light/dark cycle is interrupted, younger mice readjust their circadian cycles much more easily than older ones. However, they showed for the first time that mice with extra SIRT1 do not suffer the same decline in circadian control as they age.

The researchers also found that SIRT1 exerts this control by regulating the genes BMAL and CLOCK, the two major keepers of the central circadian clock.

Enhancing circadian function

A growing body of evidence suggests that being able to respond to large or small disruptions of the light/dark cycle is important to maintaining healthy metabolic function, Guarente says.

"Essentially we experience a mini jet lag every day because the light cycle is constantly changing. The critical thing for us is to be able to adapt smoothly to these jolts," Guarente says. "Many studies in mice say that while young mice do this perfectly well, it's the old mice that have the problem. So that could well be true in humans."

If so, it could be possible to treat or prevent diseases of aging by enhancing circadian function -- either by delivering SIRT1 activators in the brain or developing drugs that enhance another part of the circadian control system, Guarente says.

"I think we should look at every aspect of the machinery of the circadian clock in the brain, and any intervention that can maintain that machinery with aging ought to be good," he says. "One entry point would be SIRT1, because we've shown in mice that genetic maintenance of SIRT1 helps maintain circadian function."

Some SIRT1 activators are now being tested against diabetes, inflammation and other diseases, but they are not designed to cross the blood-brain barrier and would likely not be able to reach the SCN. However, Guarente believes it could be possible to design SIRT1 activators that can get into the brain.

Roman Kondratov, an associate professor of biology at Cleveland State University, says the study raises several exciting questions regarding the potential to delay or reverse age-related changes in the brain through rejuvenation of the circadian clock with SIRT1 enhancement.

"The importance of this study is that it has both basic and potentially translational applications, taking into account the fact that pharmacological modulators of SIRT1 are currently under active study," Kondratov says.

Researchers in Guarente's lab are now investigating the relationship between health, circadian function and diet. They suspect that high-fat diets might throw the circadian clock out of whack, which could be counteracted by increased SIRT1 activation.

The research was funded by the National Institutes of Health and the Glenn Foundation for Medical Research.

Thursday, June 20, 2013

A Battery Made of Wood?


A sliver of wood coated with tin could make a tiny, long-lasting, efficient and environmentally friendly battery.

Close-up image of wood fibers. "Wood fibers that make up a tree once held mineral-rich water, and so are ideal for storing liquid electrolytes, making them not only the base but an active part of the battery," Liangbing Hu said. (Credit: Image courtesy of University of Maryland)
But don't try it at home yet -- the components in the battery tested by scientists at the University of Maryland are a thousand times thinner than a piece of paper. Using sodium instead of lithium, as many rechargeable batteries do, makes the battery environmentally benign. Sodium doesn't store energy as efficiently as lithium, so you won't see this battery in your cell phone -- instead, its low cost and common materials would make it ideal to store huge amounts of energy at once, such as solar energy at a power plant.

Existing batteries are often created on stiff bases, which are too brittle to withstand the swelling and shrinking that happens as electrons are stored in and used up from the battery. Liangbing Hu, Teng Li and their team found that wood fibers are supple enough to let their sodium-ion battery last more than 400 charging cycles, which puts it among the longest lasting nanobatteries.

"The inspiration behind the idea comes from the trees," said Hu, an assistant professor of materials science. "Wood fibers that make up a tree once held mineral-rich water, and so are ideal for storing liquid electrolytes, making them not only the base but an active part of the battery."

Lead author Hongli Zhu and other team members noticed that after charging and discharging the battery hundreds of times, the wood ended up wrinkled but intact. Computer models showed that the wrinkles effectively relax the stress in the battery during charging and recharging, so that the battery can survive many cycles.

"Pushing sodium ions through tin anodes often weaken the tin's connection to its base material," said Li, an associate professor of mechanical engineering. "But the wood fibers are soft enough to serve as a mechanical buffer, and thus can accommodate tin's changes. This is the key to our long-lasting sodium-ion batteries."

The team's research was supported by the University of Maryland and the U.S. National Science Foundation.

Wednesday, June 19, 2013

IQ Link to Baby's Weight Gain in First Month


New research from the University of Adelaide shows that weight gain and increased head size in the first month of a baby's life are linked to a higher IQ at early school age.

IQ Link to Baby's Weight Gain in First Month (Credit: © JPC-PROD / Fotolia)
The study was led by University of Adelaide Public Health researchers, who analysed data from more than 13,800 children who were born full-term.

The results, published in the international journal Pediatrics, show that babies who put on 40% of their birthweight in the first four weeks had an IQ 1.5 points higher by the time they were six years of age, compared with babies who only put on 15% of their birthweight.

Those with the biggest growth in head circumference also had the highest IQs.

"Head circumference is an indicator of brain volume, so a greater increase in head circumference in a newborn baby suggests more rapid brain growth," says the lead author of the study, Dr Lisa Smithers from the University of Adelaide's School of Population Health.

"Overall, newborn children who grew faster in the first four weeks had higher IQ scores later in life," she says.

"Those children who gained the most weight scored especially high on verbal IQ at age 6. This may be because the neural structures for verbal IQ develop earlier in life, which means the rapid weight gain during that neonatal period could be having a direct cognitive benefit for the child."

Previous studies have shown the association between early postnatal diet and IQ, but this is the first study of its kind to focus on the IQ benefits of rapid weight gain in the first month of life for healthy newborn babies.

Dr Smithers says the study further highlights the need for successful feeding of newborn babies.

"We know that many mothers have difficulty establishing breastfeeding in the first weeks of their baby's life," Dr Smithers says.

"The findings of our study suggest that if infants are having feeding problems, there needs to be early intervention in the management of that feeding."
 


Tuesday, June 18, 2013

A Robot That Runs Like a Cat


Thanks to its legs, whose design faithfully reproduces feline morphology, EPFL's four-legged "cheetah-cub robot" has the same advantages as its model: it is small, light and fast. Still in its experimental stage, the robot will serve as a platform for research in locomotion and biomechanics.

This is cheetah-cub, a compliant quadruped robot. (Credit: © EPFL)
Even though it doesn't have a head, you can still tell what kind of animal it is: the robot is definitely modeled upon a cat. Developed by EPFL's Biorobotics Laboratory (Biorob), the "cheetah-cub robot," a small-size quadruped prototype robot, is described in an article appearing today in the International Journal of Robotics Research. The purpose of the platform is to encourage research in biomechanics; its particularity is the design of its legs, which make it very fast and stable. Robots developed from this concept could eventually be used in search and rescue missions or for exploration.

This robot is the fastest in its category, namely in normalized speed for small quadruped robots under 30 kg. During tests, it demonstrated its ability to run nearly seven times its body length in one second. Although not as agile as a real cat, it still has excellent auto-stabilization characteristics when running at full speed or over a course that included disturbances such as small steps. In addition, the robot is extremely light, compact, and robust and can be easily assembled from materials that are inexpensive and readily available.

Faithful reproduction

The machine's strengths all reside in the design of its legs. The researchers developed a new model with this robot, one that is based on the meticulous observation and faithful reproduction of the feline leg. The number of segments -- three on each leg -- and their proportions are the same as they are on a cat. Springs are used to reproduce tendons, and actuators -- small motors that convert energy into movement -- are used to replace the muscles.

"This morphology gives the robot the mechanical properties from which cats benefit, that's to say a marked running ability and elasticity in the right spots, to ensure stability," explains Alexander Sprowitz, a Biorob scientist. "The robot is thus naturally more autonomous."

Sized for a search


According to Biorob director Auke Ijspeert, this invention is the logical follow-up of research the lab has done into locomotion that included a salamander robot and a lamprey robot. "It's still in the experimental stages, but the long-term goal of the cheetah-cub robot is to be able to develop fast, agile, ground-hugging machines for use in exploration, for example for search and rescue in natural disaster situations. Studying and using the principles of the animal kingdom to develop new solutions for use in robots is the essence of our research."

Wednesday, June 12, 2013

New Layer of the Human Cornea Discovered


Scientists at The University of Nottingham have discovered a previously undetected layer in the cornea, the clear window at the front of the human eye.

Scientists have discovered a previously undetected layer in the cornea, the clear window at the front of the human eye. (Credit: © Kesu / Fotolia)

The breakthrough, announced in a study published in the academic journal Ophthalmology, could help surgeons to dramatically improve outcomes for patients undergoing corneal grafts and transplants.

The new layer has been dubbed Dua's layer, after the academic who discovered it, Professor Harminder Dua.

Professor Dua, Professor of Ophthalmology and Visual Sciences, said: "This is a major discovery that will mean that ophthalmology textbooks will literally need to be re-written. Having identified this new and distinct layer deep in the tissue of the cornea, we can now exploit its presence to make operations much safer and simpler for patients.

"From a clinical perspective, there are many diseases that affect the back of the cornea which clinicians across the world are already beginning to relate to the presence, absence or tear in this layer."

The human cornea is the clear protective lens on the front of the eye through which light enters the eye. Scientists previously believed the cornea to be composed of five layers, from front to back, the corneal epithelium, Bowman's layer, the corneal stroma, Descemet's membrane and the corneal endothelium.

The new layer that has been discovered is located at the back of the cornea between the corneal stroma and Descemet's membrane. Although it is just 15 microns thick -- the entire cornea is around 550 microns thick or 0.5mm -- it is incredibly tough and is strong enough to be able to withstand one and a half to two bars of pressure.

The scientists proved the existence of the layer by simulating human corneal transplants and grafts on eyes donated for research purposes to eye banks located in Bristol and Manchester.

During this surgery, tiny bubbles of air were injected into the cornea to gently separate the different layers. The scientists then subjected the separated layers to electron microscopy, allowing them to study the layers at many thousand times their actual size.

Understanding the properties and location of the new Dua's layer could help surgeons to better identify where in the cornea these bubbles are occurring and take appropriate measures during the operation. If they are able to inject a bubble next to the Dua's layer, its strength means that it is less prone to tearing, meaning a better outcome for the patient.

The discovery will have an impact on advancing understanding of a number of diseases of the cornea, including acute hydrops, Descematocele and pre-Descemet's dystrophies.

The scientists now believe that corneal hydrops, a bulging of the cornea caused by fluid build-up that occurs in patients with keratoconus (conical deformity of the cornea), is caused by a tear in Dua's layer, through which water from inside the eye rushes in and causes waterlogging.



Brain-Computer Interfaces: Just Wave a Hand


Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease.

This image shows the changes that took place in the brain for all patients participating in the study using a brain-computer interface. Changes in activity were distributed widely throughout the brain. (Credit: Jeremiah Wander, UW)
Now, University of Washington researchers have demonstrated that when humans use this technology -- called a brain-computer interface -- the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed.

"What we're seeing is that practice makes perfect with these tasks," said Rajesh Rao, a UW professor of computer science and engineering and a senior researcher involved in the study. "There's a lot of engagement of the brain's cognitive resources at the very beginning, but as you get better at the task, those resources aren't needed anymore and the brain is freed up."

Rao and UW collaborators Jeffrey Ojemann, a professor of neurological surgery, and Jeremiah Wander, a doctoral student in bioengineering, published their results online June 10 in the Proceedings of the National Academy of Sciences.

In this study, seven people with severe epilepsy were hospitalized for a monitoring procedure that tries to identify where in the brain seizures originate. Physicians cut through the scalp, drilled into the skull and placed a thin sheet of electrodes directly on top of the brain. While they were watching for seizure signals, the researchers also conducted this study.

The patients were asked to move a mouse cursor on a computer screen by using only their thoughts to control the cursor's movement. Electrodes on their brains picked up the signals directing the cursor to move, sending them to an amplifier and then a laptop to be analyzed. Within 40 milliseconds, the computer calculated the intentions transmitted through the signal and updated the movement of the cursor on the screen.
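
A minimal, hypothetical sketch of that kind of closed loop appears below. It is not the UW group's decoder: the acquisition and display functions are stand-ins, and the choice of feature (a high-frequency band-power change mapped linearly to a one-dimensional cursor movement) is an illustrative assumption.

```python
import time
import numpy as np

def read_electrode_window():
    """Return the latest window of amplified electrode samples as a 1-D array
    (stand-in for the real acquisition hardware)."""
    raise NotImplementedError

def move_cursor(dx):
    """Nudge the on-screen cursor horizontally (stand-in for the display)."""
    raise NotImplementedError

def band_power(samples, fs, lo, hi):
    """Average spectral power of the signal between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    return float(spectrum[(freqs >= lo) & (freqs <= hi)].mean())

def control_loop(fs=1000.0, gain=1e-3, update_s=0.04):
    """Every ~40 ms, map a change in band power to a cursor movement."""
    baseline = band_power(read_electrode_window(), fs, 75, 100)      # resting level
    while True:
        feature = band_power(read_electrode_window(), fs, 75, 100)
        move_cursor(gain * (feature - baseline))   # more activity -> move one way
        time.sleep(update_s)                       # matches the ~40 ms cycle cited
```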

Researchers found that when patients started the task, a lot of brain activity was centered in the prefrontal cortex, an area associated with learning a new skill. But often after as little as 10 minutes, frontal brain activity lessened, and the brain signals transitioned to patterns similar to those seen during more automatic actions.

"Now we have a brain marker that shows a patient has actually learned a task," Ojemann said. "Once the signal has turned off, you can assume the person has learned it."

While researchers have demonstrated success in using brain-computer interfaces in monkeys and humans, this is the first study that clearly maps the neurological signals throughout the brain. The researchers were surprised at how many parts of the brain were involved.

"We now have a larger-scale view of what's happening in the brain of a subject as he or she is learning a task," Rao said. "The surprising result is that even though only a very localized population of cells is used in the brain-computer interface, the brain recruits many other areas that aren't directly involved to get the job done."

Several types of brain-computer interfaces are being developed and tested. The least invasive is a device placed on a person's head that can detect weak electrical signatures of brain activity. Basic commercial gaming products are on the market, but this technology isn't very reliable yet because signals from eye blinking and other muscle movements interfere too much.

A more invasive alternative is to surgically place electrodes inside the brain tissue itself to record the activity of individual neurons. Researchers at Brown University and the University of Pittsburgh have demonstrated this in humans as patients, unable to move their arms or legs, have learned to control robotic arms using the signal directly from their brain.

The UW team tested electrodes on the surface of the brain, underneath the skull. This allows researchers to record brain signals at higher frequencies and with less interference than measurements from the scalp. A future wireless device could be built to remain inside a person's head for a longer time to be able to control computer cursors or robotic limbs at home.

"This is one push as to how we can improve the devices and make them more useful to people," Wander said. "If we have an understanding of how someone learns to use these devices, we can build them to respond accordingly."

The research team, along with the National Science Foundation's Engineering Research Center for Sensorimotor Neural Engineering headquartered at the UW, will continue developing these technologies.

This research was funded by the National Institutes of Health, the NSF, the Army Research Office and the Keck Foundation. 



Saturday, June 1, 2013

Gene Variants Linked to Educational Attainment


A multi-national team of researchers has identified genetic markers that predict educational attainment by pooling data from more than 125,000 individuals in the United States, Australia, and 13 western European countries.
Gene variants linked to educational attainment. (Credit: © Tom Wang / Fotolia)

The study, which appears in the journal Science, was conducted by the Social Science Genetic Association Consortium (SSGAC), which includes researchers at NYU, Erasmus University, Cornell University, Harvard University, the University of Bristol, and the University of Queensland, among other institutions.

The SSGAC conducted what is called a genome-wide association study (GWAS) to explore the link between genetic variation and educational attainment -- the number of years of schooling completed by an individual and whether he or she graduated from college. In a GWAS, researchers test hundreds of thousands of genetic markers for association with some characteristic, such as a disease, trait or life outcome.

Because the sample included people from different countries -- where measures of schooling vary significantly -- the research team adopted the International Standard Classification of Education (ISCED) scale, which is a commonly used method for establishing a uniform measure of educational attainment across cohorts.

Anticipating that very large samples would be required to credibly detect genetic associations, the SSGAC researchers assembled a total sample size more than 10 times larger than any previous genetic study of any social-scientific outcome. The team examined associations between educational attainment and genetic variants called single-nucleotide polymorphisms, or SNPs, which are tiny changes at a single location in a person's genetic code.

The study found that the genetic markers with the strongest effects on educational attainment could each only explain two one-hundredths of a percentage point (0.02 percent). To put that figure into perspective, it is known from earlier research that the SNP with the largest effect on human height accounts for about 0.40 percent of the variation.

Combining the two million examined SNPs, the SSGAC researchers were able to explain about 2 percent of the variation in educational attainment across individuals, and anticipate that this figure will rise as larger samples become available.
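
To make "variance explained" concrete, here is a small simulation sketch; the allele frequency, effect size and sample below are invented, chosen only so that one marker accounts for on the order of a hundredth of a percent of the variation, comparable to the study's strongest markers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # very large samples are needed to detect tiny effects

# One simulated SNP: allele count 0/1/2 at an assumed allele frequency of 0.3
snp = rng.binomial(2, 0.3, size=n)

# Simulated years of schooling: a tiny per-allele effect plus everything else
years = 13.0 + 0.05 * snp + rng.normal(0.0, 3.0, size=n)

# Variance in the outcome explained by this single marker (squared correlation)
r = np.corrcoef(snp, years)[0, 1]
print(f"variance explained by one SNP ~ {r ** 2:.4%}")   # on the order of 0.01 percent
```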

"We hope that our findings will eventually be useful for understanding biological processes underlying learning, memory, reading disabilities and cognitive decline in the elderly," said co-author Daniel Benjamin, a behavioral economist at Cornell who is a co-director of the SSGAC.

"Another contribution of our study is that it will strengthen the methodological foundations of social-science genetics," said David Cesarini, an NYU assistant professor at the Center for Experimental Social Science and the Center for Neuroeconomics, who also co-directs the SSGAC. "We used 125,000 individuals to conduct this study. Previous studies used far smaller samples, sometimes as small as 100 individuals and rarely more than 10,000. These small samples make sense under the assumption that individual genes have large effects. However, if genes have small effects, as our study shows, then sample sizes need to be very large to produce robust findings that will reliably replicate in other samples."

The researchers were careful to note that they have not discovered "the gene for education," and that these findings do not imply that a person's educational attainment is determined at birth.

"For most outcomes that we study as social scientists, genetic influences are likely to operate through environmental channels that are modifiable," explained NYU sociologist Dalton Conley, one of the study's co-authors who also serves on the Advisory Board of the SSGAC. "We have now taken a small but important first step toward identifying the specific genetic variants that predict educational attainment. Armed with this knowledge, we can now begin to examine how other factors -- including public policy, parental roles, and economic status -- dampen or amplify genetic effects and ultimately devise better remedies to bolster educational outcomes." 

