
Showing posts with label Research. Show all posts

Monday, April 3, 2023

Artificial Brain: The Future of Intelligence?



Artificial intelligence (AI) is rapidly evolving, and with it, the possibility of creating artificial brains. While this may seem like something out of a science fiction movie, it is actually a very real possibility. In fact, researchers at Indiana University (IU) are already working on developing artificial brains that could one day rival the capabilities of the human brain.

The IU team is led by Professor of Computer Science David B. Hardcastle, who is an expert in artificial intelligence and machine learning. Hardcastle and his team are working on developing artificial brains that can learn and adapt in the same way that human brains do. They believe that this type of artificial intelligence could have a profound impact on many different areas of our lives, from healthcare to education to transportation.

One of the main goals of the IU team is to develop artificial brains that can be used to improve healthcare. They believe that artificial brains could be used to diagnose diseases, develop new treatments, and even provide personalized care to patients. For example, artificial brains could be used to analyze medical images and identify potential problems that human doctors might miss. They could also be used to develop new drugs and treatments that are tailored to the specific needs of each patient.

The IU team is also working on developing artificial brains that can be used to improve education. They believe that artificial brains could be used to create personalized learning experiences for students. For example, artificial brains could be used to identify each student's strengths and weaknesses and then provide them with the appropriate level of challenge. They could also be used to provide feedback to students in a way that is both timely and helpful.

In addition to healthcare and education, the IU team is also working on developing artificial brains that can be used to improve transportation. They believe that artificial brains could be used to develop self-driving cars and other autonomous vehicles. For example, artificial brains could be used to navigate complex traffic conditions and avoid accidents. They could also be used to provide passengers with a more comfortable and enjoyable travel experience.

The work being done by the IU team is just one example of the many ways that artificial intelligence is being used to develop new technologies. Artificial intelligence has the potential to revolutionize many different areas of our lives, and the IU team is at the forefront of this research. It will be interesting to see what the future holds for artificial intelligence and artificial brains.

The Benefits of Artificial Brains

Artificial brains could have a number of benefits, including:

  • Improved healthcare: Artificial brains could be used to diagnose diseases, develop new treatments, and even provide personalized care to patients.
  • Improved education: Artificial brains could be used to create personalized learning experiences for students.
  • Improved transportation: Artificial brains could be used to develop self-driving cars and other autonomous vehicles.
  • Increased productivity: Artificial brains could be used to automate tasks and increase productivity in a variety of industries.
  • Enhanced creativity: Artificial brains could be used to generate new ideas and solve problems in innovative ways.

The Challenges of Artificial Brains

While there are many potential benefits to artificial brains, there are also a number of challenges that need to be addressed. These challenges include:

  • Safety: Artificial brains need to be designed in a way that ensures they are safe and do not pose a threat to humans.
  • Ethics: The development of artificial brains raises a number of ethical concerns, such as the potential for job displacement and the impact on human autonomy.
  • Control: It is important to ensure that artificial brains are under human control and do not become uncontrollable.
  • Bias: Artificial brains are trained on data, and if that data is biased, the artificial brain will also be biased. This is a major challenge that needs to be addressed in order to ensure that artificial brains are fair and unbiased.

The Future of Artificial Brains

The future of artificial brains is uncertain. It is possible that artificial brains could one day become as intelligent as humans, or even surpass human intelligence. If this happens, it would have a profound impact on society. Artificial brains could be used to solve some of the world's most pressing problems, such as climate change and poverty. However, they could also be used for malicious purposes, such as developing autonomous weapons.

It is important to start thinking about the implications of artificial brains now, so that we can be prepared for whatever the future holds.

Wednesday, January 3, 2018

New technique allows rapid screening for new types of solar cells


Approach could bypass the time-consuming steps currently needed to test new photovoltaic materials.
This experimental setup was used by the team to measure the electrical output of a sample of solar cell material, under controlled conditions of varying temperature and illumination. The data from those tests was then used as the basis for computer modeling using statistical methods to predict the overall performance of the material in real-world operating conditions.
Image: Riley Brandt
  

The worldwide quest by researchers to find better, more efficient materials for tomorrow’s solar panels is usually slow and painstaking. Researchers typically must produce lab samples — which are often composed of multiple layers of different materials bonded together — for extensive testing.


Now, a team at MIT and other institutions has come up with a way to bypass such expensive and time-consuming fabrication and testing, allowing for a rapid screening of far more variations than would be practical through the traditional approach.

The new process could not only speed up the search for new formulations, but also do a more accurate job of predicting their performance, explains Rachel Kurchin, an MIT graduate student and co-author of a paper describing the new process that appears this week in the journal Joule. Traditional methods “often require you to make a specialized sample, but that differs from an actual cell and may not be fully representative” of a real solar cell’s performance, she says.

For example, typical testing methods show the behavior of the “majority carriers,” the predominant particles or vacancies whose movement produces an electric current through a material. But in the case of photovoltaic (PV) materials, Kurchin explains, it is actually the minority carriers — those that are far less abundant in the material — that are the limiting factor in a device’s overall efficiency, and those are much more difficult to measure. In addition, typical procedures only measure the flow of current in one set of directions — within the plane of a thin-film material — whereas it’s up-down flow that is actually harnessed in a working solar cell. In many materials, that flow can be “drastically different,” making it critical to understand in order to properly characterize the material, she says.

“Historically, the rate of new materials development is slow — typically 10 to 25 years,” says Tonio Buonassisi, an associate professor of mechanical engineering at MIT and senior author of the paper. “One of the things that makes the process slow is the long time it takes to troubleshoot early-stage prototype devices,” he says. “Performing characterization takes time — sometimes weeks or months — and the measurements do not always have the necessary sensitivity to determine the root cause of any problems.”

So, Buonassisi says, “the bottom line is, if we want to accelerate the pace of new materials development, it is imperative that we figure out faster and more accurate ways to troubleshoot our early-stage materials and prototype devices.” And that’s what the team has now accomplished. They have developed a set of tools that can be used to make accurate, rapid assessments of proposed materials, using a series of relatively simple lab tests combined with computer modeling of the physical properties of the material itself, as well as additional modeling based on a statistical method known as Bayesian inference.

The system involves making a simple test device, then measuring its current output under different levels of illumination and different voltages, to quantify exactly how the performance varies under these changing conditions. These values are then used to refine the statistical model.

“After we acquire many current-voltage measurements [of the sample] at different temperatures and illumination intensities, we need to figure out what combination of materials and interface variables make the best fit with our set of measurements,” Buonassisi explains. “Representing each parameter as a probability distribution allows us to account for experimental uncertainty, and it also allows us to suss out which parameters are covarying.”

The Bayesian inference process allows the estimates of each parameter to be updated based on each new measurement, gradually refining the estimates and homing in ever closer to the precise answer, he says.
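That sequential refinement can be sketched with a toy example. Everything below is illustrative rather than the team's actual code: a one-parameter single-diode model stands in for the real multi-parameter device model, and the values of I0, IL, Vt, and the noise level are assumptions.

```python
import numpy as np

# Illustrative only: a one-parameter single-diode model standing in for
# the real multi-parameter device model. I0, IL, Vt, sigma are assumed.
def diode_current(V, n, I0=1e-9, IL=0.03, Vt=0.026):
    """Light-generated current minus diode loss, in amps."""
    return IL - I0 * (np.exp(V / (n * Vt)) - 1.0)

rng = np.random.default_rng(0)
true_n = 1.5                              # "unknown" ideality factor
V_meas = np.linspace(0.1, 0.6, 12)        # applied voltages
sigma = 2e-4                              # assumed measurement noise (A)
I_meas = diode_current(V_meas, true_n) + rng.normal(0.0, sigma, V_meas.size)

# Flat prior over a grid of candidate ideality factors.
n_grid = np.linspace(1.0, 2.0, 201)
log_post = np.zeros_like(n_grid)

# Each current-voltage measurement updates the posterior in turn.
for V, I in zip(V_meas, I_meas):
    pred = diode_current(V, n_grid)
    log_post += -0.5 * ((I - pred) / sigma) ** 2
    log_post -= log_post.max()            # rescale for numerical stability

posterior = np.exp(log_post)
posterior /= posterior.sum()
n_hat = n_grid[posterior.argmax()]
print(f"posterior mode for n: {n_hat:.3f}")
```

Each pass through the loop plays the role of one new measurement: the posterior narrows around the parameter value most consistent with everything seen so far, which is the "homing in" Buonassisi describes.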

In seeking a combination of materials for a particular kind of application, Kurchin says, “we put in all these materials properties and interface properties, and it will tell you what the output will look like.”

The system is simple enough that, even for materials that have been less well-characterized in the lab, “we’re still able to run this without tremendous computer overhead.” And, Kurchin says, making use of the computational tools to screen possible materials will be increasingly useful because “lab equipment has gotten more expensive, and computers have gotten cheaper. This method allows you to minimize your use of complicated lab equipment.”

The basic methodology, Buonassisi says, could be applied to a wide variety of different materials evaluations, not just solar cells — in fact, it may apply to any system that involves a computer model for the output of an experimental measurement. “For example, this approach excels in figuring out which material or interface property might be limiting performance, even for complex stacks of materials like batteries, thermoelectric devices, or composites used in tennis shoes or airplane wings.” And, he adds, “It is especially useful for early-stage research, where many things might be going wrong at once.”

Going forward, he says, “our vision is to link up this fast characterization method with the faster materials and device synthesis methods we’ve developed in our lab.” Ultimately, he says, “I’m very hopeful the combination of high-throughput computing, automation, and machine learning will help us accelerate the rate of novel materials development by more than a factor of five. This could be transformative, bringing the timelines for new materials-science discoveries down from 20 years to about three to five years.”

The research team also included Riley Brandt '11, SM '13, PhD '16; former postdoc Vera Steinmann; MIT graduate student Daniil Kitchaev; visiting professor Gerbrand Ceder; Chris Roat of Google Inc.; and Sergiu Levcenco and Thomas Unold of Helmholtz-Zentrum Berlin. The work was supported by a Google Faculty Research Award, the U.S. Department of Energy, and a Total research grant through the MIT Energy Initiative.
 
Credit : https://news.mit.edu/2017/new-technique-allows-rapid-screening-new-types-solar-cells-1220

Saturday, January 14, 2012

Why Alcohol Is Addicting: Endorphins in Brain



Drinking alcohol leads to the release of endorphins in areas of the brain that produce feelings of pleasure and reward, according to a study led by researchers at the Ernest Gallo Clinic and Research Center at the University of California, San Francisco (UCSF).
New research shows that drinking alcohol leads to the release of endorphins in areas of the brain that produce feelings of pleasure and reward. (Credit: iStockphoto)

The finding marks the first time that endorphin release in the nucleus accumbens and orbitofrontal cortex in response to alcohol consumption has been directly observed in humans.

Endorphins are small proteins with opiate-like effects that are produced naturally in the brain.

"This is something that we've speculated about for 30 years, based on animal studies, but haven't observed in humans until now," said lead author Jennifer Mitchell, PhD, clinical project director at the Gallo Center and an adjunct assistant professor of neurology at UCSF. "It provides the first direct evidence of how alcohol makes people feel good."

The discovery of the precise locations in the brain where endorphins are released provides a possible target for the development of more effective drugs for the treatment of alcohol abuse, said senior author Howard L. Fields, MD, PhD, a professor of neurology and Endowed Chair in Pharmacology of Addiction in Neurology at UCSF and director of human clinical research at the Gallo Center.

The study appears on January 11, 2012, in Science Translational Medicine.

The researchers used positron emission tomography, or PET imaging, to observe the immediate effects of alcohol in the brains of 13 heavy drinkers and 12 matched "control" subjects who were not heavy drinkers.

In all of the subjects, alcohol intake led to a release of endorphins. And, in all of the subjects, the more endorphins released in the nucleus accumbens, the greater the feelings of pleasure reported by each drinker.

In addition, the more endorphins released in the orbitofrontal cortex, the greater the feelings of intoxication in the heavy drinkers, but not in the control subjects.

"This indicates that the brains of heavy or problem drinkers are changed in a way that makes them more likely to find alcohol pleasant, and may be a clue to how problem drinking develops in the first place," said Mitchell. "That greater feeling of reward might cause them to drink too much."

Results Suggest Possible Approach to Treat Alcohol Abuse

Before drinking, the subjects were given injections of radioactively tagged carfentanil, an opiate-like drug that selectively binds to sites in the brain called opioid receptors, where endorphins also bind. As the radioactive carfentanil was bound and emitted radiation, the receptor sites "lit up" on PET imaging, allowing the researchers to map their exact locations.

The subjects were then each given a drink of alcohol, followed by a second injection of radioactive carfentanil, and scanned again with PET imaging. As the natural endorphins released by drinking were bound to the opioid receptor sites, they prevented the carfentanil from being bound. By comparing areas of radioactivity in the first and second PET images, the researchers were able to map the exact locations -- areas of lower radioactivity -- where endorphins were released in response to drinking.
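The image-comparison logic can be illustrated with synthetic arrays. This is a deliberate simplification: real PET analysis involves kinetic modeling and image registration, not a raw subtraction, and all numbers below are made up.

```python
import numpy as np

# Synthetic stand-in for the two PET scans; real analysis involves
# kinetic modeling and image registration, not a raw subtraction.
rng = np.random.default_rng(2)
baseline = rng.uniform(0.8, 1.2, (8, 8, 8))   # carfentanil binding, pre-drink
post_drink = baseline.copy()
post_drink[2:4, 2:4, 2:4] *= 0.6              # endorphins displace tracer here

# Endorphin release shows up as *lower* radioactivity in the second scan.
release_map = baseline - post_drink
hotspots = release_map > 0.1                  # threshold is arbitrary
print(int(hotspots.sum()), "voxels flagged")
```

The flagged voxels are exactly where tracer binding dropped between scans, mirroring how the researchers mapped endorphin release as areas of lower radioactivity.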

The researchers found that endorphins released in response to drinking bind to a specific type of opioid receptor, the Mu receptor.

This result suggests a possible approach to improving the efficacy of treatment for alcohol abuse through the design of better medications than naltrexone, said Fields, who collaborated with Mitchell in the design and analysis of the study.

Fields explained that naltrexone, which prevents binding at opioid receptor sites, is not widely accepted as a treatment for alcohol dependence -- "not because it isn't effective at reducing drinking, but because some people stop taking it because they don't like the way it makes them feel," he said.

"Naltrexone blocks more than one opioid receptor, and we need to know which blocking action reduces drinking and which causes the unwanted side effects," he said. "If we better understand how endorphins control drinking, we will have a better chance of creating more targeted therapies for substance addiction. This paper is a significant step in that direction because it specifically implicates the Mu opioid receptor in alcohol reward in humans."

Co-authors of the study are James P. O'Neill and Mustafa Janabi of Lawrence Berkeley Laboratory and Shawn M. Marks and William J. Jagust, MD, of LBL and the University of California, Berkeley.

The study was supported by funds from the Department of Defense and by State of California Funds for Research on Drug and Alcohol Abuse.

Wednesday, October 5, 2011

Electricity from the nose: Engineers make power from human respiration


The same piezoelectric effect that ignites your gas grill with the push of a button could one day power sensors in your body via the respiration in your nose.
Graduate Student Jian Shi and Materials Science and Engineering Assistant Professor Xudong Wang demonstrate a material that could be used to capture energy from respiration.

Writing in the September issue of the journal Energy and Environmental Science, Materials Science and Engineering Assistant Professor Xudong Wang, postdoctoral researcher Chengliang Sun and graduate student Jian Shi report creating a plastic microbelt that vibrates when passed by low-speed airflow such as human respiration.

In certain materials, such as the polyvinylidene fluoride (PVDF) used by Wang’s team, an electric charge accumulates in response to applied mechanical stress. This is known as the piezoelectric effect. The researchers engineered PVDF to generate sufficient electrical energy from respiration to operate small electronic devices.



“Basically, we are harvesting mechanical energy from biological systems. The airflow of normal human respiration is typically below about two meters per second,” says Wang. “We calculated that if we could make this material thin enough, small vibrations could produce a microwatt of electrical energy that could be useful for sensors or other devices implanted in the face.”
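A back-of-envelope check, not taken from the paper, shows why a microwatt is a plausible target: the kinetic power carried by an airflow of speed v through area A is P = ½ρAv³, and even a small capture area (the 1 cm² figure below is an assumption) in a 2 m/s flow carries far more than a microwatt.

```python
# Not from the paper: kinetic power in an airflow of speed v through
# area A is P = 0.5 * rho * A * v**3. The 1 cm^2 area is an assumption.
rho = 1.2        # air density, kg/m^3
v = 2.0          # respiration airflow speed quoted by Wang, m/s
A = 1e-4         # assumed capture area of 1 cm^2, in m^2

P_available = 0.5 * rho * A * v**3            # watts
print(f"available airflow power: {P_available * 1e6:.0f} microwatts")
```

At roughly 480 microwatts available, harvesting a single microwatt requires converting only a fraction of a percent of the airflow's kinetic energy.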

Researchers are taking advantage of advances in nanotechnology and miniaturized electronics to develop a host of biomedical devices that could monitor blood glucose for diabetics or keep a pacemaker battery charged so that it would not need replacing. What's needed to run these tiny devices is a minuscule power supply. Waste energy in the form of blood flow, motion, heat, or, in this case, respiration offers a consistent source of power.

Wang’s team used an ion-etching process to carefully thin material while preserving its piezoelectric properties. With improvements, he believes the thickness can be controlled down to the submicron level. Because PVDF is biocompatible, he says the development represents a significant advance toward creating a practical micro-scale device for harvesting energy from respiration.

Provided by University of Wisconsin-Madison

Tuesday, August 16, 2011

Scientists Have New Help Finding Their Way Around Brain's Nooks and Crannies


Like explorers mapping a new planet, scientists probing the brain need every type of landmark they can get. Each mountain, river or forest helps scientists find their way through the intricacies of the human brain.
Scientists have found a way to use MRI scanning data to map myelin, a white sheath that covers some brain cell branches. Such maps, previously only available via dissection, help scientists determine precisely where they are in the brain. Red and yellow indicate regions with high myelin levels; blue, purple and black areas have low myelin levels. (Credit: David Van Essen)

Researchers at Washington University School of Medicine in St. Louis have developed a new technique that provides rapid access to brain landmarks formerly only available at autopsy. Better brain maps will result, speeding efforts to understand how the healthy brain works and potentially aiding in future diagnosis and treatment of brain disorders, the researchers report in the Journal of Neuroscience Aug. 10.

The technique makes it possible for scientists to map myelination, or the degree to which branches of brain cells are covered by a white sheath known as myelin in order to speed up long-distance signaling. It was developed in part through the Human Connectome Project, a $30 million, five-year effort to map the brain's wiring. That project is headed by Washington University in St. Louis and the University of Minnesota.

"The brain is among the most complex structures known, with approximately 90 billion neurons transmitting information across 150 trillion connections," says David Van Essen, PhD, Edison Professor and head of the Department of Anatomy and Neurobiology at Washington University. "New perspectives are very helpful for understanding this complexity, and myelin maps will give us important insights into where certain parts of the brain end and others begin."

Easy access to detailed maps of myelination in humans and animals also will aid efforts to understand how the brain evolved and how it works, according to Van Essen.

Neuroscientists have known for more than a century that myelination levels differ throughout the cerebral cortex, the gray outer layer of the brain where most higher mental functions take place. Until now, though, the only way they could map these differences in detail was to remove the brain after death, slice it and stain it for myelin.

Washington University graduate student Matthew Glasser developed the new technique, which combines data from two types of magnetic resonance imaging (MRI) scans that have been available for years.



"These are standard ways of imaging brain anatomy that scientists and clinicians have used for a long time," Glasser says. "After developing the new technique, we applied it in a detailed analysis of archived brain scans from healthy adults."
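The article does not name the two scan types, but Glasser's published approach combines T1-weighted and T2-weighted volumes as a voxelwise ratio; the sketch below assumes that method and uses synthetic data in place of real scans.

```python
import numpy as np

# Assumption: the two scan types are T1- and T2-weighted volumes combined
# as a voxelwise ratio (Glasser's published method); data here is synthetic.
rng = np.random.default_rng(1)
shape = (4, 4, 4)
t1w = rng.uniform(0.5, 1.5, shape)  # T1-weighted intensities (myelin brightens)
t2w = rng.uniform(0.5, 1.5, shape)  # T2-weighted intensities (myelin darkens)

# Because myelin raises the T1w signal and lowers the T2w signal, the
# ratio amplifies myelin contrast while canceling intensity bias shared
# by both scans (coil sensitivity, scanner gain).
myelin_map = t1w / t2w
print(myelin_map.shape)
```

The appeal of the approach is exactly what Glasser notes: both inputs are routine clinical acquisitions, so the map can be computed from archived scans with no new imaging hardware.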

As in prior studies, Glasser's results show highest myelination levels in areas involved with early processing of information from the eyes and other sensory organs and control of movement. Many brain cells are packed into these regions, but the connections among the cells are less complex. Scientists suspect that these brain regions rely heavily on what computer scientists call parallel processing: Instead of every cell in the region working together on a single complex problem, multiple separate teams of cells work simultaneously on different parts of the problem.

Areas with less myelin include brain regions linked to speech, reasoning and use of tools. These regions have brain cells that are packed less densely, because individual cells are larger and have more complex connections with neighboring cells.

"It's been widely hypothesized that each chunk of the cerebral cortex is made up of very uniform information-processing machinery," Van Essen says. "But we're now adding to a picture of striking regional differences that are important for understanding how the brain works."

According to Van Essen, the technique will make it possible for the Connectome project to rapidly map myelination in many different research participants. Data on many subjects, acquired through many different analytical techniques including myelination mapping, will help the resulting maps cover the range of anatomic variation present in humans.

"Our colleagues are clamoring to make use of this approach because it's so helpful for figuring out where you are in the cortex, and the data are either already there or can be obtained in less than 10 minutes of MRI scanning," Glasser says.

This research was funded by the National Institutes of Health (NIH).

Thursday, July 28, 2011

Scientists Discover Tipping Point for the Spread of Ideas


Scientists at Rensselaer Polytechnic Institute have found that when just 10 percent of the population holds an unshakable belief, their belief will always be adopted by the majority of the society. The scientists, who are members of the Social Cognitive Networks Academic Research Center (SCNARC) at Rensselaer, used computational and analytical methods to discover the tipping point where a minority belief becomes the majority opinion. The finding has implications for the study and influence of societal interactions ranging from the spread of innovations to the movement of political ideals.

In this visualization, we see the tipping point where minority opinion (shown in red) quickly becomes majority opinion. Over time, the minority opinion grows. Once the minority opinion reaches 10 percent of the population, the network quickly changes as the minority opinion takes over the original majority opinion (shown in green). (Credit: SCNARC/Rensselaer Polytechnic Institute)

"When the number of committed opinion holders is below 10 percent, there is no visible progress in the spread of ideas. It would literally take the amount of time comparable to the age of the universe for this size group to reach the majority," said SCNARC Director Boleslaw Szymanski, the Claire and Roland Schmitt Distinguished Professor at Rensselaer. "Once that number grows above 10 percent, the idea spreads like flame."

As an example, the ongoing events in Tunisia and Egypt appear to exhibit a similar process, according to Szymanski. "In those countries, dictators who were in power for decades were suddenly overthrown in just a few weeks."

The findings were published in the July 22, 2011, early online edition of the journal Physical Review E in an article titled "Social consensus through the influence of committed minorities."

An important aspect of the finding is that the percent of committed opinion holders required to shift majority opinion does not change significantly regardless of the type of network in which the opinion holders are working. In other words, the percentage of committed opinion holders required to influence a society remains at approximately 10 percent, regardless of how or where that opinion starts and spreads in the society.

To reach their conclusion, the scientists developed computer models of various types of social networks. One of the networks had each person connect to every other person in the network. The second model included certain individuals who were connected to a large number of people, making them opinion hubs or leaders. The final model gave every person in the model roughly the same number of connections. The initial state of each of the models was a sea of traditional-view holders. Each of these individuals held a view, but were also, importantly, open minded to other views.

Once the networks were built, the scientists then "sprinkled" in some true believers throughout each of the networks. These people were completely set in their views and unflappable in modifying those beliefs. As those true believers began to converse with those who held the traditional belief system, the tides gradually and then very abruptly began to shift.



"In general, people do not like to have an unpopular opinion and are always seeking to try locally to come to consensus. We set up this dynamic in each of our models," said SCNARC Research Associate and corresponding paper author Sameet Sreenivasan. To accomplish this, each of the individuals in the models "talked" to each other about their opinion. If the listener held the same opinions as the speaker, it reinforced the listener's belief. If the opinion was different, the listener considered it and moved on to talk to another person. If that person also held this new belief, the listener then adopted that belief.

"As agents of change start to convince more and more people, the situation begins to change," Sreenivasan said. "People begin to question their own views at first and then completely adopt the new view to spread it even further. If the true believers just influenced their neighbors, that wouldn't change anything within the larger system, as we saw with percentages less than 10."
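A minimal simulation of the listener-only dynamics described above can make the tipping point concrete. The node count, step count, complete-graph topology, and committed fraction below are illustrative choices, not the paper's exact setup.

```python
import random

# Illustrative setup: complete graph, listener-only updates.
def simulate(n=100, committed_frac=0.20, steps=100_000, seed=42):
    random.seed(seed)
    n_committed = int(n * committed_frac)
    # Committed believers (indices < n_committed) hold only 'A' and never
    # change; everyone else starts holding only the traditional view 'B'.
    opinions = [{'A'} for _ in range(n_committed)] + \
               [{'B'} for _ in range(n - n_committed)]
    for _ in range(steps):
        speaker, listener = random.sample(range(n), 2)
        spoken = random.choice(sorted(opinions[speaker]))
        if listener < n_committed:
            continue                              # committed: unshakable
        if spoken in opinions[listener]:
            opinions[listener] = {spoken}         # agreement: settle on it
        else:
            opinions[listener] = opinions[listener] | {spoken}  # keep both
    return opinions

ops = simulate()
frac_b_only = sum(o == {'B'} for o in ops) / len(ops)
print(f"fraction still holding only the original view: {frac_b_only:.2f}")
```

With 20 percent committed believers, the run ends with essentially no one holding only the original opinion; dropping committed_frac below 0.1 leaves the original majority largely intact over the same number of steps, matching the slow-spread regime Szymanski describes.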

The research has broad implications for understanding how opinion spreads. "There are clearly situations in which it helps to know how to efficiently spread some opinion or how to suppress a developing opinion," said Associate Professor of Physics and co-author of the paper Gyorgy Korniss. "Some examples might be the need to quickly convince a town to move before a hurricane or spread new information on the prevention of disease in a rural village."

The researchers are now looking for partners within the social sciences and other fields to compare their computational models to historical examples. They are also looking to study how the percentage might change when input into a model where the society is polarized. Instead of simply holding one traditional view, the society would instead hold two opposing viewpoints. An example of this polarization would be Democrat versus Republican.

The research was funded by the Army Research Laboratory (ARL) through SCNARC, part of the Network Science Collaborative Technology Alliance (NS-CTA), the Army Research Office (ARO), and the Office of Naval Research (ONR).

The research is part of a much larger body of work taking place under SCNARC at Rensselaer. The center joins researchers from a broad spectrum of fields -- including sociology, physics, computer science, and engineering -- in exploring social cognitive networks. The center studies the fundamentals of network structures and how those structures are altered by technology. The goal of the center is to develop a deeper understanding of networks and a firm scientific basis for the newly arising field of network science. More information on the launch of SCNARC can be found at http://news.rpi.edu/update.do?artcenterkey=2721&setappvar=page(1)

Szymanski, Sreenivasan, and Korniss were joined in the research by Professor of Mathematics Chjan Lim, and graduate students Jierui Xie (first author) and Weituo Zhang.

Thursday, July 7, 2011

Solar Cells that See Red


Metamaterials that convert lower-energy photons to usable wavelengths could offer solar cells an efficiency boost.
Light switch: In a process that could make solar cells more efficient, green laser light is "upconverted" to blue light by a solution of dyes and metal nanoparticles. Credit: Jennifer Dionne

Researchers at Stanford University have demonstrated a set of materials that could enable solar cells to use a band of the solar spectrum that otherwise goes to waste. The materials layered on the back of solar cells would convert red and near-infrared light—unusable by today's solar cells—into shorter-wavelength light that the cells can turn into energy. The university researchers will collaborate with the Bosch Research and Technology Center in Palo Alto, California, to demonstrate a system in working solar cells in the next four years.

Even the best of today's silicon solar cells can't use about 30 percent of the light from the sun: that's because the active materials in solar cells can't interact with photons whose energy is too low. But though each of these individual photons is low energy, as a whole they represent a large amount of untapped solar energy that could make solar cells more cost-competitive.

The process, called "upconversion," relies on pairs of dyes that absorb photons of a given wavelength and re-emit them as fewer, shorter-wavelength photons. In this case, the Bosch and Stanford researchers will work on systems that convert near-infrared wavelengths (most of which are unusable by today's solar cells). The leader of the Stanford group, assistant professor Jennifer Dionne, believes the group can improve the sunlight-to-electricity conversion efficiency of amorphous-silicon solar cells from 11 percent to 15 percent.
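The energy bookkeeping behind upconversion is simple to check: a photon's energy in electron-volts is roughly 1240 divided by its wavelength in nanometers, so pooling two long-wavelength photons ideally yields one photon at half the wavelength. The 980 nm input below is an assumed example, not a figure from the article.

```python
# Assumed numbers for illustration; photon energy in eV is about
# 1240 / wavelength_nm (from E = hc / lambda).
def photon_energy_ev(wavelength_nm):
    return 1240.0 / wavelength_nm

e_combined = 2 * photon_energy_ev(980.0)   # two near-infrared photons pooled
lam_out = 1240.0 / e_combined              # ideal lossless upconversion
print(f"upconverted output: {lam_out:.0f} nm")   # 490 nm, blue-green light
```

This is why the demonstration photo shows green light emerging as blue: the output photon carries the combined energy of the two inputs, pushing it to a shorter wavelength that a silicon cell can absorb.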



The concept of upconversion isn't new, but it has never been demonstrated in a working solar cell, says Inna Kozinsky, a senior engineer at Bosch. Upconversion typically requires two types of molecules that absorb relatively long-wavelength photons, combine their energy, and re-emit it as higher-energy, shorter-wavelength photons. However, the chances of the molecules encountering each other at the right time, while in the right energetic states, are low. Dionne is developing metal nanoparticles that act like tiny optical antennas to increase those chances: they direct more light onto the dyes at the right time, creating more upconverted light, and then direct more of that upconverted light out of the system.

The ultimate vision, says Dionne, is to create a solid upconverting material. Sheets of such a material could be laid down on the bottom of the cell, separated from the cell itself by an electrically insulating layer. Long-wavelength photons that pass through the active layer would be absorbed by the upconverter layer, then re-emitted back into the active layer as usable, shorter-wavelength light.

Kozinsky says Bosch's goal is to demonstrate upconversion of red light in working solar cells in three years, and upconversion of infrared light in four years. Factoring in the time needed to scale up to manufacturing, she says, the technology could be in Bosch's commercial solar cells in seven to 10 years.

Saturday, July 2, 2011

Auto-pilots need a bird's-eye view



New research on how birds can fly so quickly and accurately through dense forests may lead to new developments in robotics and auto-pilots.
The pigeons were fitted with a tiny head-camera before they flew through the artificial forest. Credit: Talia Moore

Scientists from Harvard University trained pigeons to fly through an artificial forest with a tiny camera attached to their heads, literally giving a bird's-eye view. "Attaching the camera to the bird as well as filming them from either side means we can reconstruct both what the bird sees and how it moves," says Dr. Huai-Ti Lin, a lead researcher on this work who has special insight into flying, as he is a remote-control airplane pilot himself.

The methods pigeons use to navigate through difficult environments could be used as a model for auto-pilot technology. Pigeons, with panoramic vision of more than 300 degrees, are well suited to this task because this wrap-around vision allows them to assess obstacles on either side. They can also stabilise their vision and switch rapidly between views using what is called a "head saccade," a small, rapid movement of the head.
This image shows a pigeon, fitted with a camera, about to fly through the artificial forest that can be seen in the background. Credit: Talia Moore

This research is being presented at the Society for Experimental Biology annual conference in Glasgow on the 1st of July, 2011.

The researchers also showed that the birds have other skills that would be important for auto-piloted machines; for example, they tend to choose the straightest routes. "This is a very efficient way of getting through the forest, because the birds make fewer turns and therefore use less energy, and also because they reach the other side quicker," says Dr. Lin. "Another interesting finding is that pigeons seem to exit the forest heading in exactly the same direction as when they entered, in spite of all the twists and turns they made in the forest."

When using a robot or an unmanned aircraft, it would be invaluable to simply provide it with the coordinates of the destination without having to give it detailed information about all the obstacles it might meet on the way. "If we could develop the technology to follow the same methods as birds, we could let the robot get on with it without giving it any more input," says Dr. Lin.

Provided by Society for Experimental Biology

Friday, July 1, 2011

How Social Pressure Can Affect What We Remember: Scientists Track Brain Activity as False Memories Are Formed



How easy is it to falsify memory? New research at the Weizmann Institute shows that a bit of social pressure may be all that is needed. The study, which appears in the journal Science, reveals a unique pattern of brain activity when false memories are formed -- one that hints at a surprising connection between our social selves and memory.
New research reveals a unique pattern of brain activity when false memories are formed. (Credit: Image courtesy of Weizmann Institute of Science)

The experiment, conducted by Prof. Yadin Dudai and research student Micah Edelson of the Institute's Neurobiology Department with Prof. Raymond Dolan and Dr. Tali Sharot of University College London, took place in four stages. In the first, volunteers watched a documentary film in small groups. Three days later, they returned to the lab individually to take a memory test, answering questions about the film. They were also asked how confident they were in their answers.

They were later invited back to the lab to retake the test while being scanned in a functional MRI (fMRI) scanner that revealed their brain activity. This time, the subjects were also given a "lifeline": the supposed answers of the others in their film-viewing group (along with social-media-style photos). Planted among these were false answers to questions the volunteers had previously answered correctly and confidently. The participants conformed to the group on these "planted" responses, giving incorrect answers nearly 70% of the time.

But were they simply conforming to perceived social demands, or had their memory of the film actually undergone a change? To find out, the researchers invited the subjects back to the lab to take the memory test once again, telling them that the answers they had previously been fed were not those of their fellow film watchers, but random computer generations. Some of the responses reverted back to the original, correct ones, but close to half remained erroneous, implying that the subjects were relying on false memories implanted in the earlier session.

An analysis of the fMRI data showed differences in brain activity between the persistent false memories and the temporary errors of social compliance. The most outstanding feature of the false memories was a strong co-activation and connectivity between two brain areas: the hippocampus and the amygdala. The hippocampus is known to play a role in long-term memory formation, while the amygdala, sometimes known as the emotion center of the brain, plays a role in social interaction. The scientists think that the amygdala may act as a gateway connecting the social and memory processing parts of our brain; its "stamp" may be needed for some types of memories, giving them approval to be uploaded to the memory banks. Thus social reinforcement could act on the amygdala to persuade our brains to replace a strong memory with a false one.


Prof. Yadin Dudai's research is supported by the Norman and Helen Asher Center for Human Brain Imaging, which he heads; the Nella and Leon Benoziyo Center for Neurological Diseases; the Carl and Micaela Einhorn-Dominic Institute of Brain Research, which he heads; the Marc Besen and the Pratt Foundation, Australia; Lisa Mierins Smith, Canada; Abe and Kathryn Selsky Memorial Research Project; and Miel de Botton, UK. Prof. Dudai is the incumbent of the Sara and Michael Sela Professorial Chair of Neurobiology.

Thursday, June 30, 2011

Researchers can predict future actions from human brain activity


Bringing the real world into the brain scanner, researchers at The University of Western Ontario from The Centre for Brain and Mind can now determine the action a person was planning, mere moments before that action is actually executed.
A volunteer completes tasks while in the functional magnetic resonance imaging (fMRI) machine. This research project focuses on understanding how the human brain plans actions.

The findings were published this week in the prestigious Journal of Neuroscience, in the paper, "Decoding Action Intentions from Preparatory Brain Activity in Human Parieto-Frontal Networks."



"This is a considerable step forward in our understanding of how the human brain plans actions," says Jason Gallivan, a Western Neuroscience PhD student, who was the first author on the paper.

University of Western Ontario researchers Jody Culham and Jason Gallivan describe how they can use fMRI to determine the action a person was planning, mere moments before that action is actually executed. Credit: The University of Western Ontario

Over the course of the one-year study, human subjects had their brain activity scanned using functional magnetic resonance imaging (fMRI) while they performed one of three hand movements: grasping the top of an object, grasping the bottom of the object, or simply reaching out and touching the object. The team found that by using the signals from many brain regions, they could predict, better than chance, which of the actions the volunteer was merely intending to do, seconds later.
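The "better than chance" prediction is, at heart, pattern classification over many voxels. The toy sketch below illustrates the idea with synthetic data and a simple nearest-centroid rule; it stands in for, and is much simpler than, the study's actual fMRI features and decoding methods:

```python
# Toy "decoder": classify which of three planned actions a pattern of
# voxel activity corresponds to, using a nearest-centroid rule.
# All data here are synthetic; the real study used fMRI preparatory signals.
import random

random.seed(0)
ACTIONS = ["grasp_top", "grasp_bottom", "touch"]
N_VOXELS = 50

# Each action gets a hidden, characteristic activity pattern.
prototypes = {a: [random.gauss(0, 1) for _ in range(N_VOXELS)] for a in ACTIONS}

def trial(action, noise=1.0):
    """Simulate one trial: the action's prototype plus measurement noise."""
    return [v + random.gauss(0, noise) for v in prototypes[action]]

def train_centroids(trials):
    """Average the training trials for each action."""
    cents = {}
    for a in ACTIONS:
        rows = [x for lbl, x in trials if lbl == a]
        cents[a] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def predict(centroids, x):
    """Assign x to the action whose centroid is nearest."""
    def dist(c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return min(ACTIONS, key=lambda a: dist(centroids[a]))

train = [(a, trial(a)) for a in ACTIONS for _ in range(20)]
test = [(a, trial(a)) for a in ACTIONS for _ in range(20)]
cents = train_centroids(train)
accuracy = sum(predict(cents, x) == a for a, x in test) / len(test)
print(accuracy)  # well above the 1/3 chance level
```

Because each action leaves a distinct multi-voxel signature, even this crude classifier beats the one-in-three chance level, which is the same logic that lets the researchers claim above-chance prediction of intentions.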


"Neuroimaging allows us to look at how action planning unfolds within human brain areas without having to insert electrodes directly into the human brain. This is obviously far less intrusive," explains Western Psychology professor Jody Culham, who was the paper's senior author.


Gallivan says the new findings could also have important clinical implications: "Being able to predict a human's desired movements using brain signals takes us one step closer to using those signals to control prosthetic limbs in movement-impaired patient populations, like those who suffer from spinal cord injuries or locked-in syndrome."

Video: brain timecourse of a subject's fMRI image during the experiment

Provided by University of Western Ontario

Tuesday, June 21, 2011

Husband's employment status threatens marriage, but wife's does not, study finds



A new study of employment and divorce suggests that while social pressure discouraging women from working outside the home has weakened, pressure on husbands to be breadwinners largely remains.

The research, led by Liana Sayer of Ohio State University and forthcoming in the American Journal of Sociology, was designed to show how employment status influences both men's and women's decisions to end a marriage.

According to the study, a woman's employment status has no effect on the likelihood that her husband will opt to leave the marriage. An employed woman is more likely to initiate a divorce than a woman who is not employed, but only when she reports being highly unsatisfied with the marriage.

The results for male employment status on the other hand were far more surprising.

For a man, not being employed not only increases the chances that his wife will initiate divorce, but also that he will be the one who opts to leave. Even men who are relatively happy in their marriages are more likely to leave if they are not employed, the research found.



Taken together, the findings suggest an "asymmetric" change in traditional gender roles in marriage, the researchers say.

That men who are not employed, regardless of their marital satisfaction, are more likely to initiate divorce suggests that a marriage in which the man does not work "does not look like what [men] think a marriage is supposed to," the researchers write. In contrast, women's employment alone does not encourage divorce initiated by either party. That implies that a woman's choice to enter the workforce is not a violation of any marriage norms. Rather, being employed merely provides financial security that enables a woman to leave when all else fails.

"These effects probably emanate from the greater change in women's than men's roles," the researchers write. "Women's employment has increased and is accepted, men's nonemployment is unacceptable to many, and there is a cultural ambivalence and lack of institutional support for men taking on 'feminized' roles such as household work and emotional support."

The research used data on over 3,600 couples taken from three waves of the National Survey of Families and Households. Waves were conducted in 1987-88, 1992-94, and 2001-02.

More information: Liana C. Sayer, Paula England, Paul Allison, and Nicole Kangas, "She Left, He Left: How Employment and Satisfaction Affect Men's and Women's Decisions to Leave Marriages." American Journal of Sociology 116:6 (May 2011).


Wednesday, May 11, 2011

New Insect Repellant May Be Thousands of Times Stronger Than DEET



Imagine an insect repellant that not only is thousands of times more effective than DEET -- the active ingredient in most commercial mosquito repellants -- but also works against all types of insects, including flies, moths and ants.
Anopheles mosquito, which can spread malaria. (Credit: © Kletr / Fotolia)

That possibility has been created by the discovery of a new class of insect repellant made in the laboratory of Vanderbilt Professor of Biological Sciences and Pharmacology Laurence Zwiebel and reported this week in the online Early Edition of the Proceedings of the National Academy of Sciences.

"It wasn't something we set out to find," said David Rinker, a graduate student who performed the study in collaboration with graduate student Gregory Pask and post-doctoral fellow Patrick Jones. "It was an anomaly that we noticed in our tests."

The tests were conducted as part of a major interdisciplinary research project to develop new ways to control the spread of malaria by disrupting a mosquito's sense of smell. The project is supported by the Grand Challenges in Global Health Initiative, funded by the Foundation for the NIH through a grant from the Bill & Melinda Gates Foundation.

"It's too soon to determine whether this specific compound can act as the basis of a commercial product," Zwiebel cautioned. "But it is the first of its kind and, as such, can be used to develop other similar compounds that have characteristics appropriate for commercialization."

The discovery of this new class of repellant is based on insights that scientists have gained about the basic nature of the insect's sense of smell in the last few years. Although the mosquito's olfactory system is housed in its antennae, 10 years ago biologists thought that it worked in the same way at the molecular level as it does in mammals. A family of special proteins called odorant receptors, or ORs, sits on the surface of nerve cells in the nose of mammals and in the antennae of mosquitoes. When these receptors come into contact with smelly molecules, they trigger the nerves signaling the detection of specific odors.

In the last few years, however, scientists have been surprised to learn that the olfactory system of mosquitoes and other insects is fundamentally different. In the insect system, conventional ORs do not act autonomously. Instead, they form a complex with a unique co-receptor (called Orco) that is also required to detect odorant molecules. ORs are spread all over the antennae and each responds to a different odor. To function, however, each OR must be connected to an Orco.

"Think of an OR as a microphone that can detect a single frequency," Zwiebel said. "On her antenna the mosquito has dozens of types of these microphones, each tuned to a specific frequency. Orco acts as the switch in each microphone that tells the brain when there is a signal. When a mosquito smells an odor, the microphone tuned to that smell will turn "on" its Orco switch. The other microphones remain off. However, by stimulating Orco directly we can turn them all on at once. This would effectively overload the mosquito's sense of smell and shut down her ability to find blood."
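Zwiebel's microphone analogy can be restated as a tiny state model. The sketch below is purely illustrative: the odor names and the all-or-nothing activation are simplifying assumptions, not details from the study.

```python
# Minimal model of the OR-Orco analogy: each OR is a "microphone"
# tuned to one odor, and Orco is the switch that reports a signal.
# An Orco agonist such as VUAA1 flips every switch at once.

class ORComplex:
    def __init__(self, tuned_odor):
        self.tuned_odor = tuned_odor  # the one odor this OR detects

    def responds(self, stimulus, orco_agonist=False):
        # An Orco agonist activates the complex regardless of tuning;
        # otherwise only the matching odor triggers a response.
        return orco_agonist or stimulus == self.tuned_odor

# A toy antenna with three receptor complexes (odor names are invented).
antenna = [ORComplex(o) for o in ["CO2", "lactic_acid", "octenol"]]

# Normal smelling: one odor lights up only its matching receptor.
normal = [r.responds("CO2") for r in antenna]

# VUAA1-like stimulation: every receptor fires, swamping the signal.
swamped = [r.responds(None, orco_agonist=True) for r in antenna]
print(normal, swamped)
```

The point of the model is the contrast between the two outputs: a real odor produces a sparse, informative pattern, while direct Orco stimulation saturates every channel at once, which is what would overwhelm the mosquito's sense of smell.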

Because the researchers couldn't predict what chemicals might modulate OR-Orco complexes, they decided to "throw the kitchen sink" at the problem. Through their affiliation with Vanderbilt's Institute of Chemical Biology, they gained access to Vanderbilt's high throughput screening facility, a technology intended for the drug discovery process, not for the screening of insect ORs.

Jones used genetic engineering techniques to insert mosquito odorant receptors into the human embryonic kidney cells used in the screening process. Rinker tested these cells against a commercial library of 118,000 small molecules normally used in drug development. They expected to find, and did find, a number of compounds that triggered a response in the conventional mosquito ORs they were screening. But they were surprised to find one compound that consistently triggered OR-Orco complexes, leading them to conclude that they had discovered the first molecule that directly stimulates the Orco co-receptor. They have named the compound VUAA1.

Although it is not an odorant molecule, the researchers determined that VUAA1 activates insect OR-Orco complexes in a manner similar to a typical odorant molecule. Jones also verified that mosquitoes respond to exposure to VUAA1, a crucial step in demonstrating that VUAA1 can affect a mosquito's behavior.

"If a compound like VUAA1 can activate every mosquito OR at once, then it could overwhelm the insect's sense of smell, creating a repellant effect akin to stepping onto an elevator with someone wearing too much perfume, except this would be far worse for the mosquito," Jones said.

The researchers have just begun behavioral studies with the compound. In preliminary tests with mosquitoes, they have found that VUAA1 is thousands of times more effective than DEET.

They have also established that the compound stimulates the OR-Orco complexes of flies, moths and ants. As a result, "VUAA1 opens the door for the development of an entirely new class of agents, which could be used not only to disrupt disease vectors, but also the nuisance insects in your backyard or the agricultural pests in your crops," Jones said.

Many questions must be answered before VUAA1 can be considered for commercial applications. Zwiebel's team is currently working with researchers in Vanderbilt's Drug Discovery Program to pare away the parts of VUAA1 that don't contribute to its activity. Once that is done, they will begin testing its toxicity.

Vanderbilt University has filed for a patent on this class of compounds and is talking with potential corporate licensees interested in incorporating them into commercial products, with special focus on development of products to reduce the spread of malaria in the developing world.

Friday, April 29, 2011

Microsleep: Brain Regions Can Take Short Naps During Wakefulness, Leading to Errors



If you've ever lost your keys or stuck the milk in the cupboard and the cereal in the refrigerator, you may have been the victim of a tired brain region that was taking a quick nap.
A photo of rats with objects introduced into their cages to keep them awake. (Credit: Giulio Tononi, M.D., Ph.D., University of Wisconsin-Madison)

Researchers at the University of Wisconsin-Madison have a new explanation. They've found that some nerve cells in a sleep-deprived yet awake brain can briefly go "off line," into a sleep-like state, while the rest of the brain appears awake.

"Even before you feel fatigued, there are signs in the brain that you should stop certain activities that may require alertness," says Dr. Chiara Cirelli, professor of psychiatry at the School of Medicine and Public Health. "Specific groups of neurons may be falling asleep, with negative consequences on performance."

Until now, scientists thought that sleep deprivation generally affected the entire brain. Electroencephalograms (EEGs) show network brain-wave patterns typical of either being asleep or awake.

"We know that when we are sleepy, we make mistakes, our attention wanders and our vigilance goes down," says Cirelli. "We have seen with EEGs that even while we are awake, we can experience short periods of 'micro sleep.'"

Periods of micro sleep were thought to be the most likely cause of people falling asleep at the wheel while driving, Cirelli says.

But the new research found that even before that stage, brains are already showing sleep-like activity that impairs them, she says.

As reported in the current issue of Nature, the researchers inserted probes into specific groups of neurons in the brains of freely behaving rats. After the rats were kept awake for prolonged periods, the probes showed areas of "local sleep" despite the animals' appearance of being awake and active.

"Even when some neurons went off line, the overall EEG measurements of the brain indicated wakefulness in the rats," Cirelli says.

And there were behavioral consequences to the local sleep episodes.

"When we prolonged the awake period, we saw the rats start to make mistakes," Cirelli says.

When animals were challenged to do a tricky task, such as reaching with one paw to get a sugar pellet, they began to drop the pellets or miss in reaching for them, indicating that a few neurons might have gone off line.

"This activity happened in few cells," Cirelli adds. "For instance, out of 20 neurons we monitored in one experiment, 18 stayed awake. From the other two, there were signs of sleep -- brief periods of activity alternating with periods of silence."

The researchers tested only motor tasks, so they concluded from this study that neurons affected by local sleep are in the motor cortex.

Sunday, April 24, 2011

Functioning Synapse Created Using Carbon Nanotubes: Devices Might Be Used in Brain Prostheses or Synthetic Brains



Engineering researchers at the University of Southern California have made a significant breakthrough in the use of nanotechnologies for the construction of a synthetic brain. They have built a carbon nanotube synapse circuit whose behavior in tests reproduces the function of a neuron, the building block of the brain.
This image shows nanotubes used in the synthetic synapse and the apparatus used to create them. (Credit: USC Viterbi School of Engineering)

The team, which was led by Professor Alice Parker and Professor Chongwu Zhou in the USC Viterbi School of Engineering Ming Hsieh Department of Electrical Engineering, used an interdisciplinary approach combining circuit design with nanotechnology to address the complex problem of capturing brain function.

In a paper published in the proceedings of the IEEE/NIH 2011 Life Science Systems and Applications Workshop in April 2011, the Viterbi team detailed how they were able to use carbon nanotubes to create a synapse.

Carbon nanotubes are molecular carbon structures that are extremely small, with a diameter a million times smaller than a pencil point. These nanotubes can be used in electronic circuits, acting as metallic conductors or semiconductors.

"This is a necessary first step in the process," said Parker, who began looking at the possibility of developing a synthetic brain in 2006. "We wanted to answer the question: Can you build a circuit that would act like a neuron? The next step is even more complex. How can we build structures out of these circuits that mimic the function of the brain, which has 100 billion neurons and 10,000 synapses per neuron?"
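A quick calculation makes plain the scale that Parker's figures imply:

```python
# Scale of the task implied by Parker's figures:
# ~100 billion neurons, each with ~10,000 synapses.
neurons = 100e9
synapses_per_neuron = 10_000
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e} synapses to reproduce in circuitry")
```

That is on the order of a quadrillion synapse circuits, which is why Parker stresses that a full synthetic brain is decades away.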

Parker emphasized that the actual development of a synthetic brain, or even of a functional brain area, is decades away, and she said the next hurdle for the research centers on reproducing brain plasticity in the circuits.

The human brain continually produces new neurons, makes new connections and adapts throughout life, and creating this process through analog circuits will be a monumental task, according to Parker.

She believes the ongoing research of understanding the process of human intelligence could have long-term implications for everything from developing prosthetic nanotechnology that would heal traumatic brain injuries to developing intelligent, safe cars that would protect drivers in bold new ways.

For Jonathan Joshi, a USC Viterbi Ph.D. student who is a co-author of the paper, the interdisciplinary approach to the problem was key to the initial progress. Joshi said that working with Zhou and his group of nanotechnology researchers provided the ideal dynamic of circuit technology and nanotechnology.

"The interdisciplinary approach is the only approach that will lead to a solution. We need more than one type of engineer working on this solution," said Joshi. "We should constantly be in search of new technologies to solve this problem."