
Friday, September 24, 2010

Brain Coprocessors: The need for operating systems to help brains and machines work together


The last few decades have seen a surge of invention of technologies that enable the observation or perturbation of information in the brain. Functional MRI, which measures blood flow changes associated with brain activity, is being explored for purposes as diverse as lie detection, prediction of human decision making, and assessment of language recovery after stroke.

Implanted electrical stimulators, which enable control of neural circuit activity, are borne by hundreds of thousands of people to treat conditions such as deafness, Parkinson's disease, and obsessive-compulsive disorder. And new methods, such as the use of light to activate or silence specific neurons in the brain, are being widely utilized by researchers to reveal insights into how to control neural circuits to achieve therapeutically useful changes in brain dynamics. We are entering a neurotechnology renaissance, in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate.

This toolbox has grown to the point where the strategic utilization of multiple neurotechnologies in conjunction with one another, as a system, may yield fundamental new capabilities, both scientific and clinical, beyond what they can offer alone. For example, consider a system that reads out activity from a brain circuit, computes a strategy for controlling the circuit so it enters a desired state or performs a specific computation, and then delivers information into the brain to achieve this control strategy. Such a system would enable brain computations to be guided by predefined goals set by the patient or clinician, or adaptively steered in response to the circumstances of the patient's environment or the instantaneous state of the patient's brain.

Some examples of this kind of "brain coprocessor" technology are under active development, such as systems that perturb the epileptic brain when a seizure is electrically observed, and prosthetics for amputees that record nerves to control artificial limbs and stimulate nerves to provide sensory feedback. Looking down the line, such system architectures might be capable of very advanced functions--providing just-in-time information to the brain of a patient with dementia to augment cognition, or sculpting the risk-taking profile of an addiction patient in the presence of stimuli that prompt cravings.

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an "operating system" that defines how the overall system works as a unified whole--analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
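The common-interface idea above can be sketched in code. The interfaces below are hypothetical, minimal illustrations (no such standard exists yet): each component technology implements a shared contract, so a new recorder or stimulator could be dropped into the read-compute-stimulate loop without rewriting the other components.

```python
from abc import ABC, abstractmethod

# Hypothetical component contracts for a brain-coprocessor "operating
# system". Any recorder, controller, or stimulator implementing these
# interfaces can be composed into the same closed loop.

class Recorder(ABC):
    @abstractmethod
    def read(self) -> list[float]:
        """Return the latest samples of observed neural activity."""

class Controller(ABC):
    @abstractmethod
    def compute(self, samples: list[float]) -> float:
        """Map observed activity to a control command."""

class Stimulator(ABC):
    @abstractmethod
    def write(self, command: float) -> None:
        """Deliver a perturbation to the target circuit."""

def coprocessor_step(recorder: Recorder, controller: Controller,
                     stimulator: Stimulator) -> float:
    """One read-compute-stimulate cycle of the closed loop."""
    samples = recorder.read()
    command = controller.compute(samples)
    stimulator.write(command)
    return command
```

Swapping in a new recording technology then means writing one new `Recorder` subclass, leaving the controller and stimulator untouched, which is exactly the decoupling the operating-system analogy is after.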

Developing such brain coprocessor architectures would take some work--in particular, it would require technologies standardized enough, or perhaps open enough, to be interoperable in a variety of combinations. Nevertheless, much could be learned from developing relatively simple prototype systems. For example, recording technologies by themselves can report brain activity, but cannot fully attest to the causal contribution that the observed brain activity makes to a specific behavioral or clinical outcome; control technologies can input information into neural targets, but by themselves their outcomes might be difficult to interpret due to endogenous neural information and unobserved neural processing. These scientific issues can be disambiguated by rudimentary brain coprocessors, built with readily available off-the-shelf components, that use recording technologies to assess how a given neural circuit perturbation alters brain dynamics. Such explorations may begin to reveal principles governing how best to control a circuit--revealing the neural targets and control strategies that most efficaciously lead to a goal brain state or behavioral effect, and thus pointing the way to new therapeutic strategies. Miniature, implantable brain coprocessors might be able to support new kinds of personalized medicine, for example continuously adapting a neural control strategy to the goals, state, environment, and history of an individual patient--important powers, given the dynamic nature of many brain disorders.

In the future, the computational module of a brain coprocessor may be powerful enough to assist in high-level human cognition or complex decision making. Of course, the augmentation of human intelligence has been one of the key goals of computer engineers for well over half a century. Indeed, if we relax the definition of brain coprocessor just a bit, so as not to require direct physical access to the brain, many consumer technologies being developed today are converging upon brain coprocessor-like architectures. A large number of new technologies are attempting to discover information useful to a user and to deliver this information to the user in real time. Also, these discovery and delivery processes are increasingly shaped by the environment (e.g., location) and history (e.g., social interactions, searches) of the user. Thus we are seeing a departure from the classical view (as initially anticipated by early thinkers about human-machine symbiosis such as J. C. R. Licklider) in which computers receive goals from humans, perform defined computations, and then provide the results back to humans.

Of course, giving machines the authority to serve as proactive human coprocessors, and allowing them to capture our attention with their computed priorities, has to be considered carefully, as anyone who has lost hours due to interruption by a slew of social-network updates or search-engine alerts can attest. How can we give the human brain access to increasingly proactive coprocessing technologies without losing sight of our overarching goals? One idea is to develop and deploy metrics that allow us to evaluate the IQ of a human plus a coprocessor, working together--evaluating the performance of collaborating natural and artificial intelligences in a broad battery of problem-solving contexts. After all, humans with Internet-based brain coprocessors (e.g., laptops running Web browsers) may be more distractible if the goals include long, focused writing tasks, but they may be better at synthesizing data broadly from disparate sources; a given brain coprocessor configuration may be good for some problems but bad for others. Thinking of emerging computational technologies as brain coprocessors forces us to think about them in terms of the impacts they have on the brain, positive and negative, and importantly provides a framework for thoughtfully engineering their direct, as well as their emergent, effects.

Wednesday, September 15, 2010

New Artificial Skin Could Make Prosthetic Limbs and Robots More Sensitive


The light, tickling tread of a pesky fly landing on your face may strike most of us as one of the most aggravating of life's small annoyances. But for scientists working to develop pressure sensors for artificial skin for use on prosthetic limbs or robots, skin sensitive enough to feel the tickle of fly feet would be a huge advance. Now Stanford researchers have built such a sensor.
The sensor is sensitive enough to easily detect this Peruvian butterfly (Chorinea faunus) with transparent wings and red-tipped tails, positioned on a sheet of the sensors. (Credit: Linda Cicero, Stanford University News Service)

By sandwiching a precisely molded, highly elastic rubber layer between two parallel electrodes, the team created an electronic sensor that can detect the slightest touch.

"It detects pressures well below the pressure exerted by a 20 milligram bluebottle fly carcass we experimented with, and does so with unprecedented speed," said Zhenan Bao, an associate professor of chemical engineering who led the research.

The key innovation in the new sensor is the use of a thin film of rubber molded into a grid of tiny pyramids, Bao said. She is the senior author of a paper published Sept. 12 online by Nature Materials.

Previous attempts at building a sensor of this type using a smooth film encountered problems.

"We found that with a very thin continuous film, when you press on it, the material does not have room to expand," said Stefan Mannsfeld, a former postdoctoral researcher in chemical engineering and a coauthor. "So the molecules in the continuous rubber film are forced closer together and become entangled. When pressure is released, they cannot go back to the original arrangement, so the sensor doesn't work as well."

"The microstructuring we developed makes the rubber behave more like an ideal spring," Mannsfeld said. The total thickness of the artificial skin, including the rubber layer and both electrodes, is less than one millimeter.

The speed of compression and rebound of the rubber is critical for the sensor to be able to detect -- and distinguish between -- separate touches in quick succession.

The thin rubber film between the two electrodes stores electrical charges, much like a battery. When pressure is exerted on the sensor, the rubber film compresses, which changes the amount of electrical charges the film can store. That change is detected by the electrodes and is what enables the sensor to transmit what it is "feeling."
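The charge-storage mechanism described above is capacitive: for a parallel-plate geometry, capacitance scales as C = ε₀·εᵣ·A/d, so compressing the rubber dielectric (shrinking the gap d) raises the capacitance, which the electrodes detect. A back-of-the-envelope sketch, using illustrative numbers that are not from the paper:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2: float, gap_m: float, eps_r: float) -> float:
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Illustrative values only: a 1 mm^2 pixel with a 0.5 mm rubber
# dielectric (eps_r ~ 3) whose gap compresses by 10% under a touch.
c_rest = plate_capacitance(1e-6, 0.50e-3, 3.0)
c_pressed = plate_capacitance(1e-6, 0.45e-3, 3.0)
print(f"rest: {c_rest*1e15:.1f} fF, pressed: {c_pressed*1e15:.1f} fF")
# → rest: 53.1 fF, pressed: 59.0 fF
```

Even a 10% compression yields an ~11% capacitance increase, which is the measurable signal; the pyramid microstructure's job is to let the rubber compress and rebound quickly and repeatably so this signal tracks fast, successive touches.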

The largest sheet of sensors that Bao's group has produced to date measures about seven centimeters on a side. The sheet exhibited a great deal of flexibility, indicating it should perform well when wrapped around a surface mimicking the curvature of something such as a human hand or the sharp angles of a robotic arm.

Bao said that molding the rubber in different shapes yields sensors that are responsive to different ranges of pressure. "It's the same as for human skin, which has a whole range of sensitivities," she said. "Fingertips are the most sensitive, while the elbow is quite insensitive."

The sensors have from several hundred thousand up to 25 million pyramids per square centimeter. Under magnification, the array of tiny structures looks like the product of an ancient Egyptian micro-civilization obsessed with order and gone mad with productivity.

But that density allows the sensors to perceive pressures "in the range of a very, very gentle touch," Bao said. By altering the configuration of the microstructure or the density of the sensors, she thinks the sensor can be refined to detect subtleties in the shape of an object.

"If we can make this in higher resolution, then potentially we should be able to have the image on a coin read by the sensor," she said. A robotic hand covered with the electronic skin could feel a surface and know rough from smooth.

That degree of sensitivity could make the sensors useful in a broad range of medical applications, including robotic surgery, Bao said. In addition, using bandages equipped with the sensors could aid in healing of wounds and incisions. Doctors could use data from the sensors to be sure the bandages were not too tight.

Automobile safety could also be enhanced. "If a driver is tired, or drunk, or falls asleep at the wheel, their hands might loosen or fall off the wheel," said Benjamin Tee, graduate student in electrical engineering and a coauthor. "If there are pressure sensors that can sense that no hands are holding the steering wheel, the car could be equipped with some automatic safety device that could sound an alarm or kick in to slow the car down. This could be simpler and cost less than other methods of detecting driver fatigue."

The team also invented a new type of transistor in which they used the structured, flexible rubber film to replace a component that is normally rigid in a typical transistor. When pressure is applied to their new transistor, the pressure causes a change in the amount of current that the transistor puts out. The new, flexible transistors could also be used in making artificial skin, Bao said.

As Bao's team continues its research, the members may find applications not yet considered, as well as other ways to demonstrate the sensitivity of their sensors. They have already expanded their stable of insects beyond the bluebottle fly to include some beautiful, delicate-looking -- albeit slightly heavier -- butterflies.

But if the researchers wanted an even more ethereal demonstration, could the sensors detect the bubbles rising in a glass of champagne?

"If the bubbles coming out from the champagne impinge onto the pressure sensor, that might be possible," Bao said. "That would be an interesting experiment to do in the lab."

Engineers Make Artificial Skin out of Nanowires


Engineers at the University of California, Berkeley, have developed a pressure-sensitive electronic material from semiconductor nanowires that could one day give new meaning to the term "thin-skinned."
This is an artist's illustration of an artificial e-skin with nanowire active matrix circuitry covering a hand. A fragile egg is held, illustrating the functionality of the e-skin device for prosthetic and robotic applications. (Credit: Ali Javey and Kuniharu Takei)

"The idea is to have a material that functions like the human skin, which means incorporating the ability to feel and touch objects," said Ali Javey, associate professor of electrical engineering and computer sciences and head of the UC Berkeley research team developing the artificial skin.

The artificial skin, dubbed "e-skin" by the UC Berkeley researchers, is described in a Sept. 12 paper in the advanced online publication of the journal Nature Materials. It is the first such material made out of inorganic single crystalline semiconductors.

A touch-sensitive artificial skin would help overcome a key challenge in robotics: adapting the amount of force needed to hold and manipulate a wide range of objects.

"Humans generally know how to hold a fragile egg without breaking it," said Javey, who is also a member of the Berkeley Sensor and Actuator Center and a faculty scientist at the Lawrence Berkeley National Laboratory Materials Sciences Division. "If we ever wanted a robot that could unload the dishes, for instance, we'd want to make sure it doesn't break the wine glasses in the process. But we'd also want the robot to be able to grip a stock pot without dropping it."

A longer term goal would be to use the e-skin to restore the sense of touch to patients with prosthetic limbs, which would require significant advances in the integration of electronic sensors with the human nervous system.

Previous attempts to develop an artificial skin relied upon organic materials because they are flexible and easier to process.

"The problem is that organic materials are poor semiconductors, which means electronic devices made out of them would often require high voltages to operate the circuitry," said Javey. "Inorganic materials, such as crystalline silicon, on the other hand, have excellent electrical properties and can operate on low power. They are also more chemically stable. But historically, they have been inflexible and easy to crack. In this regard, works by various groups, including ours, have recently shown that miniaturized strips or wires of inorganics can be made highly flexible -- ideal for high performance, mechanically bendable electronics and sensors."

The UC Berkeley engineers utilized an innovative fabrication technique that works somewhat like a lint roller in reverse. Instead of picking up fibers, nanowire "hairs" are deposited.

The researchers started by growing the germanium/silicon nanowires on a cylindrical drum, which was then rolled onto a sticky substrate. The substrate used was a polyimide film, but the researchers said the technique can work with a variety of materials, including other plastics, paper or glass. As the drum rolled, the nanowires were deposited, or "printed," onto the substrate in an orderly fashion, forming the basis from which thin, flexible sheets of electronic materials could be built.

In another complementary approach utilized by the researchers, the nanowires were first grown on a flat source substrate, and then transferred to the polyimide film by a direction-rubbing process.

For the e-skin, the engineers printed the nanowires onto an 18-by-19 pixel square matrix measuring 7 centimeters on each side. Each pixel contained a transistor made up of hundreds of semiconductor nanowires. Nanowire transistors were then integrated with a pressure sensitive rubber on top to provide the sensing functionality. The matrix required less than 5 volts of power to operate and maintained its robustness after being subjected to more than 2,000 bending cycles.

The researchers demonstrated the ability of the e-skin to detect pressure from 0 to 15 kilopascals, a range comparable to the force used for such daily activities as typing on a keyboard or holding an object. In a nod to their home institution, the researchers successfully mapped out the letter C in Cal.
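The readout described above amounts to scanning the pixel matrix and thresholding each pixel's pressure. The sketch below uses the article's matrix dimensions (18×19) and pressure range (0-15 kPa), but the detection threshold and the sample frame are made up for illustration:

```python
# Scan a pressure-sensor matrix like the 18x19-pixel e-skin, threshold
# each pixel against a touch level, and render touched pixels as ASCII.
ROWS, COLS = 18, 19
TOUCH_KPA = 3.0  # hypothetical detection threshold within the 0-15 kPa range

def pressure_map(frame: list[list[float]], threshold: float) -> list[str]:
    """Return one string per row: '#' where pressure exceeds threshold."""
    return ["".join("#" if p > threshold else "." for p in row)
            for row in frame]

# A blank frame with a short pressed stroke down one column.
frame = [[0.0] * COLS for _ in range(ROWS)]
for r in range(4, 14):
    frame[r][5] = 10.0  # 10 kPa, well inside the sensor's range

for line in pressure_map(frame, TOUCH_KPA):
    print(line)
```

Mapping a letter shape, as the researchers did with the "C," is the same operation with a frame whose pressed pixels trace the letter.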

"This is the first truly macroscale integration of ordered nanowire materials for a functional system -- in this case, an electronic skin," said study lead author Kuniharu Takei, post-doctoral fellow in electrical engineering and computer sciences. "It's a technique that can be potentially scaled up. The limit now to the size of the e-skin we developed is the size of the processing tools we are using."

Other UC Berkeley co-authors of the paper are Ron Fearing, professor of electrical engineering and computer sciences; Toshitake Takahashi, graduate student in electrical engineering and computer sciences; Johnny C. Ho, graduate student in materials science and engineering; Hyunhyub Ko and Paul Leu, post-doctoral researchers in electrical engineering and computer sciences; and Andrew G. Gillies, graduate student in mechanical engineering.

The National Science Foundation and the Defense Advanced Research Projects Agency helped support this research.