
Sunday, June 30, 2013

Imagination Can Change What We Hear and See


A study from Karolinska Institutet in Sweden shows that our imagination may affect how we experience the world more than we perhaps think. What we imagine hearing or seeing "in our head" can change our actual perception. The study, which is published in the scientific journal Current Biology, sheds new light on a classic question in psychology and neuroscience: how our brains combine information from the different senses.

Illusion of colliding objects. (Credit: Image courtesy of Karolinska Institutet)

"We often think about the things we imagine and the things we perceive as being clearly dissociable," says Christopher Berger, doctoral student at the Department of Neuroscience and lead author of the study. "However, what this study shows is that our imagination of a sound or a shape changes how we perceive the world around us in the same way actually hearing that sound or seeing that shape does. Specifically, we found that what we imagine hearing can change what we actually see, and what we imagine seeing can change what we actually hear."

The study consists of a series of experiments that make use of illusions in which sensory information from one sense changes or distorts one's perception of another sense. Ninety-six healthy volunteers participated in total.

In the first experiment, participants experienced the illusion that two passing objects collided rather than passed by one another when they imagined a sound at the moment the two objects met. In the second experiment, the participants' spatial perception of a sound was biased towards a location where they imagined seeing the brief appearance of a white circle. In the third experiment, the participants' perception of what a person was saying was changed by their imagination of a particular sound.

According to the scientists, the results of the current study may be useful in understanding the mechanisms by which the brain fails to distinguish between thought and reality in certain psychiatric disorders such as schizophrenia. Another area of use could be research on brain computer interfaces, where paralyzed individuals' imagination is used to control virtual and artificial devices.

"This is the first set of experiments to definitively establish that the sensory signals generated by one's imagination are strong enough to change one's real-world perception of a different sensory modality" says Professor Henrik Ehrsson, the principle investigator behind the study.

Thursday, February 21, 2013

Using 3-D Printing and Injectable Molds, Bioengineered Ears Look and Act Like the Real Thing


Cornell bioengineers and physicians have created an artificial ear -- using 3-D printing and injectable molds -- that looks and acts like a natural ear, giving new hope to thousands of children born with a congenital deformity called microtia.
A 3-D printer in Weill Hall deposits cells encapsulated in a hydrogel that will develop into new ear tissue. The printer takes instructions from a file built from 3-D photographs of human ears taken with a scanner in Rhodes Hall. (Credit: Lindsay France/University Photography)

In a study published online Feb. 20 in PLOS ONE, Cornell biomedical engineers and Weill Cornell Medical College physicians described how 3-D printing and injectable gels made of living cells can fashion ears that are practically identical to a human ear. Over a three-month period, these flexible ears grew cartilage to replace the collagen that was used to mold them.

"This is such a win-win for both medicine and basic science, demonstrating what we can achieve when we work together," said co-lead author Lawrence Bonassar, associate professor of biomedical engineering.

The novel ear may be the solution reconstructive surgeons have long wished for to help children born with ear deformity, said co-lead author Dr. Jason Spector, director of the Laboratory for Bioregenerative Medicine and Surgery and associate professor of plastic surgery at Weill Cornell in New York City.

"A bioengineered ear replacement like this would also help individuals who have lost part or all of their external ear in an accident or from cancer," Spector said. Replacement ears are usually constructed with materials that have a Styrofoam-like consistency, or sometimes, surgeons build ears from a patient's harvested rib. This option is challenging and painful for children, and the ears rarely look completely natural or perform well, Spector said.

To make the ears, Bonassar and colleagues started with a digitized 3-D image of a human subject's ear, and converted the image into a digitized "solid" ear using a 3-D printer to assemble a mold.
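The team's fabrication pipeline is not published as code, but the geometric step, turning a scanned ear surface into a printable negative mold, can be sketched with an open-source mesh library. Everything below is an illustrative assumption (the file names, the 10 mm padding, the choice of trimesh), not the Cornell pipeline, and trimesh's boolean operations need an external engine such as Blender or OpenSCAD installed:

# Sketch: derive a printable negative mold from a scanned ear surface.
# Illustrative only, not the Cornell pipeline; file names and the
# padding value are assumptions.
import trimesh

ear = trimesh.load("ear_scan.stl")       # watertight mesh from the 3-D scan

pad = 10.0                               # mm of mold material on every side
block = trimesh.creation.box(extents=ear.extents + 2 * pad)
block.apply_translation(ear.bounds.mean(axis=0))  # center the block on the ear

# Subtract the ear from the block, leaving an ear-shaped cavity into
# which the collagen/cell gel can later be injected.
mold = block.difference(ear)             # requires a boolean engine, e.g. Blender
mold.export("ear_mold.stl")              # ready for the 3-D printer

A real mold would also be split into halves and given an injection port; the sketch shows only the core subtraction step.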

The researchers then injected the mold with a Cornell-developed, high-density collagen gel seeded with living cartilage cells; the gel is similar in consistency to Jell-O when the mold is removed. The collagen served as a scaffold upon which cartilage could grow.

The process is also fast, Bonassar added: "It takes half a day to design the mold, a day or so to print it, 30 minutes to inject the gel, and we can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in nourishing cell culture media before it is implanted."

The incidence of microtia, a condition in which the external ear is not fully developed, varies from almost 1 to more than 4 per 10,000 births each year. Many children born with microtia have an intact inner ear, but experience hearing loss due to the missing external structure.

Spector and Bonassar have been collaborating on bioengineered human replacement parts since 2007. The researchers specifically work on replacement human structures that are primarily made of cartilage -- joints, trachea, spine, nose -- because cartilage does not need to be vascularized with a blood supply in order to survive.

"Using human cells, specifically those from the same patient, would reduce any possibility of rejection," Spector said.

He added that the best time to implant a bioengineered ear on a child would be when the child is about 5 or 6 years old. At that age, ears are 80 percent of their adult size. If all future safety and efficacy tests work out, it might be possible to try the first human implant of a Cornell bioengineered ear in as little as three years, Spector said.

Sunday, December 23, 2012

Sound Beam Could One Day Be Invisible Scalpel


A carbon-nanotube-coated lens that converts light to sound can focus high-pressure sound waves to finer points than ever before. The University of Michigan engineering researchers who developed the new therapeutic ultrasound approach say it could lead to an invisible knife for noninvasive surgery.

With a new technique that uses tightly focused sound waves for micro-surgery, University of Michigan engineering researchers drilled a 150-micrometer hole in a confetti-sized artificial kidney stone. (Credit: Hyoung Won Baac)

Today's ultrasound technology enables far more than glimpses into the womb. Doctors routinely use focused sound waves to blast apart kidney stones and prostate tumors, for example. The tools work primarily by focusing sound waves tightly enough to generate heat, says Jay Guo, a professor of electrical engineering and computer science, mechanical engineering, and macromolecular science and engineering. Guo is a co-author of a paper on the new technique published in the current issue of Nature's journal Scientific Reports.

The beams that today's technology produces can be unwieldy, says Hyoung Won Baac, a research fellow at Harvard Medical School who worked on this project as a doctoral student in Guo's lab.

"A major drawback of current strongly focused ultrasound technology is a bulky focal spot, which is on the order of several millimeters," Baac said. "A few centimeters is typical. Therefore, it can be difficult to treat tissue objects in a high-precision manner, for targeting delicate vasculature, thin tissue layer and cellular texture. We can enhance the focal accuracy 100-fold."

The team was able to concentrate high-amplitude sound waves to a speck just 75 by 400 micrometers (a micrometer is one-thousandth of a millimeter). Their beam can blast and cut with pressure, rather than heat. Guo speculates that it might be able to operate painlessly because its beam is so finely focused it could avoid nerve fibers. The device hasn't been tested in animals or humans yet, though.

"We believe this could be used as an invisible knife for noninvasive surgery," Guo said. "Nothing pokes into your body, just the ultrasound beam. And it is so tightly focused, you can disrupt individual cells."

To achieve this superfine beam, Guo's team took an optoacoustic approach that converts light from a pulsed laser to high-amplitude sound waves through a specially designed lens. The general technique dates back to Thomas Edison's era and has advanced considerably since, but for medical applications today, the process doesn't normally generate a sound signal strong enough to be useful.

The U-M researchers' system is unique because it performs three functions: it converts the light to sound, focuses it to a tiny spot and amplifies the sound waves. To achieve the amplification, the researchers coated their lens with a layer of carbon nanotubes and a layer of a rubbery material called polydimethylsiloxane. The carbon nanotube layer absorbs the light and generates heat from it. The rubbery layer then expands rapidly in response to that heat, and this fast thermal expansion drastically boosts the acoustic signal.
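In standard photoacoustics textbooks (this relation is not quoted in the paper itself), the initial pressure generated by an absorbed laser pulse is written as

p_0 = \Gamma \, \mu_a \, F

where \mu_a is the optical absorption coefficient (raised here by the carbon nanotubes), F is the laser fluence, and \Gamma is the Grüneisen parameter, which captures how efficiently absorbed heat converts into pressure, the property the expanding polydimethylsiloxane layer exploits.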

The resulting sound waves have a frequency 10,000 times higher than humans can hear. They work in tissues by creating shockwaves and microbubbles that exert pressure toward the target, which Guo envisions could be tiny cancerous tumors, artery-clogging plaques or single cells to deliver drugs. The technique might also have applications in cosmetic surgery.
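A rough scale check makes the focusing claim concrete. The 20 kHz hearing limit and the 1,500 m/s speed of sound in soft tissue used below are textbook values, not numbers from the paper:

# Back-of-the-envelope check of the beam's wavelength in tissue.
# Assumed values: 20 kHz human hearing limit, 1,500 m/s sound speed.
hearing_limit_hz = 20_000
f = 10_000 * hearing_limit_hz   # "10,000 times higher" -> 200 MHz
c_tissue = 1_500.0              # speed of sound in soft tissue, m/s

wavelength_um = c_tissue / f * 1e6
print(f"{f / 1e6:.0f} MHz -> wavelength {wavelength_um:.1f} micrometers")
# -> 200 MHz -> wavelength 7.5 micrometers

A wavelength of a few micrometers is what makes a focal spot tens of micrometers across physically possible.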

In experiments, the researchers demonstrated micro-ultrasonic surgery, accurately detaching a single ovarian cancer cell and blasting a hole less than 150 micrometers wide in an artificial kidney stone in less than a minute.

"This is just the beginning," Guo said. "This work opens a way to probe cells or tissues in much smaller scale."

The researchers will present the work at the SPIE Photonics West meeting in San Francisco. The research was funded by the National Science Foundation and the National Institutes of Health.

Thursday, June 14, 2012

New Energy Source for Future Medical Implants: Sugar


MIT engineers have developed a fuel cell that runs on the same sugar that powers human cells: glucose. This glucose fuel cell could be used to drive highly efficient brain implants of the future, which could help paralyzed patients move their arms and legs again.

This silicon wafer consists of glucose fuel cells of varying sizes; the largest is 64 by 64 mm. (Credit: Sarpeshkar Lab)

The fuel cell, described in the June 12 edition of the journal PLoS ONE, strips electrons from glucose molecules to create a small electric current. The researchers, led by Rahul Sarpeshkar, an associate professor of electrical engineering and computer science at MIT, fabricated the fuel cell on a silicon chip, allowing it to be integrated with other circuits that would be needed for a brain implant.

The idea of a glucose fuel cell is not new: In the 1970s, scientists showed they could power a pacemaker with a glucose fuel cell, but the idea was abandoned in favor of lithium-ion batteries, which could provide significantly more power per unit area than glucose fuel cells. These glucose fuel cells also utilized enzymes that proved to be impractical for long-term implantation in the body, since they eventually ceased to function efficiently.

The new twist to the MIT fuel cell described in PLoS ONE is that it is fabricated from silicon, using the same technology used to make semiconductor electronic chips. The fuel cell has no biological components: It consists of a platinum catalyst that strips electrons from glucose, mimicking the activity of cellular enzymes that break down glucose to generate ATP, the cell's energy currency. (Platinum has a proven record of long-term biocompatibility within the body.) So far, the fuel cell can generate up to hundreds of microwatts -- enough to power an ultra-low-power and clinically useful neural implant.
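The chemistry behind this, sketched here from standard abiotic glucose fuel cell electrochemistry rather than from the MIT paper itself, is a pair of half-reactions: the platinum anode partially oxidizes glucose, and the cathode reduces dissolved oxygen.

\text{anode: } \mathrm{C_6H_{12}O_6 \;\rightarrow\; C_6H_{10}O_6 + 2\,H^+ + 2\,e^-}
\text{cathode: } \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \;\rightarrow\; \mathrm{H_2O}

The two electrons stripped from each glucose molecule are what flow through the external circuit as usable current; the oxidation product, gluconolactone, hydrolyzes to gluconic acid in solution.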

"It will be a few more years into the future before you see people with spinal-cord injuries receive such implantable systems in the context of standard medical care, but those are the sorts of devices you could envision powering from a glucose-based fuel cell," says Benjamin Rapoport, a former graduate student in the Sarpeshkar lab and the first author on the new MIT study.

Rapoport calculated that in theory, the glucose fuel cell could get all the sugar it needs from the cerebrospinal fluid (CSF) that bathes the brain and protects it from banging into the skull. There are very few cells in the CSF, so it's highly unlikely that an implant located there would provoke an immune response. There is also significant glucose in the CSF, which does not generally get used by the body. Since only a small fraction of the available power is utilized by the glucose fuel cell, the impact on the brain's function would likely be small.

Karim Oweiss, an associate professor of electrical engineering, computer science and neuroscience at Michigan State University, says the work is a good step toward developing implantable medical devices that don't require external power sources.

"It's a proof of concept that they can generate enough power to meet the requirements," says Oweiss, adding that the next step will be to demonstrate that it can work in a living animal.

A team of researchers at Brown University, Massachusetts General Hospital and other institutions recently demonstrated that paralyzed patients could use a brain-machine interface to move a robotic arm; those implants have to be plugged into a wall outlet.

Mimicking biology with microelectronics

Sarpeshkar's group is a leader in the field of ultra-low-power electronics, having pioneered such designs for cochlear implants and brain implants. "The glucose fuel cell, when combined with such ultra-low-power electronics, can enable brain implants or other implants to be completely self-powered," says Sarpeshkar, author of the book "Ultra Low Power Bioelectronics." This book discusses how the combination of ultra-low-power and energy-harvesting design can enable self-powered devices for medical, bio-inspired and portable applications.

Sarpeshkar's group has worked on all aspects of implantable brain-machine interfaces and neural prosthetics, including recording from nerves, stimulating nerves, decoding nerve signals and communicating wirelessly with implants. One such neural prosthetic is designed to record electrical activity from hundreds of neurons in the brain's motor cortex, which is responsible for controlling movement. That data is amplified and converted into a digital signal so that computers -- or in the Sarpeshkar team's work, brain-implanted microchips -- can analyze it and determine which patterns of brain activity produce movement.
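The decoding step can be pictured with a toy linear model of the kind long used in motor-cortex brain-machine interfaces. The sizes and weights below are invented for illustration and are not from the Sarpeshkar group's work:

# Toy linear decoder: neural firing rates in -> 2-D cursor velocity out.
# Illustrative only; a real decoder's weights are fit to training data.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100                     # "hundreds of neurons" in motor cortex
rates = rng.poisson(10, n_neurons).astype(float)  # spikes/s in one time bin

W = rng.normal(0.0, 0.01, (2, n_neurons))  # random stand-in for learned weights
velocity = W @ rates                       # the pattern-to-movement mapping
print("decoded cursor velocity:", velocity)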

The fabrication of the glucose fuel cell was done in collaboration with Jakub Kedzierski at MIT's Lincoln Laboratory. "This collaboration with Lincoln Lab helped make a long-term goal of mine -- to create glucose-powered bioelectronics -- a reality," Sarpeshkar says. Although he has just begun working on bringing ultra-low-power and medical technology to market, he cautions that glucose-powered implantable medical devices are still many years away.

Friday, September 24, 2010

Robotic Arm's Big Flaw: Patients in Wheelchairs Say It's 'Too Easy'


One touch directs a robotic arm to grab objects in a new computer program designed to give people in wheelchairs more independence. University of Central Florida researchers thought the ease of using the program's automatic mode would be a huge hit. But they were wrong -- many participants in a pilot study didn't like it because it was "too easy."
Bob Melia, a quadriplegic who advised the UCF team, says the new technology will make life easier for thousands of people who are so dependent on others because of physical limitations. (Credit: Jason Greene, UCF)

Most participants preferred the manual mode, which requires them to think several steps ahead and either physically type in instructions or verbally direct the arm with a series of precise commands. They favored the manual mode even though they did not perform tasks as well with it.

"We focused so much on getting the technology right," said Assistant Professor Aman Behal. "We didn't expect this."

John Bricout, Behal's collaborator and the associate dean for Research and Community Outreach at the University of Texas at Arlington School of Social Work, said the study demonstrates how people want to be engaged -- but not overwhelmed -- by technology. The psychological theory of flow describes this need for a balance between challenge and capacity.

"If we're too challenged, we get angry and frustrated. But if we aren't challenged enough, we get bored," said Bricout, who has conducted extensive research on adapting technology for users with disabilities. "We all experience that. People with disabilities are no different."

The computer program is based on how the human eye sees. A touch screen, computer mouse, joystick or voice command sends the arm into action. Then sensors mounted on the arm see an object, gather information and relay it to the computer, which completes the calculations necessary to move the arm and retrieve the object.
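In outline, the automatic mode is a classic sense-plan-act loop. The sketch below is a stand-in with invented function names and values, not UCF's actual program, just to show the shape of the loop:

# Minimal sense-plan-act loop; all names and numbers are hypothetical.
import random

def sense_object():
    """Stand-in for the arm-mounted sensors: locate the target in 3-D."""
    return [random.uniform(-0.5, 0.5) for _ in range(3)]

def step_toward(gripper, target, gain=0.05):
    """One iteration of the computer's motion calculation."""
    return [g + gain * (t - g) for g, t in zip(gripper, target)]

# Automatic mode: one touch selects the object, then the loop runs hands-off.
gripper = [0.0, 0.0, 0.0]
target = sense_object()
for _ in range(200):
    gripper = step_toward(gripper, target)
print("gripper converged to", [round(g, 3) for g in gripper])

Manual mode, by contrast, replaces the loop's planner with the user's own typed or spoken step-by-step commands, which is exactly the extra engagement many participants preferred.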

Behal is seeking grants to translate the study's findings into a smoother "hybrid" mode that is more interactive and challenging for users and features a more accurate robotic arm. Laser, ultrasound and infrared technology coupled with an adaptive interface will help him achieve his goals.

The key is to design technology that can be individualized with ease, Behal said. Some patients will have more mobility than others, and they may prefer a design closer to the manual mode. Though the automatic mode wasn't popular in the pilot study, it may be the best option for patients with more advanced disease and less mobility.

Bob Melia, a quadriplegic who advised the UCF team, says the new technology will make life easier for thousands of people who are so dependent on others because of physical limitations.

"You have no idea what it is like to want to do something as simple as scratching your nose and have to rely on someone else to do it for you," Melia said. "I see this device as someday giving people more freedom to do a lot more things, from getting their own bowl of cereal in the morning to scratching their nose anytime they want."

Behal's initial research was funded with a grant from the National Science Foundation and through a pilot grant from the National Multiple Sclerosis Society. Behal presented his findings at the 2010 International Conference on Robotics and Automation in Anchorage, Alaska.

Behal is collaborating with Bricout, who previously worked in the College of Health and Public Affairs at UCF, to apply for another grant in the area of assistive technology.

The research team includes Dae-Jin Kim, Zhao Wang, and Rebekah Hazlett from UCF, John Bricout from UT Arlington, and Heather Godfrey, Greta Rucks, David Portee and Tara Cunningham from Orlando Health Rehabilitation Institute. The institute helped recruit patients for the study.

Thursday, July 29, 2010

Invention Enables People With Disabilities to Communicate and Steer a Wheelchair by Sniffing


A unique device based on sniffing -- inhaling and exhaling through the nose -- might enable numerous disabled people to navigate wheelchairs or communicate with their loved ones. Sniffing technology might even be used in the future to create a sort of 'third hand,' to assist healthy surgeons or pilots.
Brain scans. Ten patients, all quadriplegics, succeeded in operating a computer and writing messages through sniffing. (Credit: Image courtesy of Weizmann Institute of Science)

Developed by Prof. Noam Sobel, electronics engineers Dr. Anton Plotkin and Aharon Weissbrod and research student Lee Sela in the Weizmann Institute's Neurobiology Department, the new system identifies changes in air pressure inside the nostrils and translates these into electrical signals. The device was tested on healthy volunteers as well as quadriplegics, and the results showed that the method is easily mastered. Users were able to navigate a wheelchair around a complex path or play a computer game with nearly the speed and accuracy of a mouse or joystick.

Sobel explains: "The most stirring tests were those we did with locked-in syndrome patients. These are people with unimpaired cognitive function who are completely paralyzed -- 'locked into' their bodies. With the new system, they were able to communicate with family members, and even initiate communication with the outside. Some wrote poignant messages to their loved ones, sharing with them, for the first time in a very long time, their thoughts and feelings." Four of those who participated in the experiments are already using the new writing system, and Yeda Research and Development Company, Ltd. -- the technology transfer arm of the Weizmann Institute -- is investigating the possibilities for developing and distributing the technology.

Sniffing is a precise motor skill that is controlled, in part, by the soft palate -- the flexible divider that moves to direct air in or out through the mouth or nose. The soft palate is controlled by several nerves that connect to it directly through the braincase. This close link led Sobel and his scientific team to theorize that the ability to sniff -- that is, to control soft palate movement -- might be preserved even in the most acute cases of paralysis. Functional magnetic resonance imaging (fMRI) lent support to the idea, showing that a number of brain areas contribute to soft palate control. This imaging revealed a significant overlap between soft palate control and the language areas of the brain, hinting to the scientists that the use of sniffing to communicate might be learned intuitively.

To test their theory, the researchers created a device with a sensor that fits on the nostril's opening and measures changes in air pressure. For patients on respirators, they developed a passive version of the device, which diverts airflow to the patient's nostrils. About 75% of the subjects on respirators were able to control their soft palate movement to operate the device. Initial tests, carried out with healthy volunteers, showed that the device compared favorably with a mouse or joystick for playing computer games. In the next stage, carried out in collaboration with Prof. Nachum Soroker of Loewenstein Hospital Rehabilitation Center in Raanana, quadriplegics and locked-in patients tested the device.

One patient, who had been locked in for seven months following a stroke, learned to use the device over a period of several days, writing her first message to her family. Another, who had been locked in since a traffic accident 18 years earlier, wrote that the new device was much easier to use than one based on blinking. Another ten patients, all quadriplegics, succeeded in operating a computer and writing messages through sniffing.

In addition to communication, the device can function as a sort of steering mechanism for wheelchairs: Two successive sniffs in tell it to go forward, two out mean reverse, out and then in turn it left, and in and out turn it right. After fifteen minutes of practice, a subject who is paralyzed from the neck down managed to navigate a wheelchair through a complex route -- sharp turns and all -- as well as a non-disabled volunteer.
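That steering grammar is simple enough to write down directly. The decoder below is an illustrative sketch, not the Weizmann implementation, and it assumes an upstream sensor stage has already classified each sniff as "in" or "out":

# Illustrative decoder for the two-sniff steering grammar described above.
COMMANDS = {
    ("in", "in"):   "forward",
    ("out", "out"): "reverse",
    ("out", "in"):  "turn left",
    ("in", "out"):  "turn right",
}

def decode(first, second):
    """Map two successive sniffs to a wheelchair command."""
    return COMMANDS.get((first, second), "stop")  # unrecognized pairs halt safely

print(decode("in", "in"))    # forward
print(decode("out", "in"))   # turn left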

Sniffs can be in or out, strong or shallow, long or short, and this gives the device's developers the opportunity to create a complex 'language' with multiple signals. The new system is relatively inexpensive to produce, and simple and quick to learn to operate in comparison with other brain-machine interfaces. Sobel believes that this invention may not only bring new hope to severely disabled people, but it could be useful in other areas, for instance as a control for a 'third arm' for surgeons and pilots.

Prof. Noam Sobel's research is supported by the Nella and Leon Benoziyo Center for Neurosciences; the J&R Foundation; and Regina Wachter, NY.