Thursday, February 21, 2013

Using 3-D Printing and Injectable Molds, Bioengineered Ears Look and Act Like the Real Thing


Cornell bioengineers and physicians have created an artificial ear -- using 3-D printing and injectable molds -- that looks and acts like a natural ear, giving new hope to thousands of children born with a congenital deformity called microtia.
A 3-D printer in Weill Hall deposits cells encapsulated in a hydrogel that will develop into new ear tissue. The printer takes instructions from a file built from 3-D photographs of human ears taken with a scanner in Rhodes Hall. (Credit: Lindsay France/University Photography)

In a study published online Feb. 20 in PLOS One, Cornell biomedical engineers and Weill Cornell Medical College physicians described how 3-D printing and injectable gels made of living cells can fashion ears that are practically identical to a human ear. Over a three-month period, these flexible ears grew cartilage to replace the collagen that was used to mold them.

"This is such a win-win for both medicine and basic science, demonstrating what we can achieve when we work together," said co-lead author Lawrence Bonassar, associate professor of biomedical engineering.

The novel ear may be the solution reconstructive surgeons have long wished for to help children born with ear deformity, said co-lead author Dr. Jason Spector, director of the Laboratory for Bioregenerative Medicine and Surgery and associate professor of plastic surgery at Weill Cornell in New York City.

"A bioengineered ear replacement like this would also help individuals who have lost part or all of their external ear in an accident or from cancer," Spector said. Replacement ears are usually constructed with materials that have a Styrofoam-like consistency, or sometimes, surgeons build ears from a patient's harvested rib. This option is challenging and painful for children, and the ears rarely look completely natural or perform well, Spector said.

To make the ears, Bonassar and colleagues started with a digitized 3-D image of a human subject's ear, and converted the image into a digitized "solid" ear using a 3-D printer to assemble a mold.

The mold was injected with a Cornell-developed, high-density collagen gel, which had the consistency of Jell-O when the mold was removed. The collagen served as a scaffold upon which cartilage could grow.

The process is also fast, Bonassar added: "It takes half a day to design the mold, a day or so to print it, 30 minutes to inject the gel, and we can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in nourishing cell culture media before it is implanted."
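Taken at face value, the timeline in that quote can be tallied directly. The short Python sketch below simply sums the stated bench time; the durations are transcribed from the quote, and the open-ended culture period is noted rather than counted.

    # Rough tally of the fabrication timeline Bonassar describes.
    # Durations come from the quote above; the "several days" of
    # culture has no exact figure, so it is noted separately.
    steps_hours = {
        "design the mold": 12,    # "half a day"
        "print the mold": 24,     # "a day or so"
        "inject the gel": 0.5,    # "30 minutes"
        "remove the ear": 0.25,   # "15 minutes"
    }

    bench_time = sum(steps_hours.values())
    print(f"Bench time before culture: ~{bench_time:.2f} hours "
          f"(~{bench_time / 24:.1f} days)")
    print("Plus several days of culture in nourishing media before implant.")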

The incidence of microtia, a congenital condition in which the external ear is not fully developed, ranges from almost 1 to more than 4 per 10,000 births each year. Many children born with microtia have an intact inner ear, but experience hearing loss due to the missing external structure.
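To put those rates in scale, a back-of-the-envelope calculation helps; in the sketch below, the annual birth count of 4 million is an assumed, roughly US-scale figure for illustration, not a number from the article.

    # Expected annual microtia cases implied by the incidence range above.
    # The birth count is an assumption for illustration, not from the article.
    births_per_year = 4_000_000
    low_rate, high_rate = 1 / 10_000, 4 / 10_000

    print(f"Expected cases: {births_per_year * low_rate:.0f} "
          f"to {births_per_year * high_rate:.0f} per year")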

Spector and Bonassar have been collaborating on bioengineered human replacement parts since 2007. The researchers specifically work on replacement human structures that are primarily made of cartilage -- joints, trachea, spine, nose -- because cartilage does not need to be vascularized with a blood supply in order to survive.

"Using human cells, specifically those from the same patient, would reduce any possibility of rejection," Spector said.

He added that the best time to implant a bioengineered ear in a child would be when the child is about 5 or 6 years old. At that age, ears are 80 percent of their adult size. If all future safety and efficacy tests work out, it might be possible to try the first human implant of a Cornell bioengineered ear in as little as three years, Spector said.

Friday, December 10, 2010

Brains Wired So We Can Better Hear Ourselves


Like the mute button on the TV remote control, our brains filter out unwanted noise so we can focus on what we're listening to. But when it comes to following our own speech, a new brain study from the University of California, Berkeley, shows that instead of one homogenous mute button, we have a network of volume settings that can selectively silence and amplify the sounds we make and hear.
Activity in the auditory cortex when we speak and listen is amplified in some regions of the brain and muted in others. In this image, the black line represents muting activity when we speak. (Credit: Courtesy of Adeen Flinker)

Neuroscientists from UC Berkeley, UCSF and Johns Hopkins University tracked the electrical signals emitted from the brains of hospitalized epilepsy patients. They discovered that neurons in one part of the patients' hearing mechanism were dimmed when they talked, while neurons in other parts lit up.

Their findings, published Dec. 8, 2010 in the Journal of Neuroscience, offer new clues about how we hear ourselves above the noise of our surroundings and monitor what we say. Previous studies have shown a selective auditory system in monkeys that can amplify their self-produced mating, food and danger alert calls, but until this latest study, it was not clear how the human auditory system is wired.

"We used to think that the human auditory system is mostly suppressed during speech, but we found closely knit patches of cortex with very different sensitivities to our own speech that paint a more complicated picture," said Adeen Flinker, a doctoral student in neuroscience at UC Berkeley and lead author of the study.

"We found evidence of millions of neurons firing together every time you hear a sound right next to millions of neurons ignoring external sounds but firing together every time you speak," Flinker added. "Such a mosaic of responses could play an important role in how we are able to distinguish our own speech from that of others."

While the study doesn't specifically address why humans need to track their own speech so closely, Flinker theorizes that, among other things, tracking our own speech is important for language development, monitoring what we say and adjusting to various noise environments.

"Whether it's learning a new language or talking to friends in a noisy bar, we need to hear what we say and change our speech dynamically according to our needs and environment," Flinker said.

He noted that people with schizophrenia have trouble distinguishing their own internal voices from the voices of others, suggesting that they may lack this selective auditory mechanism. The findings may be helpful in better understanding some aspects of auditory hallucinations, he said.

Moreover, with the finding of sub-regions of brain cells each tasked with a different volume control job -- and located just a few millimeters apart -- the results pave the way for a more detailed mapping of the auditory cortex to guide brain surgery.

In addition to Flinker, the study's authors are Robert Knight, director of the Helen Wills Neuroscience Institute at UC Berkeley; neurosurgeons Edward Chang and Nicholas Barbaro and neurologist Heidi Kirsch of the University of California, San Francisco; and Nathan Crone, a neurologist at Johns Hopkins University in Maryland.

The auditory cortex is a region of the brain's temporal lobe that deals with sound. In hearing, the human ear converts vibrations into electrical signals that are sent to relay stations in the brain's auditory cortex where they are refined and processed. Language is mostly processed in the left hemisphere of the brain.

In the study, researchers examined the electrical activity in the healthy brain tissue of patients who were being treated for seizures. The patients had volunteered to help out in the experiment during lulls in their treatment, as electrodes had already been implanted over their auditory cortices to track the focal points of their seizures.

Researchers instructed the patients to perform such tasks as repeating words and vowels they heard, and recorded the activity. In comparing the activity of electrical signals discharged during speaking and hearing, they found that some regions of the auditory cortex showed less activity during speech, while others showed the same or higher levels.
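One common way to summarize such speak-versus-listen comparisons is a per-electrode suppression index. The sketch below is illustrative only, not the authors' pipeline, and runs on synthetic data; a real analysis would use band-limited power recorded from the implanted electrodes.

    import numpy as np

    # Illustrative sketch (not the study's actual pipeline): compare
    # per-electrode auditory-cortex activity during speaking vs. listening.
    # The data here are synthetic.
    rng = np.random.default_rng(0)
    n_electrodes, n_trials = 16, 40

    # Mean activity per electrode and trial (arbitrary units).
    listen = rng.normal(loc=1.0, scale=0.2, size=(n_electrodes, n_trials))
    # Give each electrode its own speech gain: <1 means dimmed, >1 amplified.
    speak = listen * rng.uniform(0.4, 1.3, size=(n_electrodes, 1))

    # Suppression index per electrode: positive = muted during speech,
    # negative = amplified, near zero = unchanged.
    si = (listen.mean(axis=1) - speak.mean(axis=1)) / (
        listen.mean(axis=1) + speak.mean(axis=1))

    for i, s in enumerate(si):
        label = ("muted during speech" if s > 0.05
                 else "amplified during speech" if s < -0.05
                 else "unchanged")
        print(f"electrode {i:2d}: SI = {s:+.2f} ({label})")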

"This shows that our brain has a complex sensitivity to our own speech that helps us distinguish between our vocalizations and those of others, and makes sure that what we say is actually what we meant to say," Flinker said.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of Science Updates or its staff.