
Showing posts with label University of Michigan.

Saturday, June 18, 2011

Implant to Translate Thoughts Into Movement



A brain implant developed at the University of Michigan uses the body's skin like a conductor to wirelessly transmit the brain's neural signals to control a computer, and may eventually be used to reactivate paralyzed limbs. (Credit: Photo provided by Euisik Yoon of University of Michigan)

The implant is called the BioBolt, and unlike other neural interface technologies that establish a connection from the brain to an external device such as a computer, it's minimally invasive and low power, said principal investigator Euisik Yoon, a professor in the U-M College of Engineering, Department of Electrical Engineering and Computer Science.

Currently, the skull must remain open while neural implants are in the head, which makes using them in a patient's daily life unrealistic, said Kensall Wise, the William Gould Dow Distinguished University professor emeritus in engineering.

BioBolt does not penetrate the cortex and is completely covered by the skin to greatly reduce risk of infection. Researchers believe it's a critical step toward the Holy Grail of brain-computer interfacing: allowing a paralyzed person to "think" a movement.

"The ultimate goal is to be able to reactivate paralyzed limbs," by picking the neural signals from the brain cortex and transmitting those signals directly to muscles, said Wise, who is also founding director of the NSF Engineering Research Center for Wireless Integrated MicroSystems (WIMS ERC). That technology is years away, the researchers say.



Other promising applications for the BioBolt include controlling epilepsy and diagnosing diseases such as Parkinson's.

A patent application has been filed for the BioBolt concept, which was presented on June 16 at the 2011 Symposium on VLSI Circuits in Kyoto, Japan. Sun-Il Chang, a PhD student in Yoon's research group, is lead author on the presentation.

The BioBolt looks like a bolt and is about the circumference of a dime, with a thumbnail-sized film of microcircuits attached to the bottom. The BioBolt is implanted in the skull beneath the skin and the film of microcircuits sits on the brain. The microcircuits act as microphones to 'listen' to the overall pattern of firing neurons and associate them with a specific command from the brain. Those signals are amplified and filtered, then converted to digital signals and transmitted through the skin to a computer, Yoon said.
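
The article does not describe BioBolt's circuitry in any detail; the sketch below is only a generic illustration of the amplify, filter, and digitize chain mentioned above. The sample rate, pass band, gain, and converter resolution are assumed values for illustration, not BioBolt specifications.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Illustrative parameters only -- not BioBolt specifications.
FS = 20_000          # sample rate in Hz (assumed)
BAND = (300, 3000)   # pass band for spiking activity in Hz (assumed)
ADC_BITS = 10        # resolution of the digitizer (assumed)

def process_neural_channel(raw_uv, gain=1000.0):
    """Amplify, band-pass filter, and digitize one channel of neural data."""
    amplified = raw_uv * gain                                 # analog gain stage
    sos = butter(4, BAND, btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, amplified)                    # keep the spike band only
    # Quantize to an ADC_BITS-bit code over the filtered signal's range.
    lo, hi = filtered.min(), filtered.max()
    levels = 2 ** ADC_BITS - 1
    codes = np.round((filtered - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return codes

# Example: one second of synthetic wide-band noise standing in for a recording.
raw = np.random.randn(FS)
digital = process_neural_channel(raw)
print(digital[:10])
```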

Another hurdle to brain interfaces is the high power requirement for transmitting data wirelessly from the brain to an outside source. BioBolt keeps the power consumption low by using the skin as a conductor or a signal pathway, which is analogous to downloading a video into your computer simply by touching the video.

Eventually, the hope is that the signals can be transmitted through the skin to something on the body, such as a watch or a pair of earrings, to collect the signals, said Yoon, eliminating the need for an off-site computer to process the signals.

Tuesday, June 14, 2011

'Smart cars' that are actually, well, smart



Since 2000, there have been 110 million car accidents in the United States, more than 443,000 of which have been fatal — an average of 110 fatalities per day. These statistics make traffic accidents one of the leading causes of death in this country, as well as worldwide.
The researchers test their algorithm using a miniature autonomous vehicle traveling along a track that partially overlaps with a second track for a human-controlled vehicle, observing incidences of collision and collision avoidance. (Photo: Melanie Gonick)

Engineers have developed myriad safety systems aimed at preventing collisions: automated cruise control, a radar- or laser-based sensor system that slows a car when approaching another vehicle; blind-spot warning systems, which use lights or beeps to alert the driver to the presence of a vehicle he or she can’t see; and traction control and stability assist, which automatically apply the brakes if they detect skidding or a loss of steering control.

Still, more progress must be made to achieve the long-term goal of “intelligent transportation”: cars that can “see” and communicate with other vehicles on the road, making them able to prevent crashes virtually 100 percent of the time.

Of course, any intelligent transportation system (ITS), even one that becomes a mainstream addition to new cars, will have to contend with human-operated vehicles as long as older cars remain on the road — that is, for the foreseeable future. To this end, MIT mechanical engineers are working on a new ITS algorithm that takes into account models of human driving behavior to warn drivers of potential collisions, and ultimately takes control of the vehicle to prevent a crash.

The theory behind the algorithm and some experimental results will be published in IEEE Robotics and Automation Magazine. The paper is co-authored by Rajeev Verma, who was a visiting PhD student at MIT this academic year, and Domitilla Del Vecchio, assistant professor of mechanical engineering and W. M. Keck Career Development Assistant Professor in Biomedical Engineering.

Avoiding the car that cried wolf

According to Del Vecchio, a common challenge for ITS developers is designing a system that is safe without being overly conservative. It’s tempting to treat every vehicle on the road as an “agent that’s playing against you,” she says, and construct hypersensitive systems that consistently react to worst-case scenarios. But with this approach, Del Vecchio says, “you get a system that gives you warnings even when you don’t feel them as necessary. Then you would say, ‘Oh, this warning system doesn’t work,’ and you would neglect it all the time.”

That’s where predicting human behavior comes in. Many other researchers have worked on modeling patterns of human driving. Following their lead, Del Vecchio and Verma reasoned that driving actions fall into two main modes: braking and accelerating. Depending on which mode a driver is in at a given moment, there is a finite set of possible places the car could be in the future, whether a tenth of a second later or a full 10 seconds later. This set of possible positions, combined with predictive models of human behavior — when and where drivers slow down or speed up around an intersection, for example — all went into building the new algorithm.

The result is a program that is able to compute, for any two vehicles on the road nearing an intersection, a “capture set,” or a defined area in which two vehicles are in danger of colliding. The ITS-equipped car then engages in a sort of game-theoretic decision, in which it uses information from its onboard sensors as well as roadside and traffic-light sensors to try to predict what the other car will do, reacting accordingly to prevent a crash.
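
The published algorithm is not spelled out in this article. As a rough sketch of the capture-set idea under heavy simplification (two crossing paths, only the "brake" and "accelerate" modes, and made-up distances, speeds, and conflict-zone size), one might check whether any pair of predicted future positions puts both cars in the shared conflict zone at the same time:

```python
# Illustrative sketch of a "capture set" check, not the MIT algorithm itself.
# Each car is described by its distance to the intersection (m) and speed (m/s).

CONFLICT_HALF_WIDTH = 3.0   # half-length of the shared conflict zone (assumed, m)
HORIZON = 3.0               # prediction horizon in seconds (assumed)
DT = 0.1

def reachable_positions(dist, speed, accel_choices=(-4.0, 2.0)):
    """Future distances-to-intersection under the braking or accelerating mode."""
    positions = []
    for a in accel_choices:            # two driving modes: brake or accelerate
        d, v, t = dist, speed, 0.0
        while t < HORIZON:
            v = max(0.0, v + a * DT)
            d -= v * DT
            positions.append((t + DT, d))
            t += DT
    return positions

def in_capture_set(dist_a, speed_a, dist_b, speed_b):
    """True if some pair of predicted positions puts both cars in the
    conflict zone at (approximately) the same time."""
    preds_a = reachable_positions(dist_a, speed_a)
    preds_b = reachable_positions(dist_b, speed_b)
    for ta, da in preds_a:
        for tb, db in preds_b:
            same_time = abs(ta - tb) < DT / 2
            both_inside = (abs(da) < CONFLICT_HALF_WIDTH and
                           abs(db) < CONFLICT_HALF_WIDTH)
            if same_time and both_inside:
                return True
    return False

print(in_capture_set(dist_a=20.0, speed_a=10.0, dist_b=18.0, speed_b=9.0))
```

In a real system, a positive result like this would trigger a warning or an automatic intervention well before the vehicles actually reach the conflict zone.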

When both cars are ITS-equipped, the “game” becomes a cooperative one, with both cars communicating their positions and working together to avoid a collision.

Steering clear of the ‘bad set’

Del Vecchio and Verma tested their algorithm with a laboratory setup involving two miniature vehicles on overlapping circular tracks: one autonomous and one controlled by a human driver. Eight volunteers participated, to account for differences in individual driving styles. Out of 100 trials, there were 97 instances of collision avoidance. The vehicles entered the capture set three times; one of these times resulted in a collision.

In the three “failed” trials, Del Vecchio says the trouble was largely due to delays in communication between ITS vehicles and the workstation, which represents the roadside infrastructure that captures and transmits information about non-ITS-equipped cars. In these cases, one vehicle may be making decisions based on information about the position and speed of the other vehicle that is off by a fraction of a second. “So you may end up actually being in the capture set while the vehicles think you are not,” Del Vecchio says.



One way to handle this problem is to improve the communication hardware as much as possible, but the researchers say there will virtually always be delays, so their next step is to make the system robust to these delays — that is, to ensure that the algorithm is conservative enough to avoid a situation in which a communication delay could mean the difference between crashing and not crashing.

Jim Freudenberg, a professor of electrical and computer engineering at the University of Michigan, says that although it’s nearly impossible to correctly predict human behavior 100 percent of the time, Del Vecchio and Verma’s approach is promising. “Human-controlled technologies and computer-controlled technologies are coming more and more into contact with one another, and we have to have some way of making assumptions about the human — otherwise, you can’t do anything because of how conservative you have to be,” he says.

The researchers have already begun to test their system in full-size passenger vehicles with human drivers. In addition to learning from these real-life trials, future work will focus on incorporating human reaction-time data to refine when the system must actively take control of the car and when it can merely provide a passive warning to the driver.

Eventually, the researchers also hope to build in sensors for weather and road conditions and take into account car-specific manufacturing details — all of which affect handling — to help their algorithm make even better informed decisions.

This story is republished courtesy of MIT News (http://web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Thursday, June 9, 2011

Using Magnets to Help Prevent Heart Attacks: Magnetic Field Can Reduce Blood Viscosity, Physicist Discovers



If a person's blood becomes too thick it can damage blood vessels and increase the risk of heart attacks. But a Temple University physicist has discovered that he can thin human blood by subjecting it to a magnetic field.
Aggregated red-cell clusters have a streamlined shape, leading to further viscosity reduction. (Credit: Image courtesy of Temple University)

Rongjia Tao, professor and chair of physics at Temple University, has pioneered the use of electric or magnetic fields to decrease the viscosity of oil in engines and pipelines. Now, he is using the same magnetic fields to thin human blood in the circulatory system.

Because red blood cells contain iron, Tao has been able to reduce a person's blood viscosity by 20-30 percent by subjecting it to a magnetic field of 1.3 Tesla (about the same as an MRI) for about one minute.

Tao and his collaborator tested numerous blood samples in a Temple lab and found that the magnetic field polarizes the red blood cells causing them to link together in short chains, streamlining the movement of the blood. Because these chains are larger than the single blood cells, they flow down the center, reducing the friction against the walls of the blood vessels. The combined effects reduce the viscosity of the blood, helping it to flow more freely.
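
For a rough sense of scale, the simple Poiseuille relation (flow through a vessel of fixed radius and pressure drop scales inversely with viscosity) suggests what a 20-30 percent viscosity reduction could mean for flow. This ignores blood's non-Newtonian behavior and is only a back-of-the-envelope illustration, not a result from the study.

```python
# Back-of-the-envelope only: Poiseuille flow (Q proportional to 1/viscosity)
# is a rough model for blood in a vessel of fixed radius and pressure drop.
for reduction in (0.20, 0.30):          # 20-30 percent viscosity reduction reported
    flow_gain = 1.0 / (1.0 - reduction) - 1.0
    print(f"{reduction:.0%} lower viscosity -> ~{flow_gain:.0%} more flow")
# 20% lower viscosity -> ~25% more flow
# 30% lower viscosity -> ~43% more flow
```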

When the magnetic field was taken away, the blood's original viscosity state slowly returned, but over a period of several hours.

"By selecting a suitable magnetic field strength and pulse duration, we will be able to control the size of the aggregated red-cell chains, hence to control the blood's viscosity," said Tao. "This method of magneto-rheology provides an effective way to control the blood viscosity within a selected range."

Currently, the only method for thinning blood is through drugs such as aspirin; however, these drugs often produce unwanted side effects. Tao said that the magnetic field method is not only safer, it is repeatable. The magnetic fields may be reapplied and the viscosity reduced again. He also added that the viscosity reduction does not affect the red blood cells' normal function.



Tao said that further studies are needed and that he hopes to ultimately develop this technology into an acceptable therapy to prevent heart disease.

Tao and his former graduate student, Ke "Colin" Huang, now a medical physics resident in the Department of Radiation Oncology at the University of Michigan, are publishing their findings in the journal Physical Review E.

Friday, April 15, 2011

Solar Power Without Solar Cells: A Hidden Magnetic Effect of Light Could Make It Possible


A dramatic and surprising magnetic effect of light discovered by University of Michigan researchers could lead to solar power without traditional semiconductor-based solar cells.

The researchers found a way to make an "optical battery," said Stephen Rand, a professor in the departments of Electrical Engineering and Computer Science, Physics and Applied Physics.

In the process, they overturned a century-old tenet of physics.

"You could stare at the equations of motion all day and you will not see this possibility. We've all been taught that this doesn't happen," said Rand, an author of a paper on the work published in the Journal of Applied Physics. "It's a very odd interaction. That's why it's been overlooked for more than 100 years."

Light has electric and magnetic components. Until now, scientists thought the effects of the magnetic field were so weak that they could be ignored. What Rand and his colleagues found is that at the right intensity, when light is traveling through a material that does not conduct electricity, the light field can generate magnetic effects that are 100 million times stronger than previously expected. Under these circumstances, the magnetic effects develop strength equivalent to a strong electric effect.

"This could lead to a new kind of solar cell without semiconductors and without absorption to produce charge separation," Rand said. "In solar cells, the light goes into a material, gets absorbed and creates heat. Here, we expect to have a very low heat load. Instead of the light being absorbed, energy is stored in the magnetic moment. Intense magnetization can be induced by intense light and then it is ultimately capable of providing a capacitive power source."

What makes this possible is a previously undetected brand of "optical rectification," says William Fisher, a doctoral student in applied physics. In traditional optical rectification, light's electric field causes a charge separation, or a pulling apart of the positive and negative charges in a material. This sets up a voltage, similar to that in a battery. This electric effect had previously been detected only in crystalline materials that possessed a certain symmetry.

Rand and Fisher found that under the right circumstances and in other types of materials, the light's magnetic field can also create optical rectification.

"It turns out that the magnetic field starts curving the electrons into a C-shape and they move forward a little each time," Fisher said. "That C-shape of charge motion generates both an electric dipole and a magnetic dipole. If we can set up many of these in a row in a long fiber, we can make a huge voltage and by extracting that voltage, we can use it as a power source."

The light must be shone through a material that does not conduct electricity, such as glass. And it must be focused to an intensity of 10 million watts per square centimeter. Sunlight isn't this intense on its own, but new materials are being sought that would work at lower intensities, Fisher said.
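
As a rough sanity check of what 10 million watts per square centimeter means, the snippet below computes the intensity of a hypothetical 1 W laser focused to a 5-micrometer spot; both numbers are arbitrary examples, not values from the study.

```python
import math

# How intense is a focused beam?  Example numbers only.
power_watts = 1.0            # a 1 W continuous laser (assumed)
spot_radius_cm = 5e-4        # focused to a 5-micrometer-radius spot (assumed)

intensity = power_watts / (math.pi * spot_radius_cm ** 2)    # W/cm^2
threshold = 1e7                                               # 10 million W/cm^2, from the article
# For comparison, direct sunlight at the ground is only about 0.1 W/cm^2.
print(f"intensity ~ {intensity:.2e} W/cm^2, "
      f"{'above' if intensity >= threshold else 'below'} the quoted threshold")
```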

"In our most recent paper, we show that incoherent light like sunlight is theoretically almost as effective in producing charge separation as laser light is," Fisher said.

This new technique could make solar power cheaper, the researchers say. They predict that with improved materials they could achieve 10 percent efficiency in converting solar power to usable energy. That's equivalent to today's commercial-grade solar cells.

"To manufacture modern solar cells, you have to do extensive semiconductor processing," Fisher said. "All we would need are lenses to focus the light and a fiber to guide it. Glass works for both. It's already made in bulk, and it doesn't require as much processing. Transparent ceramics might be even better."

In experiments this summer, the researchers will work on harnessing this power with laser light, and then with sunlight.

The paper is titled "Optically-induced charge separation and terahertz emission in unbiased dielectrics." The university is pursuing patent protection for the intellectual property.

Thursday, July 29, 2010

Multifunctional Nanoparticle Enables New Type of Biological Imaging


Spotting a single cancerous cell that has broken free from a tumor and is traveling through the bloodstream to colonize a new organ might seem like finding a needle in a haystack. But a new imaging technique from the University of Washington is a first step toward making this possible.
On top are photoacoustic images taken for gold nanorods (left), the new UW particle that has a magnetic core and surrounding gold shell (center), and a simple magnetic nanoparticle (right). Below is the same image after processing to remove pixels not vibrating with the magnetic field. The center blob is retained because of the particles' magnetic core and is bright because of the particles' gold shell. (Credit: Xiaohu Gao, University of Washington)

UW researchers have developed a multifunctional nanoparticle that eliminates the background noise, enabling a more precise form of medical imaging -- essentially erasing the haystack, so the needle shines through. A successful demonstration with photoacoustic imaging was reported in the journal Nature Communications.

Nanoparticles are promising contrast agents for ultrasensitive medical imaging. But in all techniques that do not use radioactive tracers, the surrounding tissues tend to overwhelm weak signals, preventing researchers from detecting just one or a few cells.

"Although the tissues are not nearly as effective at generating a signal as the contrast agent, the quantity of the tissue is much greater than the quantity of the contrast agent and so the background signal is very high," said lead author Xiaohu Gao, a UW assistant professor of bioengineering.

The newly presented nanoparticle solves this problem by combining, for the first time, two properties to create an image that no existing technique could have produced.

The new particle combines magnetic properties and photoacoustic imaging to erase the background noise. Researchers used a pulsing magnetic field to shake the nanoparticles by their magnetic cores. Then they took a photoacoustic image and used image processing techniques to remove everything except the vibrating pixels.

Gao compares the new technique to "Tourist Remover" photo editing software that allows a photographer to delete other people by combining several photos of the same scene and keeping only the parts of the image that aren't moving. "We are using a very similar strategy," Gao said. "Instead of keeping the stationary parts, we only keep the moving part.

"We use an external magnetic field to shake the particles," he explained. "Then there's only one type of particle that will shake at the frequency of our magnetic field, which is our own particle."

Experiments with synthetic tissue showed the technique can almost completely suppress a strong background signal. Future work will try to duplicate the results in lab animals, Gao said.

The 30-nanometer particle consists of an iron-oxide magnetic core with a thin gold shell that surrounds but does not touch the center. The gold shell is used to absorb infrared light, and could also be used for optical imaging, delivering heat therapy, or attaching a biomolecule that would grab on to specific cells.

Earlier work by Gao's group combined functions in a single nanoparticle, something that is difficult because of the small size.

"In nanoparticles, one plus one is often less than two," Gao said. "Our previous work showed that one plus one can be equal to two. This paper shows that one plus one is, finally, greater than two."

The first biological imaging, in the 1950s, was used to identify anatomy inside the body, detecting tumors or fetuses. The second generation has been used to monitor function -- fMRI, or functional magnetic resonance imaging, for example, detects oxygen use in the brain to produce a picture of brain activity. The next generation of imaging will be molecular imaging, said co-author Matthew O'Donnell, a UW professor of bioengineering and engineering dean.

This will mean that medical assays and cell counts can be done inside the body. In other words, instead of taking a biopsy and inspecting tissue under a microscope, imaging could detect specific proteins or abnormal activity at the source.

But making this happen means improving the confidence limits of the imaging.

"Today, we can use biomarkers to see where there's a large collection of diseased cells," O'Donnell said. "This new technique could get you down to a very precise level, potentially of a single cell."

Researchers tested the method for photoacoustic imaging, a low-cost method now being developed that is sensitive to slight variations in tissues' properties and can penetrate several centimeters in soft tissue. It works by using a pulse of laser light to heat a cell very slightly. This heat causes the cell to vibrate and produce ultrasound waves that travel through the tissue to the body's surface. The new technique should also apply to other types of imaging, the authors said.

Co-authors are UW postdoctoral researchers Yongdong Jin and Sheng-Wen Huang and University of Michigan doctoral student Congxian Jia.

Research was funded by the National Institutes of Health, the National Science Foundation and the UW Department of Bioengineering.

Sunday, September 13, 2009

Carbon Nanotubes Could Make Efficient Solar Cells


Using a carbon nanotube instead of traditional silicon, Cornell researchers have created the basic elements of a solar cell that could lead to much more efficient ways of converting light to electricity than those now used in calculators and on rooftops.
In a carbon nanotube-based photodiode, electrons (blue) and holes (red) - the positively charged areas where electrons used to be before becoming excited - release their excess energy to efficiently create more electron-hole pairs when light is shined on the device. (Credit: Nathan Gabor)

The researchers fabricated, tested and measured a simple solar cell called a photodiode, formed from an individual carbon nanotube. Reported online Sept. 11 in the journal Science, the researchers -- led by Paul McEuen, the Goldwin Smith Professor of Physics, and Jiwoong Park, assistant professor of chemistry and chemical biology -- describe how their device converts light to electricity in an extremely efficient process that multiplies the amount of electrical current that flows. This process could prove important for next-generation high efficiency solar cells, the researchers say.


"We are not only looking at a new material, but we actually put it into an application -- a true solar cell device," said first author Nathan Gabor, a graduate student in McEuen's lab.


The researchers used a single-walled carbon nanotube, which is essentially a rolled-up sheet of graphene, to create their solar cell. About the size of a DNA molecule, the nanotube was wired between two electrical contacts and close to two electrical gates, one negatively and one positively charged. Their work was inspired in part by previous research in which scientists created a diode, which is a simple transistor that allows current to flow in only one direction, using a single-walled nanotube. The Cornell team wanted to see what would happen if they built something similar, but this time shined light on it.


Shining lasers of different colors onto different areas of the nanotube, they found that higher levels of photon energy had a multiplying effect on how much electrical current was produced.


Further study revealed that the narrow, cylindrical structure of the carbon nanotube caused the electrons to be neatly squeezed through one by one. The electrons moving through the nanotube became excited and created new electrons that continued to flow. The nanotube, they discovered, may be a nearly ideal photovoltaic cell because it allowed electrons to create more electrons by utilizing the spare energy from the light.


This is unlike today's solar cells, in which extra energy is lost in the form of heat, and the cells require constant external cooling.


Though they have made a device, scaling it up to be inexpensive and reliable would be a serious challenge for engineers, Gabor said.


"What we've observed is that the physics is there," he said.


The research was supported by Cornell's Center for Nanoscale Systems and the Cornell NanoScale Science and Technology Facility, both National Science Foundation facilities, as well as the Microelectronics Advanced Research Corporation Focused Research Center on Materials, Structures and Devices. Research collaborators also included Zhaohui Zhong, of the University of Michigan, and Ken Bosnick, of the National Institute for Nanotechnology at the University of Alberta.




Saturday, September 5, 2009

Mice Can Eat 'Junk' And Not Get Fat: Researchers Find Gene That Protects High-fat-diet Mice From Obesity


University of Michigan researchers have identified a gene that acts as a master switch to control obesity in mice. When the switch is turned off, even high-fat-diet mice remain thin.

Both mice were fed high-fat diets for several months. Deleting the IKKE gene in the mouse on the left protected it against the weight gain apparent in the mouse on the right. (Credit: Photo by Scott Galvin, U-M Photo Services)

Deleting the gene, called IKKE, also appears to protect mice against conditions that, in humans, lead to Type 2 diabetes, which is associated with obesity and is on the rise among Americans, including children and adolescents.


If follow-up studies show that IKKE is tied to obesity in humans, the gene and the protein it makes will be prime targets for the development of drugs to treat obesity, diabetes and complications associated with those disorders, said Alan Saltiel, the Mary Sue Coleman Director of the U-M Life Sciences Institute.


"We've studied other genes associated with obesity – we call them 'obesogenes' – but this is the first one we've found that, when deleted, stops the animal from gaining weight," said Saltiel, senior author of a paper to be published in the Sept. 4 edition of the journal Cell.


"The fact that you can disrupt all the effects of a high-fat diet by deleting this one gene in mice is pretty interesting and surprising," he said.


Obesity is associated with a state of chronic, low-grade inflammation that leads to insulin resistance, which is usually the first step in the development of Type 2 diabetes. In the Cell paper, Saltiel and his colleagues show that deleting, or "knocking out," the IKKE gene not only protected high-fat-diet mice from obesity, it prevented chronic inflammation, a fatty liver and insulin resistance, as well.


The high-fat-diet mice were fed a lard-like substance with 45 percent of its calories from fat. Control mice were fed standard chow with 4.5 percent of its calories from fat. The dietary regimen began when the mice were 8 weeks old and continued for 14 to 16 weeks.


The gene IKKE produces a protein kinase, also known as IKKE. Protein kinases are enzymes that turn other proteins on or off. The IKKE protein kinase appears to target proteins that, in turn, control genes regulating the mouse's metabolism.


When the high-fat diet is fed to a normal mouse, IKKE protein-kinase levels rise, the metabolic rate slows, and the animal gains weight. In that situation, the IKKE protein kinase acts as a brake on the metabolism.


Knockout mice placed on the high-fat diet did not gain weight, apparently because deleting the IKKE gene releases the metabolic brake, allowing the metabolism to speed up and burn more calories instead of storing them as fat.


"The knockout mice are not exercising any more than the control mice used in the study. They're just burning more energy," Saltiel said. "And in the process, they're generating a little heat, as well – their body temperature actually increases a bit."


Saltiel's team is now searching for small molecules that block IKKE protein-kinase activity. IKKE inhibitors could become candidates for drug development.


"If you find an inhibitor of this protein kinase, you should be able to obtain the same effect as knocking out the gene. And that's the goal," Saltiel said. If successful candidates are identified and drug development is pursued, a new treatment for obesity and diabetes is likely a decade away, he said.




First author of the Cell paper is Shian-Huey Chiang of the Life Sciences Institute. Co-authors are U-M researchers Merlijn Bazuine, Carey Lumeng, Lynn Geletka, Jonathan Mowers, Nicole White, Jing-Tyan Ma, Jie Zhou, Nathan Qi, Dan Westcott and Jennifer Delproposto. Timothy Blackwell and Fiona Yull of the Vanderbilt University School of Medicine also are co-authors.


The research was funded by the National Institutes of Health and the American Diabetes Association. All animal use was conducted in compliance with the Institute of Laboratory Animal Research's Guide for the Care and Use of Laboratory Animals and was approved by the University Committee on Use and Care of Animals at the University of Michigan.


Related links:

Life Sciences Institute:

http://www.lsi.umich.edu

Saltiel Lab:

http://www.lsi.umich.edu/facultyresearch/labs/saltiel



Saturday, August 15, 2009

World Record In Packing Puzzle Set In Tetrahedra Jam: Better Understanding Of Matter Itself?


Finding the best way to pack the greatest quantity of a specifically shaped object into a confined space may sound simple, yet it consistently has led to deep mathematical concepts and practical applications, such as improved computer security codes.


Princeton researchers have beaten the previous world record for packing the most tetrahedra into a volume. Research into these so-called packing problems has produced deep mathematical ideas and led to practical applications as well. (Credit: Princeton University/Torquato Lab)


When mathematicians solved a famed sphere-packing problem in 2005, one that first had been posed by renowned mathematician and astronomer Johannes Kepler in 1611, it made worldwide headlines.


Now, two Princeton University researchers have made a major advance in addressing a twist in the packing problem, jamming more tetrahedra -- solid figures with four triangular faces -- and other polyhedral solid objects than ever before into a space. The work could result in better ways to store data on compact discs as well as a better understanding of matter itself.


In the cover story of the Aug. 13 issue of Nature, Salvatore Torquato, a professor in the Department of Chemistry and the Princeton Institute for the Science and Technology of Materials, and Yang Jiao, a graduate student in the Department of Mechanical and Aerospace Engineering, report that they have bested the world record, set last year by Elizabeth Chen, a graduate student at the University of Michigan.


Using computer simulations, Torquato and Jiao were able to fill a volume to 78.2 percent of capacity with tetrahedra. Chen, before them, had filled 77.8 percent of the space. The previous world record was set in 2006 by Torquato and John Conway, a Princeton professor of mathematics. They succeeded in filling the space to 72 percent of capacity.


Beyond making a new world record, Torquato and Jiao have devised an approach that involves placing pairs of tetrahedra face-to-face, forming a "kissing" pattern that, viewed from the outside of the container, looks strangely jumbled and irregular.


"We wanted to know this: What's the densest way to pack space?" said Torquato, who is also a senior faculty fellow at the Princeton Center for Theoretical Science. "It's a notoriously difficult problem to solve, and it involves complex objects that, at the time, we simply did not know how to handle."


Henry Cohn, a mathematician with Microsoft Research New England in Cambridge, Mass., said, "What's exciting about Torquato and Jiao's paper is that they give compelling evidence for what happens in more complicated cases than just spheres." The Princeton researchers, he said, employ solid figures as a "wonderful test case for understanding the effects of corners and edges on the packing problem."


Studying shapes and how they fit together is not just an academic exercise. The world is filled with such solids, whether they are spherical oranges or polyhedral grains of sand, and it often matters how they are organized. Real-life specks of matter resembling these solids arise at ultra-low temperatures when materials, especially complex molecular compounds, pass through various chemical phases. How atoms clump can determine their most fundamental properties.


"From a scientific perspective, to know about the packing problem is to know something about the low-temperature phases of matter itself," said Torquato, whose interests are interdisciplinary, spanning physics, applied and computational mathematics, chemistry, chemical engineering, materials science, and mechanical and aerospace engineering.


And the whole topic of the efficient packing of solids is a key part of the mathematics that lies behind the error-detecting and error-correcting codes that are widely used to store information on compact discs and to compress information for efficient transmission around the world.


Beyond solving the practical aspects of the packing problem, the work contributes insight to a field that has fascinated mathematicians and thinkers for thousands of years. The Greek philosopher Plato theorized that the classical elements -- earth, wind, fire and water -- were constructed from polyhedra. Models of them have been found among carved stone balls created by the late Neolithic people of Scotland.


The tetrahedron, which is part of the family of geometric objects known as the Platonic solids, must be packed in the face-to-face fashion for maximum effect. But, for significant mathematical reasons, all other members of the Platonic solids, the researchers found, must be packed as lattices to cram in the largest quantity, much the way a grocer stacks oranges in staggered rows, with successive layers nestled in the dimples formed by lower levels. Lattices have great regularity because they are composed of single units that repeat themselves in exactly the same way.


Mathematicians define the five shapes composing the Platonic solids as being convex polyhedra that are regular. For non-mathematicians, this simply means that these solids have many flat faces, which are plane figures, such as triangles, squares or pentagons. Being regular figures, all angles and faces' sides are equal. The group includes the tetrahedron (with four faces), the cube (six faces), the octahedron (eight faces), the dodecahedron (12 faces) and the icosahedron (20 faces).


There's a good reason why tetrahedra must be packed differently from other Platonic solids, according to the authors. Tetrahedra lack a quality known as central symmetry. To possess this quality, an object must have a center that will bisect any line drawn to connect any two points on separate planes on its surface. The researchers also found this trait absent in 12 out of 13 of an even more complex family of shapes known as the Archimedean solids.
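
Central symmetry is easy to test computationally: for a convex solid centered on its centroid, every vertex v must be matched by an antipodal vertex -v. The check below is a generic illustration using the tetrahedron and the cube as examples; it is not the Princeton code.

```python
import numpy as np

def is_centrally_symmetric(vertices, tol=1e-9):
    """True if, after centring on the centroid, every vertex v has a partner -v."""
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)                       # centre the solid on its centroid
    return all(np.min(np.linalg.norm(v + p, axis=1)) < tol for p in v)

# Regular tetrahedron: no vertex is opposite another, so it is NOT centrally symmetric.
tetrahedron = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
# Cube: each vertex has an antipodal partner, so it IS centrally symmetric.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

print(is_centrally_symmetric(tetrahedron))   # False
print(is_centrally_symmetric(cube))          # True
```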


The conclusions of the Princeton scientists are not at all obvious, and it took the development of a complex computer program and theoretical analysis to achieve their groundbreaking results. Previous computer simulations had taken virtual piles of polyhedra and stuffed them in a virtual box and allowed them to "grow."


The algorithm designed by Torquato and Jiao, called "an adaptive shrinking cell optimization technique," did it the other way. It placed virtual polyhedra of a fixed size in a "box" and caused the box to shrink and change shape.


There are tremendous advantages to controlling the size of the box instead of blowing up polyhedra, Torquato said. "When you 'grow' the particles, it's easy for them to get stuck, so you have to wiggle them around to improve the density," he said. "Such programs get bogged down easily; there are all kinds of subtleties. It's much easier and productive, we found, thinking about it in the opposite way."
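
The Nature paper's algorithm handles polyhedra and a deforming simulation cell; as a very loose, two-dimensional illustration of the "shrink the container rather than grow the particles" idea, the sketch below packs fixed-size disks by alternating random particle wiggles with small box-shrinking attempts, rejecting any move that creates an overlap. Everything here (disks instead of tetrahedra, step sizes, counts) is a simplification for illustration, not the Princeton method.

```python
import math
import random

# Loose 2-D illustration of shrinking the box around fixed-size particles.
R = 1.0          # fixed disk radius
N = 20           # number of disks

def invalid(centers, box):
    """True if any two disks overlap or any disk pokes outside the box."""
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy < (2 * R) ** 2:
                return True
    return not all(R <= x <= box - R and R <= y <= box - R for x, y in centers)

box = 30.0
centers = []
while len(centers) < N:                        # random non-overlapping start
    c = (random.uniform(R, box - R), random.uniform(R, box - R))
    if not invalid(centers + [c], box):
        centers.append(c)

for step in range(10_000):
    if step % 2 == 0:                          # wiggle one disk a little
        i = random.randrange(N)
        old = centers[i]
        centers[i] = (old[0] + random.uniform(-0.1, 0.1),
                      old[1] + random.uniform(-0.1, 0.1))
        if invalid(centers, box):
            centers[i] = old                   # reject the move
    else:                                      # try to shrink the box slightly
        new_box = box * 0.999
        scale = new_box / box
        scaled = [(x * scale, y * scale) for x, y in centers]
        if not invalid(scaled, new_box):
            centers, box = scaled, new_box     # accept the shrink

density = N * math.pi * R ** 2 / box ** 2
print(f"packing fraction reached: {density:.3f}")
```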


Cohn, of Microsoft, called the results remarkable. It took four centuries, he noted, for mathematician Tom Hales to prove Kepler's conjecture that the best way to pack spheres is to stack them like cannonballs in a war memorial. Now, the Princeton researchers, he said, have thrown out a new challenge to the math world. "Their results could be considered a 21st Century analogue of Kepler's conjecture about spheres," Cohn said. "And, as with that conjecture, I'm sure their work will inspire many future advances."


Many researchers have pointed to various assemblies of densely packed objects and described them as optimal. The difference with this work, Torquato said, is that the algorithm and analysis developed by the Princeton team most probably shows, in the case of the centrally symmetric Platonic and Archimedean solids, "the best packings, period."


Their simulation results are also supported by theoretical arguments that the densest packings of these objects are likely to be their best lattice arrangements. "This is now a strong conjecture that people can try to prove," Torquato said.


