
Wednesday, April 17, 2013

Bad Decisions Arise from Faulty Information, Not Faulty Brain Circuits


Making decisions involves a gradual accumulation of facts that support one choice or another. A person choosing a college might weigh factors such as course selection, institutional reputation and the quality of future job prospects.

But if the wrong choice is made, Princeton University researchers have found that it might be the information rather than the brain's decision-making process that is to blame. The researchers report in the journal Science that erroneous decisions tend to arise from errors, or "noise," in the information coming into the brain rather than errors in how the brain accumulates information.

These findings address a fundamental question among neuroscientists about whether bad decisions result from noise in the external information -- or sensory input -- or because the brain made mistakes when tallying that information. In the example of choosing a college, the question might be whether a person made a poor choice because of misleading or confusing course descriptions, or because the brain failed to remember which college had the best ratings.

Previous measurements of brain neurons have indicated that brain functions are inherently noisy. The Princeton research, however, separated sensory inputs from the internal mental process to show that the former can be noisy while the latter is remarkably reliable, said senior investigator Carlos Brody, a Princeton associate professor of molecular biology and the Princeton Neuroscience Institute (PNI), and a Howard Hughes Medical Institute Investigator.

"To our great surprise, the internal mental process was perfectly noiseless. All of the imperfections came from noise in the sensory processes," Brody said. Brody worked with first author Bingni Brunton, now a postdoctoral research associate in the departments of biology and applied mathematics at the University of Washington; and Matthew Botvinick, a Princeton associate professor of psychology and PNI.

The research subjects -- four college-age volunteers and 19 laboratory rats -- listened to streams of randomly timed clicks coming into both the left ear and the right ear. After listening to a stream, the subjects had to choose the side from which more clicks originated. The rats had been trained to turn their noses in the direction from which more clicks originated.

The test subjects mostly chose the correct side but occasionally made errors. By comparing various patterns of clicks with the volunteers' responses, researchers found that all of the errors arose when two clicks overlapped, and not from any observable noise in the brain system that tallied the clicks. This was true in experiment after experiment utilizing different click patterns, in humans and rats.

The researchers used the timing of the clicks and the decision-making behavior of the test subjects to create computer models that can be used to indicate what happens in the brain during decision-making. The models provide a clear window into the brain during the "mulling over" period of decision-making, the time when a person is accumulating information but has yet to choose, Brody said.
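The accumulate-then-decide idea can be sketched in a few lines of code. The simulation below is an illustrative toy, not the authors' published model: click times are drawn from Poisson processes, the internal accumulator itself is noiseless, and the only errors come from nearly overlapping clicks being mislocalized, mirroring the paper's conclusion. The rates and overlap window are made-up parameters.

```python
import random

def simulate_trial(rate_left, rate_right, duration=1.0,
                   overlap_window=0.02, seed=None):
    """Toy Poisson-clicks trial: a noiseless accumulator adds +1 per
    right click and -1 per left click; the only noise is sensory --
    clicks that nearly overlap in time may be mislocalized."""
    rng = random.Random(seed)

    def poisson_times(rate):
        # Draw click times from a Poisson process of the given rate.
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t > duration:
                return times
            times.append(t)

    clicks = [(t, +1) for t in poisson_times(rate_right)]
    clicks += [(t, -1) for t in poisson_times(rate_left)]
    clicks.sort()

    evidence, prev_t = 0, None
    for t, side in clicks:
        # Sensory noise: when two clicks nearly overlap, the side of
        # the second one is registered at random.
        if prev_t is not None and t - prev_t < overlap_window:
            side = rng.choice([+1, -1])
        evidence += side  # the accumulation step itself is noiseless
        prev_t = t
    return "right" if evidence > 0 else "left"
```

With a strongly right-biased click rate, most simulated trials come out "right"; the occasional error traces back to the overlap rule, not the tally.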

"Before we conducted this study, we did not have a way of looking at this process without inserting electrodes into the brain," Brody said. "Now thanks to our model, we have an estimation of what is going on at each moment in time during the formation of the decision."

The study suggests that information represented and processed in the brain's neurons must be robust to noise, Brody said. "In other words, the 'neural code' may have a mechanism for inherent error correction," he said.
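Brody's suggestion that the neural code may carry inherent error correction can be illustrated by analogy with the simplest classical error-correcting scheme, a majority-vote repetition code. This is purely an analogy for how redundancy defeats noise, not a description of any neural mechanism proposed in the study.

```python
def encode(bits, n=5):
    """Repetition code: transmit each bit n times."""
    return [b for b in bits for _ in range(n)]

def decode(received, n=5):
    """Majority vote over each group of n copies recovers each bit
    as long as fewer than half of its copies are corrupted."""
    out = []
    for i in range(0, len(received), n):
        group = received[i:i + n]
        out.append(1 if sum(group) > n // 2 else 0)
    return out
```

Flipping up to two of the five copies of any bit leaves the decoded message unchanged, which is the sense in which a redundant code is "robust to noise."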

"The new work from the Brody lab is important for a few reasons," said Anne Churchland, an assistant professor of biological sciences at Cold Spring Harbor Laboratory who studies decision-making and was not involved in the study. "First, the work was very innovative because the researchers were able to study carefully controlled decision-making behavior in rodents. This is surprising in that one might have guessed rodents were incapable of producing stable, reliable decisions that are based on complex sensory stimuli.

"This work exposed some unexpected features of why animals, including humans, sometimes make incorrect decisions," Churchland said. "Specifically, the researchers found that errors are mostly driven by the inability to accurately encode sensory information. Alternative possibilities, which the authors ruled out, included noise associated with holding the stimulus in mind, or memory noise, and noise associated with a bias toward one alternative or the other."

The work was funded by the Howard Hughes Medical Institute, Princeton University and National Institutes of Health training grants.

Small in Size, Big On Power: New Microbatteries the Most Powerful Yet


Though they be but little, they are fierce. The most powerful batteries on the planet are only a few millimeters in size, yet they pack such a punch that a driver could use a cellphone powered by these batteries to jump-start a dead car battery -- and then recharge the phone in the blink of an eye. 
The graphic illustrates a high power battery technology from the University of Illinois. Ions flow between three-dimensional micro-electrodes in a lithium ion battery. (Credit: Image courtesy of the Beckman Institute for Advanced Science and Technology)
Developed by researchers at the University of Illinois at Urbana-Champaign, the new microbatteries out-power even the best supercapacitors and could drive new applications in radio communications and compact electronics.

Led by William P. King, the Bliss Professor of mechanical science and engineering, the researchers published their results in the April 16 issue of Nature Communications.

"This is a whole new way to think about batteries," King said. "A battery can deliver far more power than anybody ever thought. In recent decades, electronics have gotten small. The thinking parts of computers have gotten small. And the battery has lagged far behind. This is a microtechnology that could change all of that. Now the power source is as high-performance as the rest of it."

With currently available power sources, users have had to choose between power and energy. For applications that need a lot of power, like broadcasting a radio signal over a long distance, capacitors can release energy very quickly but can only store a small amount. For applications that need a lot of energy, like playing a radio for a long time, fuel cells and batteries can hold a lot of energy but release it or recharge slowly.
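The power-versus-energy trade-off can be made concrete with the relation t = E / P: a store's energy budget divided by its power draw gives how long it can sustain that draw. The numbers below are hypothetical, order-of-magnitude illustrations, not measurements from the paper.

```python
def discharge_time_s(energy_wh, power_w):
    """Seconds a store can sustain a given power draw: t = E / P,
    with energy converted from watt-hours to joules (x 3600)."""
    return energy_wh * 3600.0 / power_w

# Hypothetical order-of-magnitude numbers for illustration only:
supercap = {"energy_wh": 0.01, "power_w": 10.0}  # lots of power, little energy
battery = {"energy_wh": 1.0, "power_w": 0.5}     # lots of energy, modest power

t_cap = discharge_time_s(**supercap)  # a few seconds at full power
t_bat = discharge_time_s(**battery)   # a couple of hours at full power
```

The capacitor empties in seconds at full blast while the battery lasts hours at a trickle; a device that needs both bursts and endurance is stuck between the two, which is the gap the microbatteries aim to close.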

"There's a sacrifice," said James Pikul, a graduate student and first author of the paper. "If you want high energy you can't get high power; if you want high power it's very difficult to get high energy. But for very interesting applications, especially modern applications, you really need both. That's what our batteries are starting to do. We're really pushing into an area in the energy storage design space that is not currently available with technologies today."

The new microbatteries offer both power and energy, and by tweaking the structure a bit, the researchers can tune them over a wide range on the power-versus-energy scale.

The batteries owe their high performance to their internal three-dimensional microstructure. Batteries have two key components: the anode (minus side) and cathode (plus side). Building on a novel fast-charging cathode design by materials science and engineering professor Paul Braun's group, King and Pikul developed a matching anode and then developed a new way to integrate the two components at the microscale to make a complete battery with superior performance.

With so much power, the batteries could enable sensors or radio signals that broadcast 30 times farther, or devices 30 times smaller. The batteries are rechargeable and can charge 1,000 times faster than competing technologies -- imagine juicing up a credit-card-thin phone in less than a second. In addition to consumer electronics, medical devices, lasers, sensors and other applications could see leaps forward in technology with such power sources available.

"Any kind of electronic device is limited by the size of the battery -- until now," King said. "Consider personal medical devices and implants, where the battery is an enormous brick, and it's connected to itty-bitty electronics and tiny wires. Now the battery is also tiny."

Now, the researchers are working on integrating their batteries with other electronics components, as well as manufacturability at low cost.

"Now we can think outside of the box," Pikul said. "It's a new enabling technology. It's not a progressive improvement over previous technologies; it breaks the normal paradigms of energy sources. It's allowing us to do different, new things."

The National Science Foundation and the Air Force Office of Scientific Research supported this work. King also is affiliated with the Beckman Institute for Advanced Science and Technology; the Frederick Seitz Materials Research Laboratory; the Micro and Nanotechnology Laboratory; and the department of electrical and computer engineering at the U. of I.

Tuesday, April 16, 2013

Brain Development Is Guided by Junk DNA That Isn't Really Junk


Specific DNA once dismissed as junk plays an important role in brain development and might be involved in several devastating neurological diseases, UC San Francisco scientists have found.
UCSF researchers have uncovered a role in brain development and in neurological disease for little appreciated molecules called long noncoding RNA. In this image, fluorescent dyes track the presence of the RNA molecules and the genes they affect in the developing mouse brain. (Credit: Image courtesy of Alexander Ramos)
Their discovery in mice is likely to further fuel a recent scramble by researchers to identify roles for long-neglected bits of DNA within the genomes of mice and humans alike.

While researchers have been busy exploring the roles of proteins encoded by the genes identified in various genome projects, most DNA is not in genes. This so-called junk DNA has largely been pushed aside and neglected in the wake of genomic gene discoveries, the UCSF scientists said.

In their own research, the UCSF team studies molecules called long noncoding RNA (lncRNA, often pronounced as "link" RNA), which are made from DNA templates in the same way as RNA from genes.

"The function of these mysterious RNA molecules in the brain is only beginning to be discovered," said Daniel Lim, assistant professor of neurological surgery, a member of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF, and the senior author of the study, published online April 11 in the journal Cell Stem Cell.

Alexander Ramos, a student enrolled in the MD/PhD program at UCSF and first author of the study, conducted extensive computational analysis to establish guilt by association, linking lncRNAs within cells to the activation of genes.

Ramos looked specifically at patterns associated with particular developmental pathways or with the progression of certain diseases. He found an association between a set of 88 long noncoding RNAs and Huntington's disease, a deadly neurodegenerative disorder. He also found weaker associations between specific groups of long noncoding RNAs and Alzheimer's disease, convulsive seizures, major depressive disorder and various cancers.
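The "guilt by association" idea can be sketched as follows: if a lncRNA's expression profile across samples tracks the average profile of a known gene set, the two are provisionally linked. The toy below uses Pearson correlation on made-up profiles; the study's actual computational pipeline was far more involved.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def associate(lncrna_profile, gene_sets, threshold=0.8):
    """Guilt-by-association sketch: link a lncRNA to each gene set
    whose mean expression profile across samples correlates strongly
    with the lncRNA's own profile."""
    hits = []
    for name, member_profiles in gene_sets.items():
        mean_profile = [
            sum(p[i] for p in member_profiles) / len(member_profiles)
            for i in range(len(lncrna_profile))
        ]
        if pearson(lncrna_profile, mean_profile) >= threshold:
            hits.append(name)
    return hits
```

A lncRNA whose profile rises and falls with a Huntington's-related gene set would be flagged by a scheme like this, while an anti-correlated set would not.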

"Alex was the team member who developed this new research direction, did most of the experiments, and connected results to the lab's ongoing work," Lim said. The study was mostly funded through Lim's grant - a National Institutes of Health (NIH) Director's New Innovator Award, a competitive award for innovative projects that have the potential for unusually high impact.

Unlike messenger RNA, which is transcribed from the DNA in genes and guides the production of proteins, lncRNA molecules do not carry the blueprints for proteins. Because of this fact, they were long thought to not influence a cell's fate or actions.

Nonetheless, lncRNAs also are transcribed from DNA in the same way as messenger RNA, and they, too, consist of unique sequences of nucleic acid building blocks.

Evidence indicates that lncRNAs can tether structural proteins to the DNA-containing chromosomes, and in so doing indirectly affect gene activation and cellular physiology without altering the genetic code. In other words, within the cell, lncRNA molecules act "epigenetically" -- beyond genes -- not through changes in DNA.

The brain cells that the scientists focused on the most give rise to various cell types of the central nervous system. They are found in a region of the brain called the subventricular zone, which directly overlies the striatum. This is the part of the brain where neurons are destroyed in Huntington's disease, a condition triggered by a single genetic defect.

Ramos combined several advanced techniques for sequencing and analyzing DNA and RNA to identify where certain chemical changes happen to the chromosomes, and to identify lncRNAs on specific cell types found within the central nervous system. The research revealed roughly 2,000 such molecules that had not previously been described, out of about 9,000 thought to exist in mammals ranging from mice to humans.

In fact, the researchers generated far too much data to explore on their own. The UCSF scientists created a website through which their data can be used by others who want to study the role of lncRNAs in development and disease.

"There's enough here for several labs to work on," said Ramos, who has training grants from the California Institute for Regenerative Medicine (CIRM) and the NIH.

"It should be of interest to scientists who study long noncoding RNA, the generation of new nerve cells in the adult brain, neural stem cells and brain development, and embryonic stem cells," he said.

Other co-authors who worked on the study include UCSF postdoctoral fellows Aaron Diaz, PhD, Abhinav Nellore, PhD, Michael Oldham, PhD, Jun Song, PhD, Ki-Youb Park, PhD, and Gabriel Gonzales-Roybal, PhD; and MD/PhD student Ryan Delgado. Additional funders of the study included the Sontag Foundation and the Sandler Foundation.

UCSF is a leading university dedicated to promoting health worldwide through advanced biomedical research, graduate-level education in the life sciences and health professions, and excellence in patient care. 


Monday, April 15, 2013

Human Genome Project Marks 10th Anniversary


This month marks the 10-year anniversary of the Human Genome Project, a 13-year international effort to determine the sequence of the 3 billion "letters" in a human being's DNA.
The $3 billion project, led by the U.S. Department of Energy and the National Institutes of Health, began in 1990 and was completed on April 14, 2003. In the decade since then, scientists have achieved many important milestones in using genomic discoveries to advance medical knowledge.

Sequencing technology has vastly improved in recent years. Sequencing the first human genome cost about $1 billion and took 13 years to complete; today it costs about $3,000 to $5,000 and takes just one to two days.

Probing genome function

But just knowing the sequence would be meaningless without a way to interpret it. So researchers found ways to study the genome’s function, by sequencing the genomes of 135 other organisms and surveying the global variation among human genomes.

Researchers compared the genome sequences of other animals, such as chimpanzees and platypuses, as well as other eukaryotic organisms (those whose cells have a nucleus), such as yeast and flatworms. From this comparison, scientists could identify stretches of DNA that have remained largely unchanged over the course of evolution. Five to eight percent of the human genome has been unchanged for thousands of years.
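Identifying conserved stretches comes down to scoring sequence identity between aligned genomes. Below is a sliding-window sketch with toy sequences and thresholds of my choosing, not the comparative-genomics pipelines actually used in these projects.

```python
def conserved_windows(seq_a, seq_b, window=10, min_identity=0.9):
    """Slide a window along two aligned sequences and report the
    start positions (and identity fractions) of windows whose
    fraction of identical bases meets the threshold."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    hits = []
    for start in range(len(seq_a) - window + 1):
        pairs = zip(seq_a[start:start + window], seq_b[start:start + window])
        identity = sum(a == b for a, b in pairs) / window
        if identity >= min_identity:
            hits.append((start, identity))
    return hits
```

Regions that stay above the identity threshold across distantly related species are candidates for functional importance, since mutations there appear not to have been tolerated.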

One of the more surprising findings is how little of the human genome (only 1.5 percent) actually encodes proteins, the molecular building blocks that perform most of the critical functions inside cells.

To probe this mystery, more than 400 researchers from 32 labs worldwide created the ENCyclopedia Of DNA Elements (ENCODE) consortium. In 2012, they published many important findings about how the human genome functions. These include locations in the genome that may be genetic "switches" to turn genes on and off, as well as demonstrating that more than 80 percent of the genome that was once called "junk DNA" actually does serve a function.

Other research has focused on measuring the variation among human genomes. Preliminary studies during the Human Genome Project indicated that human genomes differ by just one-tenth of a percent. Investigating the limited variation that does exist is key to understanding human health and disease.

In sickness and in health
The first catalog of human genome variation was the International HapMap Project, which compared the genomes of people from Europe, China, Japan and Africa. Biotech companies have used findings from this project and its follow-on, the 1000 Genomes Project, to study populations with and without diseases, in the hope of identifying genetic variants associated with disease. Such genome-wide association studies have resulted in the identification of thousands of variants that can influence a person's likelihood of developing a disease.
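A genome-wide association study compares, variant by variant, how often an allele appears in people with a disease versus without. For a single variant the comparison reduces to a chi-square test on a 2x2 table. The sketch below uses hypothetical counts and the nominal one-degree-of-freedom cutoff; real studies apply far stricter genome-wide significance thresholds to correct for testing millions of variants.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table of allele counts:
                  risk allele   other allele
        cases          a              b
        controls       c              d
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def scan_variants(tables, threshold=3.84):
    """Flag variants whose statistic exceeds the nominal p < 0.05
    cutoff for one degree of freedom (3.84). Toy counts only."""
    hits = {}
    for name, t in tables.items():
        stat = chi_square_2x2(*t)
        if stat > threshold:
            hits[name] = stat
    return hits
```

A variant seen in 40 of 50 cases but only 10 of 50 controls scores far above the cutoff, while a variant split evenly between groups scores zero.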

As a result of these studies, the U.S. Food and Drug Administration now requires that the labels of more than 100 drugs include information about certain genetic markers, so doctors can tailor their prescriptions based on a patient's genetic makeup.

In the 10 years since the Human Genome Project was completed, researchers have made big strides in using genomic information in diagnosing and treating cancer. For instance, the breast cancer drug trastuzumab (Herceptin) only works for women with tumors of a certain type known as "HER-2 positive." Similarly, the lung cancer drugs gefitinib (Iressa) and erlotinib (Tarceva) are only effective for patients whose tumors have so-called "EGFR" mutations.

When the genome project began, mutations in only 53 genes had been linked to disease; today, mutations in more than 2,900 genes have been.

But scientists have a long way to go in understanding the human genome and how it can be used for improving human health. The rise of personalized genomics and changes in the ways health information is collected and used are prompting a new era in medicine, which brings both challenges and opportunities.

What Happens in the Brain to Make Music Rewarding?


A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. The study, conducted at the Montreal Neurological Institute and Hospital -- The Neuro, McGill University and published in the journal Science on April 12, pinpoints the specific brain activity that makes new music rewarding and predicts the decision to purchase music.
A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. (Credit: © Warren Goldswain / Fotolia)

Participants in the study listened to 60 previously unheard music excerpts while undergoing functional magnetic resonance imaging (fMRI) scanning, providing bids of how much they were willing to spend for each item in an auction paradigm. "When people listen to a piece of music they have never heard before, activity in one brain region can reliably and consistently predict whether they will like or buy it. This is the nucleus accumbens, which is involved in forming expectations that may be rewarding," says lead investigator Dr. Valorie Salimpoor, who conducted the research in Dr. Robert Zatorre's lab at The Neuro and is now at Baycrest Health Sciences' Rotman Research Institute. "What makes music so emotionally powerful is the creation of expectations. Activity in the nucleus accumbens is an indicator that expectations were met or surpassed, and in our study we found that the more activity we see in this brain area while people are listening to music, the more money they are willing to spend."
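The reported link between activity in one region and willingness to pay can be caricatured as a threshold classifier: find the activity cutoff that best separates purchased from unpurchased trials. The values below are invented for illustration; the study analyzed fMRI signal against auction bids, not this simple rule.

```python
def best_threshold(activity, bought):
    """Toy classifier: choose the activity cutoff that best separates
    'bought' trials from 'not bought' trials, and report its accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(activity)):
        preds = [a >= t for a in activity]  # predict "buy" above cutoff
        acc = sum(p == b for p, b in zip(preds, bought)) / len(bought)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

If high-activity trials line up with purchases, a single cutoff separates them cleanly, which is the intuition behind "activity in one brain region can predict whether they will buy it."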

The second important finding is that the nucleus accumbens doesn't work alone, but interacts with the auditory cortex, an area of the brain that stores information about the sounds and music we have been exposed to. The more a given piece was rewarding, the greater the cross-talk between these regions. Similar interactions were also seen between the nucleus accumbens and other brain areas, involved in high-level sequencing, complex pattern recognition and areas involved in assigning emotional and reward value to stimuli.

In other words, the brain assigns value to music through the interaction of ancient dopaminergic reward circuitry, involved in reinforcing behaviours that are absolutely necessary for our survival such as eating and sex, with some of the most evolved regions of the brain, involved in advanced cognitive processes that are unique to humans.

"This is interesting because music consists of a series of sounds that when considered alone have no inherent value, but when arranged together through patterns over time can act as a reward, says Dr. Robert Zatorre, researcher at The Neuro and co-director of the International Laboratory for Brain, Music and Sound Research. "The integrated activity of brain circuits involved in pattern recognition, prediction, and emotion allow us to experience music as an aesthetic or intellectual reward."

"The brain activity in each participant was the same when they were listening to music that they ended up purchasing, although the pieces they chose to buy were all different," adds Dr. Salimpoor. "These results help us to see why people like different music -- each person has their own uniquely shaped auditory cortex, which is formed based on all the sounds and music heard throughout our lives. Also, the sound templates we store are likely to have previous emotional associations."

An innovative aspect of this study is how closely it mimics real-life music-listening experiences. Researchers used an interface and prices similar to iTunes. To replicate a real-life scenario as closely as possible, and to assess reward value objectively, individuals could purchase music with their own money, as an indication that they wanted to hear it again. Since musical preferences are influenced by past associations, only novel music excerpts were selected (to minimize explicit predictions), using music recommendation software (such as Pandora and Last.fm) to reflect individual preferences.

The interactions between the nucleus accumbens and the auditory cortex suggest that we create expectations of how musical sounds should unfold based on what is learned and stored in our auditory cortex, and that our emotions result from the violation or fulfillment of these expectations. We are constantly making reward-related predictions to survive, and this study provides neurobiological evidence that we also make predictions when listening to an abstract stimulus, music, even if we have never heard the music before. Through pattern recognition and prediction, an otherwise simple set of stimuli, arranged together over time, becomes powerful enough to make us happy or bring us to tears, and to communicate some of the most intense, complex emotions and thoughts.

Listen to the music excerpts used in the study: http://www.zlab.mcgill.ca/science2013/

Friday, April 5, 2013

3-D Printer Can Build Synthetic Tissues


A custom-built programmable 3D printer can create materials with several of the properties of living tissues, Oxford University scientists have demonstrated.
A custom-built programmable 3D printer can create materials with several of the properties of living tissues, Oxford University scientists have demonstrated: Droplet network c.500 microns across with electrically conductive pathway between electrodes mimicking nerve. (Credit: Oxford University/G Villar)

The new type of material consists of thousands of connected water droplets, encapsulated within lipid films, which can perform some of the functions of the cells inside our bodies.

These printed 'droplet networks' could be the building blocks of a new kind of technology for delivering drugs to places where they are needed and potentially one day replacing or interfacing with damaged human tissues. Because droplet networks are entirely synthetic, have no genome and do not replicate, they avoid some of the problems associated with other approaches to creating artificial tissues -- such as those that use stem cells.

The team report their findings in this week's Science.

'We aren't trying to make materials that faithfully resemble tissues but rather structures that can carry out the functions of tissues,' said Professor Hagan Bayley of Oxford University's Department of Chemistry, who led the research. 'We've shown that it is possible to create networks of tens of thousands of connected droplets. The droplets can be printed with protein pores to form pathways through the network that mimic nerves and are able to transmit electrical signals from one side of a network to the other.'

Each droplet is an aqueous compartment about 50 microns in diameter. Although this is around five times larger than living cells, the researchers believe there is no reason why they could not be made smaller. The networks remain stable for weeks.

'Conventional 3D printers aren't up to the job of creating these droplet networks, so we custom built one in our Oxford lab to do it,' said Professor Bayley. 'At the moment we've created networks of up to 35,000 droplets but the size of network we can make is really only limited by time and money. For our experiments we used two different types of droplet, but there's no reason why you couldn't use 50 or more different kinds.'

The unique 3D printer was built by Gabriel Villar, a DPhil student in Professor Bayley's group and the lead author of the paper.

The droplet networks can be designed to fold themselves into different shapes after printing -- so, for example, a flat shape that resembles the petals of a flower is 'programmed' to fold itself into a hollow ball, which cannot be obtained by direct printing. The folding, which resembles muscle movement, is powered by osmolarity differences that generate water transfer between droplets.

Gabriel Villar of Oxford University's Department of Chemistry said: 'We have created a scalable way of producing a new type of soft material. The printed structures could in principle employ much of the biological machinery that enables the sophisticated behaviour of living cells and tissues.'

Monday, April 1, 2013

Biological Transistor Enables Computing Within Living Cells


When Charles Babbage prototyped the first computing machine in the 19th century, he imagined using mechanical gears and latches to control information. ENIAC, the first modern computer developed in the 1940s, used vacuum tubes and electricity. Today, computers use transistors made from highly engineered semiconducting materials to carry out their logical operations.
Artist's rendering of cells. (Credit: © Jezper / Fotolia)
And now a team of Stanford University bioengineers has taken computing beyond mechanics and electronics into the living realm of biology. In a paper to be published March 28 in Science, the team details a biological transistor made from genetic material -- DNA and RNA -- in place of gears or electrons. The team calls its biological transistor the "transcriptor."

"Transcriptors are the key component behind amplifying genetic logic -- akin to the transistor and electronics," said Jerome Bonnet, PhD, a postdoctoral scholar in bioengineering and the paper's lead author.

The creation of the transcriptor allows engineers to compute inside living cells to record, for instance, when cells have been exposed to certain external stimuli or environmental factors, or even to turn on and off cell reproduction as needed.

"Biological computers can be used to study and reprogram living systems, monitor environments and improve cellular therapeutics," said Drew Endy, PhD, assistant professor of bioengineering and the paper's senior author.

The biological computer

In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biology, a transcriptor controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA.

"We have repurposed a group of natural proteins, called integrases, to realize digital control over the flow of RNA polymerase along DNA, which in turn allowed us to engineer amplifying genetic logic," said Endy.

Using transcriptors, the team has created what are known in electrical engineering as logic gates that can derive true-false answers to virtually any biochemical question that might be posed within a cell.

They refer to their transcriptor-based logic gates as "Boolean Integrase Logic," or "BIL gates" for short.

Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.

Despite their outward differences, all modern computers, from ENIAC to Apple, share three basic functions: storing, transmitting and performing logical operations on information.

Last year, Endy and his team made news in delivering the other two core components of a fully functional genetic computer. The first was a type of rewritable digital data storage within DNA. They also developed a mechanism for transmitting genetic information from cell to cell, a sort of biological Internet.

It all adds up to creating a computer inside a living cell.

Boole's gold

Digital logic is often referred to as "Boolean logic," after George Boole, the mathematician who proposed the system in 1854. Today, Boolean logic typically takes the form of 1s and 0s within a computer. Answer true, gate open; answer false, gate closed. Open. Closed. On. Off. 1. 0. It's that basic. But it turns out that with just these simple tools and ways of thinking you can accomplish quite a lot.

"AND" and "OR" are just two of the most basic Boolean logic gates. An "AND" gate, for instance, is "true" when both of its inputs are true -- when "a" and "b" are true. An "OR" gate, on the other hand, is true when either or both of its inputs are true.

In a biological setting, the possibilities for logic are as limitless as in electronics, Bonnet explained. "You could test whether a given cell had been exposed to any number of external stimuli -- the presence of glucose and caffeine, for instance. BIL gates would allow you to make that determination and to store that information so you could easily identify those which had been exposed and which had not," he said.
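Bonnet's glucose-and-caffeine example is ordinary Boolean logic, which can be written out directly. The code illustrates the logic a BIL gate computes, not the biochemistry that computes it, and the cell fields are hypothetical names of my own.

```python
def AND(a, b):
    """True only when both inputs are true."""
    return a and b

def OR(a, b):
    """True when either input is true."""
    return a or b

def exposed_to_both(cell):
    """Hypothetical readout for the glucose-and-caffeine example;
    the 'cell' dict fields are illustrative, not a real assay."""
    return AND(cell["glucose"], cell["caffeine"])

cells = [{"glucose": True, "caffeine": True},
         {"glucose": True, "caffeine": False}]
flagged = [exposed_to_both(c) for c in cells]  # [True, False]
```

The biological version would store the gate's answer in the cell's DNA, so the exposure history can be read out later rather than recomputed.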

By the same token, you could tell the cell to start or stop reproducing if certain factors were present. And, by coupling BIL gates with the team's biological Internet, it is possible to communicate genetic information from cell to cell to orchestrate the behavior of a group of cells.

"The potential applications are limited only by the imagination of the researcher," said co-author Monica Ortiz, a PhD candidate in bioengineering who demonstrated autonomous cell-to-cell communication of DNA encoding various BIL gates.

Building a transcriptor

To create transcriptors and logic gates, the team used carefully calibrated combinations of enzymes -- the integrases mentioned earlier -- that control the flow of RNA polymerase along strands of DNA. If this were electronics, the DNA would be the wire and the RNA polymerase the electron.

"The choice of enzymes is important," Bonnet said. "We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms."

On the technical side, the transcriptor achieves a key similarity between the biological transistor and its semiconducting cousin: signal amplification.

With transcriptors, a very small change in the expression of an integrase can create a very large change in the expression of any two other genes.

To understand the importance of amplification, consider that the transistor was first conceived as a way to replace expensive, inefficient and unreliable vacuum tubes in the amplification of telephone signals for transcontinental phone calls. Electrical signals traveling along wires get weaker the farther they travel, but if you put an amplifier every so often along the way, you can relay the signal across a great distance. The same would hold in biological systems as signals get transmitted among a group of cells.

"It is a concept similar to transistor radios," said Pakpoom Subsoontorn, a PhD candidate in bioengineering and co-author of the study who developed theoretical models to predict the behavior of BIL gates. "Relatively weak radio waves traveling through the air can get amplified into sound."

Public-domain biotechnology

To bring the age of the biological computer to a much speedier reality, Endy and his team have contributed all of the BIL gates to the public domain so that others can immediately harness and improve upon the tools.

"Most of biotechnology has not yet been imagined, let alone made true. By freely sharing important basic tools everyone can work better together," Bonnet said.