Friday, December 30, 2011

The Law of Online Sharing




Facebook's Mark Zuckerberg will eventually have to deal with the fact that all growth has limits.

Credit: Technology Review
The idea of limitless growth gives sleepless nights to environmentalists, but not to Facebook founder Mark Zuckerberg. He espouses a law of social sharing, which predicts that every year, for the foreseeable future, the amount of information you share on the Web will double.

That rule of thumb can be visualized mathematically as a rapidly growing exponential curve. More simply, our online social lives are set to get significantly busier. As for Facebook, more personal data means better ad targeting. If things work out, Zuckerberg's net worth will follow a similar trajectory to that described in his law of social sharing.

That law is said to be mathematically derived from data inside Facebook. In ambition, it is closely modeled on Moore's Law, which was conceived by the computer-processor pioneer Gordon Moore in 1965 and has been at work in every advance in computing since. Also an exponential curve, it states that every two years twice as many transistors can be fitted onto a chip of any given area for the same price, allowing processing power to get cheaper and more capable.
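
To make the comparison concrete, here is a minimal sketch of the two doubling rules, with starting values that are purely illustrative (they are not figures from Facebook or Intel):

    # Hedged sketch: the two doubling rules described above, illustrative numbers only.
    def shared_items(years, base=100):
        # Zuckerberg's Law: the amount shared doubles every year
        return base * 2 ** years

    def transistor_count(years, base=2300):
        # Moore's Law: transistor counts double roughly every two years
        return base * 2 ** (years / 2)

    for y in range(0, 11, 2):
        print(y, shared_items(y), round(transistor_count(y)))

After a decade, sharing under Zuckerberg's Law has grown roughly 1,000-fold, while transistor counts under Moore's Law have grown about 32-fold, which is why the sharing curve is by far the steeper of the two.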

There's a hint of vanity in Zuckerberg's attempt to ape Moore. But it makes sense to try to describe the mechanisms that have raised Facebook and other social-Web companies to power. The Web defines our time and is being rapidly reshaped by social content—from dumb viral videos to earnest pleas on serious issues. Facebook's success has left older companies like Google scrambling to add social features to their own products. Zuckerberg's Law can help us understand such a sudden change of tack from a seemingly dominant company, just as Moore's Law has long been used to plan and explain new strategies and technologies.

Inasmuch as Facebook is the company most invested in Zuckerberg's Law, its every move can be understood as an effort to sustain the graceful upward curve of its founder's formula. The short-term prospects look good for Zuckerberg. The original Moore's Law is on his side; faster, cheaper computers and mobile devices have made sharing easier and allowed us to do it wherever we go. Just as important, we are willing to play along, embracing new features from Facebook and others that lead us to share things today that we wouldn't or couldn't have yesterday.

Facebook's most recent major product launch, last September, is clearly aimed at validating Zuckerberg's prophecy and may provide its first real test. An upgrade to the Open Graph platform that unleashed the now ubiquitous Like button onto the Web, it added a feature that allows apps and Web sites to automatically share your activity via Facebook as you go about your business. Users must first give a service permission to share automatically on their behalf. After that, frictionless sharing, as it has become known, makes sharing happen without your needing to click a Like button, or to even think about sharing. The most prominent early implementation was the music-streaming service Spotify, which can now automatically post on Facebook the details of every song you listen to. In the first two months of frictionless sharing, more than 1.5 billion "listens" were shared through Spotify and other music apps. News organizations like the Washington Post use the feature, making it possible for them to share every article a person reads on their sites or in a dedicated app. Frictionless sharing is also helping Facebook drag formerly offline activities onto the Web. An app for runners can now automatically post the time, distance, and path of a person's morning run.

Frictionless sharing sustains Zuckerberg's Law by automating what used to be a manual task, thus removing a brake on the rate at which we can share. It also shows that we are willing to compromise our previous positions on how much sharing is too much. Facebook introduced a form of automatic sharing four years ago with a feature called Beacon, but it retreated after a strong backlash from users. Beacon automatically shared purchases that Facebook members made through affiliated online retailers, such as eBay. Frictionless sharing reintroduces the same basic model with the difference that it is opt-in rather than opt-out. Carl Sjogreen, a computer scientist who is a product director overseeing Open Graph, says it hasn't elicited anything like the rage that met Beacon's debut. "Everyone has a different idea of what they want to share, and what they want to see," says Sjogreen. Moreover, judging by the number of Spotify updates from my Facebook friends, frictionless sharing is pretty popular.

Privacy concerns will surely arise again as Facebook and others become able to ingest and process more of our personal data. Yet our urge to share always seems to win out. The potential for GPS-equipped cell phones to become location trackers, should the government demand access to our data, has long concerned some people. A South Park episode last year even portrayed an evil caricature of Apple boss Steve Jobs standing before a wall-sized map labeled "Where Everybody in the World Is Right Now." Six months later, to a mostly positive reception, Apple debuted a new iPhone feature called Find My Friends, which encourages users to let Apple track their location and share it.

It's not hard to explain why we seem eager to do our bit to maintain the march of Zuckerberg's Law. Social sites are like Skinner boxes: we press the Like button and are rewarded with attention and interaction from our friends. It doesn't take long to get conditioned to that reward. Frictionless sharing can now push the lever for us day and night, in hopes of drawing even more attention from others.

Unfortunately for Zuckerberg and his law, not every part of that feedback loop can be so easily boosted. Frictionless sharing helps, but getting others to care is the bigger challenge. In 2009 a new social site called Blippy was launched; it connected with your credit card to create a Twitter-style online feed of everything you bought. That stream could be made public or shared with particular contacts. Blippy got a lot of press but not the wide adoption its cofounder Philip Kaplan had hoped for. "Most people thought Blippy's biggest challenge would be getting users to share their purchases," he says. "Turns out the hard part was getting users to look at other people's purchases. Getting people to share is a small hump. Getting them to obsess over the data—making it fun, interesting, or useful—is the big hump."

Sjogreen has that problem in his sights. He says he is working on ways to turn the impending flood of daily trivialities coming from frictionless sharing into something fun, interesting, and useful. Repackaging the raw information to make it more compelling to others is one tactic. "It's the patterns and anomalies that matter to us," he says. For example, if you notice that a friend just watched 23 episodes of Breaking Bad in a row, you may decide you should check out that show after all. Or if he sets a new personal record on his morning run, the app in the phone strapped to his arm could automatically tout it to friends. Perhaps Blippy would have thrived if it had highlighted significant purchases like vacations, instead of simply blasting people with everything from grocery lists to fuel bills.
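
As a rough illustration of the "patterns and anomalies" idea Sjogreen describes, a minimal sketch (my own, not Facebook's Open Graph code; the event names are hypothetical) might surface an item from an activity stream only when it beats a personal record:

    # Hedged sketch: flag only record-breaking events in an activity stream.
    def highlights(activity):
        """activity: list of (kind, value) events, e.g. ("run_km", 7.5)."""
        best = {}
        flagged = []
        for kind, value in activity:
            if kind in best and value > best[kind]:
                flagged.append(f"New personal best: {kind} = {value}")
            best[kind] = max(value, best.get(kind, value))
        return flagged

    print(highlights([("run_km", 5.0), ("run_km", 4.2), ("run_km", 7.5)]))
    # -> ['New personal best: run_km = 7.5']

Everything else in the stream stays in the background, which is one way to keep frictionless sharing from overwhelming the people on the receiving end.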

We can only guess at the effectiveness of Sjogreen's future tactics, but it is certain that they can sustain Zuckerberg's Law for only so long. Gordon Moore put it well in 2005 when reflecting on the success of his own law: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens."

Facebook's impending problem is that even if the company enables future pacemakers to share our every heartbeat, it cannot automate caring—the most important part of the feedback loop that has driven the social Web's ascent. Nothing can support exponential growth for long. No matter how cleverly our friends' social output is summarized and highlighted for us, there are only so many hours in the day for us to express that we care. Today, the law of social sharing is a useful way to think about the rise of social computing, but eventually, reality will make it obsolete.

Thursday, December 29, 2011

Crucial Advances in 'Brain Reading' Demonstrated





At UCLA's Laboratory of Integrative Neuroimaging Technology, researchers use functional MRI brain scans to observe brain signal changes that take place during mental activity. They then employ computerized machine learning (ML) methods to study these patterns and identify the cognitive state -- or sometimes the thought process -- of human subjects. The technique is called "brain reading" or "brain decoding."

An innovative machine learning method anticipates neurocognitive changes, similar to predictive text entry for cell phones and Internet search engines. (Credit: © ktsdesign / Fotolia)

In a new study, the UCLA research team describes several crucial advances in this field, using fMRI and machine learning methods to perform "brain reading" on smokers experiencing nicotine cravings.

The research, presented last week at the Neural Information Processing Systems' Machine Learning and Interpretation in Neuroimaging workshop in Spain, was funded by the National Institute on Drug Abuse, which is interested in using these methods to help people control drug cravings.

In this study on addiction and cravings, the team classified data taken from cigarette smokers who were scanned while watching videos meant to induce nicotine cravings. The aim was to understand in detail which regions of the brain and which neural networks are responsible for resisting nicotine addiction specifically, and cravings in general, said Dr. Ariana Anderson, a postdoctoral fellow in the Integrative Neuroimaging Technology lab and the study's lead author.

"We are interested in exploring the relationships between structure and function in the human brain, particularly as related to higher-level cognition, such as mental imagery," Anderson said. "The lab is engaged in the active exploration of modern data-analysis approaches, such as machine learning, with special attention to methods that reveal systems-level neural organization."

For the study, smokers sometimes watched videos meant to induce cravings, sometimes watched "neutral" videos and sometimes watched no video at all. They were instructed to attempt to fight nicotine cravings when they arose.

The data from fMRI scans taken of the study participants was then analyzed. Traditional machine learning methods were augmented by Markov processes, which use past history to predict future states. By measuring the brain networks active over time during the scans, the resulting machine learning algorithms were able to anticipate changes in subjects' underlying neurocognitive structure, predicting with a high degree of accuracy (90 percent for some of the models tested) what they were watching and, as far as cravings were concerned, how they were reacting to what they viewed.
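
The article does not give implementation details, but the general idea of augmenting a per-scan classifier with a Markov model can be sketched as follows (a hypothetical illustration, not the UCLA code; the states and transition probabilities are made up): the belief about the current mental state is carried forward through an assumed transition matrix and then updated with the classifier's output for the current scan.

    # Hedged sketch of Markov-style state filtering over classifier outputs.
    import numpy as np

    states = ["resisting", "indulging", "neutral"]
    T = np.array([[0.8, 0.1, 0.1],     # assumed P(next state | previous state)
                  [0.2, 0.7, 0.1],
                  [0.1, 0.1, 0.8]])

    def decode(frame_probs):
        """frame_probs: (n_scans, 3) per-scan classifier probabilities."""
        belief = np.full(3, 1.0 / 3)
        labels = []
        for p in frame_probs:
            belief = T.T @ belief        # propagate the history forward
            belief *= p                  # fold in the current scan's evidence
            belief /= belief.sum()
            labels.append(states[int(np.argmax(belief))])
        return labels

In a scheme like this, the prediction at each time point depends on what came before, which is what lets the model anticipate changes rather than simply classify each scan independently.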

"We detected whether people were watching and resisting cravings, indulging in them, or watching videos that were unrelated to smoking or cravings," said Anderson, who completed her Ph.D. in statistics at UCLA. "Essentially, we were predicting and detecting what kind of videos people were watching and whether they were resisting their cravings."

In essence, the algorithm was able to complete or "predict" the subjects' mental states and thought processes in much the same way that Internet search engines or texting programs on cell phones anticipate and complete a sentence or request before the user is finished typing. And this machine learning method based on Markov processes demonstrated a large improvement in accuracy over traditional approaches, the researchers said.

Machine learning methods, in general, create a "decision layer" -- essentially a boundary separating the different classes one needs to distinguish. For example, values on one side of the boundary might indicate that a subject believes various test statements and, on the other, that a subject disbelieves these statements. Researchers have found they can detect these believe-disbelieve differences with high accuracy, in effect creating a lie detector. An innovation described in the new study is a means of making these boundaries interpretable by neuroscientists, rather than an often obscure boundary created by more traditional methods, like support vector machine learning.

"In our study, these boundaries are designed to reflect the contributed activity of a variety of brain sub-systems or networks whose functions are identifiable -- for example, a visual network, an emotional-regulation network or a conflict-monitoring network," said study co-author Mark S. Cohen, a professor of neurology, psychiatry and biobehavioral sciences at UCLA's Staglin Center for Cognitive Neuroscience and a researcher at the California NanoSystems Institute at UCLA.

"By projecting our problem of isolating specific networks associated with cravings into the domain of neurology, the technique does more than classify brain states -- it actually helps us to better understand the way the brain resists cravings," added Cohen, who also directs UCLA's Neuroengineering Training Program.

Remarkably, by placing this problem into neurological terms, the decoding process becomes significantly more reliable and accurate, the researchers said. This is especially significant, they said, because it is unusual to use prior outcomes and states in order to inform the machine learning algorithms, and it is particularly challenging in the brain because so much is unknown about how the brain works.

Machine learning typically involves two steps: a "training phase," in which the computer evaluates a set of known outcomes -- say, a bunch of trials in which a subject indicated belief or disbelief -- and builds a boundary based on that knowledge, and a second, "prediction" phase, in which it uses that boundary to classify new, unseen data.
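
In code, the two phases look something like the following generic sketch (a linear classifier run on made-up feature vectors, offered only as an illustration, not the study's actual pipeline or data):

    # Hedged sketch of the train/predict split described above.
    import numpy as np
    from sklearn.svm import LinearSVC

    X_train = np.random.randn(40, 5)                     # e.g. 40 scans, 5 features each
    y_train = np.repeat(["believe", "disbelieve"], 20)   # known outcomes

    clf = LinearSVC()
    clf.fit(X_train, y_train)        # training phase: learn the separating boundary

    X_new = np.random.randn(3, 5)    # new, unlabeled scans
    print(clf.predict(X_new))        # prediction phase: apply the learned boundary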

In future research, the neuroscientists said, they will be using these machine learning methods in a biofeedback context, showing subjects real-time brain readouts to let them know when they are experiencing cravings and how intense those cravings are, in the hopes of training them to control and suppress those cravings.

But since this clearly changes the process and cognitive state for the subject, the researchers said, they may face special challenges in trying to decode a "moving target" and in separating the "training" phase from the "prediction" phase.

Wednesday, December 28, 2011

More Powerful Supercomputers? New Device Could Bring Optical Information Processing



Researchers have created a new type of optical device small enough to fit millions on a computer chip that could lead to faster, more powerful information processing and supercomputers.

This illustration shows a new "all-silicon passive optical diode," a device small enough to fit millions on a computer chip that could lead to faster, more powerful information processing and supercomputers. The device has been developed by Purdue University researchers. (Credit: Birck Nanotechnology Center, Purdue University)

The "passive optical diode" is made from two tiny silicon rings measuring 10 microns in diameter, or about one-tenth the width of a human hair. Unlike other optical diodes, it does not require external assistance to transmit signals and can be readily integrated into computer chips.

The diode is capable of "nonreciprocal transmission," meaning it transmits signals in only one direction, making it capable of information processing, said Minghao Qi (pronounced Chee), an associate professor of electrical and computer engineering at Purdue University.

"This one-way transmission is the most fundamental part of a logic circuit, so our diodes open the door to optical information processing," said Qi, working with a team also led by Andrew Weiner, Purdue's Scifres Family Distinguished Professor of Electrical and Computer Engineering.

The diodes are described in a paper to be published online Dec. 22 in the journal Science. The paper was written by graduate students Li Fan, Jian Wang, Leo Varghese, Hao Shen and Ben Niu, research associate Yi Xuan, and Weiner and Qi.

Although fiberoptic cables are instrumental in transmitting large quantities of data across oceans and continents, information processing is slowed and the data are susceptible to cyberattack when optical signals must be translated into electronic signals for use in computers, and vice versa.

"This translation requires expensive equipment," Wang said. "What you'd rather be able to do is plug the fiber directly into computers with no translation needed, and then you get a lot of bandwidth and security."

Electronic diodes constitute critical junctions in transistors and help enable integrated circuits to switch on and off and to process information. The new optical diodes are compatible with industry manufacturing processes for complementary metal-oxide-semiconductors, or CMOS, used to produce computer chips, Fan said.

"These diodes are very compact, and they have other attributes that make them attractive as a potential component for future photonic information processing chips," she said.

The new optical diodes could make for faster and more secure information processing by eliminating the need for this translation. The devices, which are nearly ready for commercialization, could also lead to faster, more powerful supercomputers if used to connect numerous processors together.

"The major factor limiting supercomputers today is the speed and bandwidth of communication between the individual superchips in the system," Varghese said. "Our optical diode may be a component in optical interconnect systems that could eliminate such a bottleneck."

Infrared light from a laser at telecommunication wavelength goes through an optical fiber and is guided by a microstructure called a waveguide. It then passes sequentially through two silicon rings and undergoes "nonlinear interaction" while inside the tiny rings. Depending on which ring the light enters first, it will either pass in the forward direction or be dissipated in the backward direction, making for one-way transmission. The rings can be tuned by heating them using a "microheater," which changes the wavelengths at which they transmit, making it possible to handle a broad frequency range.

Monday, December 26, 2011

Chemists Solve an 84-Year-Old Theory On How Molecules Move Energy After Light Absorption



The same principle that causes figure skaters to spin faster as they draw their arms into their bodies has now been used by Michigan State University researchers to understand how molecules move energy around following the absorption of light.
MSU chemist Jim McCusker and postdoctoral researcher Dong Guo proved an 84-year-old theory. (Credit: Photo courtesy of MSU.)

Conservation of angular momentum is a fundamental property of nature, one that astronomers use to detect the presence of satellites circling distant planets. In 1927, it was proposed that this principle should apply to chemical reactions, but a clear demonstration has never been achieved.
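
For readers who want the skater analogy in symbols: angular momentum is L = Iω, the product of the moment of inertia I and the angular speed ω. Because L is conserved, pulling mass inward (reducing I) forces ω to rise in proportion, which is why the skater spins faster. The 1927 proposal, as described above, was that the same conservation bookkeeping should also constrain chemical reactions.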

In the current issue of Science, MSU chemist Jim McCusker demonstrates for the first time that the effect is real and suggests how scientists could use it to control and predict chemical reaction pathways in general.

"The idea has floated around for decades and has been implicitly invoked in a variety of contexts, but no one had ever come up with a chemical system that could demonstrate whether or not the underlying concept was valid," McCusker said. "Our result not only validates the idea, but it really allows us to start thinking about chemical reactions from an entirely different perspective."

The experiment involved the preparation of two closely related molecules that were specifically designed to undergo a chemical reaction known as fluorescence resonance energy transfer, or FRET. Upon absorption of light, the system is predisposed to transfer that energy from one part of the molecule to another.

McCusker's team changed the identity of one of the atoms in the molecule from chromium to cobalt. This altered the molecule's properties and shut down the reaction. The absence of any detectable energy transfer in the cobalt-containing compound confirmed the hypothesis.

"What we have successfully conducted is a proof-of-principle experiment," McCusker said. "One can easily imagine employing these ideas to other chemical processes, and we're actually exploring some of these avenues in my group right now."

The researchers believe their results could impact a variety of fields including molecular electronics, biology and energy science through the development of new types of chemical reactions.

Dong Guo, a postdoctoral researcher, and Troy Knight, former graduate student and now research scientist at Dow Chemical, were part of McCusker's team. Funding was provided by the National Science Foundation.

Tuesday, December 20, 2011

Eating less keeps the brain young



Overeating may cause brain aging while eating less turns on a molecule that helps the brain stay young.

A team of Italian researchers at the Catholic University of Sacred Heart in Rome have discovered that this molecule, called CREB1, is triggered by "caloric restriction" (low caloric diet) in the brain of mice. They found that CREB1 activates many genes linked to longevity and to the proper functioning of the brain.

This work was led by Giovambattista Pani, researcher at the Institute of General Pathology, Faculty of Medicine at the Catholic University of Sacred Heart in Rome, directed by Professor Achille Cittadini, in collaboration with Professor Claudio Grassi of the Institute of Human Physiology. The research appears this week in the Proceedings of the National Academy of Sciences (PNAS).

"Our hope is to find a way to activate CREB1, for example through new drugs, so to keep the brain young without the need of a strict diet," Dr Pani said.

Caloric restriction means the animals can eat only up to 70 percent of the food they would normally consume; it is a known experimental way to extend life, as seen in many experimental models. Typically, caloric-restricted mice do not become obese and do not develop diabetes; moreover, they show better cognitive performance and memory and are less aggressive. They also develop Alzheimer's disease much later, if at all, and with less severe symptoms than overfed animals.

Many studies suggest that obesity is bad for the brain: it slows it down and causes early brain aging, making it susceptible to diseases typical of older people, such as Alzheimer's and Parkinson's. In contrast, caloric restriction keeps the brain young. Nevertheless, the precise molecular mechanism behind the positive effects of a hypocaloric diet on the brain had remained unknown until now.

The Italian team discovered that CREB1 is the molecule activated by caloric restriction and that it mediates the beneficial effects of the diet on the brain by turning on another group of molecules linked to longevity, the "sirtuins." This finding is consistent with the fact that CREB1 is known to regulate important brain functions such as memory, learning and anxiety control, and that its activity is reduced or physiologically compromised by aging.

Moreover, the Italian researchers discovered that the action of CREB1 can be dramatically increased simply by reducing caloric intake, and showed that CREB1 is absolutely essential for caloric restriction to work on the brain. In fact, if mice lack CREB1, the benefits of caloric restriction on the brain (improving memory, etc.) disappear, and the animals show the same brain disabilities typical of overfed and/or old animals.

"Thus, our findings identify for the first time an important mediator of the effects of diet on the brain," Dr. Pani said. "This discovery has important implications to develop future therapies to keep our brain young and prevent brain degeneration and the aging process. In addition, our study shed light on the correlation among metabolic diseases as diabetes and obesity and the decline in cognitive activities."

Provided by Catholic University of Rome

Big Ecosystem Shifts from Climate Change




By 2100, global climate change will modify plant communities covering almost half of Earth's land surface and will drive the conversion of nearly 40 percent of land-based ecosystems from one major ecological community type -- such as forest, grassland or tundra -- toward another, according to a new NASA and university computer modeling study.

Predicted percentage of ecological landscape being driven toward changes in plant species as a result of projected human-induced climate change by 2100. (Credit: NASA/JPL-Caltech)

Researchers from NASA's Jet Propulsion Laboratory and the California Institute of Technology in Pasadena, Calif., investigated how Earth's plant life is likely to react over the next three centuries as Earth's climate changes in response to rising levels of human-produced greenhouse gases. Study results are published in the journal Climatic Change.

The model projections paint a portrait of increasing ecological change and stress in Earth's biosphere, with many plant and animal species facing increasing competition for survival, as well as significant species turnover, as some species invade areas occupied by other species. Most of Earth's land that is not covered by ice or desert is projected to undergo at least a 30 percent change in plant cover -- changes that will require humans and animals to adapt and often relocate.

In addition to altering plant communities, the study predicts climate change will disrupt the ecological balance between interdependent and often endangered plant and animal species, reduce biodiversity and adversely affect Earth's water, energy, carbon and other element cycles.

"For more than 25 years, scientists have warned of the dangers of human-induced climate change," said Jon Bergengren, a scientist who led the study while a postdoctoral scholar at Caltech. "Our study introduces a new view of climate change, exploring the ecological implications of a few degrees of global warming. While warnings of melting glaciers, rising sea levels and other environmental changes are illustrative and important, ultimately, it's the ecological consequences that matter most."

When faced with climate change, plant species often must "migrate" over multiple generations, as they can only survive, compete and reproduce within the range of climates to which they are evolutionarily and physiologically adapted. While Earth's plants and animals have evolved to migrate in response to seasonal environmental changes and to even larger transitions, such as the end of the last ice age, they often are not equipped to keep up with the rapidity of modern climate changes that are currently taking place. Human activities, such as agriculture and urbanization, are increasingly destroying Earth's natural habitats, and frequently block plants and animals from successfully migrating.

To study the sensitivity of Earth's ecological systems to climate change, the scientists used a computer model that predicts the type of plant community that is uniquely adapted to any climate on Earth. This model was used to simulate the future state of Earth's natural vegetation in harmony with climate projections from 10 different global climate simulations. These simulations are based on the intermediate greenhouse gas scenario in the United Nations' Intergovernmental Panel on Climate Change Fourth Assessment Report. That scenario assumes greenhouse gas levels will double by 2100 and then level off. The U.N. report's climate simulations predict a warmer and wetter Earth, with global temperature increases of 3.6 to 7.2 degrees Fahrenheit (2 to 4 degrees Celsius) by 2100, about the same warming that occurred following the Last Glacial Maximum almost 20,000 years ago, except about 100 times faster. Under the scenario, some regions become wetter because of enhanced evaporation, while others become drier due to changes in atmospheric circulation.

The researchers found a shift of biomes, or major ecological community types, toward Earth's poles -- most dramatically in temperate grasslands and boreal forests -- and toward higher elevations. Ecologically sensitive "hotspots" -- areas projected to undergo the greatest degree of species turnover -- that were identified by the study include regions in the Himalayas and the Tibetan Plateau, eastern equatorial Africa, Madagascar, the Mediterranean region, southern South America, and North America's Great Lakes and Great Plains areas. The largest areas of ecological sensitivity and biome changes predicted for this century are, not surprisingly, found in areas with the most dramatic climate change: in the Northern Hemisphere high latitudes, particularly along the northern and southern boundaries of boreal forests.

"Our study developed a simple, consistent and quantitative way to characterize the impacts of climate change on ecosystems, while assessing and comparing the implications of climate model projections," said JPL co-author Duane Waliser. "This new tool enables scientists to explore and understand interrelationships between Earth's ecosystems and climate and to identify regions projected to have the greatest degree of ecological sensitivity."

"In this study, we have developed and applied two new ecological sensitivity metrics -- analogs of climate sensitivity -- to investigate the potential degree of plant community changes over the next three centuries," said Bergengren. "The surprising degree of ecological sensitivity of Earth's ecosystems predicted by our research highlights the global imperative to accelerate progress toward preserving biodiversity by stabilizing Earth's climate."

JPL is managed for NASA by the California Institute of Technology in Pasadena.

Monday, December 19, 2011

Novel Device Removes Heavy Metals from Water



Engineers at Brown University have developed a system that cleanly and efficiently removes trace heavy metals from water. In experiments, the researchers showed the system reduced cadmium, copper, and nickel concentrations, returning contaminated water to near or below federally acceptable standards. The technique is scalable and has viable commercial applications, especially in the environmental remediation and metal recovery fields.
Heavy metal removal: Brown engineers have devised an automated system that combines chemical precipitation with electrolytic techniques in a cyclic fashion to remove mixtures of trace heavy metals from contaminated water. (Credit: Calo Lab/Brown University)


Results appear in the Chemical Engineering Journal.

An unfortunate consequence of many industrial and manufacturing practices, from textile factories to metalworking operations, is the release of heavy metals in waterways. Those metals can remain for decades, even centuries, in low but still dangerous concentrations.

Ridding water of trace metals "is really hard to do," said Joseph Calo, professor emeritus of engineering who maintains an active laboratory at Brown. He noted the cost, inefficiency, and time needed for such efforts. "It's like trying to put the genie back in the bottle."

That may be changing. Calo and other engineers at Brown describe a novel method that collates trace heavy metals in water by increasing their concentration so that a proven metal-removal technique can take over. In a series of experiments, the engineers report the method, called the cyclic electrowinning/precipitation (CEP) system, removes up to 99 percent of copper, cadmium, and nickel, returning the contaminated water to federally accepted standards of cleanliness. The automated CEP system is scalable as well, Calo said, so it has viable commercial potential, especially in the environmental remediation and metal recovery fields. The system's mechanics and results are described in a paper published in the Chemical Engineering Journal.

A proven technique for removing heavy metals from water is through the reduction of heavy metal ions from an electrolyte. While the technique has various names, such as electrowinning, electrolytic removal/recovery or electroextraction, it all works the same way, by using an electrical current to transform positively charged metal ions (cations) into a stable, solid state where they can be easily separated from the water and removed. The main drawback to this technique is that there must be a high-enough concentration of metal cations in the water for it to be effective; if the cation concentration is too low -- roughly less than 100 parts per million -- the current efficiency becomes too low and the current acts on more than the heavy metal ions.

Another way to remove metals is through simple chemistry. The technique involves using hydroxides and sulfides to precipitate the metal ions from the water, so they form solids. The solids, however, constitute a toxic sludge, and there is no good way to deal with it. Landfills generally won't take it, and letting it sit in settling ponds is toxic and environmentally unsound. "Nobody wants it, because it's a huge liability," Calo said.

The dilemma, then, is how to remove the metals efficiently without creating an unhealthy byproduct. Calo and his co-authors, postdoctoral researcher Pengpeng Grimshaw and George Hradil, who earned his doctorate at Brown and is now an adjunct professor, combined the two techniques to form a closed-loop system. "We said, 'Let's use the attractive features of both methods by combining them in a cyclic process,'" Calo said.

It took a few years to build and develop the system. In the paper, the authors describe how it works. The CEP system involves two main units, one to concentrate the cations and another to turn them into stable, solid-state metals and remove them. In the first stage, the metal-laden water is fed into a tank in which an acid (sulfuric acid) or base (sodium hydroxide) is added to change the water's pH, effectively separating the water molecules from the metal precipitate, which settles at the bottom. The "clear" water is siphoned off, and more contaminated water is brought in. The pH swing is applied again, first redissolving the precipitate and then reprecipitating all the metal, increasing the metal concentration each time. This process is repeated until the concentration of the metal cations in the solution has reached a point at which electrowinning can be efficiently employed.

When that point is reached, the solution is sent to a second device, called a spouted particulate electrode (SPE). This is where the electrowinning takes place, and the metal cations are chemically changed to stable metal solids so they can be easily removed. The engineers used an SPE developed by Hradil, a senior research engineer at Technic Inc., located in Cranston, R.I. The cleaner water is returned to the precipitation tank, where metal ions can be precipitated once again. Further cleaned, the supernatant water is sent to another reservoir, where additional processes may be employed to further lower the metal ion concentration levels. These processes can be repeated in an automated, cyclic fashion as many times as necessary to achieve the desired performance, such as to federal drinking water standards.
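
A toy model (my own back-of-the-envelope sketch, not the authors' process model) illustrates why the cycling matters: each pH swing folds the metal from another batch of feed water into the same working volume, so the concentration climbs until electrowinning becomes efficient.

    # Hedged toy model of the concentrate-then-electrowin loop.
    # Thresholds and removal fractions are illustrative, not from the paper.
    def cep_batches(feed_ppm, winnable_ppm=100.0, removal=0.99):
        concentration, batches = 0.0, 0
        # Precipitation/redissolution stage: each pH swing retains the metal
        # from another batch of feed water in the same working volume.
        while concentration < winnable_ppm:
            concentration += feed_ppm
            batches += 1
        # Electrowinning stage: the spouted particulate electrode strips most
        # of the now-concentrated metal out as solid.
        left_over = concentration * (1 - removal)
        return batches, concentration, left_over

    print(cep_batches(10.0))   # roughly (10, 100.0, 1.0) for a 10 ppm feed

The real system, of course, tracks pH, multiple metals, and electrode efficiency, but the basic logic of "concentrate first, then strip" is the same.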

In experiments, the engineers tested the CEP system with cadmium, copper, and nickel, individually and with water containing all three metals. The results showed cadmium, copper, and nickel were lowered to 1.50, 0.23 and 0.37 parts per million (ppm), respectively -- near or below maximum contaminant levels established by the Environmental Protection Agency. The sludge is continuously formed and redissolved within the system so that none is left as an environmental contaminant.

"This approach produces very large volume reductions from the original contaminated water by electrochemical reduction of the ions to zero-valent metal on the surfaces of the cathodic particles," the authors write. "For an initial 10 ppm ion concentration of the metals considered, the volume reduction is on the order of 106."

Calo said the approach can be used for other heavy metals, such as lead, mercury, and tin. The researchers are currently testing the system with samples contaminated with heavy metals and other substances, such as sediment, to confirm its operation.

The research was funded by the National Institute of Environmental Health Sciences, a branch of the National Institutes of Health, through the Brown University Superfund Research Program.


Close Family Ties Keep Cheaters in Check: Why Almost All Multicellular Organisms Begin Life as a Single Cell



Any multicellular animal, from a blue whale to a human being, poses a special difficulty for the theory of evolution. Most of the cells in its body will die without reproducing, and only a privileged few will pass their genes to the next generation.
An amoeba that must succeed at both single-celled and multicellular living to pass on its genes, Dicty allows scientists to ask questions about cooperation and cheating in multicellular organisms. (Credit: Scott Solomon)

How could the extreme degree of cooperation multicellular existence requires ever evolve? Why aren't all creatures unicellular individualists determined to pass on their own genes?

Joan Strassmann, PhD, and David Queller, PhD, a husband and wife team of evolutionary biologists at Washington University in St. Louis, provide an answer in the Dec. 16 issue of the journal Science. Experiments with amoebae that usually live as individuals but must also join with others to form multicellular bodies to complete their life cycles showed that cooperation depends on kinship.

If amoebae occur in well-mixed cosmopolitan groups, then cheaters will always be able to thrive by freeloading on their cooperative neighbors. But if groups derive from a single cell, cheaters will usually occur in all-cheater groups and will have no cooperators to exploit.

The only exceptions are brand new cheater mutants in all-cooperator groups, and these could pose a problem if the mutation rate is high enough and there are many cells in the group to mutate. In fact, the scientists calculated just how many times amoebae that arose from a single cell can safely divide before cooperation degenerates into a free-for-all.

The answer turns out to be 100 generations or more.

So population bottlenecks that kill off diversity and restart the population from a single cell are powerful stabilizers of cellular cooperation, the scientists conclude.

In other words, our liver, blood and bone cells help our eggs and sperm pass on their genes because we passed through a single-cell bottleneck at the moment of conception.

The social amoebae

Queller, the Spencer T. Olin professor, and Strassmann, professor of biology, moved to WUSTL from Rice University this summer, bringing a truckload of frozen spores with them.

Although they worked for many years with wasps and stingless bees, Queller and Strassmann's current "lab rat" is the social amoeba Dictyostelium discoideum, known as Dicty for short.

The social amoebae can be found almost everywhere: in Antarctica, in deserts, in the canopies of tropical forests, and in Forest Park, the urban park that adjoins Washington University.

The amoebae spend most of their lives as tiny amorphous blobs of streaming protoplasm crawling through the soil looking for E. coli and other bacteria to eat.

Things become interesting when bacteria are scarce and the amoebae begin to starve. They then release chemicals that attract other amoebae, which follow this trail until they bump into one another.

A mound of some 10,000 amoebae forms and then elongates into a slug a few millimeters long that crawls forward (but never backward) toward heat and light.

The slug stops moving when it has reached a suitable place for dispersal; then the front 20 percent of the amoebae die to produce a sturdy stalk, up which the remaining cells flow to become hardy spores.

Crucially, the 20 percent of the amoebae in the stalk sacrifice their genes so that the other 80 percent can pass theirs on.

When Strassmann and Queller began to work with Dicty in 1998, one of the first things they discovered was that the amoebae sometimes cheat.

Dennis Welker of Utah State University had given them a genetically diverse collection of wild-caught clones (genetically identical amoebae). They mixed amoebae from two clones together and then examined the fruiting bodies to see where the clones ended up. Each fruiting body included cells from both clones, but some clones contributed disproportionately to the spore body. They had cheated.

How can a blob of protoplasm cheat? The answer, it turns out, is many different ways.

"They might," Queller says, "have a mutation that makes an adhesion molecule less sticky, for example, so that they slide to the back of the slug, the part that forms spores."

"But there are tradeoffs," Strassmann says, "because if you're too slippery, you'll fall off the slug and lose all the advantages of being part of group."

Natural born cheaters

Mulling this over, Strassmann and Queller began to wonder if it would be possible to break the social contract among the amoebae by setting up conditions where relatedness was low and each clonal lineage encountered mostly strangers and rarely relatives.

Together with then-graduate student Jennie Kuzdzal-Fick, they set up an experiment to learn what happened to cheating as heterogeneous (low relatedness) populations of amoebae evolved.

"At the end of the experiment, we assessed the cheating ability of the descendants by mixing equal numbers of descendants and ancestors and checking to see whether the descendants ended up in the stalks or the spores of the fruiting bodies," Strassmann says.

They found that in nearly all cases, the descendants cheated their ancestors. What's more, when descendant amoebae were grown as individual clones, about a third of them were unable to form fruiting bodies.

Many of the mutants, in other words, were "obligate" cheaters. Having lost the ability to form their own fruiting bodies, they were able to survive only by freeloading, or taking advantage of the amoebae that had retained the ability to cooperate.

This result, Queller and Strassmann say, shows that cheater mutations that threaten multicellularity occur naturally and are even favored -- as long as the population of amoebae remains genetically diverse.

What happens in the wild?

But the scientists were aware that obligate cheaters are either very rare or altogether missing among wild social amoebae. They had not found any obligate cheaters in the more than 2,000 wild clones they have sampled.

They also knew that in the wild, the amoebae in fruiting bodies are close kin, if not clones.

What prevents cooperation in wild populations from degenerating into the laboratory free-for-all? Could the difference be that the amoebae in the laboratory were distant relations and those in the wild are kissing kin?

Suppose, the scientists thought, one amoeba ventured alone into a pristine field of bacteria. As it grew and multiplied, making copies of itself, how long would it take for cheating mutations to appear (what was the mutation rate) and how successfully would these mutations proliferate (how strongly would they be selected)?

To establish the mutation rate, Strassmann and Queller together with graduate student Sara Fox ran what is called a mutation accumulation experiment.

In this experiment, amoebae that mutated didn't have to compete against amoebae that were faithful replicators. In the absence of selection, all but the most severe mutations were also reproduced and became a permanent part of the lineage's genome.

The scientists allowed 90 different lines of amoebae to accumulate mutations in this way.

"At the end," Queller says, "we found that among those 90 lines not a single one had lost the ability to fruit. So that's almost 100 lines, almost a thousand generations, so 100,000 opportunities to lose fruiting and none of them did.

"That allowed us, using statistics, to put an upper limit on the rate at which mutations turn a cooperator into an obligate cheater," he says.

The rate was low enough that if fruiting bodies were forming in the wild from amoebae that were all descended from one spore, cheating would never be an issue.

What this has to do with elephants and blue whales

But the scientists were inquisitive enough to ask another, bigger question. They used calculations invented for population genetics to ask how many times the amoeba could divide -- theoretically -- before cheating became a problem.

What if, they asked, we let a single initial amoeba divide until there were as many amoebae as there are cells in a fruit fly, and then transferred one amoeba and allowed it to divide until the daughter colony reached fruit-fly size, and so on?

What if we let the colonies grow to human size? To elephant size? To blue whale size? Would the cheaters bring down the whale-sized Dicty colony?

The answer, it turned out, was no.

A whale-sized Dicty colony is not the same thing as a whale, but nonetheless the experiments suggest how organisms, over the course of evolution, have sidestepped the cheating trap and maintained the levels of cooperation multicellular bodies demand.

"A multicellular body like the human body is an incredibly cooperative thing," Queller says, "and sociobiologists have learned that really cooperative things are hard to evolve because of the potential for cheating.

"It's the single-cell bottleneck that generates high relatedness among the cells that, in turn, allows them to cooperate, " he says.

Our liver cells have no kick against our sperm or egg cells, in other words, because they're all nearly genetically identical descendants of a single fertilized egg.

Saturday, December 17, 2011

Biofuel Research Boosted by Discovery of How Cyanobacteria Make Energy



A generally accepted, 44-year-old assumption about how certain kinds of bacteria make energy and synthesize cell materials has been shown to be incorrect by a team of scientists led by Donald Bryant, the Ernest C. Pollard Professor of Biotechnology at Penn State and a research professor in the Department of Chemistry and Biochemistry at Montana State University. The research, which will be published in the journal Science on Dec. 16, is expected to help scientists discover new ways of genetically engineering bacteria to manufacture biofuels -- energy-rich compounds derived from biological sources. Many textbooks, which cite the 44-year-old interpretation as fact, likely will be revised as a result of the new discovery.
Penn State scientists have scoured this cyanobacterium's genome to discover genes that could make alternative energy-cycle enzymes for biofuels and plastics. (Credit: Bryant lab, Penn State)

Bryant explained that, in 1967, two groups of researchers concluded that an important energy-making cycle was incomplete in cyanobacteria -- photosynthetic bacteria formerly known as blue-green algae. This energy-producing cycle -- known as the tricarboxylic acid (TCA) cycle or the Krebs cycle -- includes a series of chemical reactions that are used for metabolism by most forms of life, including bacteria, molds, protozoa and animals. This series of chemical reactions eventually leads to the production of ATP -- molecules responsible for providing energy for cell metabolism.

"During studies 44 years ago, researchers concluded that cyanobacteria were missing an essential enzyme of the metabolic pathway that is found in most other life forms," Bryant explained. "They concluded that cyanobacteria lacked the ability to make one enzyme, called 2-oxoglutarate dehydrogenase, and that this missing enzyme rendered the bacteria unable to produce a compound -- called succinyl-coenzyme A -- for the next step in the TCA cycle. The absence of this reaction was assumed to render the organisms unable to oxidize metabolites for energy production, although they could still use the remaining TCA-cycle reactions to produce substrates for biosynthetic reactions. As it turns out, the researchers just weren't looking hard enough, so there was more work to be done."

Bryant suspected that the decades-old finding needed to be re-evaluated with a fresh set of eyes and new scientific tools. He explained that, after researchers in the 1960s concluded that cyanobacteria had an incomplete TCA cycle, that false assumption was compounded by later researchers who used modern genomics-research methods to confirm it.

"One idea we had was that the 1967 hypothesis never was corrected because modern genome-annotation methods were partly to blame," Bryant said. "Computer algorithms are used to search for strings of genetic code to identify genes. Sometimes important genes simply can be missed because of matching errors, which occur when very similar genes have very different functions. So if researchers don't use biochemical methods to validate computer-identified gene functions, they run the risk of making premature and often incorrect conclusions about what's there and what's not there."

To re-test the 1967 hypothesis, the team performed new biochemical and genetic analyses on a cyanobacterium called Synechococcus sp. PCC 7002, scouring its genome for genes that might be responsible for making alternative energy-cycle enzymes. The scientists discovered that Synechococcus indeed had genes that coded for one important alternative enzyme, succinic semialdehyde dehydrogenase, and that adjacent to the gene for this enzyme was a misidentified gene that subsequently was shown to encode a novel enzyme, 2-oxo-glutarate decarboxylase.

"As it turns out, these two enzymes work together to complete the TCA cycle in a slightly different way," Bryant said. "That is, rather than making 2-oxoglutarate dehydrogenase, these bacteria produce both 2-oxoglutarate decarboxylase and succinic semialdehyde dehydrogenase. That combination of enzymes allows these organisms to move to the next intermediate -- succinate -- and to complete the TCA cycle." Bryant also said that his team found that the genes coding for the two enzymes are present in all cyanobacterial genomes except those of a few marine species. Bryant's co-author on the Science paper is Shuyi Zhang, a graduate student in the Department of Biochemistry and Molecular Biology at Penn State.

Bryant hopes to use the findings of his research to investigate new ways of producing biofuels. "Now that we understand better how cyanobacteria make energy, it might be possible to genetically engineer a cyanobacterial strain to synthesize 1,3-butanediol -- an organic compound that is the precursor for making not just biofuels but also plastics," Bryant said.

Bryant also said that his team's discoveries about cyanobacteria show how science is an ever-evolving process, and that firm conclusions never should be drawn from studies with negative results.

"Sadly, the conclusion that cyanobacteria have an incomplete TCA cycle is written into many textbooks as fact, simply because the research teams in 1967 misinterpreted their failure to find a particular enzyme," Bryant said. "But in science there is never really an end. There always is something new to discover."

The research was supported by the Air Force Office of Scientific Research and the Genomic Science Program of the U.S. Department of Energy.

Friday, December 9, 2011

One of the World's Smallest Electronic Circuits Created



A team of scientists, led by Guillaume Gervais from McGill's Physics Department and Mike Lilly from Sandia National Laboratories, has engineered one of the world's smallest electronic circuits. It is formed by two wires separated by only about 150 atoms or 15 nanometers (nm).
Two wires separated by only about 150 atoms, or 15 nanometers, form one of the world's smallest electronic circuits. (Credit: Image courtesy of McGill University)

The discovery, published in the journal Nature Nanotechnology, could have a significant effect on the speed and power of the ever smaller integrated circuits of the future in everything from smartphones to desktop computers, televisions and GPS systems.

This is the first time that anyone has studied how the wires in an electronic circuit interact with one another when packed so tightly together. Surprisingly, the authors found that the effect of one wire on the other can be either positive or negative. This means that a current in one wire can produce a current in the other one that is either in the same or the opposite direction. This discovery, based on the principles of quantum physics, suggests a need to revise our understanding of how even the simplest electronic circuits behave at the nanoscale.

In addition to the effect on the speed and efficiency of future electronic circuits, this discovery could also help to solve one of the major challenges facing future computer design: managing the ever-increasing amount of heat produced by integrated circuits.

Well-known theorist Markus Büttiker speculates that it may be possible to harness the energy lost as heat in one wire by using other wires nearby. Moreover, Büttiker believes that these findings will have an impact on the future of both fundamental and applied research in nanoelectronics.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, the Fonds de recherche Nature et Technologies of Quebec, the Canadian Institute for Advanced Research and the Center of Integrated Nanotechnologies at Sandia National Laboratories.

Monday, December 5, 2011

New Switch Could Improve Electronics



Researchers at the University of Pittsburgh have invented a new type of electronic switch that performs electronic logic functions within a single molecule. The incorporation of such single-molecule elements could enable smaller, faster, and more energy-efficient electronics.
The switch: a triangular cluster of three metal atoms held together by a nitrogen atom, enclosed within a cage made entirely of carbon atoms. (Credit: Image courtesy of University of Pittsburgh)

The research findings, supported by a $1 million grant from the W.M. Keck Foundation, were published online in the Nov. 14 issue of Nano Letters.

"This new switch is superior to existing single-molecule concepts," said Hrvoje Petek, principal investigator and professor of physics and chemistry in the Kenneth P. Dietrich School of Arts and Sciences and codirector of the Petersen Institute for NanoScience and Engineering (PINSE) at Pitt. "We are learning how to reduce electronic circuit elements to single molecules for a new generation of enhanced and more sustainable technologies."

The switch was discovered by experimenting with the rotation of a triangular cluster of three metal atoms held together by a nitrogen atom, enclosed within a cage made entirely of carbon atoms. Petek and his team found that the metal clusters encapsulated within a hollow carbon cage could rotate between several structures under the stimulation of electrons. This rotation changes the molecule's ability to conduct an electric current, thereby switching among multiple logic states without changing the spherical shape of the carbon cage. Petek says this concept also protects the molecule so it can function without influence from outside chemicals.

Because of their constant spherical shape, the prototype molecular switches can be integrated as atom-like building blocks the size of one nanometer (100,000 times smaller than the diameter of a human hair) into massively parallel computing architectures.

The prototype was demonstrated using an Sc3N@C80 molecule sandwiched between two electrodes consisting of an atomically flat copper oxide substrate and an atomically sharp tungsten tip. By applying a voltage pulse, the equilateral triangle-shaped Sc3N could be rotated predictably among six logic states.
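
As a hedged aside on capacity (my own arithmetic, not a claim from the paper): a switch with six distinguishable states can in principle encode log2(6), or roughly 2.6 bits of information, compared with the single bit of a conventional binary switch.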

The research was led by Petek in collaboration with chemists at the Leibniz Institute for Solid State Research in Dresden, Germany, and theoreticians at the University of Science and Technology of China in Hefei, People's Republic of China. The experiments were performed by postdoctoral researcher Tian Huang and research assistant professor Min Feng, both in Pitt's Department of Physics and Astronomy.

Some People Can Hallucinate Colors at Will



Scientists at the University of Hull have found that some people have the ability to hallucinate colours at will -- even without the help of hypnosis.

Some people have the ability to hallucinate colours at will, even without the help of hypnosis. (Credit: © Paul Herbert / Fotolia)

The study, published this week in the journal Consciousness and Cognition, was carried out in the Department of Psychology at the University of Hull. It focused on a group of people that had shown themselves to be 'highly suggestible' in hypnosis.

The subjects were asked to look at a series of monochrome patterns and to see colour in them. They were tested under hypnosis and without hypnosis and both times reported that they were able to see colours.

Individuals' reactions to the patterns were also captured using an MRI scanner, which enabled the researchers to monitor differences in brain activity between the suggestible and non-suggestible subjects. The results of the research showed significant changes in brain activity in areas of the brain responsible for visual perception among the suggestible subjects only.

Professor Giuliana Mazzoni, lead researcher on the project says: "These are very talented people. They can change their perception and experience of the world in ways that the rest of us cannot."

The ability to change experience at will can be very useful. Research has shown that hypnotic suggestions can be used to block pain and increase the effectiveness of psychotherapy.

It has always been assumed that hypnosis was needed for these effects to occur, but the new study suggests that this is not true. Although hypnosis does seem to heighten the subjects' ability to see colour, the suggestible subjects were also able to see colours and change their brain activity even without the help of hypnosis.

The MRI scans also showed clearly that, although hypnosis was not necessary for the subjects to perceive colours in the tests, it did increase their ability to experience these effects.

Dr William McGeown, who also contributed to the study, says: "Many people are afraid of hypnosis, although it appears to be very effective in helping with certain medical interventions, particularly pain control. The work we have been doing shows that certain people may benefit from suggestion without the need for hypnosis."

The study, which was partially funded by the BBC, used a control group formed of less suggestible people, or people less likely to respond to hypnosis. It was found that this group of people were not able to hallucinate colour and, again, these reported results were supported by MRI scans.