Tuesday, May 31, 2011

What Bitcoin Is, and Why It Matters: Can a booming "crypto-currency" really compete with conventional cash?



Recent weeks have been exciting for a relatively new kind of currency speculator. In just three weeks, the total value of a unique new digital currency called Bitcoin has quadrupled, to over $40 million.
Credit: Science n Technology Updates

Bitcoin is underwritten not by a government, but by a clever cryptographic scheme.

For now, little can be bought with bitcoins, and the new currency is still a long way from competing with the dollar. But this explainer lays out what Bitcoin is, why it matters, and what needs to happen for it to succeed.

Where does Bitcoin come from?

In 2008, a programmer known as Satoshi Nakamoto—a name believed to be an alias—posted a paper outlining Bitcoin's design to a cryptography e-mail list. Then, in early 2009, he (or she) released software that can be used to exchange bitcoins using the scheme. That software is now maintained by a volunteer open-source community coordinated by four core developers.

"Satoshi's a bit of a mysterious figure," says Jeff Garzik, a member of that core team and founder of Bitcoin Watch, which tracks the Bitcoin economy. "I and the other core developers have occasionally corresponded with him by e-mail, but it's always a crapshoot as to whether he responds," says Garzik. "That and the forum are the entirety of anyone's experience with him."

How does Bitcoin work?

Nakamoto wanted people to be able to exchange money electronically and securely, without the need for a third party such as a bank or a company like PayPal. He based Bitcoin on cryptographic techniques that allow you to be sure the money you receive is genuine, even if you don't trust the sender.

The basics

Once you download and run the Bitcoin client software, it connects over the Internet to the decentralized network of all Bitcoin users and also generates a pair of unique, mathematically linked keys, which you'll need to exchange bitcoins with any other client. One key is private and kept hidden on your computer. The other is public, and a version of it, dubbed a Bitcoin address, is given to other people so they can send you bitcoins. Crucially, it is practically impossible -- even with the most powerful supercomputer -- to work out someone's private key from their public key. This prevents anyone from impersonating you. Your public and private keys are stored in a file that can be transferred to another computer, for example if you upgrade.
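To make the key-pair idea concrete, here is a minimal sketch in Python using the third-party ecdsa package. It is only an illustration: real Bitcoin clients do use the secp256k1 curve shown here, but they derive addresses through further hashing (SHA-256, then RIPEMD-160) and Base58Check encoding, so the toy "address" below is just a truncated hash standing in for the real thing.

    # Simplified sketch of the key pair described above (not the exact
    # Bitcoin address derivation).
    import hashlib
    from ecdsa import SigningKey, SECP256k1

    private_key = SigningKey.generate(curve=SECP256k1)   # kept secret on your machine
    public_key = private_key.get_verifying_key()         # shared with other users

    # A toy "address": a hash of the public key, truncated for readability.
    address = hashlib.sha256(public_key.to_string()).hexdigest()[:34]
    print("Give this to people who want to pay you:", address)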

A Bitcoin address looks something like this: 15VjRaDX9zpbA8LVnbrCAFzrVzN7ixHNsC. Stores that accept bitcoins -- one early example sells alpaca socks -- provide you with their address so you can pay for goods.

Transferring bitcoins



When you perform a transaction, your Bitcoin software performs a mathematical operation to combine the other party's public key and your own private key with the amount of bitcoins that you want to transfer. The result of that operation is then sent out across the distributed Bitcoin network so the transaction can be verified by Bitcoin software clients not involved in the transfer.

Those clients make two checks on a transaction. One uses the public key to confirm that the true owner of the pair sent the money, by exploiting the mathematical relationship between a person's public and private keys; the second refers to a public transaction log stored on the computer of every Bitcoin user to confirm that the person has the bitcoins to spend.
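A rough sketch of those two checks, continuing with the toy key pair from the earlier snippet. The transaction format here is invented purely for illustration; real Bitcoin transactions are binary structures with inputs, outputs, and scripts.

    # Two verification steps: signature validity and sufficient funds in the
    # public transaction log (modeled here as a simple dictionary).
    from ecdsa import BadSignatureError

    ledger_balances = {address: 5.0}          # toy public log: who can spend what

    message = f"pay 2.5 BTC from {address} to <recipient address>".encode()
    signature = private_key.sign(message)     # done by the sender's client

    # Check 1: the signature matches the sender's public key.
    try:
        public_key.verify(signature, message)
        signature_ok = True
    except BadSignatureError:
        signature_ok = False

    # Check 2: the public transaction log shows the sender owns enough bitcoins.
    funds_ok = ledger_balances.get(address, 0.0) >= 2.5

    print("transaction accepted" if (signature_ok and funds_ok) else "transaction rejected")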

When a client verifies a transaction, it forwards the details to others in the network to check for themselves. In this way a transaction quickly reaches, and is verified by, every Bitcoin client that is online. Some of those clients -- "miners" -- also try to add the new transfer to the public transaction log by racing to solve a cryptographic puzzle. Once one of them wins, the updated log is passed throughout the Bitcoin network. When your software receives the updated log, it knows your payment was successful.
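The "cryptographic puzzle" can be sketched as follows. In the real protocol, the double SHA-256 hash of an 80-byte block header must fall below a target that the network adjusts every 2,016 blocks; the toy version below just demands a fixed number of leading zero hex digits, which captures the same brute-force character of the race.

    # Toy proof-of-work: try nonces until the hash of the block contents
    # starts with a required number of zero hex digits.
    import hashlib

    def mine(block_data, difficulty=5):
        prefix = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("previous-block-hash + pending transactions")
    print(f"puzzle solved with nonce={nonce}: {digest}")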

Security

The nature of the mathematics ensures that it is computationally easy to verify a transaction but practically impossible to generate fake transactions and spend bitcoins you don't own. The existence of a public log of all transactions also provides a deterrent to money laundering, says Garzik. "You're looking at a global public transaction register," he says. "You can trace the history of every single Bitcoin through that log, from its creation through every transaction."

How can you obtain bitcoins?

Exchanges like Mt. Gox provide a place for people to trade bitcoins for other types of currency. Some enthusiasts have also started doing work, such as designing websites, in exchange for bitcoins, and online job boards advertise contract work that pays in bitcoins.

But bitcoins also need to be generated in the first place. Bitcoins are "mined" when you set your Bitcoin client to a mode that has it compete to update the public log of transactions. All the clients set to this mode race to solve a cryptographic puzzle by completing the next "block" of the shared transaction log. The client that completes the next block first receives a prize of 50 new bitcoins. This feature exists as a way to distribute bitcoins in the currency's early years. Eventually, new coins will not be issued this way; instead, miners will be rewarded with small fees taken from the value of verified transactions.

Mining is very computationally intensive, to the point that any computer without a powerful graphics card is unlikely to mine any bitcoins in less than a few years.

Where to spend your bitcoins

There aren't a lot of places right now. Some Bitcoin enthusiasts with their own businesses have made it possible to swap bitcoins for tea, books, or Web design, and community-maintained directories track the growing list of merchants. But no major retailers accept the new currency yet.

If the Federal Reserve controls the dollar, who controls the Bitcoin economy?

No one. The economics of the currency are fixed by the underlying protocol developed by Nakamoto.

Nakamoto's rules specify that the amount of bitcoins in circulation will grow at an ever-decreasing rate toward a maximum of 21 million. Currently there are just over 6 million; in 2030, there will be over 20 million bitcoins.
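The 21 million cap quoted above follows from the issuance schedule built into the protocol: the 50-bitcoin block reward is halved every 210,000 blocks, so total issuance is a geometric series. A quick back-of-the-envelope check:

    # Back-of-the-envelope check of the 21 million cap: the block reward
    # starts at 50 bitcoins and halves every 210,000 blocks, so the total
    # issued converges just under 21 million.
    reward, total = 50.0, 0.0
    while reward >= 1e-8:            # rewards below one satoshi round to zero
        total += 210_000 * reward
        reward /= 2
    print(f"maximum supply ~ {total:,.0f} bitcoins")   # ~21,000,000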

Nakamoto's scheme includes one loophole, however: if more than half of the Bitcoin network's computing power were to come under the control of one entity, then the rules could change. That would allow, for example, a criminal cartel to fake the transaction log in its own favor and dupe the rest of the community.

It is unlikely that anyone will ever obtain this kind of control. "The combined power of the network is currently equal to one of the most powerful supercomputers in the world," says Garzik. "Satoshi's rules are probably set in stone."

Isn't a fixed supply of money dangerous?

It's certainly different. "Elaborate controls to make sure that currency is not produced in greater numbers is not something any other currency, like the dollar or the euro, has," says Russ Roberts, professor of economics at George Mason University. The consequence will likely be slow and steady deflation, as the growth in circulating bitcoins declines and their value rises.

"That is considered very destructive in today's economies, mostly because when it occurs, it is unexpected," says Roberts. But he thinks that won't apply in an economy where deflation is expected. "In a Bitcoin world, everyone would anticipate that, and they know what they got paid would buy more then than it would now."

Does Bitcoin threaten the dollar or other currencies?

That's unlikely. "It might have a niche as a way to pay for certain technical services," says Roberts, adding that even limited success could allow Bitcoin to change the fate of more established currencies. "Competition is good, even between currencies—perhaps the example of Bitcoin could influence the behavior of the Federal Reserve."

Central banks the world over have freely increased the money supply of their currencies in response to the global downturn. Roberts suggests that Bitcoin could set a successful, if smaller scale, example of how economies that forbid such intervention can also succeed.

Sony Sets Its Sights on Augmented Reality: The future of mobile gaming will merge the virtual and real worlds.



Sony has demonstrated a new augmented reality system called Smart AR that can be built into the company's future gaming devices.
Credit: Sony Corporation

Augmented reality involves mapping virtual objects onto a view of the real world, usually as seen through the screen of a smart phone. The technology has so far been used to create a handful of dazzling smart-phone apps, but has yet to take off in a big way. However, many believe that mobile gaming could prove to be an ideal platform for the technology. With Smart AR, certain real-world objects could become part of a game when viewed through a device such as the PlayStation Portable. This could allow game characters to appear on a tabletop, perhaps, or to respond to the movement of real objects.

Unlike many augmented reality systems, Smart AR does not use satellite tracking or special markers to figure out where to overlay a virtual object. Instead, it uses object recognition. This means it works where GPS signals are poor or nonexistent, for example, indoors. The markerless system is more difficult to pull off, but it allows many more everyday objects to be used.

"Prototypes of Sony Computer Entertainment's next-generation of portable entertainment systems will be integrated with this technology," says Takayuki Yoshigahara, deputy general manager of Sony's Intelligent Systems Research Laboratory in Tokyo. "SCE is also considering adopting this technology for its software development kit in the future." This would allow games developers to add augmented reality features in the games made for Sony consoles.

Sony has dabbled with the technology before, using two-dimensional barcodes known as CyberCodes as markers for tracking objects.

According to Yoshigahara, Smart AR identifies objects using an approach known as local feature extraction, which means it tries to identify salient parts of the object within the image. The system also tracks the object's movement, and works out its orientation. This is necessary in order to know how the virtual data should be positioned in relation to the object.
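Sony has not published the details of Smart AR's recognition pipeline, but the general idea of local feature extraction and matching can be illustrated with a generic sketch using OpenCV's ORB detector. This is not Sony's proprietary method, and the image filenames are placeholders.

    # Generic local-feature extraction and matching with OpenCV ORB:
    # find salient keypoints on a reference object, then locate them
    # again in a camera frame so a virtual overlay can be anchored to it.
    import cv2

    reference = cv2.imread("reference_object.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file
    frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)          # placeholder file

    orb = cv2.ORB_create(nfeatures=500)
    kp_ref, desc_ref = orb.detectAndCompute(reference, None)
    kp_frame, desc_frame = orb.detectAndCompute(frame, None)

    # Matched descriptors tell us where (and in what orientation) the
    # object appears in the frame.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_ref, desc_frame), key=lambda m: m.distance)
    print(f"{len(matches)} candidate correspondences found")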

Smart AR also builds a rough 3-D map of a room. This is achieved by measuring disparities between different snapshots taken from slightly different perspectives as the camera moves. This allows virtual objects to interact with the environment.
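The geometry behind "measuring disparities between snapshots" is the standard stereo relation: a feature that shifts by d pixels between two views taken a baseline b apart lies at a depth of roughly f*b/d, where f is the focal length in pixels. The numbers below are illustrative assumptions, not Sony's.

    # Depth from disparity: Z = f * b / d (all values assumed for illustration).
    focal_length_px = 800.0     # camera focal length, in pixels
    baseline_m = 0.05           # camera moved 5 cm between snapshots

    for disparity_px in (40.0, 20.0, 10.0):
        depth_m = focal_length_px * baseline_m / disparity_px
        print(f"disparity {disparity_px:>5.1f} px  ->  depth ~ {depth_m:.2f} m")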

Tobias Hollerer, an associate professor at the University of California, Santa Barbara, says Sony's technology combines several areas of research. "If they do anything new, it is in tracking the entire room," he says.

Edward Rostens, a lecturer at the University of Cambridge and cocreator of an augmented reality system for the iPhone, called Popcode, says getting several different techniques to work together using the limited processing power of a handheld device would be impressive.

Biological Circuits for Synthetic Biology



"If you don't like the news, go out and make some of your own," said Wes "Scoop" Nisker. Taking a page from the book of San Francisco radio legend Scoop Nisker, biologists who find themselves dissatisfied with the microbes nature has provided are going out and making some of their own. Members of the fast-growing "synthetic biology" research community are designing and constructing novel organisms and biologically-inspired systems -- or redesigning existing organisms and systems -- to solve problems that natural systems cannot. The range of potential applications for synthetic biological systems runs broad and deep, and includes such profoundly important ventures as the microbial-based production of advanced biofuels and inexpensive versions of critical therapeutic drugs.
Berkeley Lab researchers are using RNA molecules to engineer genetic networks -- analogous to microcircuits -- into E. coli. (Credit: Image courtesy of DOE/Lawrence Berkeley National Laboratory)

Synthetic biology, however, is still a relatively new scientific field plagued with the trial and error inefficiencies that hamper most technologies in their early stages of development. To help address these problems, synthetic biologists aim to create biological circuits that can be used for the safer and more efficient construction of increasingly complex functions in microorganisms. A central component of such circuits is RNA, the multipurpose workhorse molecule of biology.

"A widespread natural ability to sense small molecules and regulate genes has made the RNA molecule an important tool for synthetic biology in applications as diverse as environmental sensing and metabolic engineering," says Adam Arkin, a computational biologist with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab), where he serves as director of the Physical Biosciences Division. Arkin is also a professor at the University of California (UC) Berkeley where he directs the Synthetic Biology Institute, a partnership between UC Berkeley and Berkeley Lab.

In his multiple capacities, Arkin is leading a major effort to use RNA molecules for the engineering of programmable genetic networks. In recent years, scientists have learned that the behavior of cells is often governed by multiple different genes working together in networked teams that are regulated through RNA-based mechanisms. Synthetic biologists have been using RNA regulatory mechanisms to program genetic networks in cells to achieve specific results. However, to date these programming efforts have required proteins to propagate RNA regulatory signals. This can pose problems because one of the primary goals of synthetic biology is to create families of standard genetic parts that can be combined to create biological circuits with behaviors that are to some extent predictable. Proteins can be difficult to design and predict. They also add a layer of complexity to biological circuits that can delay and slow the dynamics of the circuit's responses.

"We're now able to eliminate the protein requirement and directly propagate regulatory signals as RNA molecules," Arkin says.

Working with their own variations of RNA transcription attenuators -- nucleotide sequences that under a specific set of conditions will stop the RNA transcription process -- Arkin and his colleagues engineered a system in which these independent attenuators can be configured to sense RNA input and synthesize RNA output signals. These variant RNA attenuators can also be configured to regulate multiple genes in the same cell and -- through the controlled expression of these genes -- perform logic operations.
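One crude way to see why such attenuators behave like circuit elements: an antisense RNA input that binds its attenuator shuts transcription off, so a single attenuator acts like a NOT gate, and two orthogonal attenuators placed in tandem on the same transcript behave like a NOR gate. The sketch below is a boolean caricature of that logic, not a model of the molecular kinetics reported in the paper.

    # Boolean abstraction of antisense-RNA transcription attenuators.
    def attenuator(antisense_present):
        """Transcription proceeds only when the antisense input is absent (NOT gate)."""
        return not antisense_present

    def tandem_attenuators(input_a, input_b):
        """Two attenuators in series: either input alone stops transcription (NOR gate)."""
        return attenuator(input_a) and attenuator(input_b)

    for a in (False, True):
        for b in (False, True):
            print(f"inputs A={a!s:5} B={b!s:5} -> transcription {tandem_attenuators(a, b)}")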

"We have demonstrated the ability to construct with minimal changes orthogonal variants of natural RNA transcription attenuators that function more or less homogeneously in a single regulatory system, and we have shown that the composition of this system is predictable," Arkin says. "This is the first time that the three regulatory features of our system, which are all properties featured in a semiconductor transistor, have been captured in a single biological molecule."

A paper describing this breakthrough appears in the Proceedings of the National Academy of Sciences (PNAS).

The success of Arkin and his colleagues was based on their use of an element of pT181, a plasmid found in the bacterium Staphylococcus aureus. The element is an antisense-RNA-mediated transcription attenuation mechanism that controls the plasmid's copy number. Plasmids are molecules of DNA that serve as a standard tool of synthetic biology for, among other applications, encoding RNA molecules. Antisense RNA consists of non-coding nucleotide sequences that are used to regulate genetic elements and activities, including transcription. Since the pT181 antisense-RNA-mediated transcription attenuation mechanism works through RNA-to-RNA interactions, Arkin and his colleagues could use it to create attenuator variants that independently regulate the transcription activity of multiple targets in the same cell -- in this case, in Escherichia coli, one of the most popular bacteria for synthetic biology applications.

"It is very advantageous to have independent regulatory units that control processes such as transcription because the assembly of these units into genetic networks follows a simple rule of composition," Arkin says.

While acknowledging the excellent work done on other RNA-based regulatory mechanisms that can each perform some portion of the control functions required for a genetic network, Arkin believes that the attenuator variants he and his colleagues engineered provide the simplest route to achieving all of the required control functions within a single regulatory mechanism.

"Furthermore," he says, "these previous efforts were fundamentally dependent on molecular interactions through space between two or more regulatory subunits to create a network. Our approach, which relies on the processive transcription process, is more reliable."

Arkin and his colleagues say their results provide synthetic biologists with a versatile new set of RNA-based transcriptional regulators that could change how future genetic networks are designed and constructed. Their engineering strategy for constructing orthogonal variants from natural RNA systems should also be applicable to other gene regulatory mechanisms, and should add to the growing synthetic biology repertoire.

"Although RNA has less overall functionality than proteins, its nucleic acid-based polymer physics make mechanisms based on RNA simpler and easier to engineer and evolve," Arkin says. "With our RNA regulatory system and other work in progress, we're on our way to developing the first complete and scalable biological design system. Ultimately, our goal is to create a tool revolution in synthetic biology similar to the revolution that led to the success of major integrated circuit design and deployment."

Much of this research was supported by the Synthetic Biology Engineering Research Center (SynBERC) under a grant from the National Science Foundation.

Monday, May 23, 2011

Researchers Create Nanopatch for the Heart



Engineers at Brown University and in India have a promising new approach to treating heart-attack victims. The researchers created a nanopatch with carbon nanofibers and a polymer. In laboratory tests, natural heart-tissue cell density on the nanoscaffold was six times greater than the control sample, while neuron density had doubled.
Beating heart. Engineers at Brown University have created a nanopatch for the heart that tests show restores areas that have been damaged, such as from a heart attack. (Credit: Frank Mullin/Brown University; image courtesy of Brown University)

When you suffer a heart attack, a part of your heart dies. Nerve cells in the heart's wall and a special class of cells that spontaneously expand and contract -- keeping the heart beating in perfect synchronicity -- are lost forever. Surgeons can't repair the affected area. It's as if when confronted with a road riddled with potholes, you abandon what's there and build a new road instead.

Needless to say, this is a grossly inefficient way to treat arguably the single most important organ in the human body. The best approach would be to figure out how to resuscitate the deadened area, and in this quest, a group of researchers at Brown University and in India may have an answer.

The scientists turned to nanotechnology. In a lab, they built a scaffold-like structure consisting of carbon nanofibers and a government-approved polymer. Tests showed the synthetic nanopatch regenerated natural heart-tissue cells -- called cardiomyocytes -- as well as neurons. In short, the tests showed that a dead region of the heart can be brought back to life.

"This whole idea is to put something where dead tissue is to help regenerate it, so that you eventually have a healthy heart," said David Stout, a graduate student in the School of Engineering at Brown and the lead author of the paper published in Acta Biomaterialia.

The approach, if successful, would help millions of people. In 2009, some 785,000 Americans suffered a new heart attack linked to weakness caused by the scarred cardiac muscle from a previous heart attack, according to the American Heart Association. Just as ominously, a third of women and a fifth of men who have experienced a heart attack will have another one within six years, the researchers added, citing the American Heart Association.

What is unique about the experiments at Brown and at the Indian Institute of Technology Kanpur is that the engineers employed carbon nanofibers, helical-shaped tubes with diameters between 60 and 200 nanometers. The carbon nanofibers work well because they are excellent conductors of electrons, providing the kind of electrical connections the heart relies upon for keeping a steady beat. The researchers stitched the nanofibers together using a poly(lactic-co-glycolic acid) polymer to form a mesh about 22 millimeters long and 15 microns thick, resembling "a black Band-Aid," Stout said. They laid the mesh on a glass substrate to test whether cardiomyocytes would colonize the surface and grow more cells.

In tests with the 200-nanometer-diameter carbon nanofibers seeded with cardiomyocytes, five times as many heart-tissue cells colonized the surface after four hours than with a control sample consisting of the polymer only. After five days, the density of the surface was six times greater than the control sample, the researchers reported. Neuron density had also doubled after four days, they added.

The scaffold works because it is elastic and durable, and can thus expand and contract much like heart tissue, said Thomas Webster, associate professor in engineering and orthopaedics at Brown and the corresponding author on the paper. It's because of these properties and the carbon nanofibers that cardiomyocytes and neurons congregate on the scaffold and spawn new cells, in effect regenerating the area.

The scientists want to tweak the scaffold pattern to better mimic the electrical current of the heart, as well as build an in-vitro model to test how the material reacts to the heart's voltage and beat regime. They also want to make sure the cardiomyocytes that grow on the scaffolds are endowed with the same abilities as other heart-tissue cells.

Bikramjit Basu at the Indian Institute of Technology Kanpur contributed to the paper. The Indo-U.S. Science and Technology Forum, the Hermann Foundation, the Indian Institute of Technology Kanpur, the government of India, and California State University funded the research.

Sunday, May 22, 2011

Record Efficiency of 18.7 Percent for Flexible Solar Cells on Plastics, Swiss Researchers Report



Scientists at Empa, the Swiss Federal Laboratories for Materials Science and Technology, have further boosted the energy conversion efficiency of flexible solar cells made of copper indium gallium (di)selenide (also known as CIGS) to a new world record of 18.7 percent -- a significant improvement over the previous record of 17.6 percent achieved by the same team in June 2010.
Flexible thin film CIGS solar cell on polymer substrate developed at Empa. (Credit: Empa)

The measurements have been independently certified by the Fraunhofer Institute for Solar Energy Systems in Freiburg, Germany.

It's all about money. To make solar electricity affordable on a large scale, scientists and engineers worldwide have long been trying to develop a low-cost solar cell that is highly efficient, easy to manufacture, and suited to high-throughput production. Now a team at Empa's Laboratory for Thin Film and Photovoltaics, led by Ayodhya N. Tiwari, has made a major step forward. "The new record value for flexible CIGS solar cells of 18.7% nearly closes the 'efficiency gap' to solar cells based on polycrystalline silicon (Si) wafers or CIGS thin film cells on glass," says Tiwari. He is convinced that "flexible and lightweight CIGS solar cells with efficiencies comparable to the 'best-in-class' will have excellent potential to bring about a paradigm shift and to enable low-cost solar electricity in the near future."

One major advantage of flexible high-performance CIGS solar cells is the potential to lower manufacturing costs through roll-to-roll processing while at the same time offering a much higher efficiency than the ones currently on the market. What's more, such lightweight and flexible solar modules offer additional cost benefits in terms of transportation, installation, structural frames for the modules etc., i.e. they significantly reduce the so-called "balance of system" costs. Taken together, the new CIGS polymer cells exhibit numerous advantages for applications such as facades, solar farms and portable electronics. With high-performance devices now within reach, the new results suggest that monolithically-interconnected flexible CIGS solar modules with efficiencies above 16% should be achievable with the recently developed processes and concepts.

At the forefront of efficiency improvements

In recent years, thin film photovoltaic technology based on glass substrates has matured sufficiently for industrial production; flexible CIGS technology, however, is still an emerging field. The recent improvements in efficiency in research labs and pilot plants -- among others by Tiwari's group, first at ETH Zurich and, for the past few years, at Empa -- are contributing to performance improvements and to overcoming manufacturability barriers.

Working closely with scientists at FLISOM, a start-up company that is scaling up and commercializing the technology, the Empa team made significant progress in the low-temperature growth of CIGS layers, yielding flexible CIGS cells that are ever more efficient -- up from a record value of 14.1% in 2005 to the new "high score" of 18.7% for any type of flexible solar cell grown on polymer or metal foil. The latest improvements in cell efficiency were made possible by reducing recombination losses, through better structural properties of the CIGS layer, the proprietary low-temperature deposition process used to grow the layers, and in situ doping with Na during the final stage. With these results, polymer films have for the first time proven superior to metal foils as a carrier substrate for achieving the highest efficiencies.

Record efficiencies of up to 17.5% on steel foils covered with impurity diffusion barriers had so far been achieved with CIGS growth processes at temperatures exceeding 550°C. However, when applied to steel foil without any diffusion barrier, the proprietary low-temperature CIGS deposition process developed by Empa and FLISOM for polymer films easily matched the performance achieved with the high-temperature procedure, resulting in an efficiency of 17.7%. The results suggest that the barrier coatings commonly used to block detrimental impurities from metal foils would not be required. "Our results clearly show the advantages of the low-temperature CIGS deposition process for achieving highest efficiency flexible solar cells on polymer as well as metal foils," says Tiwari.

The projects were supported by the Swiss National Science Foundation (SNSF), the Commission for Technology and Innovation (CTI), the Swiss Federal Office of Energy (SFOE), EU Framework Programmes as well as by Swiss companies W.Blösch AG and FLISOM.

Scaling up production of flexible CIGS solar cells

The continuous improvement in energy conversion efficiencies of flexible CIGS solar cells is no small feat, says Empa Director Gian-Luca Bona. "What we see here is the result of an in-depth understanding of the material properties of layers and interfaces combined with an innovative process development in a systematic manner. Next, we need to transfer these innovations to industry for large scale production of low-cost solar modules to take off." Empa scientists are currently working together with FLISOM to further develop manufacturing processes and to scale up production.

Friday, May 20, 2011

Japan's 9.0 Tohoku-Oki Earthquake: Surprising Findings About Energy Distribution Over Fault Slip and Stress Accumulation



When the magnitude 9.0 Tohoku-Oki earthquake and resulting tsunami struck off the northeast coast of Japan on March 11, they caused widespread destruction and death. Using observations from a dense regional geodetic network (allowing measurements of earth movement to be gathered from GPS satellite data), globally distributed broadband seismographic networks, and open-ocean tsunami data, researchers have begun to construct numerous models that describe how the earth moved that day.
The image represents an overhead model of the estimated fault slip due to the 9.0 Tohoku-Oki earthquake. The fault responsible for this earthquake dips under Japan, starting at the Japan Trench (indicated by the barbed line), which is the point of contact between the subducting Pacific Plate and the overriding Okhotsk Plate. The magnitude of fault slip is indicated both by the color and the contours, which are at 8-meter intervals. The question mark indicates the general region where researchers currently lack information about future seismic potential. (Credit: Mark Simons/Caltech Seismological Laboratory)

Now, a study led by researchers at the California Institute of Technology (Caltech), published online in the May 19 issue of Science Express, explains the first large set of observational data from this rare megathrust event.

"This event is the best recorded great earthquake ever," says Mark Simons, professor of geophysics at Caltech's Seismological Laboratory and lead author of the study. For scientists working to improve infrastructure and prevent loss of life through better application of seismological data, observations from the event will help inform future research priorities.

Simons says one of the most interesting findings of the data analysis was the spatial compactness of the event. The megathrust earthquake occurred at a subduction zone where the Pacific Plate dips below Japan. The length of fault that experienced significant slip during the Tohoku-Oki earthquake was about 250 kilometers, about half of what would be conventionally expected for an event of this magnitude.

Furthermore, the area where the fault slipped the most -- 30 meters or more -- happened within a 50- to 100-kilometer-long segment. "This is not something we have documented before," says Simons. "I'm sure it has happened in the past, but technology has advanced only in the past 10 to 15 years to the point where we can measure these slips much more accurately through GPS and other data."
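Those numbers can be loosely sanity-checked against the magnitude using the standard relations for seismic moment, M0 = mu * A * D, and moment magnitude, Mw = (2/3) * (log10(M0) - 9.1). The rigidity, rupture width, and average slip below are assumed for illustration only, since the figures quoted here are the 250-kilometer length and the 30-meter-plus peak slip.

    # Back-of-the-envelope moment-magnitude check with assumed parameters.
    import math

    mu = 40e9            # shear rigidity of the fault zone, Pa (assumed)
    length_m = 250e3     # rupture length from the study
    width_m = 200e3      # down-dip rupture width (assumed)
    avg_slip_m = 20.0    # average slip, well below the >30 m peak (assumed)

    seismic_moment = mu * length_m * width_m * avg_slip_m        # N*m
    moment_magnitude = (2.0 / 3.0) * (math.log10(seismic_moment) - 9.1)
    print(f"M0 = {seismic_moment:.2e} N*m  ->  Mw = {moment_magnitude:.1f}")   # ~9.0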

For Jean Paul Ampuero, assistant professor of seismology at Caltech's Seismological Laboratory who studies earthquake dynamics, the most significant finding was that high- and low-frequency seismic waves can come from different areas of a fault. "The high-frequency seismic waves in the Tohoku earthquake were generated much closer to the coast, away from the area of the slip where we saw low-frequency waves," he says.

Simons says there are two factors controlling this behavior. One is that the largest amount of stress (which is what generates the highest-frequency waves) was found at the edges of the slip, not near the center of where the fault began to break. He compares the finding to what happens when you rip a piece of paper in half. "The highest amounts of stress aren't found where the paper has just ripped, but rather right where the paper has not yet been torn," he explains. "We had previously thought high-frequency energy was an indicator of fault slippage, but it didn't correlate in our models of this event." Equally important is how the fault reacts to these stress concentrations; it appears that only the deeper segments of the fault respond to these stresses by producing high-frequency energy.

Ampuero says the implications of these observations of the mechanical properties of tectonic faults need to be further explored and integrated in physical models of earthquakes, which will help scientists better quantify earthquake hazards.

"We learn from each significant earthquake, especially if the earthquake is large and recorded by many sensors," says Ampuero. "The Tohoku earthquake was recorded by upwards of 10 times more sensors at near-fault distances than any other earthquake. This will provide a sharper and more robust view of earthquake rupture processes and their effects."

For seismologist Hiroo Kanamori, Caltech's Smits Professor of Geophysics, Emeritus, who was in Japan at the time of the earthquake and has been studying the region for many years, the most significant finding was that a large slip occurred near the Japan Trench. While smaller earthquakes have happened in the area, it was believed that the relatively soft material of the seafloor would not support a large amount of stress. "The amount of strain associated with this large displacement is nearly five to 10 times larger than we normally see in large megathrust earthquakes," he notes. "It has been generally thought that rocks near the Japan Trench could not accommodate such a large elastic strain."

The researchers are still unsure why such a large strain was able to accumulate in this area. One possibility is that either the subducting seafloor or the upper plate (or both) have some unusual structures -- such as regions that were formerly underwater mountain ranges on the Pacific Plate -- that have now been consumed by the subduction zone and cause the plates to get stuck and build up stress.

"Because of this local strengthening -- whatever its cause -- the Pacific Plate and the Okhotsk Plate had been pinned together for a long time, probably 500 to 1000 years, and finally failed in this magnitude 9.0 event," says Kanamori. "Hopefully, detailed geophysical studies of seafloor structures will eventually clarify the mechanism of local strengthening in this area."

Simons says researchers knew very little about the area where the earthquake occurred because of limited historical data.

"Instead of saying a large earthquake probably wouldn't happen there, we should have said that we didn't know," he says. Similarly, he says the area just south of where the fault slipped is in a similar position; researchers don't yet know what it might do in the future.

"It is important to note that we are not predicting an earthquake here," emphasizes Simons. "However, we do not have data on the area, and therefore should focus attention there, given its proximity to Tokyo."

He says that the relatively new Japanese seafloor observation systems will prove very useful in scientists' attempts to learn more about the area.

"Our study is only the first foray into what is an enormous quantity of available data," says Simons. "There will be a lot more information coming out of this event, all of which will help us learn more in order to help inform infrastructure and safety procedures."

The work was funded by the Gordon and Betty Moore Foundation, National Science Foundation grants, the Southern California Earthquake Center, and NASA's internal Research and Technology Development program.

Dark Energy Is Driving Universe Apart: NASA's Galaxy Evolution Explorer Finds Dark Energy Repulsive



A five-year survey of 200,000 galaxies, stretching back seven billion years in cosmic time, has led to one of the best independent confirmations that dark energy is driving our universe apart at accelerating speeds.
New results from NASA's Galaxy Evolution Explorer and the Anglo-Australian Telescope atop Siding Spring Mountain in Australia confirm that dark energy (represented by the purple grid) is a smooth, uniform force that now dominates over the effects of gravity (green grid). The observations follow from careful measurements of the separations between pairs of galaxies (examples of such pairs are illustrated here). (Credit: NASA/JPL-Caltech)

The survey used data from NASA's space-based Galaxy Evolution Explorer and the Anglo-Australian Telescope on Siding Spring Mountain in Australia.

The findings offer new support for the favored theory of how dark energy works -- as a constant force, uniformly affecting the universe and propelling its runaway expansion. They contradict an alternate theory, where gravity, not dark energy, is the force pushing space apart. According to this alternate theory, with which the new survey results are not consistent, Albert Einstein's concept of gravity is wrong, and gravity becomes repulsive instead of attractive when acting at great distances.

"The action of dark energy is as if you threw a ball up in the air, and it kept speeding upward into the sky faster and faster," said Chris Blake of the Swinburne University of Technology in Melbourne, Australia. Blake is lead author of two papers describing the results that appeared in recent issues of the Monthly Notices of the Royal Astronomical Society. "The results tell us that dark energy is a cosmological constant, as Einstein proposed. If gravity were the culprit, then we wouldn't be seeing these constant effects of dark energy throughout time."

Dark energy is thought to dominate our universe, making up about 74 percent of it. Dark matter, a slightly less mysterious substance, accounts for 22 percent. So-called normal matter, anything with atoms, or the stuff that makes up living creatures, planets and stars, is only approximately four percent of the cosmos.

The idea of dark energy was proposed during the previous decade, based on studies of distant exploding stars called supernovae. Because these supernovae have a consistent, measurable brightness, they serve as so-called "standard candles," which allows calculation of their distance from Earth. Observations revealed dark energy was flinging the objects out at accelerating speeds.

The new survey provides two separate methods for independently checking these results. This is the first time astronomers performed these checks across the whole cosmic timespan dominated by dark energy. Astronomers began by assembling the largest three-dimensional map of galaxies in the distant universe, spotted by the Galaxy Evolution Explorer.

"The Galaxy Evolution Explorer helped identify bright, young galaxies, which are ideal for this type of study," said Christopher Martin, principal investigator for the mission at the California Institute of Technology in Pasadena. "It provided the scaffolding for this enormous 3-D map."

The team acquired detailed information about the light for each galaxy using the Anglo-Australian Telescope and studied the pattern of distance between them. Sound waves from the very early universe left imprints in the patterns of galaxies, causing pairs of galaxies to be separated by approximately 500 million light-years.

Blake and his colleagues used this "standard ruler" to determine the distance from the galaxy pairs to Earth. As with the supernovae studies, this distance data was combined with information about the speeds the pairs are moving away from us, revealing, yet again, the fabric of space is stretching apart faster and faster.
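The "standard ruler" logic boils down to small-angle geometry: if the true separation of a galaxy pair is known to be roughly 500 million light-years, measuring the angle that separation subtends on the sky gives the distance as separation divided by angle. The angle in the sketch below is purely illustrative, not a measured value from the survey.

    # Small-angle standard ruler: distance = physical separation / angular separation.
    import math

    ruler_ly = 500e6            # pair separation quoted in the article, light-years
    angle_deg = 5.0             # hypothetical angle measured on the sky
    distance_ly = ruler_ly / math.radians(angle_deg)
    print(f"inferred distance ~ {distance_ly / 1e9:.1f} billion light-years")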

The team also used the galaxy map to study how clusters of galaxies grow over time, like cities, eventually containing many thousands of galaxies. The clusters attract new galaxies through gravity, but dark energy tugs the clusters apart, slowing that growth and allowing scientists to measure dark energy's repulsive force.

"Observations by astronomers over the last 15 years have produced one of the most startling discoveries in physical science; the expansion of the universe, triggered by the big bang, is speeding up," said Jon Morse, astrophysics division director at NASA Headquarters in Washington. "Using entirely independent methods, data from the Galaxy Evolution Explorer have helped increase our confidence in the existence of dark energy."

For more information about NASA's Galaxy Evolution Explorer, visit: http://www.nasa.gov/galex and http://www.galex.caltech.edu .

Diamond Aerogel: New Form of Diamond Is Lighter Than Ever



By combining high pressure with high temperature, Livermore researchers have created a nanocrystalline diamond aerogel that could improve the optics for something as big as a telescope or as small as the lenses in eyeglasses.
A diamond aerogel has been hammered out of a microscopic anvil. (Credit: Image by Kwei-Yu Chu/LLNL)

Aerogels are a class of materials that exhibit the lowest density, thermal conductivity, refractive index and sound velocity of any bulk solid, and they are among the most versatile materials available for technical applications because of their many exceptional properties. Chemists, physicists, astronomers, and materials scientists have put those properties to use in myriad applications, from a water purifier for desalinizing seawater to a meteorite particle collector installed on a NASA satellite.

In new research appearing in the May 9-13 online edition of the Proceedings of the National Academy of Sciences, a Livermore team created a diamond aerogel from a standard carbon-based aerogel precursor using a laser-heated diamond anvil cell.

A diamond anvil cell consists of two opposing diamonds with the sample compressed between them. It can compress a small piece of material (tens of micrometers or smaller) to extreme pressures, which can exceed 3 million atmospheres. The device has been used to recreate the pressure existing deep inside planets, creating materials and phases not observed under normal conditions. Since diamonds are transparent, intense laser light also can be focused onto the sample to simultaneously heat it to thousands of degrees.

The new form of diamond has a very low density of around 40 milligrams per cubic centimeter -- similar to that of the precursor, and only about 40 times denser than air.

The diamond aerogel could have applications in antireflection coatings, a type of optical coating applied to the surface of lenses and other optical devices to reduce reflection. Less light is lost, improving the efficiency of the system. It can be applied to telescopes, binoculars, eyeglasses or any other device that may require reflection reduction. It also has potential applications in enhanced or modified biocompatibility, chemical doping, thermal conduction and electrical field emission.

In creating diamond aerogels, lead researcher Peter Pauzauskie, a former Lawrence fellow now at the University of Washington, infused the pores of a standard, carbon-based aerogel with neon, preventing the entire aerogel from collapsing on itself.

At that point, the team subjected the aerogel sample to tremendous pressures and temperatures (above 200,000 atmospheres and in excess of 2,240 degrees Fahrenheit), forcing the carbon atoms within to shift their arrangement and create crystalline diamonds.

The success of this work also leads the team to speculate that additional novel forms of diamond may be obtained by exposing appropriate precursors to the right combination of high pressure and temperature.

Livermore researchers on the project include: Jonathan Crowhurst, Marcus Worsley, Ted Laurence, Yinmin "Morris" Wang, Trevor Wiley, Kenneth Visbeck, William Evans, Joseph Zaug and Joe Satcher Jr.

Bringing a Whole New Meaning to 'High-Speed Internet'



Researchers push a staggering 100 terabits of data per second through an optical fiber.
Faster broadband needed (Image: Ray Tang/Rex Features)

Talk about fast. Researchers have recently reported sending over 100 terabits of information per second through an optical fiber, New Scientist recently reported. That's a staggering amount of data--it would take three months' worth of HD video footage to use so much space.

The findings were revealed at the Optical Fiber Communications Conference, held in Los Angeles recently. First, an NEC Laboratories researcher (in Princeton, NJ) named Dayou Qian shared how he managed to push 101.7 terabits of data per second along 103 miles of fiber. The trick involved using pulses from 370 different lasers to multiply the amount of information that could be encoded at once. The light pulses were further varied to encode more information by using different polarities, phases, and amplitudes of light waves, according to reports.

Breakthroughs often occur in pairs (otherwise we wouldn't have so many patent disputes). Not to be outdone, a researcher at the Japanese National Institute of Information and Communications Technology, Jan Sakaguchi, had an even more impressive figure to share. Sakaguchi managed to squeeze 109 terabits per second through a fiber. His technique was different, and a little more intuitive--he simply used seven light-guiding cores in his fiber, rather than the more traditional single core. "We introduced a new dimension, spatial multiplication, to increasing transmission capacity," as he put it to New Scientist.
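For scale, dividing the reported totals by the number of channels shows what each laser wavelength and each fiber core carried (simple arithmetic on the figures quoted above):

    # Per-channel breakdown of the two reported records.
    nec_total_tbps = 101.7     # NEC result: total throughput, terabits per second
    nec_wavelengths = 370      # number of laser wavelengths used
    print(f"NEC: ~{nec_total_tbps / nec_wavelengths * 1000:.0f} Gbit/s per wavelength")

    nict_total_tbps = 109.0    # NICT result
    nict_cores = 7             # light-guiding cores in the fiber
    print(f"NICT: ~{nict_total_tbps / nict_cores:.1f} Tbit/s per core")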



Does this mean your page-loading woes are over? According to the report, the finding has little immediate bearing on your day-to-day Internet usage. The numbers involved are so large that they matter less to the individual consumer than they do to major data centers like those fueling giants such as Google and Facebook (though presumably, any time saved there might ultimately benefit you in one way or another). Even at the infrastructure level, the numbers simply dwarf current commercial need. 100 terabits per second? One of today's most heavily trafficked broadband routes, between New York and Washington, DC, only needs to carry a handful of terabits per second -- nowhere near 100. Still, with the rise of video streaming and other data-intensive services, it can't hurt to have this technology in our back pocket. "Traffic has been growing about 50% per year for the last few years," Tim Strong of Telegeography Research told New Scientist.

As more and more cities come online in serious, data-guzzling ways--as we enter what's been termed the Terabit Age--it certainly won't hurt to have hit what one NEC researcher dubbed this "critical milestone in fiber capacity."

Tuesday, May 17, 2011

Striking Ecological Impact on Canada's Arctic Coastline Linked to Global Climate Change



Scientists from Queen's and Carleton universities head a national multidisciplinary research team that has uncovered startling new evidence of the destructive impact of global climate change on North America's largest Arctic delta.
Dead vegetation killed by the 1999 storm surge is in stark contrast to the vegetation along the edges of waterways that receive regular freshwater (and thus survived the damage). (Credit: Trevor Lantz, University of Victoria)


"One of the most ominous threats of global warming today is from rising sea levels, which can cause marine waters to inundate the land," says the team's co-leader, Queen's graduate student Joshua Thienpont. "The threat is especially acute in polar regions, where shrinking sea ice increases the risk of storm surges."

By studying growth rings from coastal shrubs and lake sediments in the Mackenzie Delta region of the Northwest Territories -- the scene of a widespread and ecologically destructive storm surge in 1999 -- the researchers have discovered that the impact of these salt-water surges is unprecedented in the 1,000-year history of the lake.

"This had been predicted by all the models and now we have empirical evidence," says team co-leader Michael Pisaric, a geography professor at Carleton. The Inuvialuit, who live in the northwest Arctic, identified that a major surge had occurred in 1999, and assisted with field work.

The researchers studied the impact of salt water flooding on alder bushes along the coastline. More than half of the shrubs sampled were dead within a year of the 1999 surge, while an additional 37 per cent died within five years. A decade after the flood, the soils still contained high concentrations of salt. In addition, sediment core profiles from inland lakes revealed dramatic changes in the aquatic life -- with a striking shift from fresh to salt-water species following the storm surge.

"Our findings show this is ecologically unprecedented over the last millennium," says Queen's biology professor and team member John Smol, Canada Research Chair in Environmental Change and winner of the 2004 NSERC Herzberg Gold Medal as Canada's top scientist. "The Arctic is on the front line of climate change. It's a bellwether of things to come: what affects the Arctic eventually will affect us all."

Since nearly all Arctic indigenous communities are coastal, the damage from future surges could also have significant social impacts. The team predicts that sea ice cover, sea levels and the frequency and intensity of storms and marine storm surges will become more variable in the 21st century.

Other members of the team include Trevor Lantz from the University of Victoria, Steven Kokelj from Indian and Northern Affairs Canada, Steven Solomon from the Geological Survey of Canada and Queen's undergraduate student Holly Nesbitt. Their findings are published in the Proceedings of the National Academy of Sciences.

Research funding comes from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Polar Continental Shelf Program, the Cumulative Impact Monitoring Program, and Indian and Northern Affairs Canada.

Sections of Retinas Regenerated and Visual Function Increased With Stem Cells from Skin



Scientists from Schepens Eye Research Institute are the first to regenerate large areas of damaged retinas and improve visual function using IPS cells (induced pluripotent stem cells) derived from skin. The results of their study, which is published in PLoS ONE this month, hold great promise for future treatments and cures for diseases such as age-related macular degeneration, retinitis pigmentosa, diabetic retinopathy and other retinal diseases that affect millions worldwide.
Histological staining of a teratoma containing Rho-/- eye at 21 days post-injection of a heterogeneous population of SSEA1-containing D33 differentiated cells. (Credit: Tucker et al., DOI: 10.1371/journal.pone.0018992)

"We are very excited about these results," says Dr. Budd A. Tucker, the study's first author. "While other researchers have been successful in converting skin cells into induced pluripotent stem cells (iPSCs) and subsequently into retinal neurons, we believe that this is the first time that this degree of retinal reconstruction and restoration of visual function has been detected," he adds. Tucker, who is currently an Assistant Professor of Ophthalmology at the University of Iowa, Carver College of Medicine, completed the study at Schepens Eye Research Institute in collaboration with Dr. Michael J. Young, the principle investigator of the study, who heads the Institute's regenerative medicine center.

Today, diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) are the leading causes of incurable blindness in the western world. In these diseases, retinal cells, also known as photoreceptors, begin to die, and with them goes the eye's ability to capture light and transmit this information to the brain. Once destroyed, retinal cells, like other cells of the central nervous system, have limited capacity for endogenous regeneration.

"Stem cell regeneration of this precious tissue is our best hope for treating and someday curing these disorders," says Young, who has been at the forefront of vision stem cell research for more than a decade.

While Tucker, Young and other scientists were beginning to tap the potential of embryonic and adult stem cells early in the decade, the discovery that skin cells could be transformed into "pluripotent" cells, nearly identical to embryonic cells, stirred excitement in the vision research community. Since 2006, when researchers in Japan first used a set of four "transcription factors" to signal skin cells to become iPSCs, vision scientists have been exploring ways to use this new technology. Like embryonic stem cells, iPSCs have the ability to become any other cell in the body, but are not fraught with the ethical, emotional and political issues associated with the use of tissue from human embryos.

Tucker and Young harvested skin cells from the tails of red fluorescent mice. They used red mice, because the red tissue would be easy to track when transplanted in the eyes of non-fluorescent diseased mice.

By forcing these cells to express the four Yamanaka transcription factors (named for their discoverer), the group generated red fluorescent iPSCs and, with additional chemical coaxing, precursors of retinal cells. Precursor cells are immature photoreceptors that only mature in their natural habitat -- the eye.

Within 33 days the cells were ready to be transplanted and were introduced into the eyes of a mouse model of retinal degenerative disease. Due to a genetic mutation, the retinas of these recipient mice quickly degenerate, the photoreceptor cells die, and at the time of transplant electrical activity, as detected by ERG (electroretinography), is absent.

Within four to six weeks, the researchers observed that the transplanted "red" cells had taken up residence in the appropriate retinal area (photoreceptor layer) of the eye and had begun to integrate and assemble into healthy-looking retinal tissue.

The team then retested the mice with ERG and found a significant increase in electrical activity in the newly reconstructed retinal tissue. In fact, the amount of electrical activity was approximately half of what would be expected in a normal retina. They also conducted a dark adaptation test to see if connections were being made between the new photoreceptor cells and the rest of the retina. In brief, the group found that by stimulating the newly integrated photoreceptor cells with light they could detect a signal in the downstream neurons, which was absent in the other, untreated eye.

Based on the results of their study, Tucker and Young believe that harvesting skin cells for use in retinal regeneration is and will continue to be a promising resource for the future.

The two scientists say their next step will be to take this technology into large animal models of retinal degenerative disease and eventually toward human clinical trials.

Other scientists involved in the PLoS ONE study include In-Hyun Park, Sara D. Qi, Henry J. Klassen, Caihui Jiang, Jing Yao, Stephen Redenti, and George Q. Daley.

'Master Switch' Gene for Obesity and Diabetes Discovered



A team of researchers, led by King's College London and the University of Oxford, have found that a gene linked to type 2 diabetes and cholesterol levels is in fact a 'master regulator' gene, which controls the behaviour of other genes found within fat in the body.
Scientists have found that a gene linked to type 2 diabetes and cholesterol levels is in fact a "master regulator" gene, which controls the behavior of other genes found within fat in the body. (Credit: iStockphoto)

As fat plays a key role in susceptibility to metabolic diseases such as obesity, heart disease and diabetes, this study highlights the regulatory gene as a possible target for future treatments to fight these diseases.

Published May 15 in Nature Genetics, the study was one part of a large multi-national collaboration funded by the Wellcome Trust, known as the MuTHER study. It involves researchers from King's College London, University of Oxford, The Wellcome Trust Sanger Institute, and the University of Geneva. DeCODE Genetics also contributed to the results reported in this paper.

It was already known that the KLF14 gene is linked to type 2 diabetes and cholesterol levels but, until now, how it did this and the role it played in controlling other genes located further away on the genome was unknown.

The researchers examined over 20,000 genes in subcutaneous fat biopsies from 800 UK female twin volunteers. They found an association between the KLF14 gene and the expression levels of multiple distant genes found in fat tissue, which means it acts as a master switch to control these genes. This was then confirmed in a further independent sample of 600 subcutaneous fat biopsies from Icelandic subjects.

These other genes found to be controlled by KLF14 are in fact linked to a range of metabolic traits, including body-mass index (obesity), cholesterol, insulin and glucose levels, highlighting the interconnectedness of metabolic traits.

The KLF14 gene is special in that its activity is inherited from the mother. Each person inherits one copy of each gene from each parent. But in this case, the copy of KLF14 from the father is switched off, meaning that the copy from the mother is the active gene -- a process called imprinting. Moreover, the ability of KLF14 to control other genes was entirely dependent on the copy of KLF14 inherited from the mother -- the copy inherited from the father had no effect.

Professor Tim Spector from the Department of Twin Research at King's, who led the MuTHER project, said: 'This is the first major study that shows how small changes in one master regulator gene can cause a cascade of other metabolic effects in other genes. This has great therapeutic potential particularly as by studying large detailed populations such as the twins we hope to find more of these regulators.'

Professor Mark McCarthy from the University of Oxford, who co-led the study, said: 'KLF14 seems to act as a master switch controlling processes that connect changes in the behaviour of subcutaneous fat to disturbances in muscle and liver that contribute to diabetes and other conditions. We are working hard right now to understand these processes and how we can use this information to improve treatment of these conditions.'