
Monday, April 3, 2023

Artificial Brain: The Future of Intelligence?



Artificial intelligence (AI) is rapidly evolving, and with it, the possibility of creating artificial brains. While this may seem like something out of a science fiction movie, it is actually a very real possibility. In fact, researchers at Indiana University (IU) are already working on developing artificial brains that could one day rival the capabilities of the human brain.

The IU team is led by Professor of Computer Science David B. Hardcastle, who is an expert in artificial intelligence and machine learning. Hardcastle and his team are working on developing artificial brains that can learn and adapt in the same way that human brains do. They believe that this type of artificial intelligence could have a profound impact on many different areas of our lives, from healthcare to education to transportation.

One of the main goals of the IU team is to develop artificial brains that can be used to improve healthcare. They believe that artificial brains could be used to diagnose diseases, develop new treatments, and even provide personalized care to patients. For example, artificial brains could be used to analyze medical images and identify potential problems that human doctors might miss. They could also be used to develop new drugs and treatments that are tailored to the specific needs of each patient.

The IU team is also working on developing artificial brains that can be used to improve education. They believe that artificial brains could be used to create personalized learning experiences for students. For example, artificial brains could be used to identify each student's strengths and weaknesses and then provide them with the appropriate level of challenge. They could also be used to provide feedback to students in a way that is both timely and helpful.

In addition to healthcare and education, the IU team is also working on developing artificial brains that can be used to improve transportation. They believe that artificial brains could be used to develop self-driving cars and other autonomous vehicles. For example, artificial brains could be used to navigate complex traffic conditions and avoid accidents. They could also be used to provide passengers with a more comfortable and enjoyable travel experience.

The work being done by the IU team is just one example of the many ways that artificial intelligence is being used to develop new technologies. Artificial intelligence has the potential to revolutionize many different areas of our lives, and the IU team is at the forefront of this research. It will be interesting to see what the future holds for artificial intelligence and artificial brains.

The Benefits of Artificial Brains

Artificial brains could have a number of benefits, including:

  • Improved healthcare: Artificial brains could be used to diagnose diseases, develop new treatments, and even provide personalized care to patients.
  • Improved education: Artificial brains could be used to create personalized learning experiences for students.
  • Improved transportation: Artificial brains could be used to develop self-driving cars and other autonomous vehicles.
  • Increased productivity: Artificial brains could be used to automate tasks and increase productivity in a variety of industries.
  • Enhanced creativity: Artificial brains could be used to generate new ideas and solve problems in innovative ways.

The Challenges of Artificial Brains

While there are many potential benefits to artificial brains, there are also a number of challenges that need to be addressed. These challenges include:

  • Safety: Artificial brains need to be designed in a way that ensures they are safe and do not pose a threat to humans.
  • Ethics: The development of artificial brains raises a number of ethical concerns, such as the potential for job displacement and the impact on human autonomy.
  • Control: It is important to ensure that artificial brains are under human control and do not become uncontrollable.
  • Bias: Artificial brains are trained on data, and if that data is biased, the artificial brain will also be biased. This is a major challenge that needs to be addressed in order to ensure that artificial brains are fair and unbiased.

The Future of Artificial Brains

The future of artificial brains is uncertain. It is possible that artificial brains could one day become as intelligent as humans, or even surpass human intelligence. If this happens, it would have a profound impact on society. Artificial brains could be used to solve some of the world's most pressing problems, such as climate change and poverty. However, they could also be used for malicious purposes, such as developing autonomous weapons.

It is important to start thinking about the implications of artificial brains now, so that we can be prepared for whatever the future holds.

Saturday, February 11, 2023

The Rise of Artificial Intelligence: Understanding its Applications and Impacts


Artificial Intelligence (AI) is a branch of computer science that deals with the creation of intelligent machines that work and react like human beings. It has become a critical tool in a wide range of industries, including healthcare, finance, manufacturing, retail, and many others. AI technology has made significant progress in recent years, and it is changing the way businesses and individuals interact with technology.

AI is based on the idea that machines can learn from experience, recognize patterns in data, and make decisions. There are two main types of AI: narrow or weak AI, which is designed to perform a specific task, and general or strong AI, which has the ability to perform any intellectual task that a human can. The most common forms of AI include machine learning, natural language processing (NLP), computer vision, and robotics.

Machine learning is a type of AI that enables computers to learn from data, identify patterns, and make predictions. It is used in a variety of applications, including recommendation systems, image and speech recognition, and fraud detection. NLP is a branch of AI that focuses on the interaction between computers and human language. It is used in applications such as chatbots, language translation, and sentiment analysis.
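
As a rough illustration of the "learn from data, identify patterns, make predictions" loop described above, here is a minimal sketch using scikit-learn; the toy fraud-detection data and the model choice are assumptions for illustration only:

    # Minimal sketch: train a classifier on labelled examples, then predict new cases.
    # The data here is a made-up toy set; real fraud detection uses far richer features.
    from sklearn.linear_model import LogisticRegression

    # Each row: [transaction amount, hour of day]; label 1 = fraudulent, 0 = legitimate.
    X_train = [[25, 14], [3200, 3], [40, 11], [2800, 2], [15, 9], [5000, 4]]
    y_train = [0, 1, 0, 1, 0, 1]

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)          # "learn from experience" (the labelled data)

    print(model.predict([[4500, 3]]))    # pattern suggests fraud -> likely [1]
    print(model.predict([[30, 12]]))     # pattern suggests legitimate -> likely [0]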

Computer vision, another form of AI, is the ability of computers to interpret and understand visual information from the world, such as images and videos. This technology is used in a wide range of applications, including object recognition, facial recognition, and autonomous vehicles. Robotics is the field of AI that deals with the design, construction, operation, and use of robots. It is used in manufacturing, healthcare, and other industries to automate tasks and increase efficiency.

AI has the potential to revolutionize the way we live and work, and it has already begun to do so. It is helping businesses to make better decisions, improve customer experiences, and increase efficiency. It is also being used to solve complex problems in healthcare, such as disease diagnosis and drug discovery. However, as with any new technology, there are also concerns about the potential consequences of AI, including job loss and privacy issues.

In conclusion, AI is a rapidly evolving technology that has the potential to bring about significant changes in the way we live and work. While there are certainly challenges to be addressed, the benefits of AI are undeniable, and its impact on society and the global economy will only continue to grow in the years to come. 

Sunday, November 26, 2017

High-speed encryption to secure future internet


In a bid to fight future cyber threats, scientists have developed a new high-speed encryption system that uses the quantum properties of light to create theoretically hack-proof forms of data encryption. The novel system is capable of creating and distributing encryption codes at megabit-per-second rates, which is five to 10 times faster than existing methods and on par with current internet speeds when running several systems in parallel. The technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.

"We are now likely to have a functioning quantum computer that might be able to start breaking the existing cryptographic codes in the near future," said Daniel Gauthier, Professor at The Ohio State University. "We really need to be thinking hard now of different techniques that we could use for trying to secure the internet," Gauthier added, in the paper appearing in the journal Science Advances. For the new system to work, both the hacker as well as the sender must have access to the same key, and it must be kept secret. The novel system uses a weakened laser to encode information or transmit keys on individual photons of light, but also packs more information onto each photon, making the technique faster. By adjusting the time at which the photon is released, and a property of the photon called the phase, the new system can encode two bits of information per photon instead of one.

This trick, paired with high-speed detectors powers the system to transmit keys five to 10 times faster than other methods. "It was changing these additional properties of the photon that allowed us to almost double the secure key rate that we were able to obtain if we hadn't done that," Gauthier said.
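
A toy sketch of the counting behind that claim: two independently adjustable photon properties, each with two settings, give four distinguishable states, i.e. two bits per photon instead of one. The settings, names and photon rate below are illustrative, not the actual protocol parameters:

    # Toy illustration: 2 properties x 2 settings each = 4 states = 2 bits per photon.
    import math
    from itertools import product

    time_bins = ["early", "late"]   # when the photon is released
    phases = ["0", "pi"]            # the photon's phase setting

    states = list(product(time_bins, phases))
    bits_per_photon = int(math.log2(len(states)))   # log2(4) = 2

    print(states)            # [('early', '0'), ('early', 'pi'), ('late', '0'), ('late', 'pi')]
    print(bits_per_photon)   # 2

    # At a fixed photon rate, doubling the bits per photon doubles the raw key rate.
    photon_rate = 1e6        # photons per second -- an assumed figure, for illustration only
    print(photon_rate * bits_per_photon, "raw key bits per second")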

Friday, July 5, 2013

For better batteries, just add water


A new type of lithium-iodine battery that uses iodide ions in an aqueous cathode configuration provides twice the energy density of conventional lithium-ion batteries.

Lithium-ion batteries are now found everywhere in devices such as cellular phones and laptop computers, where they perform well. In automotive applications, however, engineers face the challenge of squeezing enough lithium-ion batteries onto a vehicle to provide the desired power and range without introducing storage and weight issues. Hye Ryung Byon, Yu Zhao and Lina Wang from the RIKEN Byon Initiative Research Unit have now developed a lithium-iodine battery system with twice the energy density of conventional lithium-ion batteries.

Byon's team is involved in alternative energy research and, specifically, improving the performance of lithium-based battery technologies. In their research they turned to an 'aqueous' system in which the organic electrolyte in conventional lithium-ion cells is replaced with water. Such aqueous lithium battery technologies have gained attention among alternative energy researchers because of their greatly reduced fire risk and environmental hazard. Aqueous solutions also have other advantages, which include an inherently high ionic conductivity.

For their battery system, the researchers investigated an 'aqueous cathode' configuration (Fig. 1), which accelerates reduction and oxidation reactions to improve battery performance. Finding suitable reagents for the aqueous cathode, however, proved to be a tricky proposition. According to Byon, water solubility is the most important criterion for screening new materials, since this parameter determines the battery's energy density. Furthermore, the redox reaction has to take place in a restricted voltage range in order to avoid water electrolysis. An extensive search led the researchers to produce the first-ever lithium battery involving aqueous iodine—an element with high water solubility and a pair of ions, known as the triiodide/iodide redox couple, that readily undergo aqueous electrochemical reactions.

The team constructed a prototype aqueous cathode device and found the energy density to be nearly double that of a conventional lithium-ion battery, thanks to the high solubility of the triiodide/iodide ions. Their battery had high and near-ideal power storage capacities and could be successfully recharged hundreds of times, avoiding a problem that plagues other alternative high-energy-density lithium-ion batteries. Microscopy analysis revealed that the cathode collector remained untouched after 100 charge/discharge cycles with no observable corrosion or precipitate formation.

Byon and colleagues now plan to develop a three-dimensional, microstructured current collector that could enhance the diffusion-controlled triiodide/iodide process and accelerate charge and discharge. They are also seeking to raise energy densities even further by using a flowing-electrode configuration that stores aqueous 'fuel' in an external reservoir—a modification that should make this low-cost, heavy metal-free design more amenable to electric vehicle specifications.

More information: Zhao, Y., Wang, L. & Byon, H. R. High-performance rechargeable lithium-iodine batteries using triiodide/iodide redox couples in an aqueous cathode. Nature Communications 4, 1896 (2013). dx.doi.org/10.1038/ncomms2907

Wednesday, June 26, 2013

Video Game Tech Used to Steer Cockroaches On Autopilot


North Carolina State University researchers are using video game technology to remotely control cockroaches on autopilot, with a computer steering the cockroach through a controlled environment. The researchers are using the technology to track how roaches respond to the remote control, with the goal of developing ways that roaches on autopilot can be used to map dynamic environments -- such as collapsed buildings.

North Carolina State University researchers are using video game technology to remotely control cockroaches on autopilot, with a computer steering the cockroach through a controlled environment. (Credit: Alper Bozkurt)

The researchers have incorporated Microsoft's motion-sensing Kinect system into an electronic interface developed at NC State that can remotely control cockroaches. The researchers plug in a digitally plotted path for the roach, and use Kinect to identify and track the insect's progress. The program then uses the Kinect tracking data to automatically steer the roach along the desired path.

The program also uses Kinect to collect data on how the roaches respond to the electrical impulses from the remote-control interface. This data will help the researchers fine-tune the steering parameters needed to control the roaches more precisely.
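
The closed-loop idea (track the roach, compare its heading with the plotted path, and pulse the appropriate antenna to steer it back) can be sketched as a simple controller. Everything below, from the function names to the turn threshold, is a hypothetical illustration rather than the NC State team's actual software:

    import math

    def steering_command(roach_xy, roach_heading_rad, waypoint_xy, threshold_rad=0.2):
        """Decide which antenna to stimulate to steer the roach toward the next waypoint.

        roach_xy / waypoint_xy: (x, y) positions from the tracking camera.
        roach_heading_rad: the roach's current heading.
        Returns 'left', 'right', or 'none' (no stimulation needed).
        """
        dx = waypoint_xy[0] - roach_xy[0]
        dy = waypoint_xy[1] - roach_xy[1]
        desired_heading = math.atan2(dy, dx)

        # Signed heading error, wrapped to [-pi, pi].
        error = (desired_heading - roach_heading_rad + math.pi) % (2 * math.pi) - math.pi

        if abs(error) < threshold_rad:
            return "none"
        # Stimulating one antenna makes the roach turn away from that side,
        # so to turn left we pulse the right antenna, and vice versa.
        return "right" if error > 0 else "left"

    # Example: roach at the origin heading along +x, waypoint up and to the left.
    print(steering_command((0, 0), 0.0, (1, 2)))   # -> 'right' (pulse right antenna to turn left)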

"Our goal is to be able to guide these roaches as efficiently as possible, and our work with Kinect is helping us do that," says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work.

"We want to build on this program, incorporating mapping and radio frequency techniques that will allow us to use a small group of cockroaches to explore and map disaster sites," Bozkurt says. "The autopilot program would control the roaches, sending them on the most efficient routes to provide rescuers with a comprehensive view of the situation."

The roaches would also be equipped with sensors, such as microphones, to detect survivors in collapsed buildings or other disaster areas. "We may even be able to attach small speakers, which would allow rescuers to communicate with anyone who is trapped," Bozkurt says.

Bozkurt's team had previously developed the technology that would allow users to steer cockroaches remotely, but the use of Kinect to develop an autopilot program and track the precise response of roaches to electrical impulses is new.

The interface that controls the roach is wired to the roach's antennae and cerci. The cerci are sensory organs on the roach's abdomen, which are normally used to detect movement in the air that could indicate a predator is approaching -- causing the roach to scurry away. But the researchers use the wires attached to the cerci to spur the roach into motion. The wires attached to the antennae send small charges that trick the roach into thinking the antennae are in contact with a barrier and steering them in the opposite direction.

The paper, "Kinect-based System for Automated Control of Terrestrial Insect Biobots," will be presented at the Remote Controlled Insect Biobots Minisymposium at the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society July 4 in Osaka, Japan. Lead author of the paper is NC State undergraduate Eric Whitmire. Co-authors are Bozkurt and NC State graduate student Tahmid Latif. The research was supported by the National Science Foundation.

Monday, June 24, 2013

More data storage? Here's how to fit 1,000 terabytes on a DVD


We live in a world where digital information is exploding. Some 90% of the world's data was generated in the past two years. The obvious question is: how can we store it all?

Using nanotechnology, researchers have developed a technique to increase the data storage capacity of a DVD from a measly 4.7GB to 1,000TB. Credit: Nature Communications
In Nature Communications today, we, along with Richard Evans from CSIRO, show how we developed a new technique to enable the data capacity of a single DVD to increase from 4.7 gigabytes up to one petabyte (1,000 terabytes). This is the equivalent of 10.6 years of compressed high-definition video or 50,000 full high-definition movies.

So how did we manage to achieve such a huge boost in data storage? First, we need to understand how data is stored on optical discs such as CDs and DVDs.

The basics of digital storage


Although optical discs are used to carry software, films, games, and private data, and have great advantages over other recording media in terms of cost, longevity and reliability, their low data storage capacity is their major limiting factor.

The operation of optical data storage is rather simple. When you burn a CD, for example, the information is transformed to strings of binary digits (0s and 1s, also called bits). Each bit is then laser "burned" into the disc, using a single beam of light, in the form of dots.

The storage capacity of optical discs is mainly limited by the physical dimensions of the dots. But as there's a limit to the size of the disc as well as the size of the dots, many current methods of data storage, such as DVDs and Blu-ray discs, continue to have relatively low storage density.

To get around this, we had to look at one of light's fundamental laws: the diffraction limit described by Abbe's law. On the basis of this law, the diameter of a spot of light, obtained by focusing a light beam through a lens, cannot be smaller than half its wavelength – around 500 nanometres (500 billionths of a metre) for visible light.

And while this law plays a huge role in modern optical microscopy, it also sets up a barrier for any efforts from researchers to produce extremely small dots – in the nanometre region – to use as binary bits.

In our study, we showed how to break this fundamental limit by using a two-light-beam method, with different colours, for recording onto discs instead of the conventional single-light-beam method.

Both beams must abide by Abbe's law, so they cannot produce smaller dots individually. But we gave the two beams different functions:
  • The first beam (red, in the figure right) has a round shape, and is used to activate the recording. We called it the writing beam
  • The second beam – the purple donut-shape – plays an anti-recording function, inhibiting the function of the writing beam

The two beams were then overlapped. As the second beam cancelled out the first in its donut ring, the recording process was tightly confined to the centre of the writing beam.

This new technique produces an effective focal spot of nine nanometres – or one ten thousandth the diameter of a human hair.
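
As a rough numerical sketch of why overlapping a donut-shaped inhibition beam with a round writing beam confines recording to a region far smaller than either beam alone, here is a toy calculation. The Gaussian and donut profiles, the saturation-style inhibition model and every number in it are illustrative assumptions, not the parameters of the actual experiment:

    import numpy as np

    x = np.linspace(-500, 500, 10001)             # position across the focal spot, in nm
    w = 250.0                                      # assumed beam radius (~diffraction-limited)

    writing = np.exp(-(x / w) ** 2)                # round writing beam
    donut = (x / w) ** 2 * np.exp(-(x / w) ** 2)   # donut beam: zero intensity at the centre

    # Toy model: recording survives only where the inhibition (donut) beam is weak.
    inhibition_strength = 200.0                    # assumed; larger -> stronger confinement
    effective = writing * np.exp(-inhibition_strength * donut)

    def fwhm(profile):
        above = x[profile >= profile.max() / 2]
        return above[-1] - above[0]

    print(f"{fwhm(writing):.0f} nm writing-beam spot (toy value)")
    print(f"{fwhm(effective):.0f} nm effective recording spot after inhibition")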

The technique, in practical terms


Our work will greatly impact the development of super-compact devices as well as nanoscience and nanotechnology research.

The exceptional penetration of light beams allows for 3D recording or fabrication, which can dramatically increase the data storage – the number of dots – on a single optical device.

The technique is also cost-effective and portable, as only conventional optical and laser elements are used, and allows for the development of optical data storage with long life and low energy consumption, which could be an ideal platform for a Big Data centre.

As the rate of information generated worldwide continues to accelerate, the aim of more storage capacity in compact devices will continue. Our breakthrough has put that target within our reach.
 
 
Story from: http://phys.org/news/2013-06-storage-terabytes-dvd.html#ajTabs

Monday, September 5, 2011

NASA Web App Lets You Control Space & Time in 3D



NASA has released its “Eyes on the Solar System” 3D environment, a free web browser-based application that lets you navigate a 3D version of the solar system. The app uses video game technology to let you control your point of view from anywhere in our solar system, speeding up time so you can see the motion of the planets, their satellites and NASA spacecraft.


We tried the Eyes on the Solar System app (download here), which first requires a download of the Unity Web Player for Mac and PC. Once you’ve done that, you can fly around beautifully produced models of all the planets, asteroids and the Sun. Or you can enter custom modules created by NASA that highlight missions such as Juno, the recently launched probe that’s currently on a five-year mission to Jupiter.

According to NASA:
“This is the first time the public has been able to see the entire solar system and our missions moving together in real time,” said Jim Green, director of NASA’s Planetary Science Division at the agency’s Headquarters in Washington. “It demonstrates NASA’s continued commitment to share our science with everyone.”
You can even keep tabs on the current locations of NASA spacecraft, with the help of NASA’s actual mission data. Don’t forget to click the Full Screen button for the full effect. Fantastic stuff.



Get the app here.



Saturday, September 3, 2011

Artificial light-harvesting method achieves 100% energy transfer efficiency


In an attempt to mimic the photosynthetic systems found in plants and some bacteria, scientists have taken a step toward developing an artificial light-harvesting system (LHS) that meets one of the crucial requirements for such systems: an approximately 100% energy transfer efficiency. Although high energy transfer efficiency is just one component of the development of a useful artificial LHS, the achievement could lead to clean solar-fuel technology that turns sunlight into chemical fuel.

By arranging porphyrin dye molecules on a clay surface using the “Size-Matching Effect,” researchers have demonstrated an energy transfer efficiency of approximately 100%, which is an important requirement for designing efficient artificial light-harvesting systems. Image credit: Ishida, et al. ©2011 American Chemical Society

The researchers, led by Shinsuke Takagi from the Tokyo Metropolitan University and PRESTO of the Japan Science and Technology Agency, have published their study on their work toward an artificial LHS in a recent issue of the Journal of the American Chemical Society.

“In order to realize an artificial light-harvesting system, almost 100% efficiency is necessary,” Takagi told PhysOrg.com. “Since light-harvesting systems consist of many steps of energy transfer, the total energy transfer efficiency becomes low if the energy transfer efficiency of each step is 90%. For example, if there are five energy transfer steps, the total energy transfer is 0.9 x 0.9 x 0.9 x 0.9 x 0.9 = 0.59. In this way, an efficient energy transfer reaction plays an important role in realizing efficient sunlight collection for an artificial light-harvesting system.”
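
Takagi's point reduces to a one-line calculation: with per-step efficiency p and n transfer steps, the total efficiency is p to the power n, which collapses quickly unless p is essentially 1. A quick sketch (the step counts are just examples):

    # Total efficiency of a multi-step energy-transfer chain is the product of the steps.
    def total_efficiency(per_step, steps):
        return per_step ** steps

    print(round(total_efficiency(0.90, 5), 2))   # 0.59, the figure quoted above
    print(round(total_efficiency(0.99, 5), 2))   # ~0.95
    print(round(total_efficiency(1.00, 5), 2))   # 1.0 -- why ~100% per step matters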

As the researchers explain in their study, a natural LHS (like those in purple bacteria or plant leaves) is composed of regularly arranged molecules that efficiently collect sunlight and carry the excitation energy to the system’s reaction center. An artificial LHS (or “artificial leaf”) attempts to do the same thing by using functional dye molecules.

Building on the results of previous research, the scientists chose to use two types of porphyrin dye molecules for this purpose, which they arranged on a clay surface. The molecules’ tendency to aggregate or segregate on the clay surface made it challenging for the researchers to arrange the molecules in a regular pattern like their natural counterparts.

“A molecular arrangement with an appropriate intermolecular distance is important to achieve nearly 100% energy transfer efficiency,” Takagi said. “If the intermolecular distance is too near, other reactions such as electron transfer and/or photochemical reactions would occur. If the intermolecular distance is too far, deactivation of excited dye surpasses the energy transfer reaction.”




In order to achieve the appropriate intermolecular distance, the scientists developed a novel preparation technique based on matching the distances between the charged sites in the porphyrin molecules and the distances between negatively charged (anionic) sites on the clay surface. This effect, which the researchers call the “Size-Matching Rule,” helped to suppress the major factors that contributed to the porphyrin molecules’ tendency to aggregate or segregate, and fixed the molecules at an appropriate, uniform intermolecular distance. As Takagi explained, this strategy is significantly different from other attempts at achieving molecular patterns.

“The methodology is unique,” he said. “In the case of usual self-assembly systems, the arrangement is realized by guest-guest interactions. In our system, host-guest interactions play a crucial role for realizing the special arrangement of dyes. Thus, by changing the host material, it is possible to control the molecular arrangement of dyes on the clay surface.”

As the researchers demonstrated, the regular arrangement of the molecules leads to an excited energy transfer efficiency of up to 100%. The results indicate that porphyrin dye molecules and clay host materials look like promising candidates for an artificial LHS.

“At the present, our system includes only two dyes,” Takagi said. “As the next step, the combination of several dyes to adsorb all sunlight is necessary. One of the characteristic points of our system is that it is easy to use several dyes at once. Thus, our system is a promising candidate for a real light-harvesting system that can use all sunlight. We believe that even photochemical reaction parts can be combined on the same clay surface. If this system is realized and is combined with a photochemical reaction center, this system can be called an ‘inorganic leaf.’”

More information: Yohei Ishida, et al. “Efficient Excited Energy Transfer Reaction in Clay/Porphyrin Complex toward an Artificial Light-Harvesting System.” Journal of the American Chemical Society


Saturday, August 6, 2011

Engineers Solve Longstanding Problem in Photonic Chip Technology: Findings Help Pave Way for Next Generation of Computer Chips


Stretching for thousands of miles beneath oceans, optical fibers now connect every continent except for Antarctica. With less data loss and higher bandwidth, optical-fiber technology allows information to zip around the world, bringing pictures, video, and other data from every corner of the globe to your computer in a split second. But although optical fibers are increasingly replacing copper wires, carrying information via photons instead of electrons, today's computer technology still relies on electronic chips.
Caltech engineers have developed a new way to isolate light on a photonic chip, allowing light to travel in only one direction. This finding can lead to the next generation of computer-chip technology: photonic chips that allow for faster computers and less data loss. (Credit: Caltech/Liang Feng)

Now, researchers led by engineers at the California Institute of Technology (Caltech) are paving the way for the next generation of computer-chip technology: photonic chips. With integrated circuits that use light instead of electricity, photonic chips will allow for faster computers and less data loss when connected to the global fiber-optic network.

"We want to take everything on an electronic chip and reproduce it on a photonic chip," says Liang Feng, a postdoctoral scholar in electrical engineering and the lead author on a paper to be published in the August 5 issue of the journal Science. Feng is part of Caltech's nanofabrication group, led by Axel Scherer, Bernard A. Neches Professor of Electrical Engineering, Applied Physics, and Physics, and co-director of the Kavli Nanoscience Institute at Caltech.

In that paper, the researchers describe a new technique to isolate light signals on a silicon chip, solving a longstanding problem in engineering photonic chips.

An isolated light signal can only travel in one direction. If light weren't isolated, signals sent and received between different components on a photonic circuit could interfere with one another, causing the chip to become unstable. In an electrical circuit, a device called a diode isolates electrical signals by allowing current to travel in one direction but not the other. The goal, then, is to create the photonic analog of a diode, a device called an optical isolator. "This is something scientists have been pursuing for 20 years," Feng says.

Normally, a light beam has exactly the same properties when it moves forward as when it's reflected backward. "If you can see me, then I can see you," he says. In order to isolate light, its properties need to somehow change when going in the opposite direction. An optical isolator can then block light that has these changed properties, which allows light signals to travel only in one direction between devices on a chip.

"We want to build something where you can see me, but I can't see you," Feng explains. "That means there's no signal from your side to me. The device on my side is isolated; it won't be affected by my surroundings, so the functionality of my device will be stable."

To isolate light, Feng and his colleagues designed a new type of optical waveguide, a 0.8-micron-wide silicon device that channels light. The waveguide allows light to go in one direction but changes the mode of the light when it travels in the opposite direction.

A light wave's mode corresponds to the pattern of the electromagnetic field lines that make up the wave. In the researchers' new waveguide, the light travels in a symmetric mode in one direction, but changes to an asymmetric mode in the other. Because different light modes can't interact with one another, the two beams of light thus pass through each other.
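
One way to picture the isolation scheme is as a toy model in code: forward light keeps its symmetric mode, backward light is converted to the asymmetric mode, and a filter that only passes the symmetric mode then blocks it. The two string-valued "modes" and the filter below are a deliberate cartoon of the idea, not the actual electromagnetic modes or device layout from the paper:

    # Toy model of a mode-converting waveguide followed by a symmetric-mode filter.
    def waveguide(mode, direction):
        """Forward light keeps its mode; backward light is converted to the asymmetric mode."""
        if direction == "forward":
            return mode
        return "asymmetric"

    def mode_filter(mode):
        """Only the symmetric mode is passed on; the asymmetric mode is rejected."""
        return mode if mode == "symmetric" else None

    # Signal travelling forward: passes through unchanged.
    print(waveguide("symmetric", "forward"))                 # 'symmetric'

    # Reflected signal travelling backward: converted, then blocked at the filter.
    print(mode_filter(waveguide("symmetric", "backward")))   # None -> the device is isolated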



Previously, there were two main ways to achieve this kind of optical isolation. The first way -- developed almost a century ago -- is to use a magnetic field. The magnetic field changes the polarization of light -- the orientation of the light's electric-field lines -- when it travels in the opposite direction, so that the light going one way can't interfere with the light going the other way. "The problem is, you can't put a large magnetic field next to a computer," Feng says. "It's not healthy."

The second conventional method requires so-called nonlinear optical materials, which change light's frequency rather than its polarization. This technique was developed about 50 years ago, but is problematic because silicon, the material that's the basis for the integrated circuit, is a linear material. If computers were to use optical isolators made out of nonlinear materials, silicon would have to be replaced, which would require revamping all of computer technology. But with their new silicon waveguides, the researchers have become the first to isolate light with a linear material.

Although this work is just a proof-of-principle experiment, the researchers are already building an optical isolator that can be integrated onto a silicon chip. An optical isolator is essential for building the integrated, nanoscale photonic devices and components that will enable future integrated information systems on a chip. Current, state-of-the-art photonic chips operate at 10 gigabits per second (Gbps) -- hundreds of times the data-transfer rates of today's personal computers -- with the next generation expected to soon hit 40 Gbps. But without built-in optical isolators, those chips are much simpler than their electronic counterparts and are not yet ready for the market. Optical isolators like those based on the researchers' designs will therefore be crucial for commercially viable photonic chips.

In addition to Feng and Scherer, the other authors on the Science paper, "Non-reciprocal light propagation in a silicon photonic circuit," are Jingqing Huang, a Caltech graduate student; Maurice Ayache of UC San Diego and Yeshaiahu Fainman, Cymer Professor in Advanced Optical Technologies at UC San Diego; and Ye-Long Xu, Ming-Hui Lu, and Yan-Feng Chen of the Nanjing National Laboratory of Microstructures in China. This research was done as part of the Center for Integrated Access Networks (CIAN), one of the National Science Foundation's Engineering Research Centers. Fainman is also the deputy director of CIAN. Funding was provided by the National Science Foundation, and the Defense Advanced Research Projects Agency.

Tuesday, July 19, 2011

Automakers Give Flywheels a Spin: An old technology could make hybrid cars much cheaper


The automakers Volvo and Jaguar are testing the possibility of using flywheels instead of batteries in hybrid electric vehicles to aid acceleration and help engines operate more efficiently. The devices could reduce fuel consumption by 20 percent and would cost a third as much as batteries. Volvo will begin road-testing a car with the technology this fall.
A computer model of Volvo's flywheel, with an outer section cut away. Credit: Volvo

In a flywheel system, energy from the wheels is used to spin a flywheel at high speeds. The flywheel continues spinning, storing energy until that motion can be transferred back to the wheels via a transmission. The idea isn't new, but it's hard to make flywheels efficient—a lot of energy can be lost to friction. In 1982, for example, GM engineered a flywheel system that was intended for its 1985 vehicles, but the company canceled the project after discovering that the fuel efficiency improvements were less than half of what it had expected. Advances in the technology now have automakers taking a second look. "Industry has gone from being skeptical to thinking it can be done, but there are enormous challenges," says Derek Crabb, vice president of powertrain engineering for Volvo.

Engineers who design Formula 1 race cars have tried to overcome the challenges of a flywheel system by using composite materials to save weight. To reduce friction, they've sealed the flywheels inside a vacuum chamber. In translating that system to passenger cars, automakers face the problem of how to maintain the vacuum, since the seals that connect the flywheel to a transmission aren't perfect.



This is fine in racing, where the system only has to last a couple of hours at a time, and can be overhauled by team mechanics. Consumer cars using a similar design would need a system to maintain the vacuum with pumps and valves—and that adds complexity and cost. In another approach, from the U.K. engineering firm Ricardo, the mechanical connection between the flywheel and the transmission is severed. Instead, energy from the flywheel is transferred to a transmission via magnets arranged around the circumference of the flywheel and in a ring outside the flywheel housing. By varying the ratio of the magnets in the flywheel to those arranged around it, it's possible to make the flywheel spin six times faster than the ring around it, which simplifies the transmission of energy.

One advantage of flywheel systems over batteries is their compact size. "Most hybrids with batteries provide a 15- to 25-kilowatt boost of power. The flywheel can deliver 60 kilowatts in a way smaller package," says Andrew Atkins, chief engineer of technology at Ricardo. The trade-off is that flywheels can't supply energy for very long.
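
A rough back-of-envelope calculation shows why a flywheel can deliver a big burst of power but not for long: the stored energy is E = ½Iω², and dividing it by a 60-kilowatt draw gives only seconds of boost. The mass, radius and speed below are assumed round numbers for illustration, not Volvo's or Ricardo's specifications:

    import math

    # Assumed flywheel: a 6 kg solid disc of 0.1 m radius spinning at 60,000 rpm.
    mass_kg = 6.0
    radius_m = 0.1
    rpm = 60000

    inertia = 0.5 * mass_kg * radius_m ** 2          # I = 1/2 m r^2 for a solid disc
    omega = rpm * 2 * math.pi / 60                   # angular speed in rad/s
    energy_j = 0.5 * inertia * omega ** 2            # E = 1/2 I w^2

    power_w = 60000                                   # the 60 kW boost quoted above
    print(f"{energy_j / 1000:.0f} kJ stored")
    print(f"{energy_j / power_w:.1f} seconds of full 60 kW boost")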

Crabb says Volvo hasn't decided if it will use a system such as Ricardo's or something else to maintain the vacuum. Many challenges remain in bringing a flywheel hybrid to market. For instance, automakers will have to ensure that the systems can be durable, and can be manufactured on a large scale, he says. Flywheels will also have to compete with batteries and other electrical storage devices such as ultracapacitors, which are getting more powerful and less expensive.

Sunday, July 17, 2011

Some People Talk About Space-Time Invisibility Cloaks. At Cornell, They Built One


Physicists have created a "hole in time" using the temporal equivalent of an invisibility cloak.
A Temporal 'Time Cloak' Envisioned. Credit: Moti Fridman et al., via arXiv

We’ve written previously about the theoretical possibility of “event cloaks”: metamaterial space-time devices that could theoretically conceal an entire event in time from the view of an outsider. Well, while some bright minds were just talking about bending space-time to their whims, a team at Cornell was doing it. And it works. For 110 nanoseconds.

There’s a more thorough explanation of this notion in our previous coverage, but briefly this is the idea: basically, you need two time-lenses--lenses that can compress and decompress light in time. This is actually possible to do using an electro-optic modulator (what, you don’t have one?). Basically, using two of these modulators you would slow down or compress the light traveling through the first lens, and then set up a second lens downrange from the first that would decompress, or accelerate, the incoming photons from the first lens.

Got that? Refer to this handy gif, courtesy of some blokes working on a similar idea at Imperial College London:
Paul Kinsler, Imperial College London



Think of the photons like steadily flowing traffic on a highway. If you slow the traffic at a point upstream, you create a gap. You can cross the highway through the gap and then accelerate that traffic to catch up to the traffic ahead, closing the gap. To someone further downstream, the gap is not there--to that observer, the gap might as well have never existed because there’s no evidence of it.
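
The highway analogy can be written out as a toy timeline: delay the "traffic" behind a chosen moment to open a gap, let the hidden event happen inside it, then speed the delayed traffic up so the stream looks unbroken downstream. The slot numbers below are arbitrary and have nothing to do with the actual optics:

    # Toy timeline version of the traffic analogy (units are arbitrary "time slots").
    arrivals = list(range(20))            # a steady stream: one photon per slot

    gap_start, gap_length = 10, 3         # open a 3-slot gap starting at slot 10

    # Upstream: delay everything that would arrive during or after the gap...
    cloaked = [t if t < gap_start else t + gap_length for t in arrivals]
    # ...the hidden event happens in slots 10-12, with no photons to record it...
    # Downstream: accelerate the delayed photons so the original spacing is restored.
    restored = [t if t < gap_start else t - gap_length for t in cloaked]

    print(cloaked[8:13])          # [8, 9, 13, 14, 15] -- the gap exists mid-flight
    print(restored == arrivals)   # True -- a downstream observer sees an unbroken stream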

During that gap, whatever occurs goes unrecorded. But, as we noted above, you’d have to be pretty quick were you to use such a device to pull some kind of shenanigans. The current device the Cornell gents have built creates a 110 nanosecond event gap, and they concede that the best it could achieve is 120 microseconds. But, as KFC notes at Technology Review, rarely is anything final in cutting edge theoretical physics.

Ref: arxiv.org/abs/1107.2062: Demonstration Of Temporal Cloaking


Saturday, July 16, 2011

Soft Memory Device Opens Door to New Biocompatible Electronics


Researchers from North Carolina State University have developed a memory device that is soft and functions well in wet environments -- opening the door to a new generation of biocompatible electronic devices.
Researchers have created a memory device with the physical properties of Jell-O, and that functions well in wet environments. (Credit: Michael Dickey, North Carolina State University)

"We've created a memory device with the physical properties of Jell-O," says Dr. Michael Dickey, an assistant professor of chemical and biomolecular engineering at NC State and co-author of a paper describing the research.

Conventional electronics are typically made of rigid, brittle materials and don't function well in a wet environment. "Our memory device is soft and pliable, and functions extremely well in wet environments -- similar to the human brain," Dickey says.

Prototypes of the device have not yet been optimized to hold significant amounts of memory, but work well in environments that would be hostile to traditional electronics. The devices are made using a liquid alloy of gallium and indium metals set into water-based gels, similar to gels used in biological research.

The device's ability to function in wet environments, and the biocompatibility of the gels, mean that this technology holds promise for interfacing electronics with biological systems -- such as cells, enzymes or tissue. "These properties may be used for biological sensors or for medical monitoring," Dickey says.



The device functions much like so-called "memristors," which are vaunted as a possible next-generation memory technology. The individual components of the "mushy" memory device have two states: one that conducts electricity and one that does not. These two states can be used to represent the 1s and 0s used in binary language. Most conventional electronics use electrons to create these 1s and 0s in computer chips. The mushy memory device uses charged molecules called ions to do the same thing.

In each of the memory device's circuits, the metal alloy is the circuit's electrode and sits on either side of a conductive piece of gel. When the alloy electrode is exposed to a positive charge, it creates an oxidized skin that makes it resistive to electricity. We'll call that the 0. When the electrode is exposed to a negative charge, the oxidized skin disappears, and it becomes conductive to electricity. We'll call that the 1.

Normally, whenever a negative charge is applied to one side of the electrode, the positive charge would move to the other side and create another oxidized skin -- meaning the electrode would always be resistive. To solve that problem, the researchers "doped" one side of the gel slab with a polymer that prevents the formation of a stable oxidized skin. That way one electrode is always conductive -- giving the device the 1s and 0s it needs for electronic memory.
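
The switching logic described above can be summarised in a toy software model: a positive charge grows an oxidized, resistive skin (read as 0), a negative charge removes it (read as 1). This is just a sketch of the logic, not a simulation of the device's electrochemistry:

    class MushyMemoryCell:
        """Toy model of one gel/alloy memory cell: oxidized skin = resistive = 0."""

        def __init__(self):
            self.oxidized = True          # assume the cell starts in the resistive state

        def apply_charge(self, polarity):
            if polarity == "+":
                self.oxidized = True      # positive charge grows the oxide skin
            elif polarity == "-":
                self.oxidized = False     # negative charge dissolves it

        def read(self):
            return 0 if self.oxidized else 1   # resistive -> 0, conductive -> 1

    cell = MushyMemoryCell()
    cell.apply_charge("-")
    print(cell.read())    # 1
    cell.apply_charge("+")
    print(cell.read())    # 0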

The paper was published online July 4 by Advanced Materials. The paper was co-authored by NC State Ph.D. students Hyung-Jun Koo and Ju-Hee So, and NC State INVISTA Professor of Chemical and Biomolecular Engineering Orlin Velev. The research was supported by the National Science Foundation and the U.S. Department of Energy.

NC State's Department of Chemical and Biomolecular Engineering is part of the university's College of Engineering.

Tuesday, July 12, 2011

How Google+ Will Balkanize Your Social Life


For many, the new service offers the chance to press "reset on Facebook."

Google launched its Facebook competitor, Google+, just over a week ago now. Even though sign-ups have so far been limited to a fraction of Facebook's 750 million users, it already appears that, for a lot of people, Google+ will become the other social network they need to use. Why? Because a significant fraction of their friends will force them to. 

It's not just that Google+ has 10-person video hangouts, or that Google+ is magically free of privacy worries. It's that Google has created the opportunity for Facebook-weary people to perform what one user called "a reset on Facebook," allowing them to escape from Facebook members they've friended over the years but don't really want to interact with—and can't quite bring themselves to defriend.

The killer feature of Google+ is that, unlike Facebook, LinkedIn, or most other social networks, there's no such thing as a friend request. Users can create groups of friends, called Circles in Google+ terminology. These circles can include both other Google+ users and nonusers who receive status updates via e-mail rather than via the site. As a Google+ user, you can share your status updates and favorite links with those in one or more of these easily created circles, or with everyone. And you can see what other users have shared with you, or with everyone, in a Facebook-like feed that runs down the middle of the page.

But you'll never be put in the awkward situation of receiving a friend request from someone you don't really want to be Google+ friends with. Nor will you have to face the awkward decision of whether or not to defriend a former confidant with whom you've fallen out. Just remove them from your circles, which are never revealed to other users. Other than that, Google+ looks and behaves a lot like Facebook.



Sure, Facebook has ways to filter, block, and organize other members so you don't have to share every update with, say, your parents. But on Google+, your parents can't send you a friend request, and the Circles system makes it one-click easy to share a tasteless video clip or a story of public drunkenness with your college friends without having to customize the update first. There's no way yet to share a post with everyone in your Best Buddies circle except those who are also in your Coworkers circle, but it would be easy to add to the system before Google takes Google+ out of its limited-membership trial period.
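
Under the hood, circles behave like plain sets of contacts, which is also why the missing "this circle minus that circle" option mentioned above looks easy to add: it is simply a set difference. A toy sketch (the names and data structure are hypothetical, not Google's implementation):

    # Toy model of Circles as plain sets of contacts.
    circles = {
        "Best Buddies": {"alice", "bob", "carol"},
        "Coworkers":    {"carol", "dave"},
        "Family":       {"mom", "dad"},
    }

    def audience(*circle_names):
        """Everyone in any of the named circles -- what sharing to circles does today."""
        people = set()
        for name in circle_names:
            people |= circles[name]
        return people

    print(audience("Best Buddies", "Coworkers"))
    # -> alice, bob, carol and dave (set printing order may vary)

    # The feature Google+ lacks so far: Best Buddies *except* those also in Coworkers.
    print(circles["Best Buddies"] - circles["Coworkers"])
    # -> alice and bob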

Another subtle difference from Facebook: Google+ doesn't yet have ads running down the side of the page, nor are there viral apps that spam all of one's Google+ friends with updates such as, "Jane Smith has taken a test!" Given the low-key format of Google's ads on its search engine and in its Gmail service, it seems likely that while some sort of advertising is inevitable, it won't be the kind that addles users' eyeballs and infuriates them with its intrusiveness. The most annoying ads Google sells will probably still be the ones that pop up at the bottom of Google's YouTube videos.

Having been on Google+ for a week, I'm enjoying the private-club feel of the place. The only updates I see are mostly from people I personally invited to join last week.

My feed also includes frequent posts from the usual social media early adopters, such as SoupSoup blogger Anthony de Rosa, who have an ear for the interesting. Checking two social networks instead of one is inconvenient, but the difference between Facebook and Google+ is currently like work versus play: people I feel obliged to network with on one (Facebook), and people I'm happy to kick back with on the other (Google+).

Google has since temporarily turned off the ability to invite others to join Google+, blocking new sign-ups with the message, "We have temporarily exceeded our capacity." Since when does Google have capacity issues, by the way? Most likely, Google is just taking it slow, while the first few users find their way around.

Eventually, Google will open up Google+ to everyone, which means former coworkers I've forgotten, people I went to school with 30 years ago, and an army of public relations professionals trying to network with me will show up. But unlike Facebook, I won't have to approve 984 friend requests. And unlike Facebook, on Google+ I won't feel rude when I block their updates from my feed. It's time for a reset.

Monday, July 11, 2011

U of T researchers build an antenna for light


University of Toronto researchers have derived inspiration from the photosynthetic apparatus in plants to engineer a new generation of nanomaterials that control and direct the energy absorbed from light.

Their findings are reported in a forthcoming issue of Nature Nanotechnology, which will be released on July 10, 2011.

The U of T researchers, led by Professors Shana Kelley and Ted Sargent, report the construction of what they term "artificial molecules."

"Nanotechnologists have for many years been captivated by quantum dots – particles of semiconductor that can absorb and emit light efficiently, and at custom-chosen wavelengths," explained co-author Kelley, a Professor at the Leslie Dan Faculty of Pharmacy, the Department of Biochemistry in the Faculty of Medicine, and the Department of Chemistry in the Faculty of Arts & Science. "What the community has lacked – until now – is a strategy to build higher-order structures, or complexes, out of multiple different types of quantum dots. This discovery fills that gap."

The team combined its expertise in DNA and in semiconductors to invent a generalized strategy to bind certain classes of nanoparticles to one another.

"The credit for this remarkable result actually goes to DNA: its high degree of specificity – its willingness to bind only to a complementary sequence – enabled us to build rationally-engineered, designer structures out of nanomaterials," said Sargent, a Professor in The Edward S. Rogers Sr. Department of Electrical & Computer Engineering at the University of Toronto, who is also the Canada Research Chair in Nanotechnology. "The amazing thing is that our antennas built themselves – we coated different classes of nanoparticles with selected sequences of DNA, combined the different families in one beaker, and nature took its course. The result is a beautiful new set of self-assembled materials with exciting properties."



Traditional antennas increase the amount of an electromagnetic wave – such as a radio frequency – that is absorbed, and then funnel that energy to a circuit. The U of T nanoantennas instead increased the amount of light that is absorbed and funneled it to a single site within their molecule-like complexes. This concept is already used in nature in light harvesting antennas, constituents of leaves that make photosynthesis efficient. "Like the antennas in radios and mobile phones, our complexes captured dispersed energy and concentrated it to a desired location. Like the light harvesting antennas in the leaves of a tree, our complexes do so using wavelengths found in sunlight," explained Sargent.

"Professors Kelley and Sargent have invented a novel class of materials with entirely new properties. Their insight and innovative research demonstrates why the University of Toronto leads in the field of nanotechnology," said Professor Henry Mann, Dean of the Leslie Dan Faculty of Pharmacy.

"This is a terrific piece of work that demonstrates our growing ability to assemble precise structures, to tailor their properties, and to build in the capability to control these properties using external stimuli," noted Paul S. Weiss, Fred Kavli Chair in NanoSystems Sciences at UCLA and Director of the California NanoSystems Institute.

Kelley explained that the concept published in today's Nature Nanotechnology paper is a broad one that goes beyond light antennas alone.

"What this work shows is that our capacity to manipulate materials at the nanoscale is limited only by human imagination. If semiconductor quantum dots are artificial atoms, then we have rationally synthesized artificial molecules from these versatile building blocks."

Friday, July 8, 2011

Teaching the neurons to meditate


In the late 1990s, Jane Anderson was working as a landscape architect. That meant she didn't work much in the winter, and she struggled with seasonal affective disorder in the dreary Minnesota winter months. She decided to try meditation and noticed a change within a month. "My experience was a sense of calmness, of better ability to regulate my emotions," she says. Her experience inspired a new study which will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science, which finds changes in brain activity after only five weeks of meditation training.

Previous studies have found that Buddhist monks, who have spent tens of thousands of hours meditating, have different patterns of brain activity. But Anderson, who did this research as an undergraduate student together with a team of University of Wisconsin-Stout faculty and students, wanted to know if they could see a change in brain activity after a shorter period.

At the beginning of the study, each participant had an EEG, a measurement of the brain's electrical activity. They were told: "Relax with your eyes closed, and focus on the flow of your breath at the tip of your nose; if a random thought arises, acknowledge the thought and then simply let it go by gently bringing your attention back to the flow of your breath."

Then 11 people were invited to take part in meditation training, while the other 10 were told they would be trained later. The 11 were offered two half-hour sessions a week, and encouraged to practice as much as they could between sessions, but there wasn't any particular requirement for how much they should practice.



After five weeks, the researchers did an EEG on each person again. Each person had done, on average, about seven hours of training and practice. But even with that little meditation practice, their brain activity was different from the 10 people who hadn't had training yet. People who had done the meditation training showed a greater proportion of activity in the left frontal region of the brain in response to subsequent attempts to meditate. Other research has found that this pattern of brain activity is associated with positive moods.

The shift in brain activity "was clearly evident even with a small number of subjects," says Christopher Moyer, one of Anderson's coauthors at the University of Wisconsin-Stout. "If someone is thinking about trying meditation and they were thinking, 'It's too big of a commitment, it's going to take too much rigorous training before it has an effect on my mind,' this research suggests that's not the case." For those people, meditation might be worth a try, he says. "It can't hurt and it might do you a lot of good."

"I think this implies that meditation is likely to create a shift in outlook toward life," Anderson says. "It has really worked for me."

Provided by Association for Psychological Science

Thursday, July 7, 2011

Solar Cells that See Red


Metamaterials that convert lower-energy photons to usable wavelengths could offer solar cells an efficiency boost.
Light switch: In a process that could make solar cells more efficient, green laser light is "upconverted" to blue light by a solution of dyes and metal nanoparticles. Credit: Jennifer Dionne

Researchers at Stanford University have demonstrated a set of materials that could enable solar cells to use a band of the solar spectrum that otherwise goes to waste. The materials layered on the back of solar cells would convert red and near-infrared light—unusable by today's solar cells—into shorter-wavelength light that the cells can turn into energy. The university researchers will collaborate with the Bosch Research and Technology Center in Palo Alto, California, to demonstrate a system in working solar cells in the next four years.

Even the best of today's silicon solar cells can't use about 30 percent of the light from the sun: that's because the active materials in solar cells can't interact with photons whose energy is too low. But though each of these individual photons is low energy, as a whole they represent a large amount of untapped solar energy that could make solar cells more cost-competitive.

The process, called "upconversion," relies on pairs of dyes that absorb photons of a given wavelength and re-emit them as fewer, shorter-wavelength photons. In this case, the Bosch and Stanford researchers will work on systems that convert near-infrared wavelengths (most of which are unusable by today's solar cells). The leader of the Stanford group, assistant professor Jennifer Dionne, believes the group can improve the sunlight-to-electricity conversion efficiency of amorphous-silicon solar cells from 11 percent to 15 percent.



The concept of upconversion isn't new, but it's never been demonstrated in a working solar cell, says Inna Kozinsky, a senior engineer at Bosch. Upconversion typically requires two types of molecules to absorb relatively long-wavelength photons, combine their energy, and re-emit it as higher-energy, shorter-wavelength photons. However, the chances of the molecules encountering each other at the right time when they're in the right energetic states are low. Dionne is developing nanoparticles to add to these systems in order to increase those chances. To make better upconversion systems, Dionne is designing metal nanoparticles that act like tiny optical antennas, directing light in these dye systems in such a way that the dyes are exposed to more light at the right time, which creates more upconverted light, and then directing more of that upconverted light out of the system in the end.

The ultimate vision, says Dionne, is to create a solid. Sheets of such a material could be laid down on the bottom of the cell, separated from the cell itself by an electrically insulating layer. Long-wavelength photons that pass through the active layer would be absorbed by the upconverter layer, then re-emitted back into the active layer as usable, shorter-wavelength light.
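
As a worked example of the energy bookkeeping behind upconversion: a photon's energy is E = hc/λ, so combining the energy of two identical long-wavelength photons yields one photon at half the wavelength. The 980-nanometre input below is an arbitrary near-infrared example, not a figure from the Stanford or Bosch work:

    # Photon energy E = h*c / wavelength; combining two photons halves the wavelength.
    h = 6.626e-34      # Planck constant, J*s
    c = 3.0e8          # speed of light, m/s

    wavelength_in = 980e-9                  # assumed near-infrared input, in metres
    energy_in = h * c / wavelength_in       # energy of one input photon, in joules

    energy_out = 2 * energy_in              # two photons' energy combined into one
    wavelength_out = h * c / energy_out

    print(round(energy_in / 1.602e-19, 2), "eV per input photon")     # ~1.27 eV
    print(round(wavelength_out * 1e9), "nm upconverted output")       # 490 nm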

Kozinsky says Bosch's goal is to demonstrate upconversion of red light in working solar cells in three years, and upconversion of infrared light in four years. Factoring in the time needed to scale up to manufacturing, she says, the technology could be in Bosch's commercial solar cells in seven to 10 years.