
Sunday, July 3, 2011

Researchers decipher protein structure of key molecule in DNA transcription system


The research adds an important link to discoveries that have enabled scientists to gain a deeper understanding of how cells translate genetic information into the proteins and processes of life. The findings, published in the July 3 advance online issue of the journal Nature, were reported by a research team led by Yuichiro Takagi, Ph.D., assistant professor of biochemistry and molecular biology at Indiana University School of Medicine.
Scientists have deciphered the structure of an essential part of Mediator, a complex molecular machine that plays a vital role in regulating the transcription of DNA.

The fundamental operations of all cells are controlled by the genetic information – the genes – stored in each cell's DNA, a long double-stranded chain. Information copied from sections of the DNA – through a process called transcription – leads to synthesis of messenger RNA, eventually enabling the production of the proteins necessary for cellular function. Transcription is carried out by an enzyme called RNA polymerase II.

As cellular operations proceed, signals are sent to the DNA asking that some genes be activated and others be shut down. The Mediator transcription regulator accepts and interprets those instructions, telling RNA polymerase II where and when to begin the transcription process.

Mediator is a gigantic molecular machine composed of 25 proteins organized into three modules known as the head, the middle, and the tail. Using X-ray crystallography, the Takagi team was able to describe in detail the structure of the Mediator Head module, the most important for interactions with RNA polymerase II.

"It's turned out to be extremely novel, revealing how a molecular machine is built from multiple proteins," said Takagi.

"As a molecular machine, the Mediator head module needs to have elements of both stability and flexibility in order to accommodate numerous interactions. A portion of the head we named the neck domain provides the stability by arranging the five proteins in a polymer-like structure," he said.



"We call it the alpha helical bundle," said Dr. Takagi. "People have seen structures of alpha helical bundles before but not coming from five different proteins."
A research team led by Yuichiro Takagi, Ph.D., Indiana University School of Medicine, has deciphered the structure of an essential part of Mediator, a complex molecular machine that plays a vital role in regulating the transcription of DNA.

"This is a completely novel structure," he said.

One immediate benefit of the research will be to provide detailed mapping of previously known mutations that affect the regulation of the transcription process, he said.

The ability to solve such complex structures will be important because multi-protein complexes such as Mediator will most likely become a new generation of drug targets for treatment of disease, he said.

Previously, the structure of RNA polymerase II was determined by Roger Kornberg of Stanford University, with whom Dr. Takagi worked prior to coming to IU School of Medicine. Kornberg received the Nobel Prize in 2006 for his discoveries. The researchers who described the structure of the ribosome, the protein production machine, were awarded the Nobel Prize in 2009. The structure of the entire Mediator has yet to be described, and thus remains one of the grand challenges in structural biology. Dr. Takagi's work on the Mediator Head module structure represents a major step toward determining the structure of the entire Mediator. In addition to Dr. Takagi as senior author, the lead author for the Nature paper was Tsuyoshi Imasaki, Ph.D., of the IU School of Medicine. Other collaborators included researchers at The Scripps Research Institute, Stanford University, Memorial Sloan-Kettering Cancer Center and the European Molecular Biology Laboratory.

Flapping micro air vehicles inspired by swifts


Scientists have designed a micro aircraft that will be able to flap, glide and hover like a bird.
This shows the wake of a swift in slow forward flight; the new design mimics these birds to improve MAV performance. Credit: William Thielicke

Researchers from the Biomimetics-Innovation-Centre in Germany have been inspired by birds to produce a new, versatile design of micro air vehicle (MAV) that combines flapping wings, which allow it to fly at slow speeds and hover, with the ability to glide, ensuring good quality images from any on-board camera.

"In birds, the combination of demanding tasks like take-off, travelling long distances, manoeuvring in confined areas and landing is daily practice," explains PhD researcher Mr. William Thielicke, who is presenting this work at the Society for Experimental Biology Annual Conference in Glasgow on the 2nd of July.



Micro air vehicles (MAVs) are small unmanned aircraft, often used for rescue or reconnaissance missions in areas where it would be dangerous or impractical for humans to go. Credit: William Thielicke
This innovative design was inspired by one bird in particular, the swift. "We know that swifts are very manoeuvrable and they can glide very efficiently. So we thought these birds would be a very good starting point for an energy efficient flapping-wing MAV," says Mr. Thielicke.

While fixed wing MAVs are energy efficient, their manoeuvrability is low. The new design would allow the flapping wing MAV to glide, improving energy efficiency and ensuring good images but when needed it can also slow its flight and manoeuvre in confined spaces.

"Although the models are not yet ready to be used, initial tests are positive and we hope that this design will combine the best of both worlds," says Mr. Thielicke.

Provided by Society for Experimental Biology

A Tablet that Wants to Take Over the Desktop


Cisco has redesigned the Android operating system to make a tablet that also works as a desktop computer—but it takes some control away from users.
New look: The Cius tablet features a radically different version of Android. Credit: Cisco.

The latest entrant in the increasingly crowded tablet computing field, Cisco's Cius, is bulkier than the iPad and has a smaller screen (7 inches, compared to the iPad's 9.7). But it packs a number of tricks all its own, designed to woo business users. The Cius is designed to integrate closely with Cisco's voice and video phone systems, and it can even replace a desktop computer when docked to a new Cisco deskphone, which connects to a monitor, keyboard and mouse.

A Cius tablet makes a user's desk number mobile, enabling a person to make and receive voice and video calls anywhere, if their company has a Cisco phone system. The tablet features HD-quality cameras front and back and can be used with a Bluetooth headset for more private calling. The tablet can also be used as a desktop videoconferencing device when docked on a special desktop phone, and can smoothly switch between a WiFi and a cellular network connection.

That dock can also be plugged into a monitor, keyboard and mouse to act like a desktop computer. "It can replace my desktop operating system," says Tom Puorro, senior director for Cisco's collaboration technologies.

The Cius runs Google's Android mobile operating system, which is used on a rapidly growing number of smartphones and tablets as well. Android is open source, meaning it can be modified by anyone for free, yet so far most companies that have built gadgets running Android have tinkered with it little. The Cius, in contrast, features a radical reworking of Android.

This gives an IT department much greater control over what a Cius user can do. IT managers can shut down access to the Android app market to protect a company from malicious apps. Cisco has also created its own app store, AppHQ, that contains only apps deemed stable and secure by Cisco. Companies can even create their own app store within AppHQ and limit employees to certain applications, or apps built in house.

A WiFi-only version of the tablet will be available worldwide from July 31 at an estimated price of $750. Cisco will sell it along with related services and infrastructure, so the cost to businesses will vary, and could be as low as $650. AT&T and Verizon will each offer versions for their 4G networks this fall.



A person can use the tablet's own OS, or even Windows via a virtual desktop that runs in the cloud, as Puorro demonstrated at a launch event held in San Jose today. The tablet's powerful 1.6GHz Intel Atom processor allows desktop-like performance when hooked up to a keyboard, mouse and monitor. Although iPads are showing up in workplaces, they can't offer the same integration with everyday tasks like phone calls, and are limited to email, Web browsing and video, says Puorro.

Cisco worked with Google to get advice on its modifications to Android, says Puorro. These modifications enable Android to deal with video and operations like group calling and transferring calls, and make use of a dedicated chip in the tablet that encrypts all its data.
Desk mode: The Cius can be plugged into a new Cisco phone, and function as a desktop computer.
Credit: Cisco

However, the Cius lags other Android tablets in that it uses a now-outdated version of the operating system, code-named FroYo, which was intended only for phones. Cisco says it will catch up, but is waiting for the fall release of Android, code-named Ice Cream Sandwich, a version that Google says will seamlessly span phones and tablets.

Ken Dulaney, a VP and analyst with Gartner specializing in mobile devices, says that Cisco has likely delivered something that none of the 200 or so other tablets launching this year can match. "Samsung's latest Galaxy Tab has much more advanced hardware," he says. "What Cisco has done is create a special case of Android that adds things the enterprise needs and is a unique combination of phone, tablet and videoconferencing device."

Other companies, including Motorola, have hinted at plans for enterprise-friendly revamps of Android, says Dulaney, but none has yet delivered.

Although the Cius may not seem competitive with Apple's iPad 2 to consumers, businesses concerned about security will likely see distinct advantages. Apps such as MobileIron exist to help IT staff control iPads used by employees, but Apple's operating system fundamentally limits the extent to which the iPad can be managed remotely, says Dulaney. "With Android, Cisco could go in at a low level and change how the device is managed so a company can manage everything for the user."

Without an existing investment in Cisco phone and communication systems, though, many companies may see little appeal. Puorro says that Cisco continues to develop and release iPad and iPhone apps for its collaboration software, a strategy Dulaney says is wise. "Of course Cisco will also aggressively support iPads," he says. "I think they're gonna see how the Cius does, and if it doesn't work out, work hard to support the most popular tablets."

Saturday, July 2, 2011

Auto-pilots need a bird's-eye view



New research on how birds can fly so quickly and accurately through dense forests may lead to new developments in robotics and auto-pilots.
The pigeons were fitted with a tiny head-camera before they flew through the artificial forest. Credit: Talia Moore

Scientists from Harvard University trained pigeons to fly through an artificial forest with a tiny camera attached to their heads, literally giving a bird's-eye view. "Attaching the camera to the bird as well as filming them from either side means we can reconstruct both what the bird sees and how it moves," says Dr. Huai-Ti Lin, a lead researcher for this work who has special insight into flying as he is a remote control airplane pilot himself.

The methods pigeons use to navigate through difficult environments could be used as a model for auto-pilot technology. Pigeons, with more than 300 degrees of panoramic vision, are well suited to this task because this wrap-around vision allows them to assess obstacles on either side. They can also stabilise their vision and switch rapidly between views using what is called a "head saccade", a small rapid movement of the head.
This image shows a pigeon, fitted with a camera, about to fly through the artificial forest that can be seen in the background. Credit: Talia Moore

This research is being presented at the Society for Experimental Biology annual conference in Glasgow on the 1st of July, 2011.

The researchers also showed that the birds have other skills that would be important for auto-piloted machines; for example, they tend to choose the straightest routes. "This is a very efficient way of getting through the forest, because the birds have to make fewer turns and therefore use less energy, but also because they reach the other side quicker," says Dr Lin. "Another interesting finding is that pigeons seem to exit the forest heading in exactly the same direction as when they entered, in spite of all the twists and turns they made in the forest."

When using a robot or an unmanned aircraft it would be invaluable to simply provide it with the coordinates of the destination without having to give it detailed information about all the obstacles it might meet on the way. "If we could develop the technology to follow the same methods as birds we could let the robot get on with it without giving it any more input," says Dr. Lin.

Provided by Society for Experimental Biology

Magnetic memory and logic could achieve ultimate energy efficiency


Future computers may rely on magnetic microprocessors that consume the least amount of energy allowed by the laws of physics, according to an analysis by University of California, Berkeley, electrical engineers.
In magnetic contrast images (top) taken by the Advanced Light Source at Lawrence Berkeley National Laboratory, the bright spots are nanomagnets with their north ends pointing down (represented by red bar below) and the dark spots are north-up nanomagnets (blue). The six nanomagnets form a majority logic gate transistor, where the output on the right of the center bar is determined by the majority of three inputs on the top, left and bottom. Horizontal neighboring magnets tend to point in alternate directions, while vertical neighbors prefer to point in the same direction. Credit: Jeffrey Bokor lab, UC Berkeley

Today's silicon-based microprocessor chips rely on electric currents, or moving electrons, that generate a lot of waste heat. But microprocessors employing nanometer-sized bar magnets – like tiny refrigerator magnets – for memory, logic and switching operations theoretically would require no moving electrons.

Such chips would dissipate only 18 millielectron volts of energy per operation at room temperature, the minimum allowed by the second law of thermodynamics and called the Landauer limit. That's 1 million times less energy per operation than consumed by today's computers.
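The 18-millielectron-volt figure follows directly from the Landauer bound, E = kT ln 2 per irreversible bit operation. A quick check in Python (the Boltzmann constant below is a standard physical value, not a number taken from the paper):

```python
import math

# Boltzmann constant in electron volts per kelvin (standard value)
K_B_EV = 8.617333e-5

def landauer_limit_ev(temp_kelvin):
    """Minimum energy dissipated by one irreversible bit operation
    at the given temperature: E = k * T * ln(2)."""
    return K_B_EV * temp_kelvin * math.log(2)

# At room temperature (~300 K) the limit is roughly 18 meV per operation
limit_mev = landauer_limit_ev(300) * 1000
print(f"{limit_mev:.1f} meV")  # prints 17.9 meV
```

Because the bound scales linearly with temperature, cooling a circuit lowers the floor proportionally, which is why the article later notes that cooled circuits would be even more efficient.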

"Today, computers run on electricity; by moving electrons around a circuit, you can process information," said Brian Lambson, a UC Berkeley graduate student in the Department of Electrical Engineering and Computer Sciences. "A magnetic computer, on the other hand, doesn't involve any moving electrons. You store and process information using magnets, and if you make these magnets really small, you can basically pack them very close together so that they interact with one another. This is how we are able to do computations, have memory and conduct all the functions of a computer."

Lambson is working with Jeffrey Bokor, UC Berkeley professor of electrical engineering and computer sciences, to develop magnetic computers.
Nanomagnetic computers use tiny bar magnets to store and process information. The interactions between the polarized, north-south magnetic fields of closely spaced magnets allow logic operations like those in conventional transistors. Credit: Jeffrey Bokor lab, UC Berkeley
"In principle, one could, I think, build real circuits that would operate right at the Landauer limit," said Bokor, who is a codirector of the Center for Energy Efficient Electronics Science (E3S), a Science and Technology Center founded last year with a $25 million grant from the National Science Foundation. "Even if we could get within one order of magnitude, a factor of 10, of the Landauer limit, it would represent a huge reduction in energy consumption for electronics. It would be absolutely revolutionary."

One of the center's goals is to build computers that operate at the Landauer limit.



Lambson, Bokor and UC Berkeley graduate student David Carlton published a paper about their analysis online today (Friday, July 1) in the journal Physical Review Letters.

Fifty years ago, Rolf Landauer used newly developed information theory to calculate the minimum energy a logical operation, such as an AND or OR operation, would dissipate given the limitation imposed by the second law of thermodynamics. (In a standard logic gate with two inputs and one output, an AND operation produces an output when it has two positive inputs, while an OR operation produces an output when one or both inputs are positive.) That law states that an irreversible process – a logical operation or the erasure of a bit of information – dissipates energy that cannot be recovered. In other words, the entropy of any closed system cannot decrease.

In today's transistors and microprocessors, this limit is far below other energy losses that generate heat, primarily through the electrical resistance of moving electrons. However, researchers such as Bokor are trying to develop computers that don't rely on moving electrons, and thus could approach the Landauer limit. Lambson decided to theoretically and experimentally test the limiting energy efficiency of a simple magnetic logic circuit and magnetic memory.

The nanomagnets that Bokor, Lambson and their colleagues use to build magnetic memory and logic devices are about 100 nanometers wide and about 200 nanometers long. Because they have the same north-south polarity as a bar magnet, the up-or-down orientation of the pole can be used to represent the 0 and 1 of binary computer memory. In addition, when multiple nanomagnets are brought together, their north and south poles interact via dipole-dipole forces to exhibit transistor behavior, allowing simple logic operations.

"The magnets themselves are the built-in memory," Lambson said. "The real challenge is getting the wires and transistors working."

Lambson showed through calculations and computer simulations that a simple memory operation – erasing a magnetic bit, an operation often called "restore to one" – can be conducted with an energy dissipation very close, if not identical to, the Landauer limit.

He subsequently analyzed a simple magnetic logical operation. The first successful demonstration of a logical operation using magnetic nanoparticles was achieved by researchers at the University of Notre Dame in 2006. In that case, they built a three-input majority logic gate using 16 coupled nanomagnets. Lambson calculated that a computation with such a circuit would also dissipate energy at the Landauer limit.
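The three-input majority gate mentioned here has a property worth illustrating: fixing one input turns it into an AND or an OR gate, which is why a single magnetic primitive suffices for general logic. A minimal sketch in plain Python truth-table logic (not a physical simulation of the nanomagnets):

```python
def majority(a, b, c):
    """Three-input majority gate: output is 1 when at least two inputs are 1."""
    return 1 if (a + b + c) >= 2 else 0

def and_gate(a, b):
    """AND emerges from a majority gate with one input pinned to 0."""
    return majority(a, b, 0)

def or_gate(a, b):
    """OR emerges from a majority gate with one input pinned to 1."""
    return majority(a, b, 1)

assert majority(1, 1, 0) == 1   # two of three inputs set -> output 1
assert majority(0, 1, 0) == 0   # only one input set -> output 0
assert and_gate(1, 1) == 1 and and_gate(1, 0) == 0
assert or_gate(1, 0) == 1 and or_gate(0, 0) == 0
```

In the nanomagnetic version, "pinning" an input corresponds to holding one magnet in a fixed orientation while dipole coupling settles the output magnet to the majority state.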

Because the Landauer limit is proportional to temperature, circuits cooled to low temperatures would be even more efficient.

At the moment, electrical currents are used to generate a magnetic field to erase or flip the polarity of nanomagnets, which dissipates a lot of energy. Ideally, new materials will make electrical currents unnecessary, except perhaps for relaying information from one chip to another.

"Then you can start thinking about operating these circuits at the upper efficiency limits," Lambson said.

"We are working now with collaborators to figure out a way to put that energy in without using a magnetic field, which is very hard to do efficiently," Bokor said. "A multiferroic material, for example, may be able to control magnetism directly with a voltage rather than an external magnetic field."

Other obstacles remain as well. For example, as researchers push the power consumption down, devices become more susceptible to random fluctuations from thermal effects, stray electromagnetic fields and other kinds of noise.

"The magnetic technology we are working on looks very interesting for ultra low power uses," Bokor said. "We are trying to figure out how to make it more competitive in speed, performance and reliability. We need to guarantee that it gets the right answer every single time with a very, very, very high degree of reliability."
Provided by University of California - Berkeley

Loudest Animal Is Recorded for the First Time


Scientists have shown for the first time that the loudest animal on Earth, relative to its body size, is the tiny water boatman, Micronecta scholtzi. At 99.2 decibels, this is the equivalent of listening to an orchestra play loudly while sitting in the front row.


The water boatman (Micronecta scholtzi), shown at the top left, is only 2 mm long but is the loudest animal ever to be recorded, relative to its body size, outperforming all marine and terrestrial species. (Credit: Images courtesy of Dr. Jérôme Sueur, Muséum national d'Histoire naturelle, Paris)

The frequency of the sound (around 10 kHz) is within human hearing range and Dr. James Windmill of the University of Strathclyde, explains one clue as to how loud the animals are: "Remarkably, even though 99% of sound is lost when transferring from water to air, the song is so loud that a person walking along the bank can actually hear these tiny creatures singing from the bottom of the river."
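The "99% of sound is lost" figure can be put in perspective with a quick decibel calculation. This is a back-of-envelope sketch that treats the 99% figure as a simple power loss at the water-air boundary; the actual boundary acoustics are more involved:

```python
import math

def transmission_loss_db(fraction_transmitted):
    """Loss in decibels when only the given fraction of acoustic
    power makes it across a boundary."""
    return -10 * math.log10(fraction_transmitted)

loss = transmission_loss_db(0.01)      # 99% of the power lost at the surface
print(loss)                            # 20.0 dB lost crossing into air
print(round(99.2 - loss, 1))           # ~79 dB remaining, still clearly audible
```

Losing 99% of the power costs only 20 dB, so a 99.2 dB underwater song still reaches the riverbank at roughly the level of loud conversation or city traffic, consistent with Dr. Windmill's remark that a passer-by can hear the insects.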

The song, used by males to attract mates, is produced by rubbing two body parts together, in a process called stridulation. In water boatmen the area used for stridulation is only about 50 micrometres across, roughly the width of a human hair. "We really don't know how they make such a loud sound using such a small area," says Dr. Windmill.

The researchers, who are presenting their work at the Society for Experimental Biology Annual Conference in Glasgow, are now keen to bring together aspects of biology and engineering to clarify how and why such a small animal makes such a loud noise, and to explore the practical applications. Dr. Windmill explains: "Biologically this work could be helpful in conservation as recordings of insect sounds could be used to monitor biodiversity. From the engineering side it could be used to inform our work in acoustics, such as in sonar systems."

Quantum 'Graininess' of Space at Smaller Scales? Gamma-Ray Observatory Challenges Physics Beyond Einstein


The European Space Agency's Integral gamma-ray observatory has provided results that will dramatically affect the search for physics beyond Einstein. It has shown that any underlying quantum 'graininess' of space must be at much smaller scales than previously predicted.
Gamma-ray burst captured by Integral's IBIS instrument. (Credit: ESA/SPI Team/ECF)

Einstein's General Theory of Relativity describes the properties of gravity and assumes that space is a smooth, continuous fabric. Yet quantum theory suggests that space should be grainy at the smallest scales, like sand on a beach.

One of the great concerns of modern physics is to marry these two concepts into a single theory of quantum gravity.

Now, Integral has placed stringent new limits on the size of these quantum 'grains' in space, showing them to be much smaller than some quantum gravity ideas would suggest.

According to calculations, the tiny grains would affect the way that gamma rays travel through space. The grains should 'twist' the light rays, changing the direction in which they oscillate, a property called polarisation.

High-energy gamma rays should be twisted more than the lower energy ones, and the difference in the polarisation can be used to estimate the size of the grains.
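One common way such birefringence searches are parametrized (an illustrative assumption on my part, not a formula quoted in the ESA release) has the polarisation rotation angle grow with the square of the photon energy. The unknown proportionality constant then cancels when comparing two energy bands, which is exactly why the analysis looks at polarisation *differences*:

```python
def rotation_ratio(e_high, e_low):
    """Relative polarisation rotation for photons at two energies,
    assuming rotation angle proportional to energy squared (one common
    quantum-gravity parametrization; the unknown prefactor cancels)."""
    return (e_high / e_low) ** 2

# Under this assumption, 400 keV gamma rays would be twisted 16 times
# more than 100 keV ones over the same path
print(rotation_ratio(400, 100))  # prints 16.0
```

Finding no measurable difference between bands, as reported below, therefore pushes the prefactor (and hence the grain size) down.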

Philippe Laurent of CEA Saclay and his collaborators used data from Integral's IBIS instrument to search for the difference in polarisation between high- and low-energy gamma rays emitted during one of the most powerful gamma-ray bursts (GRBs) ever seen.



GRBs come from some of the most energetic explosions known in the Universe. Most are thought to occur when very massive stars collapse into neutron stars or black holes during a supernova, leading to a huge pulse of gamma rays lasting just seconds or minutes, but briefly outshining entire galaxies.

GRB 041219A took place on 19 December 2004 and was immediately recognised as being in the top 1% of GRBs for brightness. It was so bright that Integral was able to measure the polarisation of its gamma rays accurately.

Dr Laurent and colleagues searched for differences in the polarisation at different energies, but found none to the accuracy limits of the data.

Some theories suggest that the quantum nature of space should manifest itself at the 'Planck scale': the minuscule 10^-35 of a metre, where a millimetre is 10^-3 m.

However, Integral's observations are about 10,000 times more accurate than any previous measurement and show that any quantum graininess must be at a level of 10^-48 m or smaller.

"This is a very important result in fundamental physics and will rule out some string theories and quantum loop gravity theories," says Dr Laurent.

Integral made a similar observation in 2006, when it detected polarised emission from the Crab Nebula, the remnant of a supernova explosion just 6500 light years from Earth in our own galaxy.

This new observation is much more stringent, however, because GRB 041219A was at a distance estimated to be at least 300 million light years.

In principle, the tiny twisting effect due to the quantum grains should have accumulated over the very large distance into a detectable signal. Because nothing was seen, the grains must be even smaller than previously suspected.

"Fundamental physics is a less obvious application for the gamma-ray observatory, Integral," notes Christoph Winkler, ESA's Integral Project Scientist. "Nevertheless, it has allowed us to take a big step forward in investigating the nature of space itself."

Now it's over to the theoreticians, who must re-examine their theories in the light of this new result.

Friday, July 1, 2011

NASA engineer proposes new type of fusion thruster for space travel


John J. Chapman, a physicist working for NASA, has presented an idea for a new type of fusion thruster for possible use by space-traveling vehicles at the IEEE Symposium taking place in Chicago this week. In the presentation, as explained on IEEE Spectrum, Chapman suggests that boron be used as an “aneutronic” fuel source, stating that doing so makes the energetic particles easier to deal with than traditional materials.
Illustration: NASA Langley Research Center

Chapman’s idea is to use an off-the-shelf laser to shoot at a two-layer target. The first layer would be a “thick” sheet of metal foil; each laser pulse would drive an out-rush of electrons from the foil, leaving behind an increased positive charge. The mutual repulsion among the protons left behind would produce a small explosion, accelerating the protons toward the second layer, a thin slice of boron-11.



When those protons hit the boron, carbon nuclei would be formed, excited by the impact. Each would immediately decay into a helium-4 nucleus and a beryllium nucleus, and the beryllium would then decay into a pair of alpha particles. Each reaction would therefore yield three alpha particles, which Chapman describes as “very efficient.”

Electromagnetic forces would then push the alpha particles and the target in opposite directions, with the alpha particles exiting through a nozzle. The end result: the craft carrying the fusion thruster would be pushed forward. With the amounts tested, each blip of the laser should theoretically create 100,000 particles, and with some fine tuning, according to Chapman, that would make it far more efficient than current ion propulsion systems.

Unfortunately, as great as this all sounds, it doesn’t mean we’ll have spacecraft utilizing such technology any time soon. Even if it pans out as Chapman suggests, he says it would still likely be a decade before anything tangible could be produced, and that’s if a concerted effort were made over that time frame by scientists all over the world to figure out how to make it all work as proposed.
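The particle bookkeeping of the reaction chain can be sketched as a toy model. Reading the article's 100,000-particles-per-pulse figure as reactions per pulse is an assumption of mine, and no actual reaction physics (cross-sections, energies) is modeled:

```python
# p + B-11 -> C-12* -> He-4 + Be-8, then Be-8 -> 2 He-4:
# three alpha particles per reaction, and no neutrons (aneutronic)
ALPHAS_PER_REACTION = 3

def alphas_per_pulse(reactions_per_pulse):
    """Total alpha particles produced by one laser pulse,
    counting three alphas for each p + B-11 reaction."""
    return reactions_per_pulse * ALPHAS_PER_REACTION

print(alphas_per_pulse(100_000))  # prints 300000
```

The absence of neutrons in the product list is the point of the "aneutronic" label: charged alpha particles can be steered electromagnetically out of a nozzle, whereas neutrons cannot.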

WiFi 'napping' doubles phone battery life


A Duke University graduate student has found a way to double the battery life of mobile devices – such as smartphones or laptop computers – by making changes to WiFi technology.
This is Justin Manweiler from Duke University.
Credit: Justin Manweiler

WiFi is a popular wireless technology that helps users download information from the Internet. Such downloads, including pictures, music and video streaming, can be a major drain on the battery.

The energy drain is especially severe in the presence of other WiFi devices in the neighborhood. In such cases, each device has to "stay awake" until it gets its turn to download a small piece of the desired information. This means that the battery drain in downloading a movie in Manhattan is far higher than in downloading the same movie at a farmhouse in the Midwest, the researchers said.

The Duke-developed software eliminates this problem by allowing mobile devices to sleep while a neighboring device is downloading information. This saves energy not only for the sleeping device, but for competing devices as well.
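The staggering idea can be illustrated with a toy duty-cycle model. The slot counts below are illustrative only; SleepWell's real scheduling is adaptive and distributed rather than fixed offsets:

```python
def contended_slots(schedules, period):
    """Count time slots in which more than one access point is awake,
    i.e. slots where devices must contend and stay awake waiting."""
    busy = [0] * period
    for start, length in schedules:
        for t in range(start, start + length):
            busy[t % period] += 1
    return sum(1 for n in busy if n > 1)

PERIOD = 100  # slots per activity cycle (arbitrary units)

# Three access points, each awake for 30 slots per cycle.
synchronized = [(0, 30), (0, 30), (0, 30)]   # everyone wakes together
staggered    = [(0, 30), (33, 30), (66, 30)]  # SleepWell-style offsets

print(contended_slots(synchronized, PERIOD))  # prints 30
print(contended_slots(staggered, PERIOD))     # prints 0
```

With synchronized cycles every awake slot is contended, so clients idle while waiting their turn; with staggered cycles the same total airtime is used but no slot is shared, which is the "flexible office hours" effect Manweiler describes below.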

The new system has been termed SleepWell by Justin Manweiler, a graduate student in computer science working under the direction of Romit Roy Choudhury, assistant professor of electrical and computer engineering at Duke's Pratt School of Engineering. The SleepWell system was presented at the Association for Computing Machinery's ninth International Conference on Mobile Systems, Applications and Services (MobiSys), held in Washington, D.C.

Manweiler described the system by analogy: "Big cities face heavy rush hours as workers come and leave their jobs at similar times. If work schedules were more flexible, different companies could stagger their office hours to reduce the rush. With less of a rush, there would be more free time for all, and yet, the total number of working hours would remain the same."



"The same is true of mobile devices trying to access the Internet at the same time," Manweiler said. "The SleepWell-enabled WiFi access points can stagger their activity cycles to minimally overlap with others, ultimately resulting in promising energy gains with negligible loss of performance."

With cloud computing on the horizon, mobile devices will need to access the Internet more frequently -- however, such frequent access could be severely constrained by the energy toll that WiFi takes on the device's battery life, according to Roy Choudhury.

"Energy is certainly a key problem for the future of mobile devices, such as iPhones, iPads and Android smartphones," Roy Choudhury said. "The SleepWell system can certainly be an important upgrade to WiFi technology, especially in the light of increasing WiFi density."

Manweiler said that "the testing we conducted across a number of device types and situations gives us confidence that SleepWell is a viable approach for the near future."
Provided by Duke University

How Social Pressure Can Affect What We Remember: Scientists Track Brain Activity as False Memories Are Formed



How easy is it to falsify memory? New research at the Weizmann Institute shows that a bit of social pressure may be all that is needed. The study, which appears in the journal Science, reveals a unique pattern of brain activity when false memories are formed -- one that hints at a surprising connection between our social selves and memory.
New research reveals a unique pattern of brain activity when false memories are formed -- one that hints at a surprising connection between our social selves and memory. (Credit: Image courtesy of Weizmann Institute of Science)

The experiment, conducted by Prof. Yadin Dudai and research student Micah Edelson of the Institute's Neurobiology Department with Prof. Raymond Dolan and Dr. Tali Sharot of University College London, took place in four stages. In the first, volunteers watched a documentary film in small groups. Three days later, they returned to the lab individually to take a memory test, answering questions about the film. They were also asked how confident they were in their answers.

They were later invited back to the lab to retake the test while being scanned in a functional MRI (fMRI) scanner that revealed their brain activity. This time, the subjects were also given a "lifeline": the supposed answers of the others in their film viewing group (along with social-media-style photos). Planted among these were false answers to questions the volunteers had previously answered correctly and confidently. The participants conformed to the group on these "planted" responses, giving incorrect answers nearly 70% of the time.

But were they simply conforming to perceived social demands, or had their memory of the film actually undergone a change? To find out, the researchers invited the subjects back to the lab to take the memory test once again, telling them that the answers they had previously been fed were not those of their fellow film watchers, but random computer generations. Some of the responses reverted back to the original, correct ones, but close to half remained erroneous, implying that the subjects were relying on false memories implanted in the earlier session.

An analysis of the fMRI data showed differences in brain activity between the persistent false memories and the temporary errors of social compliance. The most outstanding feature of the false memories was a strong co-activation and connectivity between two brain areas: the hippocampus and the amygdala. The hippocampus is known to play a role in long-term memory formation, while the amygdala, sometimes known as the emotion center of the brain, plays a role in social interaction. The scientists think that the amygdala may act as a gateway connecting the social and memory processing parts of our brain; its "stamp" may be needed for some types of memories, giving them approval to be uploaded to the memory banks. Thus social reinforcement could act on the amygdala to persuade our brains to replace a strong memory with a false one.


Prof. Yadin Dudai's research is supported by the Norman and Helen Asher Center for Human Brain Imaging, which he heads; the Nella and Leon Benoziyo Center for Neurological Diseases; the Carl and Micaela Einhorn-Dominic Institute of Brain Research, which he heads; the Marc Besen and the Pratt Foundation, Australia; Lisa Mierins Smith, Canada; Abe and Kathryn Selsky Memorial Research Project; and Miel de Botton, UK. Prof. Dudai is the incumbent of the Sara and Michael Sela Professorial Chair of Neurobiology.