Showing posts with label Google. Show all posts

Monday, April 17, 2023

Google Project Magi: The Future of Search


 

  • Google's New AI Search Engine Will Change the Way You Search

Magi is designed to be more personalized and helpful than ever before, using artificial intelligence to anticipate your needs and provide you with the information you need, when you need it.

Some of the features that Magi will offer include:

  • Personalized search results: Magi will learn your preferences and interests over time, and use that information to deliver more relevant results.
  • Natural language processing: Magi will be able to understand your natural language queries, even if they are incomplete or ambiguous.
  • Smart answers: Magi will be able to provide you with smart answers to your questions, even if they are open-ended or challenging.
  • Transactional search: Magi will allow you to complete transactions directly from the search results, such as booking flights or buying products.

Magi is still in development, but it has the potential to revolutionize the way we search the web. Stay tuned for more information as it becomes available!

Tuesday, July 12, 2011

How Google+ Will Balkanize Your Social Life


For many, the new service offers the chance to press "reset" on Facebook.

Google launched its Facebook competitor, Google+, just over a week ago now. Even though sign-ups have so far been limited to a fraction of Facebook's 750 million users, it already appears that, for a lot of people, Google+ will become the other social network they need to use. Why? Because a significant fraction of their friends will force them to. 

It's not just that Google+ has 10-person video hangouts, or that Google+ is magically free of privacy worries. It's that Google has created the opportunity for Facebook-weary people to perform what one called "a reset on Facebook," allowing them to escape from Facebook members they've friended over the years but don't really want to interact with—and can't quite bring themselves to defriend.

The killer feature of Google+ is that, unlike Facebook, LinkedIn, or most other social networks, there's no such thing as a friend request. Users can create groups of friends, called Circles in Google+ terminology. These circles can include both other Google+ users and nonusers who receive status updates via e-mail rather than via the site. As a Google+ user, you can share your status updates and favorite links with those in one or more of these easily created circles, or with everyone. And you can see what other users have shared with you, or with everyone, in a Facebook-like feed that runs down the middle of the page.

But you'll never be put in the awkward situation of receiving a friend request from someone you don't really want to be Google+ friends with. Nor will you have to face the awkward decision of whether or not to defriend a former confidant with whom you've fallen out. Just remove them from your circles, which are never revealed to other users. Other than that, Google+ looks and behaves a lot like Facebook.



Sure, Facebook has ways to filter, block, and organize other members so you don't have to share every update with, say, your parents. But on Google+, your parents can't send you a friend request, and the Circles system makes it one-click easy to share a tasteless video clip or a story of public drunkenness with your college friends without having to customize the update first. There's no way yet to share a post with everyone in your Best Buddies circle except those who are also in your Coworkers circle, but it would be easy to add to the system before Google takes Google+ out of its limited-membership trial period.

Another subtle difference from Facebook: Google+ doesn't yet have ads running down the side of the page, nor are there viral apps that spam all of one's Google+ friends with updates such as, "Jane Smith has taken a test!" Given the low-key format of Google's ads on its search engine and in its Gmail service, it seems likely that while some sort of advertising is inevitable, it won't be the kind that addles users' eyeballs and infuriates them with its intrusiveness. The most annoying ads Google sells will probably still be the ones that pop up at the bottom of Google's YouTube videos.

Having been on Google+ for a week, I'm enjoying the private-club feel of the place. Most of the updates I see are from people I personally invited to join last week.

My feed also includes frequent posts from the usual social media early adopters, such as SoupSoup blogger Anthony de Rosa, who have an ear for the interesting. Checking two social networks instead of one is inconvenient, but the difference between Facebook and Google+ is currently like work versus play: people I feel obliged to network with on one (Facebook), and people I'm happy to kick back with on the other (Google+).

Google has since temporarily turned off the ability to invite others to join Google+, blocking new sign-ups with the message, "We have temporarily exceeded our capacity." Since when does Google have capacity issues, by the way? Most likely, Google is just taking it slow while the first few users find their way around.

Eventually, Google will open up Google+ to everyone, which means former coworkers I've forgotten, people I went to school with 30 years ago, and an army of public relations professionals trying to network with me will show up. But unlike Facebook, I won't have to approve 984 friend requests. And unlike Facebook, on Google+ I won't feel rude when I block their updates from my feed. It's time for a reset.

Thursday, June 30, 2011

Can Google Get Web Users Talking?


Voice-driven search is a futuristic idea, and may take some getting used to.
Credit: Google

The notion of asking a computer for information out loud is familiar to most of us only from science fiction. Google is trying to change that by adding speech recognition to its search engine, and releasing technology that would allow any browser, website, or app to use the feature.

But are you ready to give up your keyboards and talk to Google instead?

Over the last two weeks, speech input for Google has gradually been rolled out to every person using Google's Chrome browser. A microphone icon appears at the right end of the iconic search box. If your computer has a built-in or attached microphone, clicking that icon creates a direct audio connection to Google's servers, which will convert your spoken words into text.

It has been possible to speak Google search queries using a smart phone for almost three years; since last year, Android handsets have been able to take voice input in any situation where a keyboard would normally be used. "That was transformational, because people stopped worrying about when they could and couldn't speak to the phone," says Vincent Vanhoucke, who leads the voice search engineering team at Google. Over the last 12 months, the number of spoken inputs, search or otherwise, via Android devices has increased sixfold, and every day, tens of thousands of hours of audio speech are fed into Google's servers. "On Android, a large fraction of the use is people dictating e-mail and SMS," says Vanhoucke.

Vanhoucke's team now wants using voice on the Web to be as easy as it is on Android. "It's a big bet," he says. "Voice search for desktop is the flagship for this, [but] we want to take speech everywhere."

Voice recognition is more technically challenging on a desktop or laptop computer, says Vanhoucke, because it requires noise suppression algorithms that are not needed for mobile speech recognition. These algorithms filter out sounds such as those of a computer's fan or air conditioners. "The quality of the audio is paramount for phone manufacturers, and you hold it close to your mouth," says Vanhoucke. "On a PC, the microphone is an afterthought, and you are further away. You don't get the best quality."



Google asked thousands of people to read phrases aloud to their computers to gather data on the conditions its speech recognition technology would have to handle. As people use the service for real, it is trained further, says Vanhoucke, which should increase its popularity. Data from users of mobile voice search shows that people are much more likely to use the feature again when it is accurate for them the first time.

A bigger challenge to getting users to embrace voice recognition on the desktop could be the existing tools for entering information, says Keith Vertanen, a lecturer at Princeton University who researches voice-recognition technology. "On the desktop, you're up against a very fast and efficient means of input in the keyboard," he says. "On a phone, you don't have that available, and you are often in hands- or eyes-free situations where voice input really helps."

Vertanen says people are less tolerant of glitches when using speech recognition on a desktop computer because of the close proximity of a tried-and-true way of entering text. He says users might find voice recognition more compelling on other Internet-connected devices in the home. "Nonconventional devices like a DVR, television, or game console don't usually have good text input," he points out. Google TV devices can already take voice input spoken into a connected Android phone.

Vanhoucke acknowledges that speech recognition fulfills a more immediate need on phones, but argues that users are ready for it on conventional computers, too. "People will use it in ways that surprise us," he says. "At this point, it's still an experiment." Situations in which people have their hands full are one example, says Vanhoucke (although it should be noted that desktop voice search today still involves using the mouse to activate the feature).

Google isn't performing this experiment alone. The company is pushing the Web standards body W3C to introduce a standard set of HTML markup that allows any website or app to call on voice recognition via the Web browser, and has already enabled a version of this markup in the Chrome browser. For now, Google is the only major company with a browser able to use the prototype feature, but Mozilla, Microsoft, and AT&T are all working with the W3C effort.

"It's a collaborative effort that other browser makers are part of," says Vanhoucke. "Any designer can add it to their Web page. It's something anyone can use." Extensions for the Chrome browser that make use of voice input (like this one) have already appeared, and can be used to enter text on any website.

However, those extensions reveal that although Google's desktop speech recognition is accurate for search queries, it's not much good for tasks like composing e-mail.

Letting the system learn the personal quirks of each person's pronunciation, a feature already available on Android phones, could address that. Vertanen points out that the personalization learned through mobile search could easily be ported over to the desktop for people logged into their Google account. It could also make it possible for the technology to spring up elsewhere. "The advantage of Google's networked approach is that a [speech] model in the cloud can adapt to your voice in all these different places and follow you around, whether that's in your living room or in your car."



Tuesday, June 28, 2011

Facebook May Mobilize on Web Apps


Developers are abuzz with a rumored project that could provide a new platform for mobile apps.

Rumor has it that Facebook is trying to sidestep Apple's App Store and Google's Android Market with a neat technical trick: a Web-based platform for apps.

Facebook has yet to confirm the existence of the effort, allegedly code-named "Project Spartan." But if the rumor is true, the effort could threaten Apple and Google's dominance in mobile software, and give a boost to Web applications over native apps, by appealing to Facebook's huge and captive user base and by leveraging the social connections between users.

Facebook already lets developers build apps to run on top of its platform, and they've created thousands of games, utilities, and even business apps. But these are designed for the desktop, not the mobile or tablet platforms that are growing rapidly in popularity.

Mobile Web apps built on top of Facebook that run entirely in the browser, using widely supported technologies like HTML5, JavaScript, and CSS, would free developers from the need to create several versions of their software for different mobile platforms. Developers could also use Facebook Credits, which the company is hoping to expand into a universal micropayment system accessible across the Web. Facebook takes the same cut from Credits that Apple does from its App Store: 30 percent.

"If the rumors are true, it means that Facebook is planning to use Web technologies to create a whole new app ecosystem for iOS-based and other mobile devices," says Ron Perry, chief technology officer at Worklight, a company that provides tools for building mobile applications.

Facebook could also increase its influence in the mobile market by creating a platform for apps that Apple would never approve, or giving developers more favorable terms than the current 30 percent cut.

All this might make it seem inevitable that Facebook would undertake something like Project Spartan. But to succeed at creating an alternate Web-only app ecosystem and payment platform that spans many devices, it will need to overcome a number of challenges.



For one thing, Apple is now in the position that Microsoft was in 20 years ago: it controls the software on its devices and has little incentive to make the environment more hospitable to competing models of application delivery. Indeed, in March, some developers accused Apple of crippling native apps that use Web content on the iPhone and iPad by saddling them with a JavaScript engine only half as fast as the Nitro engine that runs in mobile Safari, the default browser on Apple's mobile devices. It's debatable whether or not this bug was intentional.

Apple may ultimately be forced to offer better support for applications that reside in the browser. "At the end of the day, for platforms to be successful, they have to give consumers what they want," says David Koretz, CEO of the Web-application security firm Mykonos Software. He argues that consumer demand will push mobile companies to offer the best Web experience possible.

Another, potentially more significant issue hanging over the future of mobile apps is the fact that HTML is poorly suited to the kind of app that has so far made the most money for both Facebook and Apple: games. Long-time Apple observer John Gruber sees HTML's limitations as fundamental to the difference between Apple's App Store and Facebook's rumored effort.

"Don't think of what Facebook is reportedly attempting as a would-be rival to the iOS App Store. Think of it as the mobile equivalent of Flash games for Macs and PCs. Obviously, there would be some competitive overlap, but there's a fundamental difference in scope and quality," Gruber said in an e-mail.

Another truth that Facebook needs to confront is that previous efforts to create Web-based apps have fizzled. Apple, in fact, maintains a directory of Web apps—a holdover from the days before developers were able to create native apps for the iPhone. But it has little incentive to promote these. OpenAppMkt, another repository of mobile Web apps, has failed to make much of a dent in the App Store or Android Market. Google itself sells Web apps, through the Chrome Web Store, but these are primarily for desktops. A significant barrier each of these efforts has run into is the lack of an easy-to-use payment system. Apple already has 200 million iTunes accounts, allowing its users a level of impulse purchasing unheard of in the history of commerce.

Whatever challenges Facebook faces, if the most-visited website in the United States does start pushing mobile Web apps, this could be huge for the penetration of applications based on open Web-browser standards. "Facebook's reach can definitely bring Web apps to the limelight and make this an attractive option for app publishers," says Worklight's Perry.


Tuesday, June 21, 2011

Genius of Einstein, Fourier key to new humanlike computer vision



Two new techniques for computer-vision technology mimic how humans perceive three-dimensional shapes by instantly recognizing objects no matter how they are twisted or bent, an advance that could help machines see more like people.
This graphic illustrates a new computer-vision technology that builds on the basic physics and mathematical equations related to how heat diffuses over surfaces. The technique mimics how humans perceive three-dimensional shapes by instantly recognizing objects no matter how they are twisted or bent, an advance that could help machines see more like people. Here, a "heat mean signature" of a human hand model is used to perceive the six segments of the overall shape and define the fingertips. (Purdue University image/Karthik Ramani and Yi Fang)

The techniques, called heat mapping and temperature distribution, apply mathematical methods to enable machines to perceive three-dimensional objects, said Karthik Ramani, Purdue University's Donald W. Feddersen Professor of Mechanical Engineering.

"Humans can easily perceive 3-D shapes, but it's not so easy for a computer," he said. "We can easily separate an object like a hand into its segments - the palm and five fingers - a difficult operation for computers."

Both of the techniques build on the basic physics and mathematical equations related to how heat diffuses over surfaces.

"Albert Einstein made contributions to diffusion, and 18th century physicist Jean Baptiste Joseph Fourier developed Fourier's law, used to derive the heat equation," Ramani said. "We are standing on the shoulders of giants in creating the algorithms for these new approaches using the heat equation."

As heat diffuses over a surface, it follows and captures the precise contours of a shape. The system takes advantage of this "intelligence of heat," simulating heat flowing from one point to another and in the process characterizing the shape of an object, he said.

Findings will be detailed in two papers being presented during the IEEE Computer Vision and Pattern Recognition conference on June 21-23 in Colorado Springs. The papers were written by Ramani, Purdue doctoral students Yi Fang and Mengtian Sun, and Minhyong Kim, a professor of pure mathematics at University College London.

A major limitation of existing methods is that they require "prior information" about a shape in order for it to be analyzed.

Researchers developing a new machine-vision technique tested their method on certain complex shapes, including the human form or a centaur – a mythical half-human, half-horse creature. The heat mapping allows a computer to recognize the objects no matter how the figures are bent or twisted and is able to ignore "noise" introduced by imperfect laser scanning or other erroneous data. (Purdue University image/Karthik Ramani and Yi Fang)

"For example, in order to do segmentation you have to tell the computer ahead of time how many segments the object has," Ramani said. "You have to tell it that you are expecting, say, 10 segments or 12 segments."

The new methods mimic the human ability to properly perceive objects because they don't require a preconceived idea of how many segments exist.

"We are trying to come as close as possible to human segmentation," Ramani said. "A hot area right now is unsupervised machine learning. This means a machine, such as a robot, can perceive and learn without having any previous training. We are able to estimate the segmentation instead of giving a predefined number of segments."



The work is funded partially by the National Science Foundation. A patent on the technology is pending.

The methods have many potential applications, including a 3-D search engine to find mechanical parts such as automotive components in a database; robot vision and navigation; 3-D medical imaging; military drones; multimedia gaming; creating and manipulating animated characters in film production; helping 3-D cameras to understand human gestures for interactive games; and contributing to progress in areas of science and engineering related to pattern recognition, machine learning, and computer vision.

The heat-mapping method works by first breaking an object into a mesh of triangles, the simplest shape that can characterize surfaces, and then calculating the flow of heat over the meshed object. The method does not involve actually tracking heat; it simulates the flow of heat using well-established mathematical principles, Ramani said.

Heat mapping allows a computer to recognize an object, such as a hand or a nose, no matter how the fingers are bent or the nose is deformed and is able to ignore "noise" introduced by imperfect laser scanning or other erroneous data.

"No matter how you move the fingers or deform the palm, a person can still see that it's a hand," Ramani said. "But for a computer to say it's still a hand is going to be hard. You need a framework - a consistent, robust algorithm that will work no matter if you perturb the nose and put noise in it or if it's your nose or mine."

The method accurately simulates how heat flows on the object while revealing its structure and distinguishing unique points needed for segmentation by computing the "heat mean signature." Knowing the heat mean signature allows a computer to determine the center of each segment, assign a "weight" to specific segments and then define the overall shape of the object.

"Being able to assign a weight to segments is critical because certain points are more important than others in terms of understanding a shape," Ramani said. "The tip of the nose is more important than other points on the nose, for example, to properly perceive the shape of the nose or face, and the tips of the fingers are more important than many other points for perceiving a hand."

In temperature distribution, heat flow is used to determine a signature, or histogram, of the entire object.

"A histogram is a two-dimensional mapping of a three-dimensional shape," Ramani said. "So, no matter how a dog bends or twists, it gives you the same signature."

The temperature distribution technique also uses a triangle mesh to perceive 3-D shapes. Both techniques, which could be combined in the same system, require modest computer power and recognize shapes quickly, he said.

"It's very efficient and very compact because you're just using a two-dimensional histogram," Ramani said. "Heat propagation in a mesh happens very fast because the mathematics of matrix computations can be done very quickly and well."

The researchers tested their method on certain complex shapes, including hands, the human form and a centaur, a mythical half-human, half-horse creature.

Sources: Karthik Ramani, 765-494-5725, ramani@purdue.edu

Yi Fang, fang4@purdue.edu

Note to Journalists: The papers are available by contacting Emil Venere, Purdue News Service, at 765-494-4709, venere@purdue.edu


British Library, Google in deal to digitize books



A treatise on a stuffed hippopotamus, an 18th-century English primer for Danish sailors and a description of the first engine-driven submarine are among 250,000 books to be made available online in a deal between Google and the British Library.
People work on laptops in a reading room at the British Library in London, Monday, June 20, 2011. A treatise on a stuffed hippopotamus, an 18th-century English primer for Danish sailors and a description of the first engine-driven submarine are among 250,000 books to be made available online in a deal between Google and the British Library. The agreement, announced Monday, will let Internet users read, search, download and copy thousands of texts published between 1700 and 1870. (AP Photo/Matt Dunham)

The agreement, announced Monday, will let Internet users read, search, download and copy thousands of texts published between 1700 and 1870.

It is a small step toward the library's goal of making the bulk of its 14 million books and 1 million periodicals available in digital form by 2020.

"So far we have only been able to digitize quite a small fraction of the global collection," said the library's chief executive, Lynne Brindley. "There is a long way to go."



The deal with Google, which will see 40 million pages digitized over the next three years, will offer online researchers a selection of rarely seen works from an era of social, political, scientific and technological change that took in the Enlightenment, the Industrial Revolution and the American war of independence.

The books range from Georges Louis Leclerc's "Natural History of the Hippopotamus, or River-Horse" - which includes a description of a stuffed animal owned by the Prince of Orange - to the 1858 work "A Scheme for Underwater Seafaring," describing the first combustion engine-driven submarine.

The books are more than scholarly curiosities. British Library curator Kristian Jensen said an 18th-century guide to English for Danish mariners shows "how English began to emerge from being the language spoken by people over there on that island" to become the world's dominant tongue.

Google will pay to digitize the books, which are no longer covered by copyright restrictions. They will be available on the British Library and Google Books websites.

Peter Barron, Google's European spokesman, declined to say how much the project would cost, beyond describing it as "a substantial sum."

Google has digitized 13 million books in similar deals with more than 40 libraries around the world. But its plan to put millions of copyrighted titles online has been opposed by the publishing industry and is the subject of a legal battle in the United States.

Barron said the company's goal "is to make as wide a range of items as possible" available online.

"Having richer content means people around the world are searching more for it, and that is good for our business," he said.

Last year, the British Library announced plans to digitize up to 40 million pages of newspapers dating back three-and-a-half centuries, and it recently made thousands of 19th-century books digitized in a deal with Microsoft available as an app for iPhone and iPad devices.

More information:
Google Books: http://books.google.com/
British Library: http://www.bl.uk/

Sunday, June 5, 2011

Can Google Know Where the Gmail Attack Came from?



The company blames China, but none of the evidence is definitive—which is the nature of such attacks.

On Tuesday, Google revealed a new spate of attacks aimed at Gmail users, and said the attacks appeared to have come from Jinan, China. The new attacks illustrate the difficulty of stopping hackers who use simple "social engineering" tricks to steal personal data, and they raise questions about how such attacks can ever be traced with certainty.

Personal accounts belonging to U.S. government officials, Chinese political activists, military personnel, and journalists were targeted, the company said in a blog post. Google has pointed to Chinese hackers before—in early 2010 it said attackers from the country had stolen its intellectual property and tried to access the Gmail accounts of human rights activists. The Chinese foreign ministry has vigorously rejected the idea that the Chinese government was responsible for the attacks.

Google says the attackers did not exploit any security holes in the company's e-mail service. Instead, they tricked users into sharing their log-in information. Carefully tailored messages, apparently written by a friend or colleague, were used to direct victims to a fake log-in page where their details were captured. This technique, known as "spear phishing," was also used recently to steal information from the prominent security company RSA—information that may have been used to perform further attacks on the company's customers.

Experts say this type of attack is hard to stop; unlike other types of attacks, there is no technical fix. "I think of incidents like this more as a series of successes and failures on the part of the attacker," says Nart Villeneuve, a senior threat researcher at Trend Micro, which makes antivirus, antispam, and Internet security software. "It's more of a campaign than it is a single attack."

Before joining Trend Micro, Villeneuve was heavily involved in tracking attacks on human-rights activists—he was part of the group that revealed a complex hacking operation that spied on figures including the Dalai Lama.

Villeneuve also says it's hard to identify the real source of this type of attack in order to cut it off. To pinpoint the source of the recent incidents, Google likely looked at a variety of clues, he says. The company could examine the IP addresses used to access e-mail accounts, which can reveal a user's location. The company could also look at the servers used to host fake log-in pages and collect users' personal information.

But this alone isn't enough, Villeneuve says. Attackers can easily take over computers located somewhere else, and use them to launch an attack. "Making your attack seem like it came from somewhere else is not hard," he says.

So Villeneuve says Google probably looked at many more clues to decide the source of the recent attacks. For example, he says, the company could have looked for patterns in the times that the attacks took place. Villeneuve believes that "from their point of good visibility, they could build up a lot of information."

Even then, Villeneuve emphasizes, it is extremely difficult to pin responsibility for the attacks on any single entity, organization, or nation.

Bruce Schneier, a prominent computer security expert and chief security officer of the British company BT, agrees. "Attacks don't come with a return address," he says. "This is a perennial problem. It's not a problem of anonymity; it's a problem of how the Internet works."

While there's good reason to suspect Chinese involvement, there's no way to know for sure, Schneier says. Routing an attack through China would be an excellent way for another interested party to throw investigators off their track, he says. But Schneier adds that the type of attack leveled at Gmail users is happening all the time.

Security researcher Mila Parkour identified and posted samples of some of the fake e-mail messages and fake Web pages used to trick Gmail users into handing over their log-in information. She notes that "the spear phishing method used in this attack is far from new or sophisticated," but points out that Web mail services offered by Google, Yahoo, and others don't offer users the same level of protection as many enterprise systems. What's more, she says, many users forward messages from business accounts to personal accounts, making the personal accounts worth targeting.

Villeneuve says that in some of the Web mail attacks he's studied, attackers seem to be gathering information about a user's computer or antivirus software. Since many people check personal e-mail at work, attackers might also be looking to gather information about systems at other locations that they want to target later, Villeneuve believes.



Though Google has gained headlines for coming forward with the recent news, Villeneuve notes that targeted attacks aimed at high-value individuals are "not just a Google problem." He's recently identified similar examples aimed at users of Yahoo mail and Hotmail, but he cannot confirm that they are related.

Thursday, April 21, 2011

Microsoft Browser Would Offer Personalization along with Privacy Protection



Today, many websites ask users to take a devil's deal: share personal information in exchange for receiving useful personalized services. New research from Microsoft, which will be presented at the IEEE Symposium on Security and Privacy in May, suggests the development of a Web browser and associated protocols that could strengthen the user's hand in this exchange. Called RePriv, the system mines a user's behavior via a Web browser but controls how the resulting information is released to websites that want to offer personalized services, such as a shopping site that automatically knows users' interests.
An experimental system would tighten the limits on information provided to websites.


"The browser knows more about the user's behavior than any individual site," says Ben Livshits, a researcher at Microsoft who was involved with the work. He and colleagues realized that the browser could therefore offer a better way to track user behavior, while it also protects the information that is collected, because users won't have to give away as much of their data to every site they visit.

The RePriv browser tracks a user's behavior to identify a list of his or her top interests, as well as the level of attention devoted to each. When the user visits a site that wants to offer personalization, a pop-up window will describe the type of information the site is asking for and give the user the option of allowing the exchange or not. Whatever the user decides, the site doesn't get specific information about what the user has been doing—instead, it sees the interest information RePriv has collected.
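The article doesn't give RePriv's actual protocol or API, but the flow it describes is easy to picture in a toy sketch (Python; every name below is hypothetical): the browser owns the mined interest profile, and a site receives only aggregated category scores, and only after the user consents:

    from dataclasses import dataclass, field

    @dataclass
    class InterestProfile:
        # Mined by the browser from behavior: category -> attention share.
        weights: dict = field(default_factory=dict)

    @dataclass
    class Browser:
        profile: InterestProfile

        def handle_request(self, site, categories, ask_user):
            # Show the user exactly what the site is asking for. On
            # refusal nothing is released; on approval the site gets
            # aggregate scores, never the raw browsing history.
            if not ask_user(f"{site} wants your interest in {categories}. Allow?"):
                return {}
            return {c: self.profile.weights.get(c, 0.0) for c in categories}

    # A news site asks for two categories; the user clicks "allow".
    browser = Browser(InterestProfile({"technology": 0.6, "travel": 0.3}))
    print(browser.handle_request("news.example.com", ["technology", "sports"],
                                 ask_user=lambda prompt: True))
    # -> {'technology': 0.6, 'sports': 0.0}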

Livshits explains that a news site could use RePriv to personalize a user's view of the front page. The researchers built a demonstration based on the New York Times website. It reorders the home page to reflect the user's top interests, also taking into account data collected from social sites such as Digg that suggests which stories are most popular within different categories.

Livshits admits that RePriv still gives sites some data about users. But he maintains that the user remains aware and in control. He adds that with cookies and other existing tracking techniques, sites already collect far more user data than RePriv supplies.

The researchers also developed a way for third parties to extend RePriv's capabilities. They built a demonstration browser extension that tracks a user's interactions with Netflix to collect more detailed data about that person's movie preferences. The extension could be used by a site such as Fandango to personalize the movie information it presents—again, with user permission.


"There is a clear tension between privacy and personalized technologies, including recommendations and targeted ads," says Elie Bursztein, a researcher at the Stanford Security Laboratory, who is developing an extension for the Chrome Web browser that enables more private browsing. "Putting the user in control by moving personalization into the browser offers a new way forward," he says.

"In the medium term, RePriv could provide an attractive interface for service providers that will dissuade them from taking more abusive approaches to customization," says Ari Juels, chief scientist and director of RSA Laboratories, a corporate research center.

Juels says RePriv is generally well engineered and well thought out, but he worries that the tool goes against "the general migration of data and functionality to the cloud." Many services, such as Facebook, now store information in the cloud, and RePriv wouldn't be able to get at data there—an omission that could hobble the system, he points out.

Juels is also concerned that most people would be permissive about the information they allow RePriv to release, and he believes many sites would exploit this. And he points out that websites with a substantial competitive advantage in the huge consumer-preference databases they maintain would likely resist such technology. "RePriv levels the playing field," he says. "This may be good for privacy, but it will leave service providers hungry." Therefore, he thinks, big players will be reluctant to cooperate with a system like this.

Livshits argues that some companies could use these characteristics of RePriv to their advantage. He says the system could appeal to new services, which struggle to give users a personalized experience the first time they visit a site. And larger sites might welcome the opportunity to get user data from across a person's browsing experience, rather than only from when the user visits their site. Livshits believes they might be willing to use the system and protect user privacy in exchange.

Sunday, April 10, 2011

A Browser that Speaks Your Language

The latest version of Google's Chrome shows the potential of HTML5.


Early adopters can now get a sneak peek at the future of the Web by downloading the latest prerelease, or "beta," version of Chrome, Google's Web browser. One of the most interesting new features is an ability to translate speech to text—entirely via the Web.
Credit: Technology Review

The feature is the result of work Google has been doing with the World Wide Web Consortium's HTML Speech Incubator Group, the mission of which is "to determine the feasibility of integrating speech technology in HTML5," the Web's new, emerging standard language.

A Web page employing the new HTML5 feature could have an icon that, when clicked, initiates a recording through the computer's microphone, via the browser. Speech is captured and sent to Google's servers for transcription, and the resulting text is sent back to the website.

To experiment with the voice-to-text feature, download the latest beta version of Chrome here. Then go to this webpage, click on the microphone, and start talking. You'll probably find the results mixed, and sometimes hilarious. Using the finest elocution I could muster, I read the opening passage of Richard Yates's Revolutionary Road: "The final dying sounds of their dress rehearsal left the Laurel Players with nothing to do but stand there, silent and helpless." I got error messages several times in a row ("speech not recognized" or "connection to speech servers failed"). Once, I received this transcription: "9 sounds good restaurants on the world there's nothing to do with fam vans island."

The new feature derives in large part from experiments Google conducted through its Android operating system for mobile devices. For more than a year, says Vincent Vanhoucke, a member of Google's voice recognition team, Android app developers have been able to integrate voice recognition into their apps using technology provided by Google. This has provided Google with useful voice data with which to train its voice-recognition algorithms. Today, some 20 percent of searches on Android phones are conducted using voice recognition, says Vanhoucke: people use voice recognition to write texts, send emails, or conduct searches. "It has really opened up interesting new avenues," says Vanhoucke.

However, unlike desktop voice-to-text software, which first accustoms itself to a user's voice, Chrome is trying to churn out text from voice without prior training.

"I suppose if they keep track of [the] IP address, they could adapt" to a given user's voice, says Jim Glass, a speech recognition expert at MIT. Glass notes that the mobile phone provides an acoustic environment very different from that of a laptop or desktop computer; for one thing, a phone's microphone is reliably placed right at the user's mouth, unlike computer microphone setups in homes or offices. "This is the beta version of Chrome," says Glass. "They'll be collecting data, and we can be sure they will be refining their models--that's the nature of the speech-recognition game."

Even if it's rough around the edges, sometimes the technology impresses. I tried once again and got back "the final warning sounds of the dress rehearsal at laurel players with nothing to do with stand there." Not so bad. And the Chrome app nailed it to the letter when all I said was "the quick brown fox jumps over the lazy dog."

Third-party programmers have also begun creating Web pages capable of using the new feature of Chrome. Already available for trial is a browser plugin called Speechify that lets you search Google, Hulu, YouTube, Amazon, and other sites using voice with Chrome.

Other inventive uses could soon follow. "Games could be taking keyboard, mouse, touch, accelerometer, and speech input together," says Karl Westin, an expert on HTML5 who works for Nerd Communications, based in Berlin, Germany. "Having an aeroplane game where you could actually scream 'up, UP, UUUPPP!' could be fantastic."

But the technology is more than just a toy—it also points the way to a much more capable Web. HTML4, the last major version of the HTML language, emerged in 1997. Since then, plugins like Silverlight and Flash have added media-processing capabilities to the Web. HTML5 builds capabilities such as media playback and offline storage into the browser itself, with no plugins required.

"The insight we had was that more and more people were spending all their time in the browser," says Google's Brian Rakowski, group product manager for Chrome. E-mail and instant messaging increasingly take place in browsers rather than in separate e-mail or AIM applications. "We'd like it to be case that you never have to install a native application again," says Rakowski. "The Web should be able to do all of it."

Wednesday, February 23, 2011

Scientists Steer Car With the Power of Thought


You need to keep your thoughts from wandering if you drive using the new technology from the AutoNOMOS innovation labs of Freie Universität Berlin. The computer scientists have developed a system making it possible to steer a car with your thoughts. Using new commercially available sensors to measure brain waves -- sensors for recording electroencephalograms (EEG) -- the scientists were able to distinguish the bioelectrical wave patterns for control commands such as "left," "right," "accelerate" or "brake" in a test subject.
Computer scientists have developed a system making it possible to steer a car with your thoughts. (Credit: Image courtesy of Freie Universitaet Berlin)

They then succeeded in developing an interface to connect the sensors to their otherwise purely computer-controlled vehicle, so that it can now be "controlled" via thoughts. Driving by thought control was tested on the site of the former Tempelhof Airport.

The scientists from Freie Universität first used the brain-wave sensors to let a person move a virtual cube in different directions by thought alone. The test subject thought of four situations associated with driving, for example, "turn left" or "accelerate." In this way the person trained the computer to interpret bioelectrical wave patterns emitted from his or her brain and to link them to a command that could later be used to control the car. The computer scientists then connected the measuring device to the steering, accelerator, and brakes of a computer-controlled vehicle, which made it possible for the subject to influence the movement of the car using his or her thoughts alone.
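The AutoNOMOS training pipeline isn't published in this announcement, so the following is only a minimal stand-in (Python, synthetic data, hypothetical feature vectors) for the train-then-classify loop it describes: record EEG features while the subject imagines each command, average them into templates, then map live readings to the nearest template:

    import numpy as np

    COMMANDS = ("left", "right", "accelerate", "brake")

    def train(examples):
        # examples: {command: (n_trials, n_features) array of EEG feature
        # vectors recorded while the subject imagines that command}.
        # "Training" here is simply averaging into one template each.
        return {cmd: feats.mean(axis=0) for cmd, feats in examples.items()}

    def classify(templates, sample):
        # Map a live EEG feature vector to the nearest trained command.
        return min(templates,
                   key=lambda c: np.linalg.norm(sample - templates[c]))

    rng = np.random.default_rng(seed=1)
    examples = {cmd: rng.normal(loc=i, scale=0.5, size=(20, 8))
                for i, cmd in enumerate(COMMANDS)}
    templates = train(examples)
    print(classify(templates, rng.normal(loc=3, scale=0.5, size=8)))  # "brake"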

"In our test runs, a driver equipped with EEG sensors was able to control the car with no problem -- there was only a slight delay between the envisaged commands and the response of the car," said Prof. Raúl Rojas, who heads the AutoNOMOS project at Freie Universität Berlin. In a second test version, the car drove largely automatically, but via the EEG sensors the driver was able to determine the direction at intersections.

The AutoNOMOS Project at Freie Universität Berlin is studying the technology for the autonomous vehicles of the future. With the EEG experiments they investigate hybrid control approaches, i.e., those in which people work with machines.

The computer scientists have made a short film about their research, which is available at: http://tinyurl.com/BrainDriver

Thursday, August 5, 2010

Atom's Electrons Seen Moving in Real Time


An international team of scientists led by groups from the Max Planck Institute of Quantum Optics (MPQ) in Garching, Germany, and from the U.S. Department of Energy's Lawrence Berkeley National Laboratory and the University of California at Berkeley has used ultrashort flashes of laser light to directly observe the movement of an atom's outer electrons for the first time.
In krypton’s single ionization state, quantum oscillations in the valence shell cycled in a little over six femtoseconds. Attosecond pulses probed the details (black dots), filling the gap in the outer orbital with an electron from an inner orbital, and sensing the changing degrees of coherence between the two quantum states thus formed (below). (Credit: Image courtesy of DOE/Lawrence Berkeley National Laboratory)

Through a process called attosecond absorption spectroscopy, researchers were able to time the oscillations between simultaneously produced quantum states of valence electrons with great precision. These oscillations drive electron motion.

"With a simple system of krypton atoms, we demonstrated, for the first time, that we can measure transient absorption dynamics with attosecond pulses," says Stephen Leone of Berkeley Lab's Chemical Sciences Division, who is also a professor of chemistry and physics at UC Berkeley. "This revealed details of a type of electronic motion -- coherent superposition -- that can control properties in many systems."

Leone cites recent work by the Graham Fleming group at Berkeley on the crucial role of coherent dynamics in photosynthesis as an example of its importance, noting that "the method developed by our team for exploring coherent dynamics has never before been available to researchers. It's truly general and can be applied to attosecond electronic dynamics problems in the physics and chemistry of liquids, solids, biological systems, everything."

The team's demonstration of attosecond absorption spectroscopy began by first ionizing krypton atoms, removing one or more outer valence electrons with pulses of near-infrared laser light that were typically measured on timescales of a few femtoseconds (a femtosecond is 10⁻¹⁵ second, a quadrillionth of a second). Then, with far shorter pulses of extreme ultraviolet light on the 100-attosecond timescale (an attosecond is 10⁻¹⁸ second, a quintillionth of a second), they were able to precisely measure the effects on the valence electron orbitals.

The results of the pioneering measurements performed at MPQ by the Leone and Krausz groups and their colleagues are reported in the August 5 issue of the journal Nature.

Parsing the fine points of valence electron motion

Valence electrons control how atoms bond with other atoms to form molecules or crystal structures, and how these bonds break and reform during chemical reactions. Changes in molecular structures occur on the scale of many femtoseconds and have often been observed with femtosecond spectroscopy, in which both Leone and Krausz are pioneers.

Zhi-Heng Loh of Leone's group at Berkeley Lab and UC Berkeley worked with Eleftherios Goulielmakis of Krausz's group to perform the experiments at MPQ. By firing a femtosecond pulse of infrared laser light through a chamber filled with krypton gas, the researchers ionized atoms in the path of the beam, stripping one to three valence electrons from their outermost shells.

The experimenters separately generated extreme-ultraviolet attosecond pulses (using the technique called "high harmonic generation") and sent the beam of attosecond probe pulses through the krypton gas on the same path as the near-infrared pump pulses.

By varying the time delay between the pump pulse and the probe pulse, the researchers found that subsequent states of increasing ionization were being produced at regular intervals, which turned out to be approximately equal to the time for a half cycle of the pump pulse. (The pulse is only a few cycles long; the time from crest to crest is a full cycle, and from crest to trough is a half cycle.)

"The femtosecond pulse produces a strong electromagnetic field, and ionization takes place with every half cycle of the pulse," Leone says. "Therefore little bursts of ions are coming out every half cycle."

Although expected from theory, these isolated bursts were not resolved in the experiment. The attosecond pulses, however, could precisely measure the production of the ionization, because ionization -- the removal of one or more electrons -- leaves gaps or "holes," unfilled orbitals that the ultrashort pulses can probe.

The attosecond pulses do so by exciting electrons from lower energy orbitals to fill the gap in krypton's outermost orbital -- a direct result of the absorption of the transient attosecond pulses by the atoms. After the "long" femtosecond pump pulse liberates an electron from the outermost orbital (designated 4p), the short probe pulse boosts an electron from an inner orbital (designated 3d), leaving behind a hole in that orbital while sensing the dynamics of the outermost orbital.

In singly charged krypton ions, two electronic states are formed. A wave-packet of electronic motion is observed between these two states, indicating that the ionization process forms the two states in what's known as quantum coherence.

Says Leone, "There is a continual 'orbital flopping' between the two states, which interfere with each other. A high degree of interference is called coherence." Thus when the attosecond probe pulse clocks the outer valence orbitals, it is really clocking the high degree of coherence in the orbital motion caused by ionization.
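The "little over six femtoseconds" cycling noted in the image caption above squares with elementary quantum mechanics: a coherent superposition of two states separated by energy ΔE beats with period T = h/ΔE. Taking ΔE ≈ 0.67 eV, the spin-orbit splitting of the krypton ion's 4p shell (a standard tabulated value, supplied here as context rather than taken from the article):

    T = \frac{h}{\Delta E} = \frac{4.136 \times 10^{-15}\,\mathrm{eV\,s}}{0.67\,\mathrm{eV}} \approx 6.2\,\mathrm{fs}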

Indispensable attosecond pulses

"When the bursts of ions are made quickly enough, with just a few cycles of the ionization pulse, we observe a high degree of coherence," Leone says. "Theoretically, however, with longer ionization pulses the production of the ions gets out of phase with the period of the electron wave-packet motion, as our work showed."

So after just a few cycles of the pump pulse, the coherence is washed out. Thus, says Leone, "Without very short, attosecond-scale probe pulses, we could not have measured the degree of coherence that resulted from ionization."

The physical demonstration of attosecond transient absorption by the combined efforts of the Leone and Krausz groups and their colleagues will, in Leone's words, "allow us to unravel processes within and among atoms, molecules, and crystals on the electronic timescale" -- processes that previously could only be hinted at with studies on the comparatively languorous femtosecond timescale.

The study -- by Eleftherios Goulielmakis, Zhi-Heng Loh, Adrian Wirth, Robin Santra, Nina Rohringer, Vladislav Yakovlev, Sergey Zherebtsov, Thomas Pfeifer, Abdallah Azzeer, Matthias Kling, Stephen Leone, and Ferenc Krausz -- appears in the Aug. 5, 2010 issue of the journal Nature. This work was supported by the Max Planck Society, King Saud University, and the Munich Center for Advanced Photonics. Stephen Leone's group is supported by the Air Force Office of Scientific Research, the National Science Foundation, and U.S. Department of Energy's Office of Science.

Sunday, March 28, 2010

HTC EVO 4G: Better Than the Nexus One?


Sprint's new HTC EVO 4G smartphone is being hailed as the new ruler of the Android empire. But has the crown really been passed?

The HTC EVO 4G, unveiled at the CTIA Wireless exhibition this week, sure has a feature-list fit for a king. The phone boasts a 4.3-inch capacitive touchscreen with HDMI output, dual front- and back-facing cameras, and a superspeedy 1GHz Snapdragon processor. Oh yeah -- and there's that whole 4G thing, too.


HTC EVO 4G vs. Nexus One: The Display

It's hard to miss all the gushing over the HTC EVO 4G's display, and there's a reason for the excitement: The phone has one sweet screen, and you don't have to be an Android fanboy to see that. The EVO 4G's 4.3-inch display beats the Nexus One's 3.7-inch offering (which beat practically everything else back when it debuted). Both devices feature the same WVGA resolution: 800-by-480.

HTC EVO 4G vs. Nexus One: The Data Network

Sprint's biggest selling point with the HTC EVO 4G is all about those final two characters. A 4G data connection, according to Sprint, brings you download speeds as much as 10 times faster than what you'd get on a flimsy old 3G alternative.

But -- and this is a big but (you're welcome, Sir Mix-a-Lot) -- you won't be able to get those tasty 4G connections in much of the country. So far, Sprint's 4G network is available only in 27 U.S. cities. The carrier has plans to expand to a handful of other major markets later this year, but that still leaves everyone else with that aforementioned flimsy old 3G.

Plus, the EVO 4G will be available only on Sprint -- so if you're in an area where network coverage is spotty, you'll be out of luck. The Nexus One, on the other hand, will soon be available on all major carriers, giving you greater choice in the data-providing department.

Which phone wins this category, then, truly depends on where you are and how the carriers' coverage compares for your specific area.

HTC EVO 4G vs. Nexus One: The Hardware

The HTC EVO 4G is powered by the same chip as the Nexus -- that snazzy-sounding 1GHz Snapdragon processor -- so there's a virtual tie in that department.

When it comes to cameras, the HTC EVO 4G is victorious: Its back has an 8-megapixel camera and its front features a 1.3-megapixel one. The Nexus One, in comparison, has a single 5-megapixel photo-snapper.

HTC EVO 4G vs. Nexus One: The Body

The HTC EVO 4G is slightly larger than its Google-endorsed cousin (4.8-by-2.6-by-0.5 inches, compared to 4.69-by-2.35-by-0.45 inches). It's about 1.4 ounces heavier, too.

A deal-breaker? Unless you're Thumbelina, probably not. 


HTC EVO 4G vs. Nexus One: The OS

Both the HTC EVO 4G and the Nexus One are running Android 2.1, the latest version of Google's mobile operating system. Despite the matching versions, however, the user experience will be quite different on the two phones.

The reason is that the HTC EVO 4G runs HTC's Sense user interface, while the Nexus One uses the stock Android interface. The Sense interface gives Android an entirely different look, with specialized home screen widgets and custom navigation tools. As far as which is better, it's really just a matter of personal preference.

One area where the Nexus One's setup will have a distinct advantage, though, is in future Android upgrades: Given the fact that the phone is running the stock Android interface, updating it to a new OS version will be a simple and likely delay-free process (the fact that the Nexus One is Google's baby probably won't hurt, either). Custom interfaces such as HTC's Sense tend to take more time to update, as the manufacturer has to rebuild the interface around the revised platform.

HTC EVO 4G vs. Nexus One: The Data Perks

Sprint is billing the HTC EVO 4G as a mobile hotspot, meaning you can connect up to eight Wi-Fi-enabled devices to the phone and use its data connection to get them on the Internet.

It's not difficult to set up tethering on any Android phone (even if some carriers may discourage it). Still, this built-in multidevice functionality is certainly a perk worth considering.

HTC EVO 4G vs. Nexus One: The Final Judgment

Ultimately, the truth is that there'll never be an end-all Android phone; it really comes down to what's right for you. Given the nature of the platform's open ecosystem, a new contender will always be right around the corner, and hyperbole-loving bloggers will always be chomping at the bit to label it the "killer" of everything else.

That, my friends, is the one thing you can count on.