Follow me?
A researcher waves to a new robot that detects non-verbal commands and can follow its ‘master’, thanks to a new depth-imaging camera (inset) and advanced software
Imagine a day when you turn to your own personal robot, give it a task and then sit down and relax, confident that your robot is doing exactly what you wanted it to do. A team of US-based engineers is working to bring this futuristic scenario closer to reality, with a new robot that can follow a person – indoors and outdoors – and even understand non-verbal commands through gestures.
“We have created a novel system where the robot will follow you at a precise distance, where you don’t need to wear special clothing; you don’t need to be in a special environment; and you don’t need to look backward to track it,” said team leader Chad Jenkins, assistant professor of computer science at Brown University.
A paper on the research was presented at the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2009) in San Diego on Wednesday.
The team started with a PackBot, a mechanised platform that has been used widely by the US military for bomb disposal, among other tasks.
The researchers outfitted their robot with a commercial depth-imaging camera, which makes it look like “the head on the robot in the film Wall-E”.
They also attached a laptop with novel computer programs that enabled the bot to recognise human gestures, decipher them and respond to them.
In a demonstration, graduate student Sonia Chernova used a variety of hand-arm signals to instruct the automaton, including “follow”, “halt”, “wait” and “door breach”.
She walked with her back to it, turning corners in narrow hallways and walking briskly in an outdoor parking lot. Throughout, the bot followed dutifully, maintaining an approximate three-foot distance – and even backed up when Chernova turned around and approached it.
The team also successfully instructed the machine to turn around (a full 180-degree pivot), and to freeze when the student disappeared from view – essentially idling until the instructor reappeared and gave a nonverbal or verbal command.
HOW IT WORKS
To build the robot, the researchers had to address two key issues. The first involved what scientists call visual recognition, which helps robots orient themselves with respect to the objects in a room.
“Robots can see things, but recognition remains a challenge,” Jenkins explained. The scientists overcame this obstacle with software that recognises a human by extracting a silhouette from the scene, treating the person as a virtual cut-out.
This let it “home in” on the human and receive commands without being distracted by other objects in the space.
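The paper's actual software is not reproduced in the article, but the idea of carving a person out of a depth frame can be sketched in a few lines. The snippet below is an illustrative approximation only: it assumes the depth camera delivers a frame as a NumPy array of per-pixel distances in metres, and uses OpenCV to keep the largest blob within a plausible person range as the "silhouette".

import cv2
import numpy as np

def extract_silhouette(depth_m, min_d=0.5, max_d=3.0):
    """Return a binary mask of the largest object between min_d and max_d metres.

    depth_m: 2-D NumPy array of distances in metres (hypothetical input format;
    the real camera driver and calibration are not described in the article)."""
    # Keep only pixels whose depth falls in the expected person range
    in_range = ((depth_m > min_d) & (depth_m < max_d)).astype(np.uint8) * 255

    # Clean up speckle noise typical of infrared depth sensors
    in_range = cv2.morphologyEx(in_range, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Assume the person is the largest connected blob in range
    contours, _ = cv2.findContours(in_range, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    person = max(contours, key=cv2.contourArea)

    mask = np.zeros_like(in_range)
    cv2.drawContours(mask, [person], -1, 255, thickness=cv2.FILLED)
    return mask  # the "virtual cut-out" the robot tracks

With a mask like this, gesture recognition can work on the silhouette's shape alone – for example, the positions of the arms – without being confused by furniture or other background objects.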
The second advance involved the depth-imaging camera, which uses infrared light to detect objects and to establish their distance from the camera.
This enabled the Brown robot to stay locked on to the human controller, which was essential to maintaining a set distance while following the person.
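The article does not give the team's control law, but the follow-at-a-set-distance behaviour it describes is, in essence, a feedback loop on the measured range and bearing to the person. A minimal proportional-control sketch – assuming we already have the person's distance from the silhouette above, and a drive interface that accepts forward and turn velocities (both hypothetical) – might look like this:

FOLLOW_DISTANCE_M = 0.9   # roughly the three-foot gap described in the demo
GAIN_FORWARD = 0.8        # illustrative tuning constants, not from the paper
GAIN_TURN = 1.5
MAX_SPEED = 0.5           # m/s cap for safety

def follow_step(person_range_m, person_bearing_rad):
    """One control cycle: return (forward m/s, turn rad/s) drive commands.

    person_range_m: distance to the tracked silhouette, from the depth camera
    person_bearing_rad: angle of the person off the camera's optical axis."""
    if person_range_m is None:
        # Person lost from view: freeze and wait, as in the demonstration
        return 0.0, 0.0

    # Drive forward when too far, back up when the person walks toward the robot
    error = person_range_m - FOLLOW_DISTANCE_M
    forward = max(-MAX_SPEED, min(MAX_SPEED, GAIN_FORWARD * error))

    # Turn so the person stays centred in the depth image
    turn = GAIN_TURN * person_bearing_rad
    return forward, turn

Because the error can go negative, the same loop that closes the gap also makes the robot back away when the person turns around and approaches it, matching the behaviour seen in the demonstration.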
The result is a robot that doesn’t require remote control or constant vigilance, which is a key step in developing autonomous devices, Jenkins said.
“Advances in enabling intuitive human-robot interaction, such as through speech or gestures, go a long way toward making the robot more of a valuable sidekick and less of a machine you have to constantly command,” added Chris Jones, the principal investigator on the project.
The team is now working to add more non-verbal and verbal commands for the robot and to increase the three-foot working distance between it and the commander.