Authors: Lorenzo Peppoloni and Alessandro Di Fava
Affiliation: Scuola Superiore S. Anna, Italy
Keyword(s): Programming by Demonstration, Manipulation, Autonomous Navigation, Semantic Learning.
Related Ontology Subjects/Areas/Topics: Human-Robots Interfaces; Informatics in Control, Automation and Robotics; Mobile Robots and Autonomous Systems; Modeling, Simulation and Architectures; Robot Design, Development and Control; Robotics and Automation; Vision, Recognition and Reconstruction
Abstract:
This paper presents an integrated robotic system capable of learning and executing manipulation tasks from a single human user's demonstration. The system's capabilities are threefold. First, it learns tasks from perceptual stimuli, modeling and storing the information in the form of semantic knowledge. Second, it can employ the learned model to execute a task in a way similar to the demonstrated example, adapting the motion to the robot's own constraints in terms of physical limits and interferences. Third, it integrates perception and action algorithms to autonomously infer the context in which to operate, robustly changing its behavior as the environment evolves. The performance of the system has been verified through a series of tests run on the KUKA youBot platform; all tools and algorithms are integrated into Willow Garage's Robot Operating System (ROS).