Authors:
Hendry Ferreira Chame and Christine Chevallereau
Affiliation:
Ecole Centrale de Nantes, France
Keyword(s):
Humanoid Robotics, Embodied Cognition, Ego-localization, Machine Vision.
Related Ontology Subjects/Areas/Topics:
Agents; Artificial Intelligence; Cognitive Robotics; Humanoid Robots; Informatics in Control, Automation and Robotics; Perception and Awareness; Robotics and Automation; Vision, Recognition and Reconstruction
Abstract:
Humanoid robots are conceived to resemble the body and behavior of human beings. Within this behavioral repertoire, the ability to execute visually-guided tasks is crucial for individual adaptation and relies on the on-board sensory system. However, research on walking and localization is far from conclusive. Given the difficulties of processing visual feedback, some studies have addressed the problem by placing external sensors in the environment, thus neglecting the corporal metaphor. Others, despite exploring on-board solutions, have relied on an extensive model of the environment, thereby treating the system as an information-processing unit abstracted from a body. This work presents a methodology to achieve embodied localization in the service of visually-guided walking. The solution relies on robust segmentation from monocular vision, ego-cylindrical localization, and minimal knowledge about the stimuli in the environment.
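To make the notion of ego-cylindrical localization concrete, the following is a minimal, hypothetical Python sketch (not taken from the paper) of how a visually detected stimulus, once expressed as a 3D point in the robot's body frame, could be mapped to cylindrical ego-coordinates. The frame convention, function name, and numbers are assumptions for illustration only.

import numpy as np

def to_ego_cylindrical(p_xyz):
    """Convert a 3D point in an assumed ego frame (x forward, y left,
    z up; metres) to cylindrical coordinates (rho, phi, z).

    rho : horizontal distance from the body axis
    phi : azimuth of the stimulus around the body axis (radians)
    z   : height along the body axis
    """
    x, y, z = p_xyz
    rho = np.hypot(x, y)      # radial distance in the ground plane
    phi = np.arctan2(y, x)    # bearing relative to the robot's heading
    return np.array([rho, phi, z])

# Example: a stimulus segmented from the on-board camera and back-projected
# to a point 1.2 m ahead, 0.3 m to the left, at 1.4 m height (illustrative values).
landmark_ego = np.array([1.2, 0.3, 1.4])
print(to_ego_cylindrical(landmark_ego))   # -> [rho, phi, z]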