Authors:
Sergio Cebollada; Luis Payá; María Flores; Vicente Román; Adrián Peidró and Oscar Reinoso
Affiliation:
Department of Systems Engineering and Automation, Miguel Hernández University, Elche, Spain
Keyword(s):
Mobile Robotics, Omnidirectional Images, Holistic Description, Deep Learning, Hierarchical Localization.
Abstract:
In this work, a deep learning tool is developed and evaluated to carry out the visual localization task for mobile autonomous robotics. Through deep learning, a convolutional neural network (CNN) is trained to estimate the room where an image has been captured within an indoor environment. This CNN is not only used as a tool to solve the room estimation problem, but also to obtain global-appearance descriptors of the input image from its intermediate layers. The localization task is addressed in two different ways: globally, as an image retrieval problem, and hierarchically. Regarding global localization, the position of the robot is estimated through a nearest neighbour search between the holistic descriptor obtained from a test image and those of the training dataset (using the CNN to obtain all the descriptors). Regarding the hierarchical localization method, first, the CNN is used to solve the rough localization step (room estimation) and also to obtain a global-appearance descriptor; second, the robot estimates its position within the selected room through a nearest neighbour search, comparing the obtained holistic descriptor with the visual model of that room. Throughout this work, the localization methods are tested with a visual dataset of omnidirectional images captured in indoor environments under real-operation conditions. The results show that the proposed deep learning tool is an efficient solution to carry out visual localization tasks.
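The two-step hierarchical scheme described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the trained CNN is stood in for by a fixed random projection that produces a "holistic descriptor", the coarse room-estimation step is approximated by a nearest-mean-descriptor test (the paper instead uses the CNN's classification output), and all images, rooms, and poses are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained CNN: a fixed random projection
# mapping a flattened 8x8 image to a 16-dimensional holistic descriptor.
PROJ = rng.standard_normal((16, 64))

def describe(image):
    """Global-appearance descriptor of an 8x8 grayscale image (CNN stand-in)."""
    return PROJ @ image.ravel()

def make_room(base, n=5):
    """Toy visual model of one room: (descriptor, pose) pairs from n
    noisy views of a base appearance, at known training positions."""
    model = []
    for i in range(n):
        view = np.clip(base + 0.1 * rng.standard_normal((8, 8)), 0.0, 1.0)
        model.append((describe(view), (float(i), 0.0)))
    return model

# Two synthetic rooms with distinct base appearances.
base_a = rng.random((8, 8))
base_b = rng.random((8, 8))
rooms = {"roomA": make_room(base_a), "roomB": make_room(base_b)}

def localize(image):
    d = describe(image)
    # Step 1 (rough localization): estimate the room. Here: the room whose
    # mean descriptor is closest; the paper uses the CNN classifier instead.
    room = min(
        rooms,
        key=lambda r: np.linalg.norm(
            d - np.mean([dd for dd, _ in rooms[r]], axis=0)
        ),
    )
    # Step 2 (fine localization): nearest-neighbour search restricted to
    # the visual model of the selected room.
    _, pose = min(rooms[room], key=lambda dp: np.linalg.norm(d - dp[0]))
    return room, pose

# A query image close to roomA's appearance should be localized in roomA.
query = np.clip(base_a + 0.05 * rng.standard_normal((8, 8)), 0.0, 1.0)
room, pose = localize(query)
print(room, pose)
```

Restricting the step-2 search to one room is what makes the hierarchical variant cheaper than the purely global retrieval approach: the nearest-neighbour comparison runs against a single room's visual model rather than the whole training dataset.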