Authors:
Takeda Koji
and
Tanaka Kanji
Affiliation:
Department of Engineering, University of Fukui, 3-9-1, Bunkyo, Fukui, Japan
Keyword(s):
Visual Robot Self-localization, Graph Convolutional Neural Network, Map to DNN.
Abstract:
Scene graph representation has recently attracted attention for its flexibility and descriptiveness in visual robot self-localization. In a typical self-localization application, the objects, object features, and object relationships of the environment map are projected onto a scene graph as nodes, node features, and edges, respectively, and the result is subsequently matched against a query scene graph using a graph matching engine. However, the computational, storage, and communication costs of such a system grow in direct proportion to the feature dimensionality of the graph nodes, which is often significant in large-scale applications. In this study, we demonstrate the feasibility of training a graph convolutional neural network (GCN) to predict alongside a graph matching engine. However, visual features often do not translate well into graph features in modern graph convolution models, which degrades their performance. We therefore developed a novel knowledge transfer framework that introduces an arbitrary self-localization model as the teacher to train the GCN-based self-localization system, i.e., the student. The framework additionally facilitates lightweight storage and communication by formulating the compact output signals of the teacher model as training data. Results on the Oxford RobotCar dataset show that the proposed method outperforms existing comparative methods as well as the teacher self-localization systems.
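As a rough illustration of the pipeline the abstract describes, and not the authors' implementation, the following NumPy sketch builds a toy scene graph (objects as nodes, relationships as edges), applies a single graph convolution layer as the student, and distills a hypothetical teacher's place-class probabilities into the student via a soft-label cross-entropy loss. All sizes, the toy data, and the teacher output are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(edges, n):
    """Symmetrically normalized adjacency: A_hat = D^-1/2 (A + I) D^-1/2."""
    A = np.eye(n)  # self-loops
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One graph convolution: aggregate neighbor features, project, ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy scene graph: 4 map objects with 8-dim visual features each.
edges = [(0, 1), (1, 2), (2, 3)]      # object-object relationships
H = rng.standard_normal((4, 8))       # node (object) features
W = rng.standard_normal((8, 3))       # learnable projection to 3 place classes
A_hat = normalized_adjacency(edges, len(H))

# Student prediction: mean-pool node embeddings into one place-class vector.
student_logits = gcn_layer(A_hat, H, W).mean(axis=0)

# Hypothetical teacher output (e.g., soft place-class probabilities from an
# arbitrary self-localization model); compact, so cheap to store and transmit.
teacher_probs = np.array([0.7, 0.2, 0.1])

# Distillation loss: cross-entropy between teacher soft labels and the
# student's softmax prediction.
p = np.exp(student_logits - student_logits.max())
p /= p.sum()
loss = -(teacher_probs * np.log(p + 1e-12)).sum()
print(f"distillation loss: {loss:.4f}")
```

Minimizing this loss over many (scene graph, teacher output) pairs is the knowledge-transfer step: the student only ever sees the teacher's compact output signals, not its internal features, which is what keeps storage and communication lightweight.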