Authors: Michael Schweitzer, Alois Unterholzner and Hans-Joachim Wuensche
Affiliation: Universität der Bundeswehr München, Germany
Keyword(s): Robot Vision, GPGPU, Structure from Motion
Related Ontology Subjects/Areas/Topics: Active and Robot Vision; Applications; Computer Vision, Visualization and Computer Graphics; Feature Extraction; Features Extraction; Human-Computer Interaction; Image and Video Analysis; Image Formation and Preprocessing; Informatics in Control, Automation and Robotics; Matching Correspondence and Flow; Methodologies and Methods; Motion and Tracking; Motion, Tracking and Stereo Vision; Multi-View Geometry; Pattern Recognition; Physiological Computing Systems; Real-Time Vision; Signal Processing, Sensors, Systems Modeling and Control; Stereo Vision and Structure from Motion
Abstract:
This paper introduces a novel visual odometry framework for ground moving robots.
Recent work showed that assuming non-holonomic motion can simplify the ego motion estimation task to one yaw and one scale parameter.
Furthermore, a very efficient way of computing image frame to frame correspondences for those robots was presented by skipping rotational invariance and optimizing keypoint extraction and matching for massive parallelism on a GPU.
Here, we combine both contributions into a closed framework.
Long-term correspondences are preserved, classified and stabilized by motion prediction, building up and maintaining a trusted map of depth-registered keypoints.
We also allow for other ground-moving objects in the scene.
From this map, the ego motion is inferred, extended by constrained rotational perturbations in pitch and roll.
A persistent focus is on keeping algorithms suitable for parallelization and thus achieving up to one hundred frames per second.
Experiments are carried out to compare against ground truth given by DGPS and IMU data.
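The non-holonomic motion model mentioned in the abstract, which reduces planar ego motion to one yaw and one scale parameter, can be sketched as follows. This is a minimal illustration assuming a circular-arc motion between frames; the function name and exact parameterization are ours, not taken from the paper:

```python
import numpy as np

def planar_ego_motion(yaw: float, scale: float):
    """Compose a planar pose increment from the two free parameters
    (yaw increment and translation magnitude) that a non-holonomic
    constraint leaves for a ground-moving robot.

    Illustrative sketch only; the paper's actual estimator is not shown.
    """
    # Rotation about the vertical axis by the yaw increment.
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    # Circular-arc assumption: the chord of the arc points along
    # half the yaw increment, scaled by the travelled distance.
    t = scale * np.array([np.cos(yaw / 2.0), np.sin(yaw / 2.0), 0.0])
    return R, t

# Straight-line motion: zero yaw, unit scale moves one unit forward.
R, t = planar_ego_motion(0.0, 1.0)
```

Under this assumption, estimating frame-to-frame ego motion amounts to searching over only these two scalars, which is what makes the estimation task amenable to the massively parallel GPU evaluation described above.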