Using Polynomial Eigenvalue Problem Modeling to Improve Visual
Odometry for Autonomous Vehicles

Anderson Souza¹, Leonardo Souto², Fabio Fonseca de Oliveira², Biswa Nath Datta³ and Luiz M. G. Gonçalves²

¹Department of Computing, University of the State of Rio Grande do Norte, Natal, RN, Brazil
²Department of Computing Engineering and Automation, Federal University of Rio Grande do Norte, Natal, RN, Brazil
³Northern Illinois University, U.S.A.
Keywords: Visual Odometry, Polynomial Eigenvalue Problem, Motion Estimation.
Abstract: Visual Odometry (VO) is the process of estimating the motion of an agent (such as a robot or a vehicle) using images captured by a single camera or by multiple cameras embedded in it. VO is an important process to supplement autonomous navigation systems, since VO can provide accurate trajectory estimates. However, VO algorithms involve several numerically intensive computation steps, which introduce numerical errors and demand considerable processing time. In this paper, we propose the use of a mathematical framework for the monocular VO process based on Polynomial Eigenvalue Problem (PEP) modeling, in order to achieve more accurate motion estimation and to reduce the processing time of the VO process. Preliminary experiments are presented to validate the accuracy of the proposed computation.
1 INTRODUCTION
Nowadays, it is common to see autonomous agents
such as robots and vehicles performing different tasks.
These devices can be found transporting materials in
factories, transporting people in urban zones, monitoring
environments, exploring areas, and performing surveillance,
among other applications (Siegwart et al., 2011).
Autonomous agents must be able to collect infor-
mation about their environment; based on this infor-
mation, they must make decisions about how to proceed,
given what exists in the environment; and they must
actuate in order to carry out those decisions, towards
completing their mission (Murphy, 2000). An important
condition for agents to achieve these abilities is that they
must be equipped with sensors, which provide them useful
information about their environment. In this way, a robot
or a vehicle can interact coherently with its environment
and objects, dealing with unexpected situations such as
dynamic obstacles (Souza and Gonçalves, 2015).
Cameras are widely used as visual sensor systems
for autonomous robots and vehicles. These systems
can be composed of a single camera or multiple cameras
and a mechanism for processing camera data, which
allows the extraction of useful information from raw
data (Ma et al., 2004). With a visual sensor sys-
tem it is possible to infer plenty of information,
such as colors, textures, and geometric structures, and
to perform tasks such as object recognition. Furthermore,
it is feasible to estimate relative or absolute motion from
images captured at different positions, a process known as
Visual Odometry (VO). This is an important operation to
supplement autonomous navigation systems, since VO can
provide accurate trajectory estimates (Scaramuzza and
Fraundorfer, 2011).
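To make the motion-from-images step concrete, the sketch below estimates the essential matrix between two calibrated views with the classical linear eight-point algorithm. This illustrates conventional VO motion estimation, not the PEP formulation proposed in this paper; the function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def essential_eight_point(x1, x2):
    """Estimate the essential matrix E from N >= 8 calibrated (normalized)
    correspondences x1 <-> x2, each of shape (N, 2), so that x2^T E x1 = 0."""
    n = x1.shape[0]
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each correspondence gives one linear equation a_i . vec(E) = 0
    A = np.column_stack([
        u2 * u1, u2 * v1, u2,
        v2 * u1, v2 * v1, v2,
        u1, v1, np.ones(n),
    ])
    # Least-squares null vector of A: right singular vector for the
    # smallest singular value
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: two equal singular values, one zero
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

The rotation and translation between the views can then be recovered from the SVD of E; VO chains such relative poses along the trajectory.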
However, processing the raw data captured by cam-
eras in order to extract useful information goes
through several numerical computation steps, which
produce numerical errors (overflow and underflow),
round-off errors, and error accumulation. Moreover, these
computations consume significant processing time,
demanded by the algebraic calculations with vectors and
matrices (Datta, 2010). Both aspects (error effects
and processing time) need to be minimized so that
they do not damage the autonomous navi-
gation process through misinterpreted information (due to
miscalculations), or through decisions not taken in
time.
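As background for the PEP modeling discussed here, a polynomial eigenvalue problem of degree two, (λ²M + λC + K)x = 0, is commonly reduced to an ordinary eigenvalue problem of twice the size by companion linearization. The sketch below assumes M is invertible and the matrices are small and dense; the function name is illustrative, not from the paper.

```python
import numpy as np

def polyeig2(M, C, K):
    """Solve the quadratic eigenvalue problem (l**2 M + l C + K) x = 0
    via first companion linearization, assuming M is invertible."""
    n = M.shape[0]
    # With z = [x; l*x], the problem becomes A z = l z for the 2n x 2n matrix:
    A = np.block([
        [np.zeros((n, n)), np.eye(n)],
        [-np.linalg.solve(M, K), -np.linalg.solve(M, C)],
    ])
    eigvals, eigvecs = np.linalg.eig(A)
    # The eigenvector x of the original problem is the top half of z
    return eigvals, eigvecs[:n, :]
```

Each eigenpair (l, x) returned satisfies the original quadratic relation, so the residual (l²M + lC + K)x should be at machine-precision level.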
In this context, this paper proposes the use of a
mathematical framework for monocular VO process