tain a solution for the problem of SLAM. In this sense, different SLAM algorithms have been extensively used, distinguishing between online methods, such as the EKF (Davison and Murray, 2002) and Rao-Blackwellized particle filters (Montemerlo et al., 2002), and offline algorithms, such as Stochastic Gradient Descent (Grisetti et al., 2007).
Thus, the success of an efficient SLAM solution depends closely on a suitable combination of sensor data, map representation model and SLAM algorithm. A large amount of research has been conducted on map representations that simultaneously estimate the position of a set of 3D visual landmarks expressed in a general reference system (Davison and Murray, 2002; Gil et al., 2010; Davison et al., 2004; Civera et al., 2008). These approaches rely on the capability of an EKF to converge to a proper solution of the SLAM problem.
More recently, Valiente et al. (2012), also based on the EKF, proposed a different representation of the environment, in which the map is formed by the position and orientation of a reduced set of image views. The most distinctive novelty of this representation is the capability to compute a relative movement between views, which allows the localization of the robot to be retrieved while simultaneously providing a compact representation of the environment by means of a reduced set of views. This technique estimates a state vector that includes the map and the current localization of the robot at each time step k. The estimation of the transition between the states at k and k + 1 takes the wheel odometry as an initial estimate and incorporates the observation measurements gathered by the sensors.
The major problem experienced by these EKF-based methods is the difficulty of maintaining the convergence of the estimation when non-Gaussian errors appear in the observation, since such errors usually cause severe data association problems (Neira and Tardós, 2001). An omnidirectional visual observation is prone to introduce non-linearities and is therefore a likely source of such errors.
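To make the estimation cycle described above concrete, the following is a minimal, illustrative sketch of the prediction stage, in which the wheel odometry provides the initial estimate of the transition between time steps k and k + 1. Only the robot pose portion of the state is propagated here for brevity; the function name and the simplified planar motion model are assumptions, not the formulation used in the cited works.

```python
import numpy as np

def ekf_predict(pose, P, odom, Q):
    """EKF prediction step for a planar pose (x, y, theta).

    pose : current mean estimate, shape (3,)
    P    : 3x3 covariance of the pose
    odom : wheel odometry increment (d_trans, d_rot) in the robot frame
    Q    : 2x2 covariance of the odometry noise
    """
    x, y, th = pose
    d_trans, d_rot = odom

    # Motion model: advance d_trans along the current heading, then rotate.
    pose_pred = np.array([x + d_trans * np.cos(th),
                          y + d_trans * np.sin(th),
                          th + d_rot])

    # Jacobians of the motion model w.r.t. the state and the odometry input.
    F = np.array([[1.0, 0.0, -d_trans * np.sin(th)],
                  [0.0, 1.0,  d_trans * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    G = np.array([[np.cos(th), 0.0],
                  [np.sin(th), 0.0],
                  [0.0,        1.0]])

    # Propagate the uncertainty of the pose through the motion model.
    P_pred = F @ P @ F.T + G @ Q @ G.T
    return pose_pred, P_pred
```

The subsequent update stage would then correct this prediction with the observation measurements, e.g. the relative motion recovered between views.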
Some approaches have demonstrated considerable improvements thanks to the adoption of a novel representation of the map (Valiente et al., 2012), composed of a subset of omnidirectional images, denoted as views. Each view is representative of a large area of the map, since it may encapsulate a large amount of visual information from its neighborhood. Considering a reduced set of images therefore leads to a simplified map representation. The scheme also allows for a simple computation of an observation measurement: given two images acquired at two poses of the robot, a specific subset of significant points matched between the images can be processed in order to obtain a motion transformation that establishes the movement between the two poses, as sketched below.
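As an illustration only, the snippet below recovers a planar rigid motion from two hypothetical sets of matched points by a least-squares (Kabsch-style) alignment. It is a generic sketch and not the omnidirectional epipolar procedure of (Valiente et al., 2012); the array shapes and the function name are assumptions.

```python
import numpy as np

def relative_motion_2d(pts_a, pts_b):
    """Least-squares rigid transform (R, t) mapping pts_a onto pts_b.

    pts_a, pts_b : (N, 2) arrays of matched point coordinates
    Returns a 2x2 rotation R and 2-vector t such that pts_b ~= pts_a @ R.T + t
    """
    # Centroids of both point sets.
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    A, B = pts_a - ca, pts_b - cb

    # Kabsch-style alignment via SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

In practice such a transformation would be computed from interest points matched between the two omnidirectional views, and it constitutes the observation measurement relating the two robot poses.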
This approach mitigates the main efficiency issue of the EKF, namely the high number of variables that must be estimated at each iteration. Here, the number of landmarks, seen as views, is drastically reduced, and so is the time consumed by the re-estimation of the state. On the other hand, the convergence problem associated with EKF algorithms still exists. In this work we rely on the SGD algorithm, which is known to outperform EKF-based SLAM results. Our main goal is to benefit from our visual observation model by adapting it to this kind of SGD-based SLAM algorithm. Note that, even with SGD, convergence of the solution is not trivial when the observation does not provide range values. We also propose a variation in the procedure used to estimate the solution. Traditionally, either the odometry or the observations are used independently to iterate towards a final solution. By contrast, we suggest using the information provided by several observations at the same time in order to find the SLAM solution more quickly. At first sight, this proposal might seem liable to increase the required computational resources. Nevertheless, we have concentrated on updating several stages of the iterative optimization process in order to prevent possible harmful bottleneck effects.
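A toy sketch of combining several observations in a single correction step is given below. It treats map nodes as 2D positions only and ignores orientation; the variable names, the constraint format and the plain batched gradient step are assumptions made for illustration, not the tree-parameterized SGD of (Grisetti et al., 2007) nor the exact procedure developed in this paper.

```python
import numpy as np

def sgd_update(poses, constraints, lr=0.05):
    """One batched gradient step over several relative-position constraints.

    poses       : (N, 2) array of 2D node positions
    constraints : list of (i, j, z_ij), with z_ij the measured offset poses[j] - poses[i]
    """
    grad = np.zeros_like(poses)
    for i, j, z_ij in constraints:
        # Residual of this observation: predicted offset minus measured offset.
        r = (poses[j] - poses[i]) - z_ij
        grad[i] -= r   # gradient of 0.5*||r||^2 w.r.t. poses[i]
        grad[j] += r   # gradient of 0.5*||r||^2 w.r.t. poses[j]
    return poses - lr * grad
```

The point of the sketch is simply that the residuals of several observations are accumulated into one update, instead of iterating over odometry or observations separately.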
The paper is organized as follows: Section 2 explains the SLAM problem in the mentioned context. Then, in Section 3 we detail the specifications of a Maximum Likelihood algorithm such as SGD. Section 4 describes the main novelties provided by this approach. Next, Section 5 presents the experimental results used to test the suitability of the model and its benefits. Finally, Section 6 analyzes the results in order to draw general conclusions.
2 SLAM
A visual SLAM technique is expected to retrieve a feasible estimate of the position of the robot inside a certain environment, which has to be well determined by the estimation as well. In our approach, the map is composed of a set of omnidirectional images obtained from different poses in the environment, denoted as views. These views do not represent any physical landmarks, since each is constituted by an omnidirectional image captured at the pose $x_l = (x_l, y_l, \theta_l)$ and a set of interest points extracted from that image.
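As a purely illustrative sketch of how such a view might be stored, the structure below couples the capture pose with the interest points and their descriptors; all field names are hypothetical and not part of the original formulation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class View:
    """One map entry: the pose of the omnidirectional image and its features."""
    pose: np.ndarray         # (x_l, y_l, theta_l) in the global reference frame
    keypoints: np.ndarray    # (M, 2) image coordinates of the interest points
    descriptors: np.ndarray  # (M, D) descriptors used to match points between views
```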
Such an arrangement allows us to exploit the capability of an omnidirectional image