vironment and the movements $u_{1:t} = \{u_1, u_2, \ldots, u_t\}$ of the robot (Fox et al., 1999). In Monte Carlo Localization (MCL) (Thrun et al., 2000), the probability density function $p(x_t | z_{1:t}, u_{1:t})$ is represented by a set of $M$ random samples $\chi_t = \{x_t^i, i = 1 \ldots M\}$ extracted from it, named particles. Each particle can be understood as a hypothesis of the true state of the robot, $x_t^i = (x^i, y^i, \theta^i)$. The weight of each sample (particle) determines the importance of the particle. The set of samples defines a discrete probability function that approximates the continuous belief. The Monte Carlo Localization algorithm is described briefly in the next lines, and consists of two phases:
Prediction Phase. At time $t$ a set of particles $\chi_t$ is generated based on the set of particles $\chi_{t-1}$ and a control signal $u_t$. This step uses the motion model $p(x_t | x_{t-1}, u_t)$. In order to represent this probability function, the movement $u_t$ is applied to each particle while adding a pre-defined quantity of noise. As a result, the new set of particles $\chi_t$ represents the density $p(x_t | z_{1:t-1}, u_{1:t})$.
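The prediction phase can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: it assumes particles are stored as an $M \times 3$ array of $(x, y, \theta)$ rows, an additive odometry increment, and an arbitrary per-component noise level `noise_std` (the paper does not specify these values).

```python
import numpy as np

def predict(particles, u, noise_std=(0.02, 0.02, 0.01)):
    """Propagate each particle (x, y, theta) with the odometry
    increment u = (dx, dy, dtheta), adding zero-mean Gaussian noise
    to represent the uncertainty of the motion model."""
    particles = np.asarray(particles, dtype=float)
    # Noise std is broadcast per component (x, y, theta) over all M particles.
    noise = np.random.normal(0.0, noise_std, size=particles.shape)
    return particles + np.asarray(u, dtype=float) + noise
```

A full motion model would rotate the increment into each particle's own heading frame; the additive form above is only the simplest choice that still spreads the particle set according to $p(x_t | x_{t-1}, u_t)$.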
Update Phase. In this second phase, for each particle in the set $\chi_t$, the observation $z_t$ obtained by the robot is used to compute a weight $\omega_t^i$. This weight represents the observation model $p(z_t | x_t)$ and is computed as $\omega_t^i = p(z_t | x_t^i)$. The weights are normalized so that $\sum_i \omega_t^i = 1$. As a result, a set of particles accompanied by a weight, $\chi_t = \{x_t^i, \omega_t^i\}$, is obtained.
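The update phase reduces to evaluating the observation model for every particle and normalizing. A minimal sketch, where `likelihood` is a hypothetical placeholder for any function computing $p(z_t | x_t^i)$:

```python
import numpy as np

def update(particles, observation, likelihood):
    """Weight each particle by the observation model
    w_t^i = p(z_t | x_t^i), then normalize so the weights sum to 1."""
    w = np.array([likelihood(observation, p) for p in particles], dtype=float)
    s = w.sum()
    if s == 0.0:
        # Degenerate case (all likelihoods zero): fall back to uniform weights.
        return np.full(len(w), 1.0 / len(w))
    return w / s
```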
The resulting set $\chi_t$ is calculated by resampling with replacement from this weighted set, where the probability of resampling each particle is proportional to its importance weight $\omega_t^i$, in accordance with the literature on the SIR (Sampling Importance Resampling) algorithm (Smith and Gelfand, 1992; Rubin, 1988). Finally, the distribution $p(x_t | z_{1:t}, u_{1:t})$ is represented by the set $\chi_t$.
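The SIR resampling step can be written directly with a weighted draw with replacement; a minimal sketch, assuming the weights are already normalized:

```python
import numpy as np

def resample(particles, weights):
    """SIR resampling: draw M indices with replacement, each chosen
    with probability proportional to its importance weight."""
    M = len(particles)
    idx = np.random.choice(M, size=M, replace=True, p=weights)
    return np.asarray(particles)[idx]
```

After this step the weights are implicitly uniform again: particles with large weights appear several times in the new set, while unlikely ones tend to disappear.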
The Monte Carlo algorithm introduces the current observation $z_t$ of the robot by computing a weight $\omega_t^i$ for each particle. In this case we consider that our map is composed of a set of $N$ bi-dimensional landmarks $L = \{l_1, l_2, \ldots, l_N\}$, and that the position of these landmarks in the environment is known. Each landmark $l_j$ is represented by an associated omnidirectional image $I_j$ and a Fourier Signature descriptor $d_j$ that describes the global appearance of the omnidirectional image (Fernandez et al., 2011); thus $l_j = \{(l_{j,x}, l_{j,y}), d_j, I_j\}$. $d_j$ is constructed from the bi-dimensional Fourier signature with all the elements arranged in a vector. Using this Fourier descriptor, we compare the descriptor $d_t$ of the current image $I_t$ with the rest of the descriptors $d_j$, $j = 1 \ldots N$, and find the $B$ landmarks in the map that are closest in appearance to the current image. In this way, we allow the current observation to correspond to several landmarks in the map.
In this work, we propose to compute the weight of each particle $\omega_t^i = p(z_t | x_t^i)$ through a sum of Gaussian functions centered on each landmark position, considering the difference in the descriptors of the landmarks (images):

$$\omega_t^i = \sum_{j=1}^{B} \exp\{-v_j \Sigma_l^{-1} v_j^T\} \exp\{-h_j \Sigma_d^{-1} h_j^T\} \quad (1)$$
where $v_j = (l_{j,x}, l_{j,y}) - (x^i, y^i)$ is the difference between the position of the landmark $l_j$ and the position $(x^i, y^i)$ of the particle $i$. The matrix $\Sigma_l$ is a diagonal matrix, $\Sigma_l = \mathrm{diag}(\sigma_l^2, \sigma_l^2)$. The variance $\sigma_l^2$ is chosen experimentally in order to minimize the localization error. $h_j = |d_j - d_t|$ is the difference between the modulus of the Fourier descriptor associated with the current observed image and the modulus of the descriptor associated with the landmark $l_j$. The matrix $\Sigma_d = \mathrm{diag}(\sigma_d^2)$ is a $k \times k$ matrix, $k$ being the length of the Fourier descriptor.
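Because both covariance matrices are diagonal with a single repeated variance, each quadratic form in Eq. (1) collapses to a squared norm divided by that variance, which makes the weight cheap to evaluate. A sketch under those assumptions (the function name and argument layout are illustrative, not from the paper):

```python
import numpy as np

def particle_weight(particle_xy, d_t, closest_landmarks, sigma_l, sigma_d):
    """Eq. (1): weight of one particle as a sum over the B landmarks
    closest in appearance of a Gaussian in position times a Gaussian
    in descriptor distance.  With Sigma_l = diag(sigma_l^2, sigma_l^2),
    v_j Sigma_l^{-1} v_j^T reduces to ||v_j||^2 / sigma_l^2 (and
    likewise for the descriptor term)."""
    w = 0.0
    for (l_xy, d_j) in closest_landmarks:
        v = np.asarray(l_xy) - np.asarray(particle_xy)   # position difference v_j
        h = np.abs(np.asarray(d_j) - np.asarray(d_t))    # descriptor difference h_j
        w += np.exp(-(v @ v) / sigma_l**2) * np.exp(-(h @ h) / sigma_d**2)
    return w
```

A particle sitting exactly on a landmark whose image matches the current one contributes the maximum value 1 for that term; both distance in position and distance in appearance attenuate the contribution exponentially.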
3 EXPERIMENTAL RESULTS
In order to acquire the necessary data for the experiments, we have used a Pioneer P3-AT mobile robot equipped with an omnidirectional camera, and a fixed platform equipped with an omnidirectional camera and a laptop. The map was built by carefully capturing omnidirectional images at different positions on a regular grid in an office-like environment using the fixed platform. Next, the robot performed several trajectories within this environment, capturing a new omnidirectional image and odometry data whenever it traversed a distance of 0.1 m. The robot captured a total of 515 images and traveled around 55 m. The map is composed of a set of 381 images placed on a grid with a resolution of 0.4 m. The map has a size of 11 m along the x axis and 25 m along the y axis.
To test the performance of our appearance-based Monte Carlo Localization method, we have carried out a series of simulation experiments of robot tracking using the sets of images described in the previous paragraph. We have used a number of associations $B = 4$. We have computed the average error in the robot position along the trajectory as a function of the number of Fourier components, for different numbers of particles. The average error has been obtained taking the real path (ground truth) as a reference (Figure 1 (a)). It must be taken into account that the average error of the robot odometry data compared with the ground truth is 0.736 m. As shown in Figure 1 (a), as we increase the number of Fourier components, the localization error along the trajectory tends
ICINCO 2012 - 9th International Conference on Informatics in Control, Automation and Robotics