ω_t^i = η(x_t^i) p(y_t | x_t^i)    (6)

For C_t^j being the sum of the weights of the jth cluster,

η(x_t^i) = 1 / C_t^j    (7)
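The weighting of equations (6) and (7) can be sketched as follows, assuming each particle carries its observation likelihood p(y_t | x_t^i) and a cluster index; all function and variable names are illustrative. The text defines C_t^j as the sum of the cluster's weights; one consistent reading, used below, is to sum the unnormalized likelihoods, so that the final weights of each cluster sum to one.

```python
def cluster_normalized_weights(likelihoods, cluster_ids):
    """Return w_t^i = eta(x_t^i) * p(y_t | x_t^i) with eta(x_t^i) = 1 / C_t^j.

    Here C_t^j is taken as the sum of the unnormalized likelihoods of the
    particles in cluster j (an illustrative reading of equation (7))."""
    # C_t^j: total likelihood mass per cluster
    cluster_mass = {}
    for lik, j in zip(likelihoods, cluster_ids):
        cluster_mass[j] = cluster_mass.get(j, 0.0) + lik
    # Equation (6): scale each particle by the inverse of its cluster's mass
    return [lik / cluster_mass[j] for lik, j in zip(likelihoods, cluster_ids)]
```

With this normalization the weights within each cluster sum to one, so a cluster with low absolute likelihood is not starved of samples during resampling.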
This method allows us to use a procedure similar to Two-Stage Cluster Sampling as our resampling scheme. In Two-Stage Cluster Sampling, the whole sample set is divided into clusters. In the first stage, a cluster is randomly drawn; in the second stage, the actual sample is drawn from that cluster (J. Hartung, 1989). Here, we first draw a cluster and then apply systematic resampling to the individual particles in that cluster. Hence, we draw samples from all clusters and are therefore able to keep track of multiple hypotheses. Note that we actually draw from each of the clusters. If only one cluster is detected, the resampling procedure behaves exactly like usual (systematic) resampling. Furthermore, the association of a sample with a cluster is determined anew in every time step, so a sample can be associated with different clusters in different time steps.
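One possible reading of this two-stage resampling step is sketched below: every cluster is visited, allocated a share of the new sample set, and systematically resampled on its own particles. The proportional allocation of samples to clusters is an illustrative choice, not prescribed by the text, and all names are hypothetical.

```python
import random

def systematic_resample(weights, n):
    """Standard systematic resampling: n indices from one draw u ~ U[0, total/n)."""
    total = sum(weights)
    step = total / n
    u = random.uniform(0.0, step)
    indices, cum, i = [], weights[0], 0
    for _ in range(n):
        while cum < u:          # advance to the particle covering u
            i += 1
            cum += weights[i]
        indices.append(i)
        u += step
    return indices

def two_stage_resample(weights, cluster_ids, n_total):
    """First stage: visit each cluster; second stage: systematic resampling
    within the cluster. Returns indices into the original sample set."""
    members = {}
    for idx, j in enumerate(cluster_ids):
        members.setdefault(j, []).append(idx)
    mass = {j: sum(weights[i] for i in idxs) for j, idxs in members.items()}
    total = sum(mass.values())
    new_indices = []
    for j, idxs in members.items():
        # illustrative: allocate samples proportional to cluster mass,
        # but let every cluster survive with at least one sample
        n_j = max(1, round(n_total * mass[j] / total))
        local = systematic_resample([weights[i] for i in idxs], n_j)
        new_indices.extend(idxs[k] for k in local)
    return new_indices
```

Because each cluster is resampled separately, no hypothesis can be wiped out in a single resampling step, which is the point made above.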
5.2 Clustering
In the following, we present an extension to Monte Carlo localization that allows for adequate localization in highly symmetric environments, as they frequently occur in office-like buildings. Furthermore, we show how to incorporate this into the functionality of Sensor Resetting Localization and KLD-Sampling with Sensor Resetting.
In order to use two-stage sampling, we need to determine clusters in the sample set. Clustering is a well-known technique to classify data and build groups of similar objects (W. Lioa, 2004). Here, we use clustering to find groups of particles that occupy the same area of the state space. In doing so, we try to extract significant clusters, each representing a possible position of our robot.
When choosing a clustering method, we mainly need to care about two things: the speed of the routine and its accuracy. Regarding speed, grid-based algorithms are a good choice (W. Lioa, 2004). For us, a clustering algorithm is accurate if it finds all significant particle clusters, that is, those clusters with a high average weight. Clusters with low weight are unlikely to represent the correct pose and can therefore be discarded. This implies that the output of the clustering procedure only includes the significant clusters.
We use a fixed spatial grid for a first estimate of the clusters. Here, a coarse grid of approximately 2 m × 2 m × 45° is sufficient, and since the clustering has to be done for just a few hundred samples, it is fast as well. If more than one cluster is detected, we check whether clusters can be fused. This is the case if clusters are in close proximity to each other in the state space.
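One way to realize this coarse-grid clustering is sketched below, assuming poses (x, y, θ) in metres and degrees and the 2 m × 2 m × 45° cell size from the text. The union-find fusion of adjacent occupied cells is an illustrative realization of the proximity test; the paper does not specify the exact fusion rule.

```python
import math

CELL = (2.0, 2.0, 45.0)  # x, y, heading resolution of the coarse grid

def cell_of(pose):
    """Map a pose (x [m], y [m], theta [deg]) to its grid cell."""
    x, y, th = pose
    return (int(math.floor(x / CELL[0])),
            int(math.floor(y / CELL[1])),
            int(math.floor((th % 360.0) / CELL[2])))

def grid_clusters(poses):
    """Group particle indices by grid cell, then fuse neighbouring cells."""
    cells = {}
    for i, p in enumerate(poses):
        cells.setdefault(cell_of(p), []).append(i)
    # union-find over occupied cells; neighbours within +-1 cell in each
    # dimension (heading wraps around its 360/45 = 8 bins) are fused
    parent = {c: c for c in cells}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for (cx, cy, ct) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dt in (-1, 0, 1):
                    nb = (cx + dx, cy + dy, (ct + dt) % 8)
                    if nb in cells:
                        parent[find((cx, cy, ct))] = find(nb)
    clusters = {}
    for c, idxs in cells.items():
        clusters.setdefault(find(c), []).extend(idxs)
    return list(clusters.values())
```

Since only the few hundred occupied cells are touched, the cost grows with the number of particles rather than with the size of the environment.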
The importance factors are then computed according to equation (6). In that way, we do not lose possible positions while resampling. A cluster is considered significant if its weight is above a manually chosen threshold. This method proved to be stable and accurate. Moreover, it allows us to simply distribute the free samples from clusters with low weight over the significant clusters. During resampling, these samples are replaced with the highly weighted particles of the corresponding cluster. Alternatively, one can replace these samples with newly generated ones.
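The selection of significant clusters by the weight threshold might look as follows; the threshold value is a tuning parameter, as stated above, and the function and argument names are illustrative.

```python
def significant_clusters(clusters, weights, threshold):
    """Keep clusters whose average particle weight exceeds the threshold;
    also report how many samples are freed for redistribution over the
    surviving clusters."""
    sig, freed = [], 0
    for idxs in clusters:
        avg = sum(weights[i] for i in idxs) / len(idxs)
        if avg > threshold:
            sig.append(idxs)
        else:
            freed += len(idxs)  # these samples are redistributed later
    return sig, freed
```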
5.3 Clustered Sensor Resetting
Localization
To incorporate two-stage sampling, we need to account for the fact that SRL simply replaces particles in the sample set if needed. Thus, we cannot guarantee that all clusters will still exist after this replacement. To exclude this case, we disable the replacement of particles whenever we find more than one significant cluster. The algorithm is summarized in table 3.
The intuition behind CSRL is as follows. When the robot gets kidnapped while our filter is tracking multiple distinct hypotheses, the weights of the clusters become negligible and the clusters are therefore no longer considered significant. Thus, we regard all samples as though they were in one cluster. We can then calculate the number of samples to be replaced according to Sensor Resetting Localization. The newly generated particles are distributed according to the most recent sensor reading. Then, clustering is done for the new sample set, and it is decided whether we have to track multiple positions or a single one.
In this way, we are able to combine the ability to stably track multiple hypotheses with the advantage of using small sample sets.
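The control flow of CSRL described above can be condensed into a short sketch. The exact SRL replacement rule is not restated in this section, so the linear rule below (replace more samples the lower the average likelihood) is only an illustrative stand-in, and all names are hypothetical.

```python
def csrl_step(n_particles, significant, avg_likelihood, lik_threshold):
    """Decide how many particles SRL may replace in this time step.

    significant: list of significant clusters (index lists).
    avg_likelihood / lik_threshold: stand-ins for the SRL sensor-quality test."""
    if len(significant) > 1:
        # multiple hypotheses are being tracked: disable replacement,
        # otherwise a cluster could be wiped out
        return 0
    # single (or no) significant cluster: fall back to plain Sensor
    # Resetting; illustrative linear rule, not the paper's exact formula
    ratio = avg_likelihood / lik_threshold
    return int(n_particles * max(0.0, 1.0 - ratio))
```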
5.4 Clustered KLD-Sampling with
Sensor Resetting
Clustered Sensor Resetting Localization still suffers from using a sample set of constant size. Thus, it does not take into account the varying complexity of the localization problem. To overcome this issue, we developed a method to combine two-stage sampling with KLD-SRL.
Analogous to KLD-SRL, the number of particles is adjusted according to equation (2). However, if we detect more than one significant cluster, we do not allow the number of samples to be reduced. This guarantees that we will not lose any of the detected clusters.
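This size-adjustment rule can be sketched as follows; equation (2) itself is not repeated here, so `kld_size` stands in for its result, and the names are illustrative.

```python
def adjusted_sample_size(kld_size, current_size, n_significant_clusters):
    """KLD-computed sample size, except that shrinking is disabled while
    more than one significant cluster is being tracked."""
    if n_significant_clusters > 1 and kld_size < current_size:
        return current_size  # keep every detected cluster populated
    return kld_size
```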
As soon as there is only one cluster left, the number
ICINCO 2006 - ROBOTICS AND AUTOMATION