Evolutionary Particle Filters: Model-free Object Tracking
Combining Evolution Strategies and Particle Filters
Silja Meyer-Nieberg, Erik Kropat and Stefan Pickl
Department of Computer Science, Universität der Bundeswehr München,
Werner-Heisenberg-Weg 37, 85577 Neubiberg, Germany
Keywords:
Tracking, Dynamical Systems, Particle Filter, Dynamic Optimization, Evolutionary Algorithms.
Abstract:
Tracking situations or more generally state estimation of dynamic systems arise in various application contexts.
Usually the state-evolution equations are assumed to be known up to certain parameters. But what can be done
if this is not the case? This paper presents an innovative approach to solve this difficult and complex situation
by using the inherent tracking abilities of evolution strategies. Combining principles of particle filters and
evolution strategies leads to a new type of algorithms: evolutionary particle filters. Their tracking quality is
examined in simulations.
1 INTRODUCTION
Particle filters play an important role in the estimation
of dynamical systems in many areas ranging from en-
gineering over financial modeling to physical and bi-
ological systems. Further applications include track-
ing and localization tasks, for instance, estimating au-
tonomous robot positions via GPS measurements or
tracking the position of airplanes in air traffic con-
trol. But what can be done if the evolution equations
for the state variables are unknown? Problems like
these appear for instance in ballistic target tracking or
hand-held GPS-receivers (Johansson and Lehmann,
2009). In this case, we cannot apply common par-
ticle filters since these usually require the knowledge
of the complete probabilistic model. The present pa-
per investigates the potential direct use of the inher-
ent tracking ability of evolution strategies, a specific
evolutionary algorithm. In the area of dynamic opti-
mization, it was shown in (Arnold and Beyer, 2006)
that evolution strategies can follow moving optimiz-
ers which should allow their application to model-free
tracking in principle. The question remains, however,
which measure should be used to guide the search if
information on the system is scarce.
The paper is organized as follows: It starts with
a general description of the dynamic estimation prob-
lem and gives a concise sketch of particle filters. Af-
terwards, evolution strategies with the current state-
of-the-art adaptation of the search direction and step-
size are introduced. This is followed by a brief review
of related approaches, that is, either publications us-
ing evolutionary and related algorithms in the area of
particle filtering or approaches where it was not as-
sumed that the state model is completely known. Af-
terwards, the main ideas for the new algorithms are
presented before they are investigated closer in the ex-
perimental section. Conclusions and potential follow-
up work constitute the last part of the paper.
1.1 A Dynamic Estimation Problem and
Particle Filters
Tracking applications consider variants of the general
system
    x_{k+1} = f_{k+1}(x_k, ε_{k+1})
    z_{k+1} = h_{k+1}(x_{k+1}, ω_{k+1})    (1)
(Arulampalam et al., 2002) describing the evolution of the state variables x_k and the sequence of measurements z_k, k ≥ 1. The state variables denote the “true” position of the target, which cannot be observed directly. Instead, only noisy measurements z_k can be taken, from which the position has to be inferred. The random variables ε_k and ω_k denote process and measurement noise, respectively, and are assumed to be independent. In tracking applications, the aims are to obtain the posterior density p(x_k | z_{1:k}), with p(x_k | z_{1:k}) := p(x_k | z_1, ..., z_k), statistical estimates of interesting characteristics, or to derive certain statistical moments, for instance the mean of the target position. Usually, the non-linear functions f_k, h_k, k ≥ 1, are assumed to be known.
Meyer-Nieberg S., Kropat E. and Pickl S..
Evolutionary Particle Filters: Model-free Object Tracking - Combining Evolution Strategies and Particle Filters.
DOI: 10.5220/0004284300960102
In Proceedings of the 2nd International Conference on Operations Research and Enterprise Systems (ICORES-2013), pages 96-102
ISBN: 978-989-8565-40-2
Copyright © 2013 SCITEPRESS (Science and Technology Publications, Lda.)
Following a Bayesian approach, the posterior den-
sity is obtained in two steps: a prediction and an up-
date step. The prediction step obtains the density of
the present target position given the past measure-
ments
    p(x_k | z_{1:k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | z_{1:k−1}) dx_{k−1}.    (2)
The required posterior can be given once new mea-
surements are made as
    p(x_k | z_{1:k}) = p(x_k | z_{1:k−1}) p(z_k | x_k) / p(z_k | z_{1:k−1})    (3)
with
    p(z_k | z_{1:k−1}) := ∫ p(z_k | x_k) p(x_k | z_{1:k−1}) dx_k.    (4)
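On a low-dimensional state space, the recursion (2)-(4) can be carried out numerically on a grid. The following sketch uses an illustrative Gaussian random-walk transition and Gaussian likelihood; both are stand-ins for a concrete model, not equations from this paper.

```python
import numpy as np

def grid_bayes_step(grid, posterior_prev, z, trans_std=1.0, meas_std=0.5):
    """One prediction/update cycle of Eqs. (2)-(4) on a point-mass grid."""
    # assumed transition density p(x_k | x_{k-1}): Gaussian random walk
    trans = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / trans_std) ** 2)
    trans /= trans.sum(axis=0)                          # columns sum to one
    predicted = trans @ posterior_prev                  # prediction, Eq. (2)
    lik = np.exp(-0.5 * ((z - grid) / meas_std) ** 2)   # likelihood p(z_k | x_k)
    unnorm = lik * predicted                            # numerator of Eq. (3)
    return unnorm / unnorm.sum()                        # evidence of Eq. (4) normalizes

grid = np.linspace(-10.0, 10.0, 401)
prior = np.exp(-0.5 * grid ** 2)                        # p(x_{k-1} | z_{1:k-1})
prior /= prior.sum()
posterior = grid_bayes_step(grid, prior, z=2.0)
mean_est = np.sum(grid * posterior)                     # posterior mean estimate
```

The posterior mean lands between the prior mean and the measurement, as expected of a Bayesian update; such grid filters, however, scale exponentially with the state dimension, which motivates the particle approximation below.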
However, exact analytical solutions for (3) exist only
if quite restrictive assumptions are fulfilled. Other-
wise approximations are required. One of these is the
well-known particle filter which is described shortly
in the following. For a more detailed description, we
refer to (Doucet and Johansen, 2011). It can be shown
that the density can be approximated as

    p(x_k | z_{1:k}) ≈ Σ_{i=1}^{N_s} w_{i,k} δ(x_k − x_{i,k})    (5)

using N_s particles x_{i,k}, which means that the sum converges to the density for N_s → ∞. The symbol δ(u) denotes the Dirac delta function, and the w_{i,k} are positive weights summing to one.
It can be shown (Arulampalam et al., 2002) that the weights can be updated according to

    w_{i,k} = w_{i,k−1} p(z_k | x_{i,k}) p(x_{i,k} | x_{i,k−1}) / q(x_{i,k} | x_{i,k−1}, z_k)    (6)

with q(x_{i,k} | x_{i,k−1}, z_k), the importance density, still to be defined. Since the performance of the particle filter
depends strongly on the importance density, its choice is critical. Common choices include, e.g., the transition density q(x_k | x_{i,k−1}, z_k) = p(x_k | x_{i,k−1}), although
this can lead to problems. Several approaches exist
which differ in the preconditions they make. Most ap-
proaches assume that the state evolution equations are
known. Only a few publications exist which address
the problem of unknown parameters.
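Choosing the transition density as importance density reduces (6) to w_{i,k} ∝ w_{i,k−1} p(z_k | x_{i,k}), which together with resampling yields the bootstrap filter. A minimal one-dimensional sketch follows; the Gaussian transition and measurement models are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, z, trans_std=1.0, meas_std=0.5):
    """One sequential-importance-resampling step with the transition prior."""
    # propagate through the (assumed) transition density p(x_k | x_{k-1})
    particles = particles + rng.normal(0.0, trans_std, size=particles.shape)
    # reweight with the likelihood p(z_k | x_k); cf. Eq. (6)
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # multinomial resampling counters weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = sir_step(particles, weights, z=1.5)
estimate = np.sum(weights * particles)   # posterior-mean estimate, cf. Eq. (5)
```

Note that the propagation step is exactly what becomes impossible when the state evolution equations are unknown, which is the situation addressed in this paper.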
In this paper, we assume that the functions de-
scribing the evolution of the non-measurable state
variables are unknown. Therefore, it is not possible
to compute the transition density p(x_k | x_{k−1}) explicitly. Only the measurements can be taken into account to derive and predict the target position.
1.2 Evolution Strategies
Evolutionary algorithms (EAs) are population-based
stochastic search and optimization algorithms. They
construct an implicit probabilistic model based on
good candidate solutions (parent population). The
model is then used to create new solutions (the off-
spring) which are incorporated into the population
if they are sufficiently good. An evolutionary algo-
rithm starts with an initial population which may be
drawn randomly. Evolution strategies (ESs) (Beyer
and Schwefel, 2002) are a variant of evolutionary al-
gorithms that is predominantly applied for continu-
ous search spaces. Mutation, that is, movement ac-
cording to random perturbations, is the main search
operator. Recombination is usually performed by cal-
culating the weighted mean of the µ parents, although
other forms exist (Beyer and Schwefel, 2002). The
result is then mutated – usually by adding a normally
distributed random variable with zero mean and covariance matrix σ²C. Afterwards, the individuals are
evaluated using the function to be optimized or a de-
rived function which allows an easy ranking of the
population. An important topic in evolution strate-
gies is the continuous adaptation of the covariance
matrix. Evolution strategies with ill-adapted param-
eters show slow convergence or are unable to find the
optimal state at all. Therefore, methods for adapt-
ing the scale factor σ or the full covariance matrix
have received a lot of attention (see (Meyer-Nieberg and Beyer, 2007)), culminating in evolution strategies with covariance matrix adaptation.
1.3 Updating the Covariance Matrix
First, the update of the covariance matrix is addressed.
In evolution strategies two types exist: one used by
the covariance matrix adaptation evolution strategy
(CMA-ES) (Hansen, 2006) which considers past in-
formation from the search and an alternative used by
the covariance matrix self-adaptation evolution strat-
egy (CMSA-ES) (Beyer and Sendhoff, 2008) which
takes only present information into account.
The covariance matrix update of the CMA-ES is
explained first. The CMA-ES uses weighted interme-
diate recombination, in other words, it computes the
weighted centroid of the µ best individuals of the pop-
ulation. This mean m^{(g)} is used for creating all offspring by adding a random vector drawn from a normal distribution with covariance matrix (σ^{(g)})² C^{(g)}, i.e., the actual covariance matrix consists of a general scaling factor (step-size) and a matrix denoting the directions. Following the usual notation in evolution strategies, this matrix C^{(g)} will be referred to as the covariance matrix in the following.
The basis for the CMA update is the common esti-
mate of the covariance matrix using the newly created
population. Instead of considering the whole popula-
tion for building the estimates, though, it introduces a
bias towards good search regions by taking only the µ
best individuals into account. Furthermore, it does not
estimate the mean anew but uses the weighted mean
m^{(g)}. Following (Hansen, 2006),

    y^{(g+1)}_{m:λ} := (1/σ^{(g)}) (x^{(g+1)}_{m:λ} − m^{(g)})

are determined, with x_{m:λ} denoting the mth best of the λ offspring according to the fitness ranking. The rank-µ update then obtains the covariance matrix as

    C^{(g+1)}_µ := Σ_{m=1}^{µ} w_m y^{(g+1)}_{m:λ} (y^{(g+1)}_{m:λ})^T.    (7)
To derive reliable estimates, larger population sizes are usually necessary, which is detrimental with regard to the algorithm's speed. Therefore, past information, that is, past covariance matrices, is usually also considered, with the parameter c_µ determining the effective time horizon. In the CMA-ES it has been found that enhancing the general search direction in the covariance matrix is usually beneficial. For this, the concepts of
the evolutionary path and the rank-one-update are in-
troduced. As its name suggests, an evolutionary path
considers the path in the search space the population
(i.e., the weighted mean) has taken so far. The evolutionary path p_c gives a general search direction that the ES has taken in the immediate past. In order to bias the covariance matrix accordingly, the rank-one update is used:

    C^{(g+1)}_1 := p^{(g+1)}_c (p^{(g+1)}_c)^T.    (8)
Together, the components constitute the covariance update of the CMA-ES:

    C^{(g+1)} := (1 − c_1 − c_µ) C^{(g)} + c_1 C^{(g+1)}_1 + c_µ C^{(g+1)}_µ;

see (Hansen, 2006) for details. The CMA-ES is one of
the most powerful evolution strategies. However, as
pointed out in (Beyer and Sendhoff, 2008), its scaling
behavior with the population size is not good. The
CMSA-ES (Beyer and Sendhoff, 2008) updates the
covariance matrix differently by considering
    y^{(g+1)}_{m:λ} := (1/σ^{(g+1)}_m) (x^{(g+1)}_{m:λ} − x^{(g)}_p)    (9)

with x^{(g)}_p the base vector of the mutation leading to x^{(g+1)}_{m:λ}. Using (weighted) recombination, Eq. (9) equals the rank-µ update of the CMA-ES. The covariance update then reads

    C^{(g+1)} := (1 − 1/c_τ) C^{(g)} + (1/c_τ) Σ_{m=1}^{µ} w_m y^{(g+1)}_{m:λ} (y^{(g+1)}_{m:λ})^T    (10)

with the weights usually set to w_m = 1/µ. See (Beyer and Sendhoff, 2008) for information on the free parameter c_τ.
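Both covariance updates can be sketched compactly. The learning rates c_1, c_µ and c_τ below are placeholder values, not the tuned defaults of (Hansen, 2006) or (Beyer and Sendhoff, 2008).

```python
import numpy as np

def cma_cov_update(C, y_sel, w, p_c, c1=0.1, cmu=0.2):
    """CMA update: rank-mu term, Eq. (7), plus rank-one term, Eq. (8)."""
    C_mu = sum(w[m] * np.outer(y_sel[m], y_sel[m]) for m in range(len(w)))
    C_1 = np.outer(p_c, p_c)
    return (1.0 - c1 - cmu) * C + c1 * C_1 + cmu * C_mu

def cmsa_cov_update(C, y_sel, w, c_tau=5.0):
    """CMSA update, Eq. (10): exponential fading of the old matrix."""
    C_mu = sum(w[m] * np.outer(y_sel[m], y_sel[m]) for m in range(len(w)))
    return (1.0 - 1.0 / c_tau) * C + (1.0 / c_tau) * C_mu

rng = np.random.default_rng(1)
C = np.eye(2)
y_sel = rng.normal(size=(3, 2))      # normalized steps of the mu = 3 best
w = np.full(3, 1.0 / 3.0)
C_cma = cma_cov_update(C, y_sel, w, p_c=np.array([0.5, 0.5]))
C_cmsa = cmsa_cov_update(C, y_sel, w)
```

Both updates are convex-like combinations of the old matrix with symmetric rank-limited terms, so a positive definite matrix stays positive definite.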
1.4 Updating the Step-size
The CMA-ES uses the so-called cumulative step-size
adaptation (CSA) to adapt the scaling parameter (also
called step-size, mutation strength or step-length). To
this end, the CSA (Hansen, 2006) again determines an evolutionary path p_σ by summing up the movement of the population centers, eliminating the influence of the covariance matrix and the step length. For a detailed description see (Hansen, 2006). The length of the path p_σ is important. If the path length
is short, several movements of the centers counter-
act each other, which indicates that the step-size is too large and should be reduced. If, on the other hand, the ES takes steps approximately in the same direction, progress and algorithm speed would be improved if
the ES could make larger changes. Therefore, long
path lengths are seen as an indicator for a required
increase of the step length. Ideally, the CSA should
result in uncorrelated steps leading to
    ln(σ^{(g+1)}) = ln(σ^{(g)}) + (c_σ/d_σ) (‖p^{(g+1)}_σ‖ − µ_{χ_n}) / µ_{χ_n}    (11)

as the CSA rule. The parameter µ_{χ_n} in (11) stands for the mean of the χ-distribution with n degrees of freedom and serves as the ideal value for the comparison. It can be shown that the original CSA encounters problems in large noise regimes.
uncertainty handling procedures and other safeguards
are recommended. An alternative approach for adapt-
ing the step-size is self-adaptation mainly developed
in (Schwefel, 1981). It subjects the strategy parame-
ters of the mutation to evolution. In other words, the
scaling parameter or in its full form, the whole co-
variance matrix, undergoes recombination, mutation,
and indirect selection processes. The working princi-
ple is based on an indirect stochastic linkage between
good individuals and appropriate parameters: On av-
erage, good parameters should lead to better offspring than too large or too small values or misleading directions. Today, self-adaptation is used mainly to
adapt the step-size or a diagonal covariance matrix.
ICORES2013-InternationalConferenceonOperationsResearchandEnterpriseSystems
296
In the case of the mutation strength, usually a log-
normal distribution, σ^{(g)}_l = σ_base exp(τ N(0,1)), is used for mutation. The parameter τ is called the learning rate. The variable σ_base is either the parental mutation
strength or the result of recombination. For the step-size, it is possible to apply the same type of recombination as for the positions, although different forms, for instance a multiplicative combination, could be used instead.
referred to as σ-self-adaptation (σSA) in the remain-
der of the paper. The newly created mutation strength
is then directly used in the mutation of the offspring.
If the offspring is sufficiently good, it is passed to the
next generation. The baseline σ_base is either the mutation strength of the parent or, if recombination is used, the recombination result. Self-adaptation with
recombination has been shown to be “robust” against
noise (Beyer and Meyer-Nieberg, 2006) and is used in
the CMSA-ES (Beyer and Sendhoff, 2008) as update
rule for the scaling factor.
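The two step-size rules can be sketched in a few lines; c_σ, d_σ and τ below are illustrative placeholders, and the χ-mean uses a standard series approximation.

```python
import numpy as np

def csa_update(sigma, path_norm, n, c_sigma=0.3, d_sigma=1.0):
    """CSA rule, Eq. (11): compare ||p_sigma|| with the chi-distribution mean."""
    chi_n = np.sqrt(n) * (1.0 - 1.0 / (4 * n) + 1.0 / (21 * n ** 2))
    return sigma * np.exp((c_sigma / d_sigma) * (path_norm - chi_n) / chi_n)

def sigma_sa_mutate(sigma_base, tau, rng):
    """Log-normal sigma-self-adaptation: sigma_l = sigma_base * exp(tau*N(0,1))."""
    return sigma_base * np.exp(tau * rng.normal())

rng = np.random.default_rng(2)
n = 10
longer = csa_update(1.0, path_norm=2.0 * np.sqrt(n), n=n)   # long path -> grow
shorter = csa_update(1.0, path_norm=0.5 * np.sqrt(n), n=n)  # short path -> shrink
offspring_sigma = sigma_sa_mutate(1.0, tau=1.0 / np.sqrt(2 * n), rng=rng)
```

The sign of the exponent in the CSA rule makes the behavior explicit: a path longer than the expected length of an uncorrelated random walk enlarges σ, a shorter one shrinks it.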
2 EVOLUTION STRATEGIES
FOR MODEL-FREE TRACKING
This paper provides – to our knowledge – the first at-
tempt to use an ES directly for tracking tasks. We pro-
pose to adapt evolution strategies to object tracking
by guiding the search using the remaining available
information. Since the state evolution equations are
unknown, tracking can only take the measurements
and the observation equation into account. We as-
sume that it is possible to evaluate p(z_k | x) point-wise for given z_k, since we will use this density as the fitness function

    f_k(x) = p(z_k | x)    (12)
to guide the search of the evolution strategy. One of
the aims of the paper is to investigate whether this ap-
proach is feasible. Since the measurements are over-
laid with noise, information from the search so far is
valuable to avoid moving towards false optima pro-
vided the true positional changes of the state variables
are not too large. We postulate that it is possible to
recover the target movement to some extent by con-
sidering the search process of the ES. The maximizer
of (12) or of some derivatives of (12) should provide
a rough estimate on the target position. The dynam-
ical nature of the optimization problem should keep
the ES’s population from collapsing to a point solu-
tion. An ES that tries to optimize the dynamic prob-
lem should provide a better guess for the true position
especially as it keeps statistics of past searches and
therefore is influenced by z_1, ..., z_k.
Two main approaches are considered; see Figs. 1 and 2. The first, called the evolutionary particle filter (EPF), does not use recombination, similar to usual particle filters. Instead it selects one particle of the parent population as the mean of the mutation vector. The second approach uses intermediate recombination and is denoted EPF_rec. Both will be combined with the CMA and the CMSA, adapted for object tracking. Let us first address the EPF. It allows
Algorithm 1: Evolutionary Particle Filter.

    g = 0: Initialize σ^{(0)}_m = σ_{m;k−1}, C^{(0)}_m = C_{m;k−1},
           x^{(0)}_m = x_{m;k−1}, w^{(0)}_m = w_{m;k−1}, m = 1, ..., µ.
    REPEAT
        Draw λ parents x^{(g)}_m with resampling
            (uniform or according to weights)
        Determine σ^{(g)}_m, C^{(g)}_m
        Set w^{(g)}_l = w^{(g)}_m
        Create λ particles according to
            x^λ_l = x^{(g)}_m + σ^{(g)}_m N(0, C^{(g)}_m)    (13)
        Determine the fitness
            f(x^λ_l) = w^{(g)}_l p(z_k | x^λ_l)    (14)
        Select the µ best particles
        Calculate the weights
            w^{(g+1)}_m = f(x^{(g+1)}_m) / Σ_{j=1}^{µ} f(x^{(g+1)}_j)    (15)
        g ← g + 1
    UNTIL STOP
    w_{m;k} = w^{(g_end)}_m, σ_{m;k} = σ^{(g_end)}_m,
    C_{m;k} = C^{(g_end)}_m, x_{m;k} = x^{(g_end)}_m, m = 1, ..., µ

Figure 1: Evolutionary particle filter without recombination.
a slightly different approach since its base vector is
a concrete point of the parent population: Setting the
importance density in (6) equal to the transition den-
sity leads to the weight update w_{i,k} = w_{i,k−1} p(z_k | x_{i,k}), which can be used as an alternative to the fitness (12) for the ES in the evolutionary particle filter in Fig. 1. In this way, past measurements influence the selection via the old weights. The EPF_rec strategy uses recombination, which circumvents the usage of previous weights. Therefore, Eq. (12), p(z_k | x), will serve
as fitness. Recombination can be realized in several
ways. In ES, static weights are common. This is in
contrast to typical procedures in particle filters which
use a fitness dependent relative weight. Investigating
from which procedure the algorithms benefit best, or whether a combination shall be applied, is an interest-
ing point for further research. The parameters of the
normal distribution are updated by adapting the con-
cepts of the CMA-ES and the CMSA-ES to the task at
hand. New measurements can arrive during optimiza-
tion leading to a dynamic optimization problem. In
the case of evolution strategies, problems may appear
if the magnitude of the change is very large, which will lead to longer adaptation times if the scaling factors
have become too small. This will occur especially
if measurements are taken infrequently. We assume
that in general convergence of the ES will take longer
than the appearance of new measurements (this can
be forced by limiting the search time of the ES).
Since both the CMA and the CMSA rely on past information, finding an appropriate generation time window will probably be important. This paper consid-
ers the performance of systems with short generation
time horizons which will require larger populations.
The ESs are running systems, that is, they are started at the beginning of the measurements and incorpo-
rate new information as soon as a generation cycle is
complete. In the following, the adaptation of the co-
variance matrix and step-size update are discussed.
In the case of the EPF_rec, the common ES update
rules appear appropriate. Using the EPF necessitates
changes to the covariance update. The first step in
the covariance update is the rank-µ update with the
mean of the previous population as the base point.
This, however, does not have a good justification if
recombination is not used. The population mean of
the new population could be used, instead. How-
ever, the question arises whether this method should
be applied at all since there are different base points
for the mutation for each offspring. Alternatives will
be investigated in further research. The question re-
mains whether the evolutionary path which considers
the movement of the population center is suitable for
the covariance update in case of the EPF-algorithm.
Again, this will be investigated experimentally, how-
ever on first sight, path information as direction ap-
pears more suitable for EPF_rec than for the EPF.
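One generation of Algorithm 1 (Fig. 1) can be sketched in one dimension as follows. The Gaussian likelihood stands in for p(z_k | x), and the per-particle covariance is reduced to a scalar step-size mutated by σSA; both simplifications are illustrative, not the full algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def likelihood(z, x, meas_std=1.0):
    """Stand-in for p(z_k | x): a Gaussian around the measurement."""
    return np.exp(-0.5 * ((z - x) / meas_std) ** 2)

def epf_generation(x_par, sigma_par, w_par, z, lam=40, tau=0.3):
    mu = len(x_par)
    # draw lambda parent indices with resampling according to the weights
    idx = rng.choice(mu, size=lam, p=w_par)
    sigma_off = sigma_par[idx] * np.exp(tau * rng.normal(size=lam))  # sigma-SA
    x_off = x_par[idx] + sigma_off * rng.normal(size=lam)            # Eq. (13)
    fit = w_par[idx] * likelihood(z, x_off)                          # Eq. (14)
    best = np.argsort(fit)[::-1][:mu]                                # mu best
    w_new = fit[best] / fit[best].sum()                              # Eq. (15)
    return x_off[best], sigma_off[best], w_new

x, s, w = rng.normal(0.0, 2.0, 10), np.full(10, 1.0), np.full(10, 0.1)
x, s, w = epf_generation(x, s, w, z=1.0)
```

Each offspring is centered on a concrete resampled parent rather than on a recombined mean, which is exactly the structural difference to EPF_rec discussed above.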
The update of the scaling parameter remains to be
addressed. As pointed out, two main approaches exist
for ESs: the CSA-rule and the σSA-rule. Again, the
concept of an evolutionary path lends itself more eas-
ily to the EPF_rec than to the EPF. In the latter case, the
question arises whether the movement of the mth best
individual might be used, instead. The information
from this path evolution will probably be overlaid by
large stochastic fluctuations, requiring possibly longer
time horizons. In a first approach, the usual CSA-rule
with small time horizon will be used. The σSA-rule
requires no changes and could be transferred di-
Algorithm 2: Evolutionary Particle Filter with Recombination.

    g = 0: Initialize σ^{(0)} = σ_{µ;k−1}, C^{(0)} = C_{k−1},
           x^{(0)}_m = m_{k−1}, w^{(0)}_m = w_{m;k−1}, m = 1, ..., µ.
    REPEAT
        Compute the weighted mean
            m^{(g)} = Σ_{m=1}^{µ} w^{(g)}_m x^{(g)}_m    (16)
        Create λ particles according to
            x^λ_l = m^{(g)} + σ^{(g)} N(0, C^{(g)})    (17)
        Determine the fitness
            f(x^λ_l) = p(z_k | x^λ_l)    (18)
        Select the µ best particles
        Calculate the weights
            a) w^{(g+1)}_m = 1/µ    (19)
            b) w^{(g+1)}_m = ln((µ+1)/2) − ln(m)    (20)
        Determine σ^{(g+1)}, C^{(g+1)}
        g ← g + 1
    UNTIL STOP
    w_{m;k} = w^{(g_end)}_m, σ_{µ;k} = σ^{(g_end)},
    C_k = C^{(g_end)}, m_k = m^{(g_end)}, m = 1, ..., µ

Figure 2: Evolutionary particle filter (EPF_rec) using recombination.
rectly to both main EPF types. However, finding the most suitable recombination form for the step-size is an important research task.
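A corresponding one-dimensional sketch of one EPF_rec generation uses the equal weights of (19); step-size and covariance adaptation are omitted, and the Gaussian likelihood is again only a stand-in for p(z_k | x).

```python
import numpy as np

rng = np.random.default_rng(4)

def epf_rec_generation(x_par, w_par, sigma, z, lam=40, meas_std=1.0):
    mu = len(x_par)
    m = np.sum(w_par * x_par)                            # Eq. (16)
    x_off = m + sigma * rng.normal(size=lam)             # Eq. (17)
    fit = np.exp(-0.5 * ((z - x_off) / meas_std) ** 2)   # Eq. (18)
    best = np.argsort(fit)[::-1][:mu]                    # mu best offspring
    return x_off[best], np.full(mu, 1.0 / mu)            # Eq. (19): equal weights

x, w = rng.normal(0.0, 2.0, 10), np.full(10, 0.1)
x, w = epf_rec_generation(x, w, sigma=1.0, z=1.5)
```

Because all offspring share the recombined mean as base point, no weight history is carried over, matching the discussion of EPF_rec above.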
We will use the following notation in the remain-
der of the paper. Let EPF-CMSA denote the evolu-
tionary particle filter without recombination using the
update rules of the CMSA-ES, and EPF_rec-CMSA the algorithm with recombination and the CMSA rule, whereas EPF-CMA and EPF_rec-CMA apply the rules from the CMA-ES, most notably the cumulative search path adaptation.
3 EXPERIMENTS
This paper is devoted to an investigation as to whether
evolution strategies can be applied in a tracking task where
the state evolution equations are unknown and aims
at providing a proof-of-concept. Therefore, finding
optimal parameter settings for the algorithms is not
attempted and is left for further work.
ICORES2013-InternationalConferenceonOperationsResearchandEnterpriseSystems
298
3.1 Experimental Set-up
The algorithms have several parameters that can and
should normally be tuned for a practical use. The ex-
periments presented will start from the default values
in CMA-ES and CMSA-ES and conduct only limited
investigations into finding better parameters. The fol-
lowing questions are addressed in our first investiga-
tion of the algorithms using two simple dynamic mod-
els: Are evolution strategies able to track the mov-
ing target without knowledge of the true positions
and state equations? This leads to the next topics:
Are there differences in the tracking quality between
CMA and CMSA using EPF_rec? Are there differences between the tracking using EPF and EPF_rec?
Following the particle filter literature, the average root mean squared error (average RMSE) is considered together with the relative success frequency P_succ of having a tracking distance below a given threshold.
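For one run, the two measures can be computed as follows (one-dimensional sketch; the threshold of one matches the success criterion used in the experiments).

```python
import numpy as np

def tracking_metrics(x_true, x_est, threshold=1.0):
    """Average RMSE and success frequency P_succ for one tracking run."""
    err = np.abs(np.asarray(x_est) - np.asarray(x_true))
    rmse = np.sqrt(np.mean(err ** 2))
    p_succ = np.mean(err < threshold)
    return rmse, p_succ

rmse, p_succ = tracking_metrics([0.0, 1.0, 2.0], [0.5, 1.0, 3.5])
```

In the experiments below, these per-run values are additionally averaged over repeated runs of the dynamical system and of the algorithms.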
In this paper, we investigate the algorithms using two
simple systems: a one-dimensional system, inspired by (Uosaki and Hatanaka, 2005), where only the step-size adaptation mechanism is investigated, and a two-dimensional system, which requires the adaptation of a diagonal covariance matrix. For a
proof-of-concept we consider the following systems:
The first, one-dimensional, dynamical system is given
by
    x_t = x_{t−1}/2 + 25 x_{t−1}/(1 + x_{t−1}²) + 8 cos(1.2t) + 10 N(0,1)
    z_t = sign(x_t) x_t²/20 + N(0,1)    (21)
with the maximum of p(z_t | x) directly discernible. No
covariance matrix is necessary, just step-size adapta-
tion is required. A two-dimensional system is defined
by
    x_{1,k} = 0.5 x_{1,k−1} + 25 x_{1,k−1}/(1 + x_{1,k−1}²) + 8 cos(1.2k) + 5 N(0,1)
    x_{2,k} = 0.5 x_{2,k−1} + 25 x_{2,k−1}/(1 + x_{2,k−1}²) + 8 sin(1.2k) + 5 N(0,1)
    z_{1,k} = sign(x_{1,k}) x_{1,k}²/20 + N(0,1)
    z_{2,k} = sign(x_{2,k}) x_{2,k}²/20 + N(0,1)    (22)
Again, optimal points of the density can be obtained
easily.
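Simulating system (21) to generate the measurement sequence the filters operate on is straightforward; the sketch below follows (21) directly, with the true states kept hidden from the filter.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_system_21(k_max=100, x0=0.0):
    """Generate hidden states x_t and measurements z_t of system (21)."""
    xs, zs = [], []
    x = x0
    for t in range(1, k_max + 1):
        x = (x / 2.0 + 25.0 * x / (1.0 + x ** 2)
             + 8.0 * np.cos(1.2 * t) + 10.0 * rng.normal())
        z = np.sign(x) * x ** 2 / 20.0 + rng.normal()
        xs.append(x)
        zs.append(z)
    return np.array(xs), np.array(zs)

x_true, z_meas = simulate_system_21()
```

The squared, sign-preserving observation makes the likelihood bimodal in places, which is part of what makes this benchmark hard for filters.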
3.2 Discussion
In a first analysis, the common CMA-ES and CMSA-
ES versions were applied with the usual parameter
setting excepting the population size λ and the learn-
ing rates of the CMSA-ES. The EPF_rec-CMA-ES uses µ = λ/2 parents applying weighted recombination, whereas the EPF_rec-CMSA-ES uses µ = λ/4 parents
with equal weights. The fitness-independent weights
enable the use of a different fitness function instead
of the normal probability density in (21) as long as
the optimizer remains the same. The offspring pop-
ulation size was set to λ = 100 which is quite large
for ESs but relatively small for particle filters. Pre-
liminary runs revealed that in the case of the first
dynamical system, good results were obtainable for
small population sizes around λ = 10 for the CMA-
variant whereas self-adaptation requires larger popu-
lation sizes. In the case of the second system, larger
populations appear necessary. The learning rates τ
and τ_c have to be adjusted. In a first approach, the parameters were set to τ = 1/√10 and τ_c = 1/(µN).
As a safeguard against strong statistical fluctuations
in the case of the EPF_rec-CMSA-ES, the mean of
the mutation strengths was substituted by the median
since this is the more robust estimate. We allowed
three generation cycles before new measurements ar-
rived. We conducted 20 different runs of the dynami-
cal system (21), simulating the system for k_max = 100
movements/measurements. For each of these realiza-
tions, we conducted 30 runs of the EAs. A track-
ing is called successful, if the distance between the
true value and the algorithmic estimate is smaller than
one. Overall, we find that the EPF_rec-CMA realizes
lower average root mean squared errors. In the case
of dynamical system (21), the minimal average mean
squared error is 7.96, the median lies at 9.31 and
the maximal average error reads 11.96. In contrast,
the values read 13.52, 19.57, and 24.65 for EPF_rec-
CMSA. We assume that this may be due either to the
parameter setting chosen or that the stronger stochas-
tic influences on the CMSA-ES have led to stronger
fluctuations in the tracking quality and some extreme
values. Concerning the success frequency, an interest-
ing finding emerges. For the first dynamical system,
the EPF_rec-CMSA has success frequencies between
0.32 and 0.39 with an average RMSE during track-
ing time of around 0.17-0.18 whereas the success fre-
quency is below 0.1 for the CMA-version.
In the case of the two-dimensional system (22),
both strategies benefit from larger population sizes
and longer generation times until new measurements
arrive. The number of runs of the dynamical sys-
tem was reduced to ten and 30 repeats were used. In
contrast to system (21), the EPF_rec-CMSA variant led to better results (see Tables 1 and 2).
First experiments with the EPF-strategy revealed
stability problems if a fitness dependent weight was
EvolutionaryParticleFilters:Model-freeObjectTracking-CombiningEvolutionStrategiesandParticleFilters
299
Table 1: EPF_rec-CMA: Average RMSE values (best and worst) for ten different runs of the system (22), each averaged over 30 repeats.
λ g best worst
100 1 13.221 15.755
100 2 12.612 14.769
100 3 12.076 15.469
100 4 12.161 14.795
100 5 12.706 14.746
200 1 14.733 16.843
200 2 14.977 17.796
200 3 15.93 19.895
200 4 15.991 19.721
200 5 16.424 19.956
Table 2: EPF_rec-CMSA: Average RMSE values (best and worst) for ten different runs of the system (22), each averaged over 30 repeats.
λ g best worst
100 1 10.335 9.263
100 2 7.39 8.778
100 3 6.072 8.312
100 4 5.54 6.958
100 5 5.592 7.632
200 1 8.829 10.576
200 2 6.505 8.156
200 3 6.358 7.924
200 4 5.81 7.428
200 5 5.832 6.870
used. Therefore, usual rank-dependent weights of the
CMSA-ES were applied. We compared the perfor-
mance of the EPF strategy with the EPF_rec-CMSA for
system (21), that is, without adapting the covariance
matrices. The experiments reveal no advantage for
using the EPF strategy. Instead the RMSE is always
larger than the error for the EPF_rec-CMSA. Not using
recombination probably requires larger populations in
order to cope with stochastic fluctuations.
4 CONCLUSIONS
This paper presents new approaches for state esti-
mation in dynamical systems. State estimations of
dynamical systems have various application areas
ranging from tracking and localization tasks in au-
tonomous robots to air traffic control and fault de-
tection. In contrast to nearly all approaches, the pa-
per does not assume that the probabilistic model gov-
erning the evolution of the state variables of the sys-
tem is completely known. Instead it assumes that the
evolution equation cannot be recovered analytically
and only measurement information arrives during the
tracking. We propose the use of evolution strategies
which have been successfully applied to noisy and dy-
namic optimization. The paper presents a proof-of-
concept applying selected evolutionary particle filters
to simple dynamical systems. Since the first exper-
iments were successful, in-depth investigations will
be carried out in order to fine-tune the approaches.
REFERENCES
Arnold, D. V. and Beyer, H.-G. (2006). Optimum tracking with evolution strategies. Evolutionary Computation, 14:291–308.
Arulampalam, M., Maskell, S., Gordon, N., and Clapp, T. (2002). A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174–188.
Beyer, H.-G. and Meyer-Nieberg, S. (2006). Self-adaptation of evolution strategies under noisy fitness evaluations. Genetic Programming and Evolvable Machines, 7(4):295–328.
Beyer, H.-G. and Schwefel, H.-P. (2002). Evolution strategies: A comprehensive introduction. Natural Computing, 1(1):3–52.
Beyer, H.-G. and Sendhoff, B. (2008). Covariance matrix adaptation revisited: the CMSA evolution strategy. In Rudolph, G. et al., editors, PPSN, volume 5199 of Lecture Notes in Computer Science, pages 123–132. Springer.
Doucet, A. and Johansen, A. M. (2011). A tutorial on particle filtering and smoothing: Fifteen years later. In Crisan, D. and Rozovsky, B., editors, Oxford Handbook of Nonlinear Filtering. Oxford University Press.
Hansen, N. (2006). The CMA evolution strategy: A comparing review. In Lozano, J. et al., editors, Towards a New Evolutionary Computation. Advances in Estimation of Distribution Algorithms, pages 75–102. Springer.
Johansson, A. and Lehmann, E. (2009). Evolutionary optimization of dynamics models in sequential Monte Carlo target tracking. IEEE Transactions on Evolutionary Computation, 13(4):879–894.
Meyer-Nieberg, S. and Beyer, H.-G. (2007). Self-adaptation in evolutionary algorithms. In Lobo, F., Lima, C., and Michalewicz, Z., editors, Parameter Setting in Evolutionary Algorithms, pages 47–76. Springer, Heidelberg.
Schwefel, H.-P. (1981). Numerical Optimization of Computer Models. Wiley, Chichester.
Uosaki, K. and Hatanaka, T. (2005). Evolution strategies based Gaussian sum particle filter for nonlinear state estimation. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, volume 3, pages 2365–2371.
ICORES2013-InternationalConferenceonOperationsResearchandEnterpriseSystems
300