The rest of the paper is organized as follows: Sec. 2 presents the novel algorithm; Sec. 3 illustrates its performance, both on theoretical objective functions and on an application example, the optimization of the riding comfort of a passenger vehicle; Sec. 4 gives some final remarks, together with suggestions for further research.
2 THE HYBRID ALGORITHM
2.1 Description
The proposed hybrid algorithm with deterministic
mutation aims, as already mentioned, at combining
the advantages of both optimization approaches.
Provided the objective function is regular, deterministic
methods are characterized by a high convergence
rate and accuracy in the search for the optimum. On
the other hand, EA show a low convergence rate but
they can search a significantly broader area for the
global optimum.
The [µ/ρ (+/,) λ, ν]–hES is based on the distribution
of the local and the global search for the optimum:
it consists of a super-positioned stochastic
global search, followed by an independent deterministic
procedure, which is activated, under certain conditions, on
specific members of the involved population. Thus,
every member of the population contributes to the
global search, while single individuals perform the
local search. Similar algorithmic structures, the theoretical
background of which pertains to the simulation
of insect societies (Monmarche et al., 2000; Rajesh
et al., 2001), have been presented by (Colorni et al.,
1996; Dorigo et al., 2000; Jayaraman et al., 2000).
The stochastic platform has been selected to be
the ES, while the deterministic counterpart is a quasi–
Newton algorithm (see Sec. 2.2). It must be noted that
the selection of ES among the other instances of EA is
justified by numerical experiments on non–linear
parameter estimation problems (Schwefel, 1995; Baeck,
1996), which have provided significant indication that
ES perform better than the other two classes of EA,
namely GA and EP.
The conventional ES is based on three operators
that take on the recombination, the mutation and the
selection tasks. In order to maintain an adequate
stochastic performance in the new algorithm, the recombination
and selection operators are retained unaltered
(refer to (Beyer and Schwefel, 2002) for a brief
discussion of the recombination phase), while the
strong local-search performance of the deterministic method
is exploited by substituting the original mutation operator with a
quasi–Newton one.
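For concreteness, such a substitute mutation operator can be sketched as a minimal BFGS-type local search with forward-difference gradients and a simple backtracking line search. This is an illustrative sketch only: the names (`fd_grad`, `qn_mutation`) and all parameter values are assumptions, not taken from the original implementation, whose actual form is discussed in Sec. 2.2.

```python
def fd_grad(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - fx) / h)
    return g

def qn_mutation(f, x0, iters=50, h=1e-6, tol=1e-10):
    """BFGS-type local search used in place of the stochastic mutation (sketch)."""
    n = len(x0)
    # Inverse-Hessian approximation, initialized to the identity.
    H = [[float(i == j) for j in range(n)] for i in range(n)]
    x, g = list(x0), fd_grad(f, x0, h)
    for _ in range(iters):
        # Search direction d = -H g.
        d = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        # Backtracking line search: halve the step until f decreases.
        t, fx = 1.0, f(x)
        while t > 1e-12 and f([xi + t * di for xi, di in zip(x, d)]) >= fx:
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = fd_grad(f, x_new, h)
        s = [t * di for di in d]
        y = [gn - gi for gn, gi in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if abs(sy) < tol:   # degenerate curvature information: stop
            return x_new
        # BFGS update of the inverse Hessian:
        # H <- (I - r s y^T) H (I - r y s^T) + r s s^T,  r = 1/(s^T y)
        r = 1.0 / sy
        Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
        yHy = sum(y[i] * Hy[i] for i in range(n))
        for i in range(n):
            for j in range(n):
                H[i][j] += r * (1.0 + r * yHy) * s[i] * s[j] \
                    - r * (Hy[i] * s[j] + s[i] * Hy[j])
        x, g = x_new, g_new
    return x
```

On a regular, smooth function such a step converges rapidly to the nearest local optimum, which is precisely the behavior the hybrid scheme exploits locally.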
A matter that significantly affects the performance
of the [µ/ρ (+/,) λ, ν]–hES is the choice of
the population members that are selected for
mutation: there exist indications (Koulocheris et al.,
2003b) that the reason for the poor performance of
EA on non–linear multimodal functions is the loss of
information through the non-privileged individuals of
the population. Thus, the new deterministic mutation
operator is not applied to all λ recombined individu-
als but only to the ν worst among the (µ (+/,) λ),
where ν is an additional algorithm parameter. This
means that a sorting procedure takes place twice in
every iteration step: the first time in order to yield
the ν worst individuals and the second to support the
selection operator, which succeeds the new determin-
istic mutation operator. This modification enables the
strategy to yield the corresponding local optimum for
each of the selected ν worst individuals in every iteration
step. The advantage is reflected in an increased
convergence rate and reliability in the search
for the global optimum. Three other alternatives
were also tested, in which the deterministic mutation
operator was applied to:
- every individual of the involved population,
- a number of privileged individuals, and
- a number of randomly selected individuals.
The above alternatives led to three types of problem-
atic behavior. More specifically, the first increased the
computational cost of the algorithm without the desir-
able effect. The second alternative led to premature
convergence of the algorithm to local optima of the
objective function, while the third generated unstable
behavior that led to statistically low performance.
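The iteration structure described above can be summarized in a short Python sketch. This is a hedged illustration, assuming minimization; `recombine` and `qn_mutate` are hypothetical placeholder callables standing in for the recombination operator and the deterministic mutation of Sec. 2.2.

```python
import random

def hybrid_es_step(population, fitness, recombine, qn_mutate,
                   mu, lam, nu, plus=True):
    """One iteration of the hybrid strategy (illustrative sketch, minimization)."""
    # Stochastic global search: lambda offspring created by recombination.
    offspring = [recombine(random.sample(population, 2)) for _ in range(lam)]
    # Pool according to the (mu + lambda) or (mu, lambda) scheme.
    pool = population + offspring if plus else offspring
    # First sort: identify the nu WORST individuals of the pool ...
    pool.sort(key=fitness, reverse=True)   # worst (largest fitness) first
    # ... and apply the deterministic mutation only to them.
    pool[:nu] = [qn_mutate(x) for x in pool[:nu]]
    # Second sort supports the selection operator: keep the mu best.
    pool.sort(key=fitness)
    return pool[:mu]
```

Setting `nu` to the full pool size reproduces the first of the rejected alternatives (mutating every individual), which is exactly why ν is kept as a separate strategy parameter.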
2.2 The Deterministic Mutation
As noted, quasi–Newton type methods replace the
original mutation of the ES. Yet, unlike earlier
versions (Vrazopoulos, 2003), it is not wise to restrict
the operator to a line–search framework, since trust–
region and mixed (combined) methods have also proven
to be competitive alternatives, nor to enforce the exclusive
use of the BFGS Hessian update, as analytical or
finite–difference derivative information may, in some
cases, be available or inexpensive to compute. This
fact allows the optional implementation of full Newton
methods, but the term quasi–Newton shall be preserved,
in order to cover the majority of the problems
faced in practice. Thus, in the following it is assumed
that the gradient of the objective function is approximated
using finite–differences, while the Hessian is
ICINCO 2009 - 6th International Conference on Informatics in Control, Automation and Robotics