preference for an evolutionary method, based on the
large number of successful multi-objective evolution-
ary algorithms in the literature (Deb et al., 2002; Zit-
zler et al., 2001). Second, we want to have an algo-
rithm that produces the Pareto front of non-dominated
points in the search space (here: the space of parame-
ter vectors of the EA to be tuned). This preference is
motivated by the advantages this approach offers, e.g.,
it allows us to investigate interactions between fitness
functions, parameter values, and the evolutionary al-
gorithm. Furthermore, it allows us to identify various
‘generalists’ rather than a single one, as well as to an-
alyze the robustness of an EA.
The main contributions of this paper can be listed
as follows.
• We introduce a multi-function tuning algorithm
called M-FETA, based on a Multi-Objective Evo-
lutionary Algorithm (MOEA) approach, that is
able to cope with the two principal challenges
mentioned above.
• We demonstrate the benefits of using an approxi-
mated Pareto front by tuning an EA on the Sphere
and Rastrigin functions. (A test suite of two
functions is certainly not large enough, but here
we are mainly interested in a proof of concept
and in demonstrating the new technology, rather
than in real tuning on an ‘interesting’ test suite.)
Namely, the parameter Pareto front allows us to
investigate interactions between fitness functions,
parameter values, and the evolutionary algorithm.
Furthermore, it allows us to identify different
kinds of ‘generalists’ rather than a single one, as
well as to analyze the robustness of an EA.
2 PARAMETERS, TUNERS, AND
UTILITY LANDSCAPES
In general, one can distinguish three layers in param-
eter tuning: the application layer, the algorithm layer,
and the design or tuning layer. The whole scheme can
be divided into two optimization problems. The lower
part of this three-tier hierarchy consists of a problem
on the application layer (e.g., the traveling salesman
problem) and an EA (e.g., a genetic algorithm) on the
algorithm layer trying to find an optimal solution for
this problem. Simply put, the EA iteratively gen-
erates candidate solutions (e.g., permutations of city
names), seeking one with maximal quality. The upper
part of the hierarchy contains a tuning method that
is trying to find optimal parameter values for the EA
on the algorithm layer. Similarly to the lower part,
the tuning method iteratively generates parameter
vectors, seeking one with maximal quality, where
the quality of a given parameter vector ¯p is based on
the performance of the EA using the values of ¯p. To
avoid confusion, we use distinct terms to designate the
quality functions of these two optimization problems.
In line with the usual EC terminology, we use the term
fitness for the quality of candidate solutions on the
lower level, and the term utility to denote the quality
of EA parameter vectors.
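The three-tier scheme can be made concrete with a small sketch. Everything below is a hypothetical illustration, not the paper's actual setup: a deliberately tiny EA on the algorithm layer minimizes the Sphere function on the application layer, and a naive random-search tuner stands in for the tuning layer. The fitness/utility distinction appears as the two separate quality functions.

```python
import random

def sphere(x):
    """Application-layer test function (to be minimized)."""
    return sum(xi * xi for xi in x)

def run_ea(params, fitness, n_gens=50, pop_size=20):
    """Algorithm layer: a minimal, hypothetical EA whose behaviour
    depends on the parameter vector being tuned. `fitness` is the
    lower-level quality function; returns the best fitness found."""
    mutation_step, tournament_size = params
    pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(n_gens):
        parent = min(random.sample(pop, tournament_size), key=fitness)
        child = [xi + random.gauss(0, mutation_step) for xi in parent]
        pop.sort(key=fitness)          # replace the current worst
        pop[-1] = child
    return min(fitness(ind) for ind in pop)

def utility(params, test_fn, repeats=5):
    """Tuning-layer quality: the utility of a parameter vector is the
    performance of the EA using it, averaged over repeated runs
    because any single run is noisy."""
    return sum(run_ea(params, test_fn) for _ in range(repeats)) / repeats

# Design/tuning layer: a trivial random-search tuner generating
# parameter vectors (mutation step, tournament size) and seeking
# one of maximal utility (minimal value, since we minimize).
candidates = [(random.uniform(0.01, 1.0), random.randint(2, 5)) for _ in range(10)]
best = min(candidates, key=lambda p: utility(p, sphere))
```

A real tuner would of course be far more sophisticated; the point here is only the nesting of the two optimization loops.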
In the simplest case, the utility of a parameter vec-
tor ¯p is the performance of the EA using the values
of ¯p on a given test function F. Tuning an EA (by
whichever performance metric) on one single func-
tion F delivers a specialist, that is, an EA that is very
good at solving F, with no claims or indications re-
garding its performance on other problems. This can
be a satisfactory result if one is only interested in solv-
ing that given problem. However, algorithm design-
ers in general, and evolutionary computing experts in
particular, are often interested in so-called ‘robust pa-
rameter values’, that is, in parameter values that make
an EA using them work well on many problems. To
this end, test suites consisting of many test functions
are used to evaluate algorithms and to support claims
that a given algorithm is good on a ‘wide range of
problems’. This approach raises serious methodolog-
ical issues, as discussed in (Eiben and Jelasity, 2002),
and may also be in conflict with theoretical results, cf.
(Wolpert and Macready, 1997), all depending on how
the claims are formulated. In this paper we do not
elaborate on these issues, but take a pragmatic stance
instead: We are after a method that is able to find pa-
rameter vectors that work well on a given set of test
functions.
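To illustrate the multi-function setting, the sketch below (hypothetical; a (1+1)-style hill climber stands in for the EA, and the parameter, function names, and values are assumptions for illustration) computes the utility of one parameter value separately on each function of a two-function suite. Keeping the per-function utilities apart, rather than averaging them into one scalar, is what makes it possible to ask whether a parameter value is a specialist or a generalist.

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def ea_performance(mutation_step, fitness, n_gens=100):
    """Hypothetical stand-in for a full EA run: a (1+1)-style hill
    climber whose success depends on the tuned parameter.
    Returns the best fitness reached (lower is better)."""
    x = [random.uniform(-5, 5) for _ in range(5)]
    for _ in range(n_gens):
        y = [xi + random.gauss(0, mutation_step) for xi in x]
        if fitness(y) < fitness(x):
            x = y
    return fitness(x)

def utility_vector(mutation_step, suite, repeats=10):
    """One utility value per test function, each averaged over
    repeated runs. A single average across functions would hide
    that a parameter value can be strong on one function and
    weak on another."""
    return [sum(ea_performance(mutation_step, f) for _ in range(repeats)) / repeats
            for f in suite]

u = utility_vector(0.3, [sphere, rastrigin])
```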
3 MULTI-FUNCTION
EVOLUTIONARY TUNING
ALGORITHM
The Multi-Function Evolutionary Tuning Algorithm
(M-FETA) is, in essence, a Multi-Objective Evolu-
tionary Algorithm with a particular technique of as-
sessing the quality of candidate solutions. This tech-
nique is designed for being used within a parameter
tuner for EAs. In such applications candidate solu-
tions are EA parameter vectors whose quality is de-
fined by the performance of the EA on a collection
of functions F = {f_1, . . . , f_M}. Due to the stochastic na-
ture of EAs, this performance is a noisy observable.
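A minimal sketch of the non-dominance test that the Pareto-front view relies on; the utility vectors below are made-up estimates for three hypothetical parameter vectors (lower is better), not results from the paper.

```python
def dominates(u, v):
    """u Pareto-dominates v (utilities to be minimized): u is no
    worse on every function and strictly better on at least one."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

# Made-up estimated utility vectors of three parameter vectors on a
# suite of M = 2 functions.
utils = {"p1": [0.1, 5.0], "p2": [0.2, 4.0], "p3": [0.3, 6.0]}

# Approximated Pareto front: parameter vectors not dominated by
# any other candidate.
front = sorted(name for name, u in utils.items()
               if not any(dominates(v, u)
                          for other, v in utils.items() if other != name))
# p3 is dominated by p1 on both functions; p1 and p2 are mutually
# non-dominated, so both survive as differently-shaped 'generalists'.
```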
In tuning terms, this means that the utility of a pa-
rameter vector ¯x can only be estimated. The usual
ICEC 2010 - International Conference on Evolutionary Computation