different tradeoffs with respect to each objective.
Optimizers should also present only the best few solutions, since overwhelming the decision maker with too many alternatives is undesirable.
3.1 More Objectives
Optimization problems with more than three
objectives are called many-objective optimization
problems, while problems with two or three
objectives are called multi-objective optimization
problems. In (Khare, et al., 2003), three second-generation
MOEAs (NSGA-II, SPEA2, PESA) were tested, and all
three were found to be vulnerable on problems with a
larger number of objectives.
The main difficulties with many-objective
optimization problems are visualization, handling
high dimensionality, the exponentially growing
number of points needed to represent the Pareto
front, the greater proportion of nondominated
solutions, and stagnation of the search due to the larger
number of incomparable solutions. Our work tackles
the latter two difficulties by replacing the standard definition
of dominance with an approximate one, relaxing the
criteria for accepting nondominated solutions.
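The growth in the proportion of nondominated solutions can be illustrated with a small experiment (an illustrative sketch of the well-known effect, not code or data from this work): sample uniformly random objective vectors and count how many are nondominated under the standard Pareto definition.

```python
import random

def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_fraction(num_points, num_objectives, seed=0):
    """Fraction of uniformly random points that no other point dominates."""
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(num_objectives))
           for _ in range(num_points)]
    nondom = [p for p in pts
              if not any(dominates(q, p) for q in pts if q is not p)]
    return len(nondom) / num_points

# With more objectives, a far larger share of random points is
# mutually incomparable, so far more of them are nondominated.
print(nondominated_fraction(200, 2), nondominated_fraction(200, 10))
```

With two objectives only a small fraction survives, while with ten objectives most points are incomparable to all others, which is exactly the stagnation problem described above.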
3.2 Dominance
Multi-objective optimization algorithms that insist on
both diversity and convergence to the Pareto front
face Pareto sets of substantial size, need long
computation times, and end up presenting very
large sets of solutions to the decision maker. These issues
make them of little practical use without further analysis,
because speed and a small number of presented solutions
matter greatly to decision makers.
ϵ-dominance (Laumanns, et al., 2002) addresses
these problems by quickly searching for solutions
that are good enough, diverse, and few in number. It
approximates domination in the Pareto set by
relaxing the strict definition of dominance: an individual
is considered to ϵ-dominate other individuals that,
under the strict definition, would be nondominated
with respect to it.
In Figure 1, a visual comparison between ϵ-
dominance and regular dominance is shown
(Laumanns, et al., 2002).
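One common additive formulation of the relaxed test can be sketched as follows (a minimal sketch for minimization problems; the original paper also gives a multiplicative form, and the choice of ϵ here is illustrative):

```python
def eps_dominates(a, b, eps):
    """Additive epsilon-dominance for minimization: a eps-dominates b
    if shifting a by eps makes it at least as good as b in every
    objective."""
    return all(ai - eps <= bi for ai, bi in zip(a, b))

# Two mutually nondominated points under strict dominance...
a, b = (1.0, 2.0), (2.0, 1.0)
# ...but with a large enough eps each eps-dominates the other, so an
# archive only needs to keep one representative of the pair.
print(eps_dominates(a, b, 1.0))  # True
print(eps_dominates(a, b, 0.5))  # False
```

This is how ϵ-dominance keeps the archive small: solutions that are within ϵ of each other no longer all count as incomparable.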
4 GENETIC PROGRAMMING
Genetic programming (GP) is one type of
evolutionary algorithm. Its main characteristic is
that it represents solutions as programs (Koza, 1992).

Figure 1: Differences between (a) regular and (b) ϵ-dominance.

This representation scheme is the main
difference between genetic algorithms and genetic
programming. Each solution (program) is judged
on its ability to solve the problem using a
mathematical function called the fitness function. Each
program, or solution, is represented as a decision
tree. GP evolves a population of programs by
selecting candidates that score high on the
fitness function and applying the usual evolutionary
variation operators to them (mutation, crossover,
and reproduction). New populations are created from
these outputs until a termination criterion is met.
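The generational loop just described can be sketched as follows (an illustrative toy in which "programs" are replaced by real numbers so the example stays self-contained; the operators, parameters, and target value are our assumptions, not from this work):

```python
import random

def tournament(pop, fitness, k=3):
    """Selection: pick the best of k randomly chosen individuals."""
    return max(random.sample(pop, k), key=fitness)

def evolve(pop, fitness, crossover, mutate, generations, p_cx=0.9):
    """Generational loop: select fit candidates, apply variation
    operators, and build each new population from the outputs."""
    for _ in range(generations):
        nxt = [max(pop, key=fitness)]  # keep the current best (elitism)
        while len(nxt) < len(pop):
            if random.random() < p_cx:
                child = crossover(tournament(pop, fitness),
                                  tournament(pop, fitness))
            else:
                child = mutate(tournament(pop, fitness))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy stand-in for programs: real numbers evolved toward the target 5.0.
random.seed(1)
best = evolve(pop=[random.uniform(-20, 20) for _ in range(30)],
              fitness=lambda x: -abs(x - 5.0),
              crossover=lambda a, b: (a + b) / 2,
              mutate=lambda x: x + random.gauss(0, 1),
              generations=40)
```

In actual GP the individuals are trees and crossover swaps subtrees, but the control flow of the loop is the same.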
We use strongly-typed genetic programming
(STGP) in this paper, one of many
enhanced versions of GP. STGP makes GP more
flexible by explicitly defining the allowed data types
beforehand instead of limiting all nodes to a single data
type. Genetic programming, and STGP specifically,
consists of the following.
1) Representation: individuals are
represented as decision trees, but unlike standard GP
(Koza, 1992), STGP does not require variables,
constants, function arguments, and function return
values to share the same data type; we only need to
specify the data types beforehand.
Additionally, to ensure consistency, the root node of
the tree must return a value of the type specified by
the problem definition, and each nonroot node must
return a value of the type its parent node requires
as an argument.
2) Fitness function: scores how well a
specific execution matches expected results.
3) Initialization: there are two main methods
to initialize a population: full and grow. Koza (1992)
recommended using a ramped half-and-half
approach, combining the two methods in equal proportion.
4) Genetic operators: crossover and mutation.
5) Parameters: maximum tree depth,
maximum initial tree depth, maximum mutation tree
depth, population size, and termination criteria.
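The typed representation of item 1 and the initialization methods of item 3 can be sketched together (the primitive set, type names, and the terminal probability used in grow are illustrative assumptions for the sketch, not primitives from this work):

```python
import random

# Hypothetical typed primitive set: each function declares its return
# type and its argument types, as STGP requires.
FUNCTIONS = {
    "add": ("float", ("float", "float")),
    "gt":  ("bool",  ("float", "float")),
    "if3": ("float", ("bool", "float", "float")),
}
TERMINALS = {"float": ["x", 1.0], "bool": [True, False]}

def grow(ret_type, depth, rng):
    """'Grow': nodes may be functions or terminals at any level,
    always respecting the type demanded by the parent."""
    funcs = [n for n, (r, _) in FUNCTIONS.items() if r == ret_type]
    if depth == 0 or not funcs or rng.random() < 0.3:
        return rng.choice(TERMINALS[ret_type])
    name = rng.choice(funcs)
    return (name,) + tuple(grow(t, depth - 1, rng)
                           for t in FUNCTIONS[name][1])

def full(ret_type, depth, rng):
    """'Full': every branch extends to the maximum depth."""
    funcs = [n for n, (r, _) in FUNCTIONS.items() if r == ret_type]
    if depth == 0 or not funcs:
        return rng.choice(TERMINALS[ret_type])
    name = rng.choice(funcs)
    return (name,) + tuple(full(t, depth - 1, rng)
                           for t in FUNCTIONS[name][1])

def ramped_half_and_half(size, max_depth, rng):
    """Koza's ramped half-and-half: half the trees from each method,
    with depths ramped from 2 up to max_depth."""
    pop = []
    for i in range(size):
        depth = 2 + i % (max_depth - 1)
        method = full if i % 2 == 0 else grow
        pop.append(method("float", depth, rng))
    return pop
```

Note how the type constraints of item 1 are enforced during construction: a node of a given return type can only receive children whose return types match its declared argument types.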