A LONG-TERM MEMORY APPROACH FOR DYNAMIC
MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS
Alan Díaz-Manríquez, Gregorio Toscano-Pulido and Ricardo Landa-Becerra
Information Technology Laboratory, CINVESTAV-Tamaulipas, Parque Científico y Tecnológico TECNOTAM
Km. 5.5 carretera Cd. Victoria-Soto La Marina, Cd. Victoria, Tamaulipas 87130, Mexico
Keywords: Evolutionary algorithms, Dynamic multiobjective optimization.
Abstract:
A dynamic optimization problem (DOP) may involve two or more functions to be optimized simultaneously, as well as constraints and parameters which can change over time; it is therefore essential to have a response mechanism that reacts when a change is detected. In the past, several memory-based approaches have been proposed to solve single-objective dynamic problems. Such approaches use a long-term memory to store the best solution found before a change in the environment occurs, so that the stored solutions can be used as seeds in subsequent environments. However, when we deal with a dynamic multiobjective problem using a Pareto-based evolutionary approach, it is natural to expect several trade-off solutions in each environment. Hence, naively incorporating a memory-based methodology would be prohibitive. In this paper, we propose a viable algorithm to incorporate a long-term memory into evolutionary multiobjective optimization approaches. Results indicate that the proposed approach is competitive with respect to two previously proposed dynamic multiobjective evolutionary approaches.
1 INTRODUCTION
Since life is dynamic, it is only natural to expect that the problems from daily life are dynamic as well. A dynamic optimization problem may involve two or more functions to be optimized simultaneously (such problems are known as dynamic multiobjective optimization problems, or DMOPs for short), as well as constraints and parameters which can change over time. Although the study of this type of problem is not new, most of the proposed approaches transform the original dynamic problem into many static optimization problems. The evolutionary computation community has focused its efforts on designing approaches that solve these problems without performing any such transformation.
This work proposes to incorporate a change-response methodology into dynamic multiobjective evolutionary algorithms (DMOEAs). Such a methodology uses a long-term memory which minimizes the information to be stored in order to replicate a specific state of the search if it is needed in subsequent environments.
The remainder of this paper is organized as follows: In Section 2, we present the state of the art in dynamic multiobjective optimization. Section 3 describes the proposed algorithm and its components. Section 4 presents the experiments and a comparison of results. Finally, Section 5 provides our conclusions as well as some possible directions for future research.
2 RELATED WORK
Evolutionary algorithms have been successfully applied to solve DMOPs. Their success can be attributed to their population-based nature, since this allows them to exploit most of the previously discovered knowledge in order to follow a change in the environment. Bingul (Bingul, 2007) solved a dynamic multiobjective optimization problem (DMOP) using an aggregating function approach with a Genetic Algorithm (GA). Hatzakis and Wallace (Hatzakis and Wallace, 2006) proposed a forward-looking approach which combines a forecasting technique with an evolutionary algorithm. Deb et al. proposed two modifications to the NSGA-II in order to enable it to handle DMOPs: in the first modification the population is reinitialized, while in the second the population is mutated depending on the type of change in the environment (Deb et al., 2006). Talukder and Kirley (Talukder and Kirley, 2008) used a new variation operator to follow the Pareto front in DMOPs.
Díaz-Manríquez A., Toscano-Pulido G. and Landa-Becerra R.
A LONG-TERM MEMORY APPROACH FOR DYNAMIC MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS.
DOI: 10.5220/0003675403330337
In Proceedings of the International Conference on Evolutionary Computation Theory and Applications (ECTA-2011), pages 333-337
ISBN: 978-989-8425-83-6
Copyright © 2011 SCITEPRESS (Science and Technology Publications, Lda.)
3 PROPOSED APPROACH
Since the natural behavior of a DMOP is to change, it is essential to perform a response action when a change is detected. Several authors (Mori et al., 1996; Branke, 1999; Branke et al., 2000) have proposed memory-based approaches to solve single-objective dynamic problems. Such approaches store the best solution found before a change in the environment occurs and use the stored solutions as seeds in subsequent environments. However, when we deal with a DMOP using a Pareto-based approach, it is natural to expect several trade-off solutions for each environment. Hence, naively incorporating a memory-based methodology would be prohibitive. However, if we had a special interpolation operator which could restore the previously known PF using only a few points, then it would be possible to have a memory-based multiobjective optimization approach.
The basis of our proposal lies in the construction of a special interpolation operator that predicts several non-dominated solutions in order to connect two points located at the extremes of the PF. We adopt a methodology which uses a long-term memory that minimizes the information to be stored in order to reproduce a specific state of the search. The methodology then uses such information as knowledge in subsequent environments. However, it is necessary to generate new knowledge from only a few points. To achieve this goal, we propose a method to generate non-dominated solutions between a pair of non-dominated solutions. In this manner, if we choose the extremes of the original PF, we will be able to generate solutions which will presumably belong to the PF. The proposed operator works as follows:
3.1 Operator to Create Solutions from the Extremes of a Bi-objective PF
Given two non-dominated points $\vec{a} = [a_1, \ldots, a_n]^T$ and $\vec{b} = [b_1, \ldots, b_n]^T$, such that $f_1(\vec{a}) < f_1(\vec{b})$ and $f_2(\vec{a}) > f_2(\vec{b})$, we want to find the point $\vec{z} \in P_t$ whose evaluation in the objective space, $\vec{f}(\vec{z})$, is located between $\vec{f}(\vec{a})$ and $\vec{f}(\vec{b})$ (i.e., $f_1(\vec{a}) < f_1(\vec{z}) < f_1(\vec{b})$ and $f_2(\vec{a}) > f_2(\vec{z}) > f_2(\vec{b})$). The complete procedure to find $\vec{z}$ is shown below:
Step 1. Construct a system of equations of the form $AX = C$:
$$A = \begin{bmatrix} f_1(\vec{a}) & f_2(\vec{a}) \\ f_1(\vec{b}) & f_2(\vec{b}) \end{bmatrix} \qquad X = \begin{bmatrix} X_{11} & X_{12} & X_{13} & \cdots & X_{1n} \\ X_{21} & X_{22} & X_{23} & \cdots & X_{2n} \end{bmatrix} \qquad C = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_n \\ b_1 & b_2 & b_3 & \cdots & b_n \end{bmatrix}$$
Step 2. Solve the system of equations in order to obtain $X$:
$$X = A^{-1}C \quad (1)$$
Step 3. Construct $\vec{f}\,'(\vec{z})$ with the desired objective values. This is performed by increasing the first objective and decreasing the second one by a step $\Delta$ (using Equation 2):
$$\vec{f}\,'(\vec{z}) = \left[\, f_1(\vec{a}) + \Delta,\;\; f_2(\vec{a}) - \Delta \,\right] \quad (2)$$
Step 4. Multiply $X$ by $\vec{f}\,'(\vec{z})$ in order to obtain the $\vec{z}$ values (using Equation 3):
$$\vec{z} = \vec{f}\,'(\vec{z})\,X \quad (3)$$
Step 5. Once the $\vec{z}$ value is computed, it is necessary to evaluate it in order to calculate its true objective values, i.e., to compute $\vec{f}(\vec{z})$. The newly computed point will serve as the seed for the next point to be generated.
This mechanism is applied repeatedly until $f_1(\vec{z}) > f_1(\vec{b})$ (i.e., until the current approximated point reaches the final position).
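To make the five steps concrete, the following is a minimal sketch of the operator in Python with NumPy. The function name, the fixed step size `delta`, and the way the increments of Equation 2 are chosen are our own assumptions; the paper does not specify how the objective-space steps are selected.

```python
import numpy as np

def interpolate_front(a, b, f, delta=0.25):
    """Sketch of the Section 3.1 operator (names and `delta` are ours).

    a, b  : extreme decision vectors with f1(a) < f1(b) and f2(a) > f2(b)
    f     : evaluates a decision vector, returning its two objective values
    delta : assumed fixed step used to increase f1 and decrease f2
    """
    fa, fb = np.asarray(f(a), float), np.asarray(f(b), float)
    # Step 1: build the system A X = C relating objective pairs to variables
    A = np.array([fa, fb])                 # 2 x 2
    C = np.array([a, b], float)            # 2 x n
    # Step 2: X = A^{-1} C
    X = np.linalg.solve(A, C)
    seeds = []
    fz = fa.copy()
    while True:
        # Step 3: desired objectives: increase the first, decrease the second
        fz = fz + np.array([delta, -delta])
        if fz[0] > fb[0]:                  # stop once f1(z) passes f1(b)
            break
        # Step 4: back-project the desired objectives into decision space
        z = fz @ X
        seeds.append(z)
        # Step 5: evaluate z; its true objectives seed the next iteration
        fz = np.asarray(f(z), float)
    return seeds
```

On a linear bi-objective toy problem such as $f(\vec{x}) = (x_1, 1 - x_1)$, the generated seeds march from $\vec{f}(\vec{a})$ toward $\vec{f}(\vec{b})$ in objective space; on problems whose Pareto sets are disconnected, the back-projection of Step 4 can leave the feasible region, which matches the failures on Kursawe, WFG1 and WFG8 reported below.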
With the aim of validating the proposed operator in static environments, we selected 30 pairs of points from several test functions (WFG1 to WFG9, ZDT1 to ZDT3 and Kursawe). Due to space limitations, and also because of the scope of this paper, the details are not provided here. However, the main conclusions of this experiment are the following.
The proposed operator was able to connect two points in the decision variable space of the test functions such that the generated solutions, when evaluated in the objective function space, were non-dominated.
The proposed approach showed satisfactory results for ZDT1 to ZDT3, WFG2 to WFG7 and WFG9.
The operator malfunctioned on the Kursawe test function because the problem is disconnected in the parameter space. However, when the extreme points of any connected region in the parameter space were provided, the operator was able to approximate the portion of the front belonging to that region. Likewise, the operator did not work as expected when approximating WFG1 and WFG8, since they are disconnected in the parameter space, so it was impossible to find a path between two disconnected points.
3.2 Methodology for Environmental
Change Response
Above, we proposed an operator to generate solutions between two points belonging to a connected region in the parameter space. Given such an operator, we can develop an algorithm that reduces the information needed to reproduce a Pareto front by storing only two extreme solutions per change in the environment. Below, we present such a methodology.
Once a change in the environment is detected, we can use the solutions previously stored in the repository. First, the repository and the current population are re-evaluated, so that we can combine them in order to obtain the overall extreme points of the current Pareto front. From these two solutions, we create ζ% of the new population using the proposed operator (shown in Section 3.1). The remaining (100 − ζ)% of the population is randomly generated in order to incorporate diversity. This procedure is shown in Algorithm 1. Once the response to the change is achieved, we can use the produced population with any MOEA; in this case, we used the NSGA-II (Algorithm 2).
Algorithm 1: Response to an environmental change.
1: F = non-dominated solutions of P_t
2: Report F
3: Extreme = extreme solutions from F
4: M = M ∪ Extreme
5: Re-evaluate M and F
6: Obtain the extreme non-dominated solutions from M ∪ F
7: Replace ζ% of F with new individuals generated with the procedure shown in Section 3.1
8: Generate the remaining (100 − ζ)% of the individuals randomly
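A hedged Python sketch of Algorithm 1 is given below. The helper names (`dominates`, `nondominated`, `respond_to_change`) and the tie-breaking when picking extreme solutions are our own choices, and `interpolate` stands in for the operator of Section 3.1; `evaluate` is assumed to already reflect the changed environment, which covers the re-evaluation of steps 5-6.

```python
import random

def dominates(u, v):
    """Pareto dominance for minimization: u dominates v."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points, evaluate):
    """Keep only the points whose objective vectors are non-dominated."""
    evals = [evaluate(p) for p in points]
    return [p for p, fp in zip(points, evals)
            if not any(dominates(fq, fp) for fq in evals if fq is not fp)]

def respond_to_change(population, memory, evaluate, interpolate,
                      zeta=0.3, nvars=30):
    """Sketch of Algorithm 1; returns the seeded population for the new
    environment and extends `memory` in place."""
    # Steps 1-4: report the current front and archive its extreme solutions
    front = nondominated(population, evaluate)
    memory.extend([min(front, key=lambda p: evaluate(p)[0]),
                   min(front, key=lambda p: evaluate(p)[1])])
    # Steps 5-6: take the overall extremes of memory + front under the
    # (re-evaluated) new environment
    a = min(memory + front, key=lambda p: evaluate(p)[0])
    b = min(memory + front, key=lambda p: evaluate(p)[1])
    # Step 7: zeta% of the population comes from the interpolation operator
    k = int(zeta * len(population))
    seeded = interpolate(a, b, evaluate)[:k]
    # Step 8: the remaining (100 - zeta)% is generated at random
    while len(seeded) < len(population):
        seeded.append([random.random() for _ in range(nvars)])
    return seeded
```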
Algorithm 2: DNSGA-II-LTM.
1: t = 0
2: M = ∅
3: Initialize the population P_t
4: F = nondominatedsorting(P_t)
5: for each F_i ∈ F do
6:   crowdingdistance(F_i)
7: end for
8: repeat
9:   if an environmental change is detected then
10:    Response to the change (Algorithm 1)
11:  end if
12:  P_{t+1} = Elitism
13:  t = t + 1
14: until the termination criterion is fulfilled
4 EXPERIMENTS AND
COMPARISON OF RESULTS
In order to assess how competitive our approach is, we compared our results with those obtained by the DNSGA-II-A and the DNSGA-II-B (Deb et al., 2006). In order to have a fair comparison, we set each change to be activated every 500 evaluations of the objective function, and used the following empirically tuned parameter settings: simulated binary crossover (P_c = 0.9, η_c = 15), parameter-based mutation (P_m = 1/nvars, η_m = 20), population size P = 100, and ζ = 30. Two of the test functions were taken from the specialized literature, and the other two are proposed by us. In order to allow a quantitative assessment of the performance of a multiobjective optimization algorithm, two standard performance measures were adopted: the Inverted Generational Distance (IGD) and the Hypervolume Ratio (HVR).
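For reference, IGD averages, over a discretization of the true Pareto front, the distance from each reference point to its nearest point in the obtained approximation; lower is better, and IGD = 0 means the reference front is matched exactly. A minimal sketch (our own helper, not the authors' implementation):

```python
import math

def igd(reference_front, approximation):
    """Inverted Generational Distance: mean Euclidean distance from each
    point of the discretized true front to its closest approximation point."""
    return sum(min(math.dist(r, a) for a in approximation)
               for r in reference_front) / len(reference_front)
```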
The test functions used for the algorithms are FDA1 and FDA2, both proposed by Farina et al. (Farina et al., 2003), and DZDT2 and DZDT3, which are proposed here. The latter are modifications of ZDT2 and ZDT3, respectively, and are shown in Table 1.
Table 1: Test functions.

DZDT2:
$f_1(\vec{x}_I) = x_1, \quad f_2(\vec{x}) = g(\vec{x}_{II})\left[1 - \left(\frac{x_1}{g(\vec{x}_{II})}\right)^2\right]$
$G(t) = \sin(0.5\pi t), \quad t = \frac{1}{n_t}\left\lfloor \frac{\tau}{\tau_T} \right\rfloor$
$g(\vec{x}_{II}) = 1 + \frac{9}{n-1}\sum_{i=2}^{n}\left(x_i - |G(t)|\right)$
$\tau$ is the generation counter and $\tau_T$ is the number of generations for which $t$ is fixed.
$x_1, \ldots, x_n \in [0,1]$ and $n = 30$.

DZDT3:
$f_1(\vec{x}_I) = x_1, \quad f_2(\vec{x}) = g(\vec{x}_{II})\left[1 - \sqrt{\frac{x_1}{g(\vec{x}_{II})}} - \frac{x_1}{g(\vec{x}_{II})}\sin(10\pi x_1)\right]$
$G(t) = \sin(0.5\pi t), \quad t = \frac{1}{n_t}\left\lfloor \frac{\tau}{\tau_T} \right\rfloor$
$g(\vec{x}_{II}) = 1 + \frac{9}{n-1}\sum_{i=2}^{n}\left|x_i - |G(t)|\right|$
$\tau$ is the generation counter and $\tau_T$ is the number of generations for which $t$ is fixed.
$x_1, \ldots, x_n \in [0,1]$ and $n = 30$.
We measured the number of generations required by our approach, the DNSGA-II-A and the DNSGA-II-B to approximate the true PF to an IGD distance of 0.01 during 100 changes. This procedure was executed 100 times and the results were averaged; we then plotted the mean over the 100 changes. This experiment was performed in order to check whether the use of memory can help the algorithm decrease the number of generations needed to approximate the true PF.
In Figure 1(a) we can see that the DNSGA-II-LTM stabilized its behavior over time, in such a way that it reduced the number of generations necessary to follow the PF of FDA1 (to a value of IGD = 0.01). Figures 1(c) and 1(d) show that the two proposed problems were more difficult than the two problems taken from the literature, since the number of generations needed to reach IGD = 0.01 was considerably higher. In these problems it is easy to observe a behavior similar to the one observed in
Figure 1: Number of generations required to approximate the PF at an IGD of 0.01 for (a) FDA1, (b) FDA2, (c) DZDT2 and (d) DZDT3, comparing DNSGA-II-A, DNSGA-II-B and DNSGA-II-MEMORY over 100 changes.
Figure 2: Boxplots of the hypervolume ratio over 100 executions for (a) FDA1, (b) FDA2, (c) DZDT2 and (d) DZDT3 during 100 changes, for DNSGA-II-A, DNSGA-II-B and DNSGA-II-MEMORY.
FDA1 (the DNSGA-II-LTM stabilized its behavior over time).
On FDA2, all three algorithms performed well, but again, the DNSGA-II-LTM reduced the number of generations needed to follow the PF at an IGD of 0.01 (see Figure 1(b)).
Since the IGD results indicate that the proposed modification can outperform the DNSGA-II-A and the DNSGA-II-B, we decided to perform a second experiment, in which we measured the hypervolume ratio during 100 changes for the three approaches (ours, the DNSGA-II-A and the DNSGA-II-B). In order to present such results in an easily comparable way, we show them as box plots in Figure 2.
From the box plots, we can see that the DNSGA-II-LTM reached a hypervolume ratio close to 1.0 in most of the problems. The anomalous results also visible in the box plots arise because, at the start of the optimization process, the algorithm did not yet have sufficient knowledge in memory, and therefore it was more difficult to follow the movement of the optimum. Once the algorithm gained enough knowledge, it could reach the PF most of the time.
5 CONCLUSIONS
According to the obtained results, we can conclude that the use of a long-term memory in dynamic multiobjective evolutionary algorithms reduces the number of fitness function evaluations needed to optimize a DMOP. It should be clear that, in order to use a long-term memory-based approach, it is necessary to have a method to reduce the amount of information stored per environment, so as to avoid saturating the memory. The proposed approach to generate solutions from two points (in a connected region of the parameter space) makes it possible to store only two solutions at each change of the environment. It should be noted that this method was only tested on problems with two objective functions; however, we plan to explore more dimensions in the objective space in the future.
ACKNOWLEDGEMENTS
The first author gratefully acknowledges support from CONACyT through project 105060. This research was also partially funded by project number 51623 from the “Fondo Mixto Conacyt-Gobierno del Estado de Tamaulipas”. We would like to thank the Fondo Mixto de Fomento a la Investigación Científica y Tecnológica CONACyT - Gobierno del Estado de Tamaulipas for their support to publish this paper.
REFERENCES
Bingul, Z. (2007). Adaptive Genetic Algorithms Applied to
Dynamic Multi-Objective Problems. Appl. Soft Com-
put., 7(3):791–799.
Branke, J. (1999). Memory enhanced evolutionary al-
gorithms for changing optimization problems. In
Congress on Evolutionary Computation CEC99,
pages 1875–1882. IEEE.
Branke, J., Kaussler, T., Schmidt, C., and Schmeck., H.
(2000). A multi-population approach to dynamic opti-
mization problems. In Adaptive Computing in Design
and Manufacturing, pages 299–307.
Deb, K., Rao N., U. B., and Karthik, S. (2006). Dynamic
multi-objective optimization and decision-making us-
ing modified NSGA-II: A case study on hydro-thermal
power scheduling. In EMO, pages 803–817.
Farina, M., Deb, K., and Amato, P. (2003). Dynamic Mul-
tiobjective Optimization Problems: Test Cases, Ap-
proximation, and Applications. In Fonseca, C. M.,
Fleming, P. J., Zitzler, E., Deb, K., and Thiele, L., edi-
tors, Evolutionary Multi-Criterion Optimization. Second
International Conference, EMO 2003, pages 311–
326, Faro, Portugal. Springer. Lecture Notes in Com-
puter Science. Volume 2632.
Hatzakis, I. and Wallace, D. (2006). Dynamic Multi-
Objective Optimization with Evolutionary Algorithms:
A Forward-Looking Approach. In M. K. et al., editors,
2006 Genetic and Evolutionary Computation
Conference (GECCO'2006), volume 2, pages
1201–1208, Seattle, Washington, USA. ACM Press.
ISBN 1-59593-186-4.
Mori, N., Imanishi, S., Kita, H., and Nishikawa, Y. (1996).
Adaptation to a changing environment by means of
the thermodynamical genetic algorithm. In Voigt, H.,
editor, Parallel Problem Solving from Nature, vol-
ume 1141 of LNCS, pages 513–522. Springer Verlag,
Berlin.
Talukder, A. K. A. and Kirley, M. (2008). A pareto follow-
ing variation operator for evolutionary dynamic multi-
objective optimization. In Proceedings of the IEEE
Congress on Evolutionary Computation 2008 (CEC
2008), Hong Kong, China. IEEE Press, Piscataway,
NJ.