Hybrid Kriging-assisted Level Set Method for Structural Topology
Optimization
Elena Raponi^1 (https://orcid.org/0000-0001-6841-7409), Mariusz Bujny^{2,3} (https://orcid.org/0000-0003-4058-3784), Markus Olhofer^2 (https://orcid.org/0000-0002-3062-3829), Simonetta Boria^1 (https://orcid.org/0000-0003-2073-4012) and Fabian Duddeck^{3,4} (https://orcid.org/0000-0001-8077-5014)
1 School of Sciences and Technologies, Department of Mathematics, University of Camerino, Camerino, Italy
2 Honda Research Institute Europe GmbH, Offenbach am Main, Germany
3 Department of Civil, Geo and Environmental Engineering, Technical University of Munich, Munich, Germany
4 School of Engineering and Materials Science, Queen Mary University of London, London, U.K.
Keywords:
Topology Optimization, Structural Optimization, Hybrid Methods, Surrogate Modeling, Kriging, Evolution
Strategies, Level Set Method, Moving Morphable Components.
Abstract:
This work presents a hybrid optimization approach that couples Efficient Global Optimization (EGO) and Co-
variance Matrix Adaptation Evolution Strategy (CMA-ES) in the Topology Optimization (TO) of mechanical
structures. Both of these methods are regarded as good optimization strategies for continuous global optimiza-
tion of expensive and multimodal problems, e.g. associated with vehicle crashworthiness. CMA-ES is flexible
and robust to changing circumstances. Moreover, by taking advantage of a low-dimensional parametrization
introduced by the Evolutionary Level Set Method (EA-LSM) for structural Topology Optimization, such Evo-
lution Strategy allows for dealing with costly problems even more efficiently. However, it is characterized
by high computational costs, which can be mitigated by using the EGO algorithm at the early stages of the
optimization process. By means of surrogate models, EGO allows for the construction of cheap-to-evaluate
approximations of the objective functions, leading to an initial fast convergence towards the optimum in op-
position to a poor exploitive behavior. The approach presented here the Hybrid Kriging-assisted Level Set
Method (HKG-LSM) – first uses the Kriging-based method for Level Set Topology Optimization (KG-LSM)
to converge fast at the beginning of the optimization process and explore the design space to find promising
regions. Afterwards, the algorithm switches to the EA-LSM using CMA-ES, whose parameters are initial-
ized based on the previous model. A static benchmark test case is used to assess the proposed methodology
in terms of convergence speed. The obtained results show that the HKG-LSM represents a valuable option
for speeding up the optimization process in real-world applications with limited computational resources. As
such, the proposed methodology exhibits a much more general potential, e.g. when dealing with high-fidelity
crash simulations.
1 INTRODUCTION
Over the last decades, vehicle safety has become one
of the most studied research areas in automotive en-
gineering. In particular, since physical experiments
are characterized by prohibitive times and costs, the
design phase of new structures and components needs
particular attention. The development of Computer-
Aided Design (CAD) methods and the progress in nu-
merical simulations with the Finite Element Method
(FEM) have made Crashworthiness Design a disci-
pline of increasing interest and potential. An option
to face the concept selection and the generation of a
prototype is embodied by Structural Topology Opti-
mization (TO) (Bendsøe and Sigmund, 2004). TO is a
well-developed discipline that aims to determine con-
structive solutions for component structures through
changing material distribution in a given design space
so as to obtain an optimal concept design under de-
fined boundary conditions (supports, forces). Al-
though it is applied in many engineering fields, the best-known TO approaches for crashworthiness
applications might be questioned because they use simplifications, which frequently do not take into account
important aspects of the crash problem, e.g. nonlin-
earities, numerical noise, and discontinuities of the
objective functions to be optimized. Such approaches
can be collected in the following categories: Equiva-
lent Static Loads method (Duddeck and Volz, 2012;
Lee and Park, 2015), Ground Structure Approaches
(Pedersen, 2003), Bubble and Graph/Heuristic-based
Approaches (Eschenauer et al., 1994; Ortmann
and Schumacher, 2013), Hybrid Cellular Automata
method (Mozumder et al., 2012; Duddeck et al.,
2016), and State-based representation approaches
(Aulig and Olhofer, 2016; Aulig, 2017).
In this context, global optimizers which do not re-
quire any gradient or sensitivity information to carry
out the optimization procedure become of the utmost
importance. Among the proposed algorithms for solv-
ing crash optimization problems, Evolution Strategies
(ES) (e.g., the Covariance Matrix Adaptation Evo-
lution Strategy (CMA-ES) (Hansen and Ostermeier,
1996)) and the Efficient Global Optimization (EGO)
(Forrester et al., 2008) have proven to be valid alternatives.
On the one hand, ESs handle the update process by
directly evaluating the objective function in different
points of the design space. Compared to other global
optimization techniques, these algorithms are easy to
implement and very often provide adequate solutions,
showing many advantages such as simplicity, robust
responses to changing circumstances and flexibility.
However, they require thousands of calls to the high-
fidelity analysis codes to locate a near optimal solu-
tion, leading to very high numerical costs, which are
proportional to the problem dimensionality.
On the other hand, EGO allows for replacing the
direct optimization of the computationally expensive
model by an iterative process composed of the cre-
ation, optimization and update of an approximation of
the original function, referred to as Surrogate Model.
This model is much cheaper to run and can be used to
perform many more evaluations during the optimiza-
tion process. A limited set of points is initially defined
in the problem domain and the real expensive function
is here evaluated. Based on the obtained training data,
a first approximation of the model is constructed and
its accuracy is improved by the algorithm by evalu-
ating new points. Nonetheless, surrogate-based optimization shows some limitations as the dimensionality and the number of iterations increase, turning out to have poorer exploitative capabilities than ESs
(Raponi et al., 2019).
This research focuses on the development of
a Hybrid Surrogate-assisted Modeling Technique
for Structural Topology Optimization, the Hybrid
Kriging-assisted Level Set Method (HKG-LSM),
which is composed of three major building blocks:
the Level Set Method (LSM) (Osher and Sethian,
1988; Haber and Bendsøe, 1998; Allaire et al.,
2004), which allows for monitoring the ma-
terial distribution changes according to low-
dimensional implicit parametrization of the mate-
rial boundaries by means of local Level Set Func-
tions (LSFs), also referred to as Moving Morphable Components (MMCs) (Guo et al., 2014);
the Kriging-guided Level Set Method (KG-LSM)
(Raponi et al., 2017; Raponi et al., 2019), which
initializes a set of training points to construct a
cheap approximating model. By updating the
model based on the data points and the committed
error in the approximation, it allows for obtain-
ing a trade-off between exploration and exploita-
tion of the domain space, which produces a fast
convergence of the algorithm towards promising
areas;
the Evolutionary Level Set Method (EA-LSM)
for crash Topology Optimization (Bujny et al.,
2016; Bujny et al., 2018), embedding the Co-
variance Matrix Adaptation Evolution Strategy
(CMA-ES), to which the proposed optimization
algorithm switches when prescribed conditions
are satisfied. Its main role is to investigate the de-
tected good region and further exploit it in order
to locally optimize the current design.
In this study, the proposed method is evaluated
on a static Cantilever Beam benchmark test case for
a fixed budget of FEM evaluations. This choice is
dictated by the fact that in many industrial applica-
tions, FEM evaluations are very expensive and, as
such, limited. Moreover, the static case allows for
testing the developed methodology with limited com-
putational effort. The convergence trends, as well as
the optimized designs resulting from the considered
optimization algorithms, are discussed. Although the
validation of the proposed methodology is based on
a static test case, the derived results show it could be
a valuable tool for identification of optimal concept
designs in crashworthiness applications, which can-
not be addressed by standard topology optimization
methods due to the lack of gradient information.
2 PROBLEM REPRESENTATION
In view of a future application of the HKG-LSM to
the Topology Optimization of crash structures, the
method is designed to handle the unpredictabil-
ity, noisiness, discontinuities and lack of gradient in-
formation which characterize the crashworthiness de-
sign optimization field. Since in structural mechan-
ics optimal topologies frequently consist of ensem-
bles of interconnected beams, on the basis of previous
studies (Guo et al., 2014; Bujny et al., 2018; Raponi
et al., 2019), the mechanical structure consists of an
ensemble of MMCs, whose boundaries are implicitly
defined by iso-contours of a LSF.
2.1 Parametrization
According to the parametrization adopted in previous
works (Bujny et al., 2018; Raponi et al., 2019), let the
global Level Set Function Φ be defined as:
$$\Phi(\mathbf{u}) > 0, \quad \mathbf{u} \in \Omega, \qquad \Phi(\mathbf{u}) = 0, \quad \mathbf{u} \in \partial\Omega, \qquad \Phi(\mathbf{u}) < 0, \quad \mathbf{u} \in D \setminus \Omega, \qquad (1)$$

where Ω is the region of the domain D occupied by material, D\Ω is its complementary set, occupied by void, and ∂Ω is the interface between material and void. Consequently, intermediate densities are avoided for each u = (x, y)^T ∈ D.
This global LSF is then composed of local LSFs given by:

$$\phi_i(\mathbf{u}) > 0, \quad \mathbf{u} \in \Omega_i, \qquad \phi_i(\mathbf{u}) = 0, \quad \mathbf{u} \in \partial\Omega_i, \qquad \phi_i(\mathbf{u}) < 0, \quad \mathbf{u} \in D \setminus \Omega_i, \qquad (2)$$

where φ_i is the LSF that parametrizes the i-th component, which occupies the design domain region Ω_i.
Therefore, if e is the number of elementary com-
ponents, the region of the domain occupied by mate-
rial can be seen as:
$$\Omega = \bigcup_{i=1}^{e} \Omega_i. \qquad (3)$$
Now, let us consider for each beam a local basis func-
tion of the form proposed first by Guo et al. (2014):
$$\phi_i(\mathbf{u}) = -\left[\left(\frac{\cos\theta_i\,(x - x_{0_i}) + \sin\theta_i\,(y - y_{0_i})}{l_i/2}\right)^{m} + \left(\frac{-\sin\theta_i\,(x - x_{0_i}) + \cos\theta_i\,(y - y_{0_i})}{t_i/2}\right)^{m} - 1\right], \qquad (4)$$
where u = (x, y)^T is a point of the two-dimensional domain D = R^2 and (x_0, y_0) denotes the position of the center of the component, which has length l and thickness t and is oriented inside the domain according to a rotation angle θ, as shown in Figure 1(a). The symbol m stands for a relatively large even integer number, taken equal to 6 in this study.

Figure 1: Structural component details (Bujny et al., 2018): (a) Component parametrization, (b) Corresponding local LSF, where negative values are set to zero.
By changing the values of the parameters, each
beam can move, dilate or shrink and rotate in the de-
sign domain. Figure 1(b) shows a 3-dimensional plot
of the i-th local LSF corresponding to definition (4).
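For illustration, the following Python/NumPy sketch evaluates the local LSF of Equation (4) on grid coordinates. The function name and arguments are hypothetical, and the sign convention (positive inside the component, consistent with Equation (2)) follows the reconstruction above; this is a sketch, not the authors' implementation.

```python
import numpy as np

def local_lsf(x, y, x0, y0, l, t, theta, m=6):
    """Local level set function of one Moving Morphable Component, Eq. (4).

    Positive inside the component, zero on its boundary and negative
    outside, following the convention of Eq. (2).  x and y may be NumPy
    arrays of grid coordinates; the remaining arguments are scalars.
    """
    # Rotate the coordinates into the local frame of the component
    xp = np.cos(theta) * (x - x0) + np.sin(theta) * (y - y0)
    yp = -np.sin(theta) * (x - x0) + np.cos(theta) * (y - y0)
    # Super-elliptic expression: equals 1 on the component boundary
    return -((xp / (l / 2.0)) ** m + (yp / (t / 2.0)) ** m - 1.0)
```

Evaluating this function on a mesh grid and clipping negative values to zero reproduces a plateau-like bump of the kind shown in Figure 1(b).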
2.2 Geometry Mapping
In Level Set Topology Optimization it is important
to choose a mapping from the geometry defined by
the LSF to the mechanical model. Here, due to its
simplicity, a density-based geometry mapping is used:
$$E(\mathbf{u}) = \rho(\mathbf{u})\,E_0, \qquad 0 \le \rho(\mathbf{u}) \le 1, \qquad (5)$$

where ρ(u) is the density at the point u ∈ D and E_0 is the reference value of the stiffness tensor.
Hence, the density ρ(u) at position u is computed
from the LSF Φ(u) as:
ρ(u) = H (Φ (u)), (6)
where the global LSF is the maximum of the local
ones at position u:
$$\Phi(\mathbf{u}) = \max\left(\phi_1(\mathbf{u}), \phi_2(\mathbf{u}), \ldots, \phi_e(\mathbf{u})\right), \qquad (7)$$
and H(x) is the Heaviside function, which is equal to
0 for negative arguments and equal to 1 for nonnega-
tive ones. Figure 2 shows an example of the compo-
sition of local LSFs, resulting in the global LSF given
by Equation 7.
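As a sketch of the geometry mapping of Equations (5)-(7), the snippet below (reusing local_lsf from the previous sketch) combines the local LSFs by a pointwise maximum and converts the result into a density field with the Heaviside function. The small rho_void lower bound is an assumption of this illustration, mimicking the weak material assigned to void regions in Section 5.

```python
import numpy as np

def global_lsf(x, y, components):
    """Global LSF as the maximum of the local ones, Eq. (7).

    `components` is a list of (x0, y0, l, t, theta) tuples.
    """
    phis = [local_lsf(x, y, *c) for c in components]
    return np.max(np.stack(phis, axis=0), axis=0)

def density_field(x, y, components, rho_void=0.01):
    """Element densities via the Heaviside function, Eq. (6)."""
    phi = global_lsf(x, y, components)
    # H(phi) = 1 where phi >= 0; void regions get a small density instead of 0
    return np.where(phi >= 0.0, 1.0, rho_void)
```

Multiplying the resulting densities by E_0 yields the stiffness distribution of Equation (5).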
3 OPTIMIZATION PROBLEM
AND CONSTRAINTS
In this work, an optimization problem of the follow-
ing form is considered:
$$\min_{\mathbf{x}} \; f_{obj}(\mathbf{x}), \qquad \text{s.t. } g_i(\mathbf{x}) \le 0, \; i = 1, \ldots, m, \quad \mathbf{x} \in \mathbb{R}^k, \qquad (8)$$

where f_obj is the objective function that is minimized and g_i, i = 1, ..., m, are the inequality constraints.
Figure 2: Combination of local Level Set Functions: (a) illustrative structural layout, (b) plot of the global LSF, where negative values are set to zero (Raponi et al., 2019).
The objective function is evaluated on the vector x ∈ R^k of the design variables, which collects all the parameters defining the LSM basis functions. This study minimizes the compliance of a cantilever beam in a 9-variable test case. The optimization is carried out while respecting the constraint g(x) = V(x) − V_req ≤ 0, which requires the volume V of the analyzed structure not to exceed a prescribed limit V_req equal to 50% of the design domain volume. Through-
out the optimization process, disconnected structures
can be obtained, as illustrated in Figure 3. There-
fore, in order to be physically consistent, a connec-
tivity constraint is also considered within this work.
Such constraint ensures that the optimized structures
make sense from the physical point of view, while
the volume constraint is imposed in order to make the
structures fulfill the industrial requirements of limited
mass. Each material distribution has to be connected to the support on the left-hand side of the domain and to the load which the structure is subjected to on the right-hand side, and the structure itself has to be connected, i.e. a continuous material path from the support to the load has to exist. Figure 3 shows three
material distributions, each representing the violation
of the connectivity constraint according to the first,
second and third criterion, respectively. According to
the phase of the optimization procedure, i.e. the sub-
algorithm that is currently used, two different tech-
niques are used to handle such constraints, which are
described in Section 4.2.
4 RESOLUTION STRATEGY
The following section focuses on the description of
the optimization procedure defined within this re-
search. In order to drive the considered structures
towards the optimum, this work introduces a novel hybrid optimization strategy which sequentially combines the EGO with the Kriging model (KG-LSM) and the CMA-ES (incorporated in the EA-LSM). This choice is due to the fact that the EGO is efficient in the early stage of the optimization thanks to the explorative potential of the infill procedure, while CMA-ES has good exploitative capabilities, which make it efficient at performing a local search and converging to a final refined structure.

Figure 3: Violation of the connectivity constraint regarding (a) the connection to the support, (b) the connection to the load, (c) the connection of the structure to itself. The distances preventing the design from fulfilling the connectivity criteria are represented by the dashed lines (Raponi et al., 2019).
4.1 Optimization Algorithm
The optimization procedure proposed in this work
(the HKG-LSM) is tailored to solve efficiently prob-
lems in structural mechanics, and applied to the op-
timization of static Cantilever Beam test cases. It is
composed of two sub-algorithms: it first redrafts the
EGO algorithm (Jones et al., 1998; Arsenyev, 2017),
applied to the Kriging surrogate model, and subse-
quently it switches to the CMA-ES to finally refine
the last structure obtained before the sub-algorithm
changes. Algorithm 1 gives an outline of the developed optimization method in the case where both constraints are used.
The HKG-LSM algorithm starts with the Design
of Experiments (DoE) (Forrester et al., 2008), which
samples the design space and selects a set of train-
ing points where to evaluate the high-fidelity model.
In order to determine a set of training samples, an Optimal Latin Hypercube Sampling (OLHS) (Fang et al., 2005) is used in this work, leading to a training dataset X = {x^(1), x^(2), ..., x^(n)}^T with observed responses y = {y^(1), y^(2), ..., y^(n)}^T. For all points, the
feasibility according to the volume and connectivity
constraints is checked, and based on them a Kriging
approximation is fitted and iteratively updated. When
the number of iterations where no improvement on the
best design is detected reaches an allowed maximum,
the optimizer passes from the Kriging-based strategy to the CMA-ES, whose parent design is set as the best structure found by the previous sub-algorithm.

Data: initial data set (X, Y) with n sample points, where X = {x^(1), x^(2), ..., x^(n)}^T
i := 1;
while i < n do
    check feasibility of point x^(i);
    if x^(i) infeasible then
        X := X \ {x^(i)}
    end
    i := i + 1;
end
y* := min(Y);
x* := x ∈ X : y(x) = y*;
t := 0;
c := 1;
while t < n_max and c < c_max do
    fit Kriging to available data (X, Y);
    find infill point: p := argmax_x EICD(x) (Equation (23));
    update sample set: X := X ∪ {p};
    update response set: Y := Y ∪ {y(p)};
    if min(Y) < y* then
        c := 1;
        y* := min(Y);
        x* := x ∈ X : y(x) = y*;
    else
        c := c + 1;
    end
    t := t + 1;
end
initialize CMA-ES parameters (σ, C);
initialize parent design: x = x*;
run CMA-ES.
Algorithm 1: HKG-LSM optimization algorithm. The reference algorithm for the CMA-ES can be found in the work by Hansen (Hansen, 2006).
4.1.1 Kriging Model
The Kriging Surrogate Model (Cressie, 1990; For-
rester et al., 2008; Kleijnen, 2009; Arsenyev, 2017)
aims to predict the value of an unknown function at a
given point by computing a weighted average of the
already known values of the function in the neighbor-
hood of that point.
Starting from a set of sample data X = {x^(1), x^(2), ..., x^(n)}^T with observed responses y = {y^(1), y^(2), ..., y^(n)}^T, the aim is to predict the objective function value at the location x.
Let the training data be seen as the results of a stochastic process, which is described with the use of a set of random vectors of the form Y(x) = (Y(x^(1)), ..., Y(x^(n)))^T, with mean 1μ, where 1 is an n×1 column vector of ones. Moreover, let the correlation between each pair of random variables be described using a basis function expression:

$$\mathrm{cor}\left[Y(\mathbf{x}^{(i)}), Y(\mathbf{x}^{(l)})\right] = \exp\left(-\sum_{j=1}^{k} \theta_j \left|x_j^{(i)} - x_j^{(l)}\right|^2\right), \qquad (9)$$

where the θ vector allows the width of the basis function to differ from variable to variable. The correlations depend on the absolute distance between the sample points and on the parameters θ_j, which are estimated by using the likelihood of the predicted data y (Forrester et al., 2008):
$$L = \frac{1}{(2\pi\sigma^2)^{n/2} |\Psi|^{1/2}} \exp\left(-\frac{(\mathbf{y} - \mathbf{1}\mu)^T \Psi^{-1} (\mathbf{y} - \mathbf{1}\mu)}{2\sigma^2}\right). \qquad (10)$$
After appropriate substitutions and simplifications, the natural logarithm of Equation (10) is considered. By differentiating it and setting the derivatives to zero, the maximum likelihood estimates (MLEs) for the mean μ and variance σ² are obtained:

$$\hat{\mu} = \frac{\mathbf{1}^T \Psi^{-1} \mathbf{y}}{\mathbf{1}^T \Psi^{-1} \mathbf{1}}, \qquad \hat{\sigma}^2 = \frac{(\mathbf{y} - \mathbf{1}\hat{\mu})^T \Psi^{-1} (\mathbf{y} - \mathbf{1}\hat{\mu})}{n}, \qquad (11)$$
where Ψ is the correlation matrix between the ran-
dom variables. Finally, the model correlation can be
utilized to predict new values based on the observed
data. By augmenting the model data with a new in-
put x and the corresponding output ˆy and maximizing
the likelihood of the augmented data, the prediction
of the response at the new location, the mean value ˆy,
and the committed error in the prediction, ŝ², are determined:
$$\hat{y}(\mathbf{x}) = \hat{\mu} + \boldsymbol{\psi}^T \Psi^{-1} (\mathbf{y} - \mathbf{1}\hat{\mu}), \qquad (12)$$

$$\hat{s}^2(\mathbf{x}) = \hat{\sigma}^2 \left[ 1 - \boldsymbol{\psi}^T \Psi^{-1} \boldsymbol{\psi} + \frac{\left(1 - \mathbf{1}^T \Psi^{-1} \boldsymbol{\psi}\right)^2}{\mathbf{1}^T \Psi^{-1} \mathbf{1}} \right], \qquad (13)$$

where ψ is the vector of correlations between the new point x and the training points.
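For illustration, the Kriging equations (9)-(13) can be condensed into the short NumPy sketch below. The hyperparameters θ are assumed to be already known (in practice they result from maximizing the likelihood of Equation (10)), and the Cholesky factorization with a small nugget is a standard numerical choice rather than part of the original formulation.

```python
import numpy as np

def corr_matrix(X, theta):
    """Gaussian correlation (Eq. (9)) between all pairs of training points."""
    d2 = (X[:, None, :] - X[None, :, :]) ** 2            # |x_j^(i) - x_j^(l)|^2
    return np.exp(-np.einsum('ijk,k->ij', d2, theta))

def kriging_fit(X, y, theta, nugget=1e-10):
    """MLEs of mu and sigma^2 (Eq. (11)) for fixed theta."""
    n = len(y)
    Psi = corr_matrix(X, theta) + nugget * np.eye(n)      # regularized correlation matrix
    L = np.linalg.cholesky(Psi)
    solve = lambda b: np.linalg.solve(L.T, np.linalg.solve(L, b))  # Psi^{-1} b
    one = np.ones(n)
    mu = one @ solve(y) / (one @ solve(one))
    r = y - one * mu
    sigma2 = r @ solve(r) / n
    return dict(X=X, theta=theta, mu=mu, sigma2=sigma2, solve=solve, one=one, r=r)

def kriging_predict(model, x):
    """Predictor and estimated error at a new point x (Eqs. (12)-(13))."""
    d2 = (model['X'] - x) ** 2
    psi = np.exp(-d2 @ model['theta'])                    # correlations to training data
    y_hat = model['mu'] + psi @ model['solve'](model['r'])
    one = model['one']
    s2 = model['sigma2'] * (1.0 - psi @ model['solve'](psi)
         + (1.0 - one @ model['solve'](psi)) ** 2 / (one @ model['solve'](one)))
    return y_hat, max(s2, 0.0)
```

A call like kriging_fit(X, y, theta) followed by kriging_predict(model, x_new) returns the pair (ŷ, ŝ²) used by the infill criteria of the following sections.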
4.1.2 Efficient Global Optimization
After choosing the surrogate model and fitting it to the initial data, the optimization process starts. At this stage, the advantages of using a surro-
gate technique over any evolutionary algorithm can be
appreciated. In fact, if not parallelized, only one call
to the expensive high-fidelity model is done at each it-
eration. This evaluation of the true model has the role
of both guiding the search towards the optimum of the
optimization problem and providing new high-fidelity
data (i.e., evaluations of the objective function on new
infill points) to update the surrogate model and en-
hance its accuracy. Among other techniques for the
choice of the locations for the infill points, this work
uses a modified version of Expected Improvement
(EI) (Forrester et al., 2008), taking into account the
connectivity constraint (Raponi et al., 2019). Since
one goal of the infill search is to position the next point in order to improve the best observed value so far (y_min), the EI criterion aims to look for an infill position that maximizes the amount of improvement, defined as I(x) = max(y_min − Y(x), 0), over the value of the objective function measured on the current optimum:

$$\mathbf{x}_{infill} = \underset{\mathbf{x}}{\mathrm{argmax}} \; E[I(\mathbf{x})]. \qquad (14)$$
In turn, the EI is defined as follows:
$$E[I(\mathbf{x})] = \begin{cases} (y_{min} - \hat{y}(\mathbf{x}))\,\Phi_Y\!\left(\dfrac{y_{min} - \hat{y}(\mathbf{x})}{\hat{s}(\mathbf{x})}\right) + \hat{s}(\mathbf{x})\,\phi_Y\!\left(\dfrac{y_{min} - \hat{y}(\mathbf{x})}{\hat{s}(\mathbf{x})}\right) & \text{if } \hat{s}(\mathbf{x}) > 0, \\ 0 & \text{if } \hat{s}(\mathbf{x}) = 0, \end{cases} \qquad (15)$$

where Φ_Y and φ_Y are the Gaussian cumulative distribution function and probability density function, respectively. Equation (15) can be seen as the sum of two terms:
the first term, proportional to (y_min − ŷ(x)), which controls the exploitative tendency of the search criterion;
the second term, proportional to ŝ(x), whose predominance would lead to an explorative choice of the new infill point.
Therefore, the EI infill criterion represents a good
trade-off between exploitation and exploration.
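A minimal sketch of the Expected Improvement of Equation (15) is given below, using SciPy's standard normal distribution; y_hat and s would come, for instance, from the Kriging sketch above.

```python
from scipy.stats import norm

def expected_improvement(y_min, y_hat, s):
    """Expected Improvement, Eq. (15)."""
    if s <= 0.0:
        return 0.0                       # no model uncertainty at this location
    z = (y_min - y_hat) / s
    # exploitation term                +  exploration term
    return (y_min - y_hat) * norm.cdf(z) + s * norm.pdf(z)
```

Maximizing this quantity over x, e.g. with Differential Evolution as done for the EICD in Section 4.2.1, yields the infill point of Equation (14).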
4.1.3 Covariance Matrix Adaptation Evolution
Strategy
The Covariance Matrix Adaptation Evolution Strat-
egy (CMA-ES) (Hansen, 2006) is a popular algorithm
for global optimization problems. The (μ, λ)-CMA-ES relies on the iterative sampling and updating of a multi-normal density

$$\vec{x}_k^{(g+1)} \sim \mathcal{N}\!\left(\langle\vec{x}\rangle_w^{(g)}, \; \left(\sigma^{(g)}\right)^2 C^{(g)}\right), \qquad k = 1, \ldots, \lambda, \qquad (16)$$
where g is the iteration counter, σ^(g) ∈ R_+ is the mutation step size, which controls the step length, and C^(g) ∈ R^{n×n}, where n is the dimensionality of the problem, is a covariance matrix, responsible for the ellipsoidal shape of the density function. CMA-ES is
a derandomized Evolution Strategy, which adapts the
covariance matrix of the normal distribution on the
basis of the previous search steps (Hansen and Oster-
meier, 2001). Such matrix represents pairwise depen-
dencies between the problem variables and its update
during the optimization procedure is of particular im-
portance when dealing with ill-conditioned objective
functions.
Regarding Equation (16), λ is the offspring population size and ⟨x⟩_w^(g) is the recombination point, which is computed as the weighted mean of selected individuals:

$$\langle\vec{x}\rangle_w^{(g)} = \sum_{i=1}^{\mu} w_i\, \vec{x}_{i:\lambda}^{(g)}, \qquad (17)$$

with w_i > 0, i = 1, ..., μ, and $\sum_{i=1}^{\mu} w_i = 1$. The indexing i : λ is used to denote the i-th best individual.
The mutation parameters, i.e. the step size σ^(g) and the covariance matrix C^(g), are automatically tuned by the algorithm. This procedure happens in two steps. First, the global step size undergoes an adaptation process. Afterwards, C^(g) is updated according to the evolution path, the μ weighted difference vectors of the newly selected parents, and the last recombination point (Hansen, 2005).
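For illustration, one generation of the sampling and recombination steps of Equations (16) and (17) is sketched below. The logarithmically decreasing weights and the omission of the step-size and covariance adaptation rules are simplifications of this sketch, so it is not the complete CMA-ES of Hansen (2006).

```python
import numpy as np

def cma_generation(f, x_mean, sigma, C, lam=10, mu=5, rng=None):
    """One (mu, lambda) generation: sample (Eq. (16)), select, recombine (Eq. (17))."""
    rng = rng or np.random.default_rng()
    # Sample lambda offspring from N(<x>_w, sigma^2 * C)
    offspring = rng.multivariate_normal(x_mean, sigma ** 2 * C, size=lam)
    fitness = np.array([f(x) for x in offspring])
    best = offspring[np.argsort(fitness)[:mu]]       # the mu best individuals (i : lambda)
    # Positive, decreasing recombination weights summing to one
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    x_mean_new = w @ best                            # weighted recombination point, Eq. (17)
    # In the full algorithm, sigma and C would now be adapted (Hansen, 2006)
    return x_mean_new, best, fitness
```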
4.1.4 Combining EGO and CMA-ES in the
HKG-LSM
The main contribution of this paper is to efficiently
combine CMA-ES and EGO, which are complemen-
tary techniques in the global optimization of com-
putationally expensive functions (Mohammadi et al.,
2015). EGO constructs an approximation of the high-
fidelity model by evaluating the expensive objective
function on a chosen training set and improves the
optimum by adding new points to the model, while
CMA-ES samples points according to multi-normal
distribution and steadily converges towards the opti-
mum by means of recombination and mutation. As
a consequence, it looks natural to take advantage of
the potential that such methods exhibit at different
stages of the optimization process. This leads to an algorithm that starts with the Design of Experiments (DoE), which is characteristic of surrogate-based optimization techniques, then fits an initial Kriging approximation of the high-fidelity model, and proceeds with the optimization of this model based on EGO.
Hereafter, it switches to CMA-ES when no improve-
ment on the objective function value is observed for
a prescribed number of iterations. As already stated,
such strategy is proposed in this work and referred to
as Hybrid Kriging-assisted Level Set Method (HKG-
LSM) for Structural Topology Optimization.
The combination of EGO and CMA-ES to obtain
the HKG-LSM works as follows. If c
max
is a pre-
determined constant value, the optimization method switches from the first to the second sub-algorithm when fitness does not improve within a budget of c_max iterations. On the contrary, if the value c_max has not been reached yet and the EGO finds a design with a better value of the objective function with respect to the best so far, the counter c of the iterations without any improvement on the optimum is reset to 1. The c_max parameter can be chosen according to the considered problem, but, as discussed later in this work, it is convenient that the transition to CMA-ES occurs once the explorative capabilities of KG-LSM are sufficiently exploited.
At the moment of the switch between the considered sub-algorithms, the mutation step size σ of the CMA-ES optimizer is taken as the exponentially weighted average (EWA) of the differences in position between the unpromising infill points and the current optimum design during the last c_max iterations of the KG-LSM, rescaled by a constant scalar α. Such EWA is iteratively computed over the c ≤ c_max iterations as follows:

$$v_c = \begin{cases} x_{i,1}, & \text{if } c = 1, \\ \beta v_{c-1} + (1 - \beta)\, x_{i,c}, & \text{if } c > 1, \end{cases} \qquad (18)$$
where the coefficient β represents the degree of weighting decrease, a constant smoothing factor between 0 and 1, chosen in this work equal to 0.1 (a lower β discounts older observations faster), x_{i,c} is the value of the i-th component of the infill design at a time period c, and v_c is the value of the EWA at any time period c. Equation (18) applies weighting factors to the positions of the infill points so that the most recent candidates have a greater influence on the step-size definition. This choice for the σ parameter is aimed at taking into account the predominant tendency of the KG-LSM to explore/exploit the design space during the unsuccessful search for a new current optimum within the prescribed number of iterations c_max, leading to a larger/smaller mutation step size for the Evolution Strategy as soon as the transition occurs.
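The step-size initialization of Equation (18) can be sketched as follows. Interpreting the recursion component-wise over the distances between the last c_max infill points and the current optimum follows the textual description above, and reducing the resulting vector to a scalar σ is an assumption of this illustration.

```python
import numpy as np

def ewa_step_size(infill_points, x_best, beta=0.1, alpha=1.0):
    """Exponentially weighted average of Eq. (18), used to initialize sigma.

    `infill_points`: unpromising infill designs of the last c_max KG-LSM
    iterations, oldest first; `x_best`: current optimum design.
    """
    diffs = np.abs(np.asarray(infill_points) - x_best)   # per-component distances
    v = diffs[0]
    for d in diffs[1:]:
        v = beta * v + (1.0 - beta) * d                  # recent points dominate (beta = 0.1)
    # alpha rescales the EWA; a scalar sigma could, e.g., be its mean (assumption)
    return alpha * v
```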
4.2 Constraint Handling Techniques
During the DoE phase of the optimization algorithm,
the sample points that are not feasible are automati-
cally discarded and only feasible points are used to
construct the surrogate model. Different techniques
are used to deal with the connectivity and volume constraints during the infill procedure.
4.2.1 Expected Improvement for Connected
Designs
The Expected Improvement for Connected Designs
(EICD) is a variant of the standard EI, defined to en-
sure the connectivity of the promising candidates dur-
ing the infill procedure (Raponi et al., 2017; Raponi
et al., 2019). The main idea is to penalize the infeasi-
ble designs according to the level of infeasibility. To
this end, a graph representation of the structure is used
(Raponi et al., 2019). When disconnected designs
are met during the maximization of EI by Differen-
tial Evolution (DE) (Storn and Price, 1997), the algo-
rithm computes a penalty P, which takes into account
the amount of violation of the connectivity constraint:
$$P = \gamma\,(P_1 + P_2 + P_3), \qquad (19)$$

where each P_i, for i = 1, 2, 3, is the minimum extra distance that has been computed according to the 1st, 2nd or 3rd type of disconnection presented in Figure 3, and γ is a suitable penalty factor.
Such penalty is used to modify the EI criterion
(15) as follows:
$$EICD(\mathbf{x}) = \begin{cases} E[I(\mathbf{x})] & \text{if } \mathbf{x} \text{ is connected}, \\ -P(\mathbf{x}) & \text{if } \mathbf{x} \text{ is disconnected}, \end{cases} \qquad (20)$$
where the introduction of the penalty P is dictated by
the necessity to avoid the creation of large flat areas
in the landscape of the optimization problem, which
might cause stagnation of the DE search.
4.2.2 Constrained Expected Improvement
In this study, the Constrained Expected Improvement
(CEI) (Forrester et al., 2008) is used to generate points
that satisfy a prescribed volume limit. At each iter-
ation of the optimization strategy, an approximation
of the function defining the volume constraint is also
constructed. At this stage, the probability that the
design satisfies the volume constraint (Probability of
Feasibility (PF)) is computed as:
$$P[F(\mathbf{x})] = \frac{1}{\sqrt{2\pi\hat{\sigma}^2(\mathbf{x})}} \int_{-\infty}^{0} \exp\left(-\frac{(F - \hat{g}(\mathbf{x}))^2}{2\hat{\sigma}^2(\mathbf{x})}\right) \mathrm{d}F, \qquad (21)$$

where ĝ(x) is the prediction of the constraint function, σ̂²(x) is the variance of the model of the constraint, and F = G(x) − V_req is the measure of feasibility. Hence,
the probability that a feasible infill point improves on
the best value so far can be computed by maximizing
the product between EI and PF:
$$\mathbf{x}_{infill} = \underset{\mathbf{x}}{\mathrm{argmax}}\; CEI(\mathbf{x}) = \underset{\mathbf{x}}{\mathrm{argmax}}\; E[I(\mathbf{x})]\, P[F(\mathbf{x})], \qquad (22)$$
where the product is justified by the independence of
the models.
To consider both the connectivity and volume con-
straints, the coupling between EICD and CEI is ob-
tained by applying the connectivity check in the max-
imization of EI·PF, leading to the following combined
definition:
$$EICD(\mathbf{x}) = \begin{cases} CEI(\mathbf{x}) & \text{if } \mathbf{x} \text{ is connected}, \\ -P(\mathbf{x}) & \text{if } \mathbf{x} \text{ is disconnected}. \end{cases} \qquad (23)$$
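Putting the pieces together, a hedged sketch of the combined criterion of Equation (23) could look as follows. Here is_connected and connectivity_penalty stand for the graph-based connectivity check and the penalty of Equation (19), the two Kriging models (objective and volume constraint) reuse the kriging_predict and expected_improvement sketches given earlier, and all names are hypothetical placeholders.

```python
from scipy.stats import norm

def probability_of_feasibility(g_hat, s_g):
    """Probability that the predicted constraint value is non-positive, Eq. (21)."""
    if s_g <= 0.0:
        return 1.0 if g_hat <= 0.0 else 0.0
    return norm.cdf(-g_hat / s_g)

def eicd(x, y_min, obj_model, con_model, is_connected, connectivity_penalty):
    """Expected Improvement for Connected Designs with volume handling, Eq. (23)."""
    if not is_connected(x):
        return -connectivity_penalty(x)              # penalized as in Eq. (20)
    y_hat, s2 = kriging_predict(obj_model, x)        # objective surrogate
    g_hat, s2_g = kriging_predict(con_model, x)      # volume-constraint surrogate
    ei = expected_improvement(y_min, y_hat, s2 ** 0.5)
    pf = probability_of_feasibility(g_hat, s2_g ** 0.5)
    return ei * pf                                   # constrained EI, Eq. (22)
```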
4.2.3 Exterior Penalty Method
The techniques described in Sections 4.2.1 and 4.2.2
are appropriate for the Kriging-based EGO. There-
fore, the Exterior Penalty Method (Rao, 1996) is in-
troduced when the CMA-ES comes in. It works by
penalizing the objective function values every time
a constraint is not satisfied. Therefore, the objective
function assumes the following shape:
$$f(\mathbf{x}) = f_{obj}(\mathbf{x}) + c \cdot \max(0, g(\mathbf{x})), \qquad (24)$$

where f_obj is the objective function to be minimized in the original problem, g is a nonlinear constraint and
c denotes a penalty constant. This leads to a modifi-
cation of the objective function value that automati-
cally rejects the penalized design. The penalty is usu-
ally taken as a very large constant value, in order to
immediately ensure rejection of an infeasible design.
The penalized objective function is referred to as cost
function.
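A minimal sketch of the penalized cost function of Equation (24) used during the CMA-ES phase is shown below; the value of the penalty constant c is an arbitrary example.

```python
def penalized_cost(x, f_obj, g, c=1e6):
    """Exterior penalty method, Eq. (24): cost = f_obj(x) + c * max(0, g(x))."""
    return f_obj(x) + c * max(0.0, g(x))
```

For the present problem, g(x) would be the volume constraint V(x) − V_req of Section 3.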
5 TEST CASE
The proposed HKG-LSM optimization strategy is
here evaluated on a linear elastic test case, which is
characterized by low computational costs and suffi-
cient to draw some conclusions about the promising-
ness of the approach. The test is performed on the
standard cantilever beam benchmark problem, whose
structure is optimized to minimize compliance. The
previously developed KG-LSM (Raponi et al., 2017;
Raponi et al., 2019) and the EA-LSM, using CMA-
ES (Hansen, 2006; Bujny et al., 2018) are taken as a
reference to compare the convergence properties and
the optimized designs resulting from the proposed ap-
proach.
As shown in Figure 4, the beam is fixed to a sup-
port at the left-hand side and a static unit load is ap-
plied in the middle of the right-hand side. The domain
dimensions are 20 × 10 [mm]. The FEM mesh was generated with CalculiX, Version 2.13 (an open-source 3D structural FEM software developed at MTU Aero Engines in Munich: http://www.calculix.de/). It is a structured grid composed of 5000 four-node square shell (S4R) finite elements, arranged in a 100 × 50 grid.

Table 1: Cantilever beam test case settings.
Property         | Symbol | Value                | Unit
Young's modulus  | E      | 2.1 · 10^5           | MPa
Poisson's ratio  | ν      | 0.3                  | -
Load             | F      | 1                    | N
Volume fraction  | V_f    | 50%                  | -
Mesh resolution  | -      | 100 × 50             | -
Element type     | -      | 4-node shell element | -
Solver           | -      | CalculiX 2.9         | -
Figure 4: Problem definition of the Cantilever beam test
case (Raponi et al., 2019).
If an element is located in an area of the design do-
main where the level-set function assumes a positive
value, then it is assigned a Young’s modulus equal to
the one specified in Table 1. Otherwise, only 1% of this value is assigned. Therefore,
at each iteration of the optimization process, through-
out which the FEM mesh remains unchanged, areas
corresponding to void are occupied by a very weak
material.
The considered configuration consists of 6 beams
symmetric with respect to a horizontal axis and evenly
positioned all over the design domain (Figure 5).
Figure 5: Reference layout of 6-beams for the static Can-
tilever Beam test case (Raponi et al., 2019).
This structure leads to a 9-dimensional problem: for each beam, the x_0 and y_0 position parameters and the orientation θ from Equation (4) are subjected to optimization, while the length and thickness are fixed. Therefore, the vector of design variables is of the form (x_{0_1}, y_{0_1}, θ_1, x_{0_2}, y_{0_2}, θ_2, x_{0_3}, y_{0_3}, θ_3), and the parameters defining the remaining beams are obtained by symmetry in order to reduce the problem dimensionality. This test case allows us to evaluate the promis-
ingness of the method and to formulate some con-
siderations about the concurrent activity of the con-
nectivity and volume constraints for the treated struc-
tures. As shown in Table 1, the allowed volume limit
for each material distribution is set to 50% of the de-
sign domain.
6 EXPERIMENTAL EVALUATION
The following section presents the experimental setup
used to test the proposed HKG-LSM and the obtained
results.
6.1 Experimental Setup
The developed HKG-LSM is tested on a 6-beams 9-
variables configuration. Starting from a 300-samples
DoE, the following strategies are compared for a total
budget of 500 calls to the objective function:
HKG-LSM-bias10(/30/50)-exp: Hybrid Kriging-
guided Level Set Method where the switch
from KG-LSM to CMA-ES is done when the
best observed objective value is not updated for
10(/30/50) iterations; the step size for the CMA-
ES is chosen as the exponentially weighted aver-
age of the distances between the best structure and
the unpromising infill points;
KG-LSM: Kriging-guided optimization method with initial DoE selection by removing the infeasible points and the EICD to handle both the connectivity and volume constraints during the infill procedure. It is shown for comparison purposes;
EA-LSM using CMA-ES and taking the reference
structure in Figure 5 as first parent design. Here,
the exterior penalty method is used to drive the
search towards feasible designs. It is shown for
comparison purposes.
In the KG-LSM optimization algorithm, θ_L = [1E-3]^9 and θ_U = [1E3]^9 are chosen as user-defined bounds for the theta parameter to be optimized through Maximum Likelihood Estimation, as explained in Section 4.1.1. A non-elitist (μ, λ)-CMA-ES strategy is used and initialized with a parent population of μ = 5 individuals, whereas λ = 10 is the offspring population size. The mutation step is characterized by a normally distributed random vector defined by the initial mean m_init, while the standard deviation σ_init is chosen in this work as the exponentially weighted average described in Section 4.1.4.
6.2 Results
Due to the non-deterministic nature of the considered
optimization algorithms, for each one 30 optimization
runs are performed (with different random seeds). For
each strategy, the average convergence of the com-
pliance objective function in terms of evaluations is
shown in Figure 6. Here, the objective function is
normalized with respect to the reference design for
the 6-beams test case defined in Section 5.
Figure 6: Convergence of the compliance function averaged over 30 runs for the HKG-LSM starting from a 300-samples DoE, with selection of feasible designs. The trends for the 10-, 30- and 50-iteration biases for the switch are compared. The evaluations spent for the DoE phase are not shown in the plot since they are performed once for all the methods. The KG-LSM and the CMA-ES are also shown as a reference.
From Figure 6 it is clear that when evaluation 280 is reached, the CMA-ES seems more suitable for exploiting the best design obtained so far, leading to an unquestionably better performance than the KG-LSM. When observing the HKG-LSM strategies, the predominance of the CMA-ES is not evident any-
more. In fact, the hybrid techniques take advantage
of the initial rapid decrease of the objective func-
tion resulting from the Kriging-based optimization
and, when no improvement is detected within a cer-
tain number of iterations, the algorithm transits to
the CMA-ES. The HKG-LSM-bias50-exp provides a
better convergence trend than the other hybrid tech-
niques. Indeed, an early switch to the CMA-ES might
cause an excessive exploitation of a partially opti-
mized design, increasing its probability to converge
to a local optimum. On the other hand, a late transi-
tion from KG-LSM to CMA-ES has a better chance
of bringing to the Evolution Strategy a design that is
the result of a more explorative strategy, which did not
focus on localized areas of the problem domain, but
rather found optimal designs by searching emptier ar-
eas, characterized by a high variance of the surrogate
model. Such considerations can be validated by the
following analysis. According to Figure 7, three main groups of material distributions, each representing a different (local) optimum obtained by the HKG-LSM-bias50-exp, can be distinguished. The frequencies with which they have been observed are 20%, 20% and 40% for the first, second and third optimum, respectively. On the other hand, the same local optima could be detected after inspection of the results obtained for the HKG-LSM-bias10-exp. Here, the percentages drop to 7%, 7% and 53%, respectively. Such data
demonstrate that a premature switch to the CMA-ES
might give preference to local optima, leading to a
worse convergence trend for the minimization of the
cost function.
Figure 7: Three main topology types obtained with the HKG-LSM-bias50-exp method and the frequency with which they have been developed in 30 optimization runs. (a) First optimum: 20%. (b) Second optimum: 20%. (c) Third optimum: 40%.
In general, from the final structures optimized by the HKG-LSM (Figure 7), it can be deduced that such methods allow sufficient flexibility for reaching designs that are not far from the theoretical one presented by Michell (Michell, 1904) for a 6-beams configuration, shown in Figure 8. In fact, at this stage, the quality of the obtained beam configurations is affected by the impossibility of varying the length and thickness parameters of the beams. Therefore,
it would be very interesting to assess the convergence
capabilities of the proposed hybrid optimization tech-
niques in higher dimensionality static and dynamic
problems.
Figure 8: Michell theoretical model for a 6-beams static test
case.
To conclude, further information about the opti-
mized cost values can be found in Figure 9, which
presents the statistics for each method after 100, 250
and 500 evaluations. In order to compare the medi-
ans reached by the different optimization strategies,
the statistical Wilcoxon rank sum test is used. At evaluation 500, the null hypothesis that data from the different optimization methods have equal medians cannot be rejected at the 5% significance level when comparing HKG-LSM-bias30/50-exp with CMA-ES. This means that these strategies are comparable at the end of the considered maximum range of evaluations. Moreover, when evaluating the statistics at the beginning of the optimization process (after 100 evaluations), the same null hypothesis is rejected when comparing CMA-ES with the HKG-LSM strategies. Therefore, the HKG-LSM converges much faster than CMA-ES towards the optimum, validating the promisingness of surrogate-based strategies when a strict budget of evaluations is imposed.
7 CONCLUSIONS
The main aim of this work was the development of the
Hybrid Kriging-assisted Level-Set Method for Topol-
ogy Optimization and the evaluation of its poten-
tial and limits. The research was motivated by the
need for cheap procedures in modern automotive in-
dustry, where optimization strategies have proven to be good alternatives to the classical trial-and-error method. With a view to final applications in the crashworthiness design optimization field, where no gradient information is available, Efficient Global Optimization (EGO) with the use of Surrogate Models - Kriging in this case - and Evolution Strategies (ESs), CMA-ES in particular, represent valid alternatives to approach the problem. Their principles are different,
yet complementary: EGO constructs an approxima-
tion of the high-fidelity model by evaluating the ex-
pensive objective function on a chosen training set
and improves the optimum by adding new points to
the model, while CMA-ES learns and samples multi-
normal laws in the space of design variables and con-
verges towards the optimum by means of a recombi-
nation and mutation process of the individuals. On the
one hand, according to their nature, surrogate-based
techniques lead to a fast convergence of the objective
function towards the optimum at the beginning of the
optimization procedure. On the other hand, CMA-ES
steadily converges to the minimum as the number of
calls to the objective function increases and has a high
potential when asked to exploit some promising area
of the domain, leading to a refined design with a com-
petitive fitness if compared to its neighbors. As a con-
sequence, it seemed natural to take advantage of the
potential of such methods at different stages of the op-
timization process, leading to an algorithm that starts with the Design of Experiments (DoE), fits an initial Kriging approximation of the high-fidelity model on the available data, and proceeds with the optimization of this model based on EGO. Finally, it switches to
CMA-ES when no improvement on the best value of
the objective function is found for a prescribed number of iterations.

Figure 9: Statistical evaluation of the optimization methods compared for the 6-beams 9-variables cantilever beam test case: box plots for 30 runs after (a) 100, (b) 250, and (c) 500 evaluations.
In this research, the potential of the proposed
HKG-LSM was assessed through the study of a stan-
dard cantilever beam benchmark problem, where the
objective function to be minimized was the compli-
ance of the structure. The HKG-LSM led to signif-
icant improvements of the convergence properties in
the initial stages of the optimization process, if com-
pared to KG-LSM and EA-LSM using CMA-ES. In
particular, the number of iterations preceding the tran-
sition from the first sub-algorithm to the second one
proved to be relevant. The results from the static tests
show that the proposed approach can be a valid alter-
native to both the surrogate-based and evolutionary-
based methods, if taken alone, in the optimization of
mechanical structures. Indeed, in the initial phase
of the optimization process, significant improvements
of the convergence properties could be observed, if
compared with CMA-ES, while the hybrid algorithms
outperform KG-LSM at the end. In view of an ap-
plication to crash optimization problems, which are
characterized by a limited number of available evalu-
ations, such hybrid techniques are promising. There-
fore, further studies on static test cases of higher dimensionality and on crash scenarios are planned for future work. Moreover, a deeper analysis with regard to the estimation of the initial parameters for the CMA-ES is needed. In fact, the fitted
Kriging mean as an approximation to the true func-
tion can be used to initialize both the covariance ma-
trix and the step size of CMA-ES (Mohammadi et al.,
2015). Lastly, new methods to combine surrogates
with evolution strategies in order to obtain outper-
forming mechanical structures can be investigated.
REFERENCES
Allaire, G., Jouve, F., and Toader, A. M. (2004). Struc-
tural optimization using sensitivity analysis and a
level-set method. Journal of Computational Physics,
194(1):363–393.
Arsenyev, I. (2017). Efficient Surrogate-based Robust Design Optimization Method. PhD thesis, Technische Universität München.
Aulig, N. (2017). Generic Topology Optimization Based on Local State Features. PhD thesis, Technische Universität Darmstadt, VDI Verlag, Germany.
Aulig, N. and Olhofer, M. (2016). State-based representa-
tion for structural topology optimization and applica-
tion to crashworthiness. In 2016 IEEE Congress on
Evolutionary Computation (CEC), pages 1642–1649,
Vancouver, Canada.
Bendsøe, M. P. and Sigmund, O. (2004). Topology
Optimization - Theory, Methods, and Applications.
Springer Berlin Heidelberg, second edition.
Bujny, M., Aulig, N., Olhofer, M., and Duddeck, F. (2016).
Evolutionary Level Set Method for Crashworthiness
Topology Optimization. In VII European Congress
on Computational Methods in Applied Sciences and
Engineering, Crete Island, Greece.
Bujny, M., Aulig, N., Olhofer, M., and Duddeck, F. (2018).
Identification of optimal topologies for crashworthi-
ness with the evolutionary level set method. Interna-
tional Journal of Crashworthiness, 23(4):395–416.
Cressie, N. (1990). The origins of kriging. Mathematical
Geology, 22(3):239–252.
Duddeck, F., Hunkeler, S., Lozano, P., Wehrle, E., and
Zeng, D. (2016). Topology optimization for crash-
worthiness of thin-walled structures under axial im-
pact using hybrid cellular automata. Structural and
Multidisciplinary Optimization, 54(3):415–428.
Duddeck, F. and Volz, K. (2012). A new Topology Op-
timization Approach for Crashworthiness of Passen-
ger Vehicles Based on Physically Defined Equivalent
Static Loads. In ICrash International Crashworthi-
ness Conference, Milano, Italy.
Eschenauer, H. A., Kobelev, V. V., and Schumacher, A.
(1994). Bubble method for topology and shape
optimization of structures. Structural optimization,
8(1):42–51.
Fang, K. T., Li, R., and Sudjianto, A. (2005). Design and
Modeling for Computer Experiments. CRC Press.
Forrester, A. I. J., Sóbester, A., and Keane, A. J. (2008). Engineering Design via Surrogate Modelling - A Practical Guide. John Wiley & Sons Ltd.
Guo, X., Zhang, W., and Zhong, W. (2014). Doing Topol-
ogy Optimization Explicitly and Geometrically - A
New Moving Morphable Components Based Frame-
work. Journal of Applied Mechanics, 81(8):081009.
Haber, R. and Bendsøe, M. P. (1998). Problem Formula-
tion, Solution Procedures and Geometric Modeling:
Key issues in Variable-Topology Optimization. In 7th
AIAA/USAF/NASA/ISSMO Symposium on Multidisci-
plinary Analysis and Optimization, St. Louis, Mis-
souri, USA.
Hansen, N. (2005). The CMA Evolution Strategy: A Tuto-
rial. hal-01297037f.
Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In Towards a New Evolutionary Computation, Studies in Fuzziness and Soft Computing, pages 75–102. Springer Berlin Heidelberg. DOI: 10.1007/3-540-32494-1_4.
Hansen, N. and Ostermeier, A. (1996). Adapting arbitrary
normal mutation distributions in evolution strategies:
the covariance matrix adaptation. In Proceedings of
IEEE International Conference on Evolutionary Com-
putation, pages 312–317.
Hansen, N. and Ostermeier, A. (2001). Completely De-
randomized Self-Adaptation in Evolution Strategies.
Evolutionary Computation, 9(2):159–195.
Jones, D. R., Schonlau, M., and Welch, W. J. (1998).
Efficient Global Optimization of Expensive Black-
Box Functions. Journal of Global Optimization,
13(4):455–492.
Kleijnen, J. P. C. (2009). Kriging metamodeling in simu-
lation: A review. European Journal of Operational
Research, 192(3):707–716.
Lee, H.-A. and Park, G.-J. (2015). Nonlinear dynamic
response topology optimization using the equivalent
static loads method. Computer Methods in Applied
Mechanics and Engineering, 283:956–970.
Michell, A. G. M. (1904). LVIII. The limits of economy of
material in frame-structures. Philosophical Magazine,
8(47):589–597.
Mohammadi, H., Riche, R. L., and Touboul, E. (2015).
Making EGO and CMA-ES Complementary for
Global Optimization. In Learning and Intelligent Op-
timization, Lecture Notes in Computer Science, pages
287–292. Springer, Cham.
Mozumder, C., Renaud, J. E., and Tovar, A. (2012). Topom-
etry optimisation for crashworthiness design using hy-
brid cellular automata. International Journal of Vehi-
cle Design, 60(1-2).
Ortmann, C. and Schumacher, A. (2013). Graph and
Heuristic Based Topology Optimization of Crash
Loaded Structures. Structural and Multidisciplinary
Optimization, 47(6):839–854.
Osher, S. and Sethian, J. A. (1988). Fronts propagating
with curvature-dependent speed: Algorithms based on
Hamilton-Jacobi formulations. Journal of Computa-
tional Physics, 79(1):12–49.
Pedersen, C. B. W. (2003). Topology optimization design of
crushed 2d-frames for desired energy absorption his-
tory. Structural and Multidisciplinary Optimization,
25(5-6):368–382.
Rao, S. S. (1996). Engineering Optimization: Theory and
Practice. John Wiley & Sons.
Raponi, E., Bujny, M., Olhofer, M., Aulig, N., Boria, S., and
Duddeck, F. (2017). Kriging-guided Level Set Method
for Crash Topology Optimization. In 7th GACM Col-
loquium on Computational Mechanics for Young Sci-
entists from Academia and Industry, Stuttgart, Ger-
many.
Raponi, E., Bujny, M., Olhofer, M., Aulig, N., Boria, S.,
and Duddeck, F. (2019). Kriging-assisted topology
optimization of crash structures. Computer Methods
in Applied Mechanics and Engineering, 348:730–752.
Storn, R. and Price, K. (1997). Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11(4):341–359.