EVOLUTIONARY ALGORITHMS FOR THE IDENTIFICATION
OF STRUCTURAL SYSTEMS IN EARTHQUAKE ENGINEERING
Anastasia Athanasiou 1, Matteo De Felice 2, Giuseppe Oliveto 1 and Pietro S. Oliveto 3
1 Institute of Structural Engineering, University of Catania, Catania, Italy
2 Energy and Environment Modeling Technical Unit, ENEA, Rome, Italy
3 School of Computer Science, University of Birmingham, Birmingham, U.K.
Keywords: Earthquake engineering, Structural system identification, Evolution strategies, CMA-ES, Real-world applications.
Abstract:
An application of Evolution Strategies (ESs) to the structural identification of base isolation systems in earth-
quake engineering is presented. The analysis of a test problem considered in the literature clearly shows the
effectiveness of ESs for the problem. Simple ESs outperform the previously used methods while state-of-the-
art ones, such as the CMA-ES, provide practically exact solutions. The application of the CMA-ES to the real
data recorded in 2004, when releasing imposed displacements on a building in Solarino, leads to improved
identification results and gives hints of limitations in the model available in literature. Improved models of
higher dimensionality are introduced to overcome such limitations. The application of the CMA-ES with the
new models leads to improvements of up to 53% compared to the previous solutions in literature. Thus, ESs
are shown to be a very powerful tool for the dynamic identification of structural systems and an important aid
in the design and evaluation of models of high dimensionality for structure identification.
1 INTRODUCTION
Structural engineering is a special technological field
dealing with the analysis and design of engineer-
ing structures that must resist internal and/or external
loads. Such structures may be integral parts of build-
ings, bridges, dams, ship hulls, aircraft, engines and
so on. The design of such structures is an optimisa-
tion process by which the resistance capacity of the
system is made to meet the demands posed to it by
the environment. This process is based on the satis-
faction of the basic design inequality by which the ca-
pacity must be no lower than the demand. While the
capacity can be established by the engineer at each
step of the design process, the demand depends both
on the characteristics of the system itself and on its
interaction with the surrounding environment.
The evaluation of the demands requires the sim-
ulation of the behaviour of the structural system (i.e.
the response) under service and/or extreme loading
conditions (e.g. earthquakes, tornadoes, turbulence, etc.). Such simulations require the construction of me-
chanical models which enable the prediction of the
system’s behaviour. Usually a mechanical model is
described by a system of linear or non-linear differen-
tial equations and a set of physical parameters. While
the system of differential equations is derived from
first principles in mechanics, the physical parameters
are derived from laboratory tests on materials and/or
on structural parts of the system.
Structural identification can serve the dual purpose of establishing whether a given model is suitable to describe the behaviour of a structural system and of verifying that the physical parameters fed into a reliable
model correspond to the characteristics of the actual
materials used in the construction of the system.
Base isolation is a modern system for the pro-
tection of buildings and other constructions against
earthquake excitations and works on the principle of
decoupling the motion of the ground from that of the
building. Ideally the building should stay still while
the ground moves beneath it. This is achieved by in-
terposing a set of special bearings (i.e. seismic isola-
tors) between the foundation and the superstructure.
Given the low stiffness of the building structure
due to the presence of the seismic isolators it is rather
easy to displace (i.e. move) the building by pushing
it at the base with suitable actuators (i.e. hydraulic
jacks). This system has been used in a handful of
applications around the world (see the literature re-
view in (Oliveto et al., 2010) and references (15–19)
therein) including one in the town of Solarino in East-
ern Sicily (Oliveto et al., 2010).
The Solarino building was tested by release of im-
posed displacements in July 2004 and accelerations
were recorded at each floor. These recordings were
used as response functions for the identification of the
base isolation system (Oliveto et al., 2010).
An iterative procedure based on the least squares
method was used in (Oliveto et al., 2010) for the iden-
tification. This required tedious calculations of gradi-
ents which were done approximately by means of an
ingenious numerical procedure. Before applying the
identification procedure to the experimental data, the
same procedure was evaluated against a test problem
for which the solution was known. Hence, the abil-
ity of the optimisation algorithm was assessed in the
absence of measurement noise and with the guarantee
that the function to be identified fits the model. The
procedure was then applied to the real data derived
from the tests on the Solarino building.
Although the authors of (Oliveto et al., 2010) are
satisfied with their results, they conclude that a “need
for improvement both in the models and testing pro-
cedures also emerges from the numerical applications
and results obtained”. In particular, finding the “best” algorithm for the identification of this kind of problem would provide an improvement on the state of the art in the identification of buildings and structures from dynamic tests.
In this paper the described problem is addressed
by applying Evolutionary Algorithms (EAs) for the
identification of structural engineering systems. In-
deed, the first applications of evolutionary computation were directed towards parameter optimisation for civil engineering simulation models, e.g. simu-
lating stress and displacement patterns of structures
under load (Schwefel, 1993).
Firstly, the performance of well known evolution-
ary algorithms for numerical optimization (i.e. Evo-
lution Strategies (ESs)) is evaluated on the same test
problem considered in (Oliveto et al., 2010). Several
ESs are applied and their performance is compared
amongst themselves and against the previous results
obtained in (Oliveto et al., 2010). It is shown that even
simple ESs outperform the previously used methods,
while state-of-the-art ones such as the CMA-ES, pro-
vide solutions improved by several orders of magni-
tude, practically the exact solution.
By applying efficient ESs to the real data from
the Solarino experiments, further and convincing evi-
dence is given of the limitations of the model for the
identification of the base isolation system. Such lim-
itations could not be as visible from the results ob-
tained with the previously used optimisation meth-
ods. Finally, new improved models designed to over-
come the limitations exhibited by the previous ones
are tested. It is stressed that the application simplicity and performance reliability of ESs allowed improved models of higher dimensionality to be evaluated in a much smaller amount of time than would otherwise have been required.
The paper is structured as follows. The system
identification problem is described in Section 2 where
previous results are presented together with a brief in-
troduction of ESs. A comparative study is performed
on the test problem in Section 3. The best performing
ESs are applied in Section 4 to data from experimen-
tal tests on the Solarino building. Two new models
for the identification of hybrid base-isolation systems
are presented in Section 5 together with the results ob-
tained from the identification of the Solarino building.
In the final section conclusions are drawn.
2 PRELIMINARIES
2.1 The Mechanical Model
The mechanical model simulating the experiments
performed on the Solarino building is provided by the
one degree of freedom system shown in Fig. 1. The
justification for its use can be found in (Oliveto et al.,
2010). The mechanical model consists of a mass re-
strained by a bi-linear spring (BS) in parallel with a
linear damper (LD) and a friction device (FD). Fig.
1 (a) describes the mechanical system, while Fig. 1
(b) shows the constitutive behaviour of the bi-linear
spring (modelling rubber bearings). Fig. 1 (c) shows
the relationship between the force in the friction de-
vice and the corresponding displacement (modelling
sliding bearings).
The mechanical model is governed by the following second order ordinary differential equation:

$m \ddot{u} + c \dot{u} + f_s(u, \dot{u}) + f_d \cdot \mathrm{sign}(\dot{u}) = 0$   (1)
where c is the constant of the linear damper (LD), $f_d$ is the dynamic friction force in the friction device while $\dot{u}$ and $\ddot{u}$ are respectively the first and the second derivatives of the displacement u(t) with respect to time. Physically, the derivatives represent the velocity ($\dot{u}$) and the acceleration ($\ddot{u}$) of the mass m of the building. Finally, the restoring force in the bi-linear spring $f_s(u, \dot{u})$ depends on the various phases of motion of the mechanical model, that is the various branches shown in Fig. 1 (b):
$f_s(u, \dot{u}) = k_0 \cdot [u - u_i - u_y \cdot \mathrm{sign}(\dot{u})] + k_1 \cdot [u_i + u_y \cdot \mathrm{sign}(\dot{u})]$
Figure 1: The mechanical model: (a) mechanical system; (b) constitutive behaviour of the bi-linear spring; (c) friction force-displacement relationship.
for the branches of slope $k_0$, and

$f_s(u, \dot{u}) = k_0 \cdot u_y \cdot \mathrm{sign}(\dot{u}) + k_1 \cdot [u - u_y \cdot \mathrm{sign}(\dot{u})]$

for the branches of slope $k_1$. In the above equations $u_i$ is the displacement at the beginning of the considered phase of motion while $u_y$ is the yield displacement of the bi-linear spring as shown in Fig. 1 (b). The equation of motion (1) is supplemented by the following two initial conditions, which are implicit to the considered experiment:
$u(t_0) = u_0, \qquad \dot{u}(t_0) = 0$

and $u_0$ is the imposed displacement.
The stated problem is highly non-linear but due
to the very simple excitation it nevertheless admits an
analytical solution (refer to (Oliveto et al., 2010) for
the solution). The existence of the analytical solu-
tion is convenient but by no means essential because
the equation of motion could be solved numerically at
the expense of additional computational costs and of
some loss in precision.
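Purely as an illustration of such a numerical alternative, the sketch below integrates Eq. (1) with a small fixed time step, representing the bi-linear spring incrementally by clipping an elastic trial force between the two yield branches of Fig. 1 (b). This is an assumption of the present note rather than the procedure used in the paper (all names and step sizes are illustrative), and the crude treatment of the sign(u̇) friction term near velocity reversals is one reason why the analytical phase-wise solution is preferable in practice.

```python
import numpy as np

def simulate(m, c, k0, k1, uy, fd, u0, t_end=5.0, dt=1e-4):
    """Rough numerical integration of Eq. (1) for a release-from-rest test.

    The bi-linear spring is advanced incrementally: the elastic trial force
    fs + k0*du is clipped between the two yield branches
    k1*u -/+ (k0 - k1)*uy, which reproduces the k0 and k1 branches of Fig. 1(b).
    """
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    u[0] = u0                                  # imposed initial displacement, zero velocity
    # spring force after the quasi-static imposition of u0 (on the loading branch if u0 > uy)
    fs = k0 * u0 if abs(u0) <= uy else np.sign(u0) * (k0 * uy + k1 * (abs(u0) - uy))
    for i in range(n - 1):
        # friction force opposes the motion (taken as zero when practically at rest)
        fric = fd * np.sign(v[i]) if abs(v[i]) > 1e-12 else 0.0
        a[i] = -(c * v[i] + fs + fric) / m     # from Eq. (1)
        v[i + 1] = v[i] + a[i] * dt            # semi-implicit Euler
        du = v[i + 1] * dt
        u[i + 1] = u[i] + du
        f_trial = fs + k0 * du                 # elastic predictor with slope k0
        lo = k1 * u[i + 1] - (k0 - k1) * uy    # lower yield branch of slope k1
        hi = k1 * u[i + 1] + (k0 - k1) * uy    # upper yield branch of slope k1
        fs = min(max(f_trial, lo), hi)         # return to the bi-linear envelope
    a[-1] = a[-2]
    return t, u, v, a
```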
The parameters that define the mechanical model are shown in Fig. 1, and for convenience are listed in the following vector: $(m, c, k_0, k_1, u_y, f_d)$. They represent the basic physical properties that must be identified. However, in view of the form given to the solution in (Oliveto et al., 2010), a new set of parameters is defined as follows: $(\omega_0, \omega_1, u_d, u_y, \zeta_0)$. This is related to the previous one by the following relationships:

$\omega_0 = \sqrt{\dfrac{k_0}{m}}, \quad \omega_1 = \sqrt{\dfrac{k_1}{m}}, \quad u_d = \dfrac{f_d}{k_0}, \quad \zeta_0 = \dfrac{c}{2 m \omega_0}$
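As a small worked example of these relationships, the following sketch (function and variable names are illustrative, not from the paper) maps the physical parameter set to the identification set.

```python
import math

def to_identification_parameters(m, c, k0, k1, uy, fd):
    """Map the physical set (m, c, k0, k1, uy, fd) to the identification set (w0, w1, ud, uy, zeta0)."""
    w0 = math.sqrt(k0 / m)        # frequency parameter associated with the initial stiffness k0
    w1 = math.sqrt(k1 / m)        # frequency parameter associated with the post-yield stiffness k1
    ud = fd / k0                  # friction force expressed as an equivalent displacement
    zeta0 = c / (2.0 * m * w0)    # damping ratio of the linear damper
    return w0, w1, ud, uy, zeta0
```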
From Eq. (1) it can be inferred that three related
response functions could be used for identification
purposes: the displacement u(t), the velocity ˙u(t),
and the acceleration ¨u(t). For the application at hand,
the acceleration is the function that can be measured
most easily and therefore is the one that will be used.
As already mentioned an initial displacement $u_0$ is imposed in the dynamical tests. Since the measurement of $u_0$ can be difficult it may be considered as an
Figure 2: Acceleration functions (acceleration in m/s$^2$ versus time in s, curves $S_0$ and $S_1$) corresponding to the sets of system parameters shown in Table 1.
additional parameter that must be identified. Therefore, the system parameter vector to be optimised is the following one: $S = (u_0, \omega_0, \omega_1, u_d, u_y, \zeta_0)$.
Let $A_0$ be a vector of accelerations and $t_0$ be the vector of the corresponding times. Furthermore, let $A$ be a vector of the same length as $A_0$ and $t$ the vector of the corresponding times representing a candidate solution. Then, a measure of the distance between the experimental data and the modelled ones is provided by the following expression:

$e^2 = \dfrac{(A_0 - A,\, A_0 - A)}{(A_0, A_0)} + \dfrac{(t_0 - t,\, t_0 - t)}{(t_0, t_0)}$

where $(A, B) = \sum_{i=1}^{N} A_i \cdot B_i$ and N is the length of the considered vectors.
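A direct transcription of this error measure into code is straightforward. The sketch below is illustrative (the function name and the use of NumPy are assumptions of this note); it expects the recorded and modelled accelerations, together with the corresponding time vectors, as equal-length arrays.

```python
import numpy as np

def squared_error(A0, A, t0, t):
    """Normalised quadratic distance e^2 between recorded and modelled responses.

    (A, B) denotes the scalar product sum(A_i * B_i); both the acceleration
    mismatch and the timing mismatch contribute to the error.
    """
    A0, A, t0, t = map(np.asarray, (A0, A, t0, t))
    acc_term = np.dot(A0 - A, A0 - A) / np.dot(A0, A0)
    time_term = np.dot(t0 - t, t0 - t) / np.dot(t0, t0)
    return acc_term + time_term
```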
2.2 Previous Results
Before applying the iterative least squares method to
the experimental data, in (Oliveto et al., 2010) the
procedure was tested on a mathematically generated
dataset. In such a way, the optimal solution was
known beforehand and the measurement noise ex-
cluded. Two system parameter vectors were defined
so that one could be used to generate a set of exper-
imental (analytical) data (i.e. $S_1$), and the other to define the starting point of the identification process
Table 1: First line: the two sets of system parameters considered in Fig. 2. S1 is also the solution vector of the Test problem. Second line: range of admissible values for the system parameters. Third line: performance of the identification procedure according to the number of branches considered.

System        u0 (m)   ω0 (Hz)   ω1 (Hz)   ud (m)   uy (m)   ζ0
S0            0.12     0.50      0.40      0.005    0.02     0.05
S1 (opt)      0.12     0.55      0.35      0.004    0.03     0.03
Lower Bound   0.10     0.52      0.24      0.003    0.02     0.01
Upper Bound   0.14     0.58      0.48      0.005    0.04     0.05

Branches   Error           u0      ω0       ω1       ud       uy       ζ0
1          9.0981·10^-17   0.12    0.5500   0.3500   0.0040   0.0300   0.0300
1-2        9.4827·10^-10   0.12    0.5500   0.3500   0.0040   0.0300   0.0300
1-2-3      5.4252·10^-5    0.12    0.5503   0.3520   0.0038   0.0296   0.0338
1-2-3-4    2.4896·10^-4    0.12    0.5509   0.3534   0.0037   0.0294   0.0372
(i.e. $S_0$). The two vectors are given in Table 1 and the
related acceleration series are shown in Fig. 2.
The observation of Fig. 2 shows two kinds of
discontinuities in the acceleration graphs, a function
discontinuity and a slope discontinuity. Between two
function discontinuities a continuous branch can be
identified. The two graphs show the same number
of branches; however, depending on the values of the
system parameter vector the number of branches can
be different.
Table 1 shows the results obtained in (Oliveto
et al., 2010) with the iterative least squares method for
the test problem. Better results are obtained by using
only one or two branches of the acceleration record
as may be seen from the error amplitude and by the
coincidence of the identified parameters with the as-
sumed ones. As the number of branches included in
the identification procedure increases so does the er-
ror and the identified parameters are no longer coinci-
dent with the assumed ones.
The iterative least squares procedure described in
(Oliveto et al., 2010) is based on the numerical cal-
culation of the gradients of the test functions with re-
spect to the system parameters (refer to (Oliveto et al.,
2010), equations (29) – (35) for the actual procedure).
A suitable arc length is chosen in the trial system pa-
rameter vector space by appropriately modifying the
system of equations so that each component of the un-
known vector is dimensionless. In order to make the
procedure efficient it is necessary to “manually” re-
duce the “arc length” as the procedure converges and
the error becomes smaller.
Although there certainly is no presence of noise
and the function to be identified does “fit” the model,
the best found solution exhibits an error of the order of $10^{-4}$. The identified parameters differ from the given ones already at the third decimal digit. This is far from the desired accuracy and rules out obtaining better results for the real data recorded at Solarino, where measurement noise and modelling errors are also present.
A main problem in applying the least squares
method or any optimization method is the definition
of the starting point which in the case shown above
was the set of parameters in Table 1 corresponding to
the acceleration graph denoted by S
0
in Fig. 2. In
practice this problem can be overcome by providing
suitable lower and upper bounds to the sought param-
eters. This can be done by using physical insight on
the observation of the given acceleration record. The
bounds are shown in Table 1.
2.3 Evolution Strategies
The first applications of ESs were directed towards
the parameter optimisation for civil engineering sim-
ulation models e.g. simulating stress and/or displace-
ment states of structures under load (Schwefel, 1993).
Obviously the performance of the ESs was compared
against their natural competitors i.e. the mathemati-
cal methods used for the purpose, especially those not
requiring the use of derivatives explicitly (Schwefel,
1993). Similarly, in the next section the performance
of ESs will be compared against the methods applied
in (Oliveto et al., 2010) on the same mathematically
generated dataset (i.e. the test problem). To this end,
first it will be seen how very simple ESs perform on
the problem and then more complicated versions such
as the CMA-ES will be applied.
A general (µ +, λ)-ES maintains a parent popula-
tion of µ individuals, each consisting of a solution
vector and of a strategy vector. The solution vector
represents the candidate solution to the optimisation
problem. The strategy vector is a set of one or more
parameters that are used in combination with the so-
lution vector to create new candidate solutions.
In the considered problem an individual is represented as (x, s) where the solution vector $x = (x_1, x_2, \dots, x_6) \in \mathbb{R}^6$ is a real-valued vector for the candidate solution of the system parameter vector S introduced in Section 2. The initial µ solution vectors are generated uniformly at random within the parameter bounds given in Table 1.
Table 2: Average (Avg), Minimum (Min) and Median (Med) fitness found by 100 algorithm runs. (a) (1+1)-ES with no adaptation; (b) (1+1)-ES with no adaptation with normalised vectors; (c) (1+1)-ES with normalised vectors and rejecting points with fitness f = 1 as starting search point. The best results found are highlighted in bold.

         (a) (α = 1)                       (b) (α = 1) (Norm)                 (c) (α = 1) (Norm) (f0 ≠ 1)
σ        Avg      Min       Med            Avg       Min       Med            Avg       Min       Med
10^-5    0.3455   5.03E-04  0.1257         0.4643    0.006     2.74E-01       0.28      5.60E-03  6.63E-02
10^-4    0.1108   5.23E-05  0.021          0.3218    7.80E-04  9.51E-02       0.1966    5.70E-03  5.73E-02
10^-3    0.0255   7.74E-05  0.0028         0.2553    2.76E-05  9.40E-03       0.012     1.44E-04  6.40E-03
10^-2    0.0475   4.85E-04  0.0093         0.0595    4.51E-06  1.48E-04       0.0025    8.89E-06  1.24E-04
0.025    0.2219   3.40E-03  0.00527        0.0114    2.17E-05  1.25E-04       1.56E-04  1.42E-05  1.33E-04
0.05     0.3778   6.10E-03  0.1403         0.0097    4.24E-05  1.86E-04       2.52E-04  4.04E-05  2.06E-04
0.1      0.342    1.00E-02  0.140          4.79E-04  7.77E-05  4.45E-04       4.55E-04  7.29E-05  4.19E-04
0.5      0.466    6.10E-03  0.278          0.0056    3.24E-04  5.10E-03       0.064     1.40E-03  6.00E-03
Figure 3: Fitness value versus number of evaluations of all the 100 runs (no adaptation and normalisation); (left) σ = 0.0001,
(centre) σ = 0.01 and (right) σ = 0.5. The median and the mean are in bold.
In each optimisation step an offspring population
of λ individuals is generated. Each individual is cre-
ated by first selecting one of the µ individuals out of
the parent population uniformly at random, and then
by moving it in the search space of a quantity deter-
mined by applying its strategy vector s. The genera-
tion is completed by selecting the best µ individuals
out of the parent and of the offspring populations if
the plus selection strategy is used (i.e. (µ+λ)-EA) or
out of the offspring population if the comma selection
strategy is adopted (i.e. (µ,λ)-EA). The latter requires
that λ be greater than µ.
The way the strategy vector s is applied to gener-
ate new individuals is explained when describing each
ES considered in this paper. The main differences be-
tween various subclasses of ESs are in the size of the
strategy vector, on how it is used and on how its values
change during the optimisation process (i.e. adapta-
tion).
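As an illustration of this generic scheme, the sketch below implements a minimal (µ +, λ)-ES loop with a placeholder mutation operator; the fitness (the error measure e² of Section 2.1) is to be minimised. The actual mutation and adaptation rules compared in this paper are given in the following subsections, and all names, default values and the bound handling here are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng()

def evolution_strategy(fitness, lower, upper, mu=5, lam=20, plus=True,
                       mutate=None, budget=1000):
    """Generic (mu +, lambda)-ES skeleton with uniform random initialisation.

    `mutate(x, s)` must return a mutated copy of the solution vector x and the
    (possibly adapted) strategy vector s; a fixed-sigma Gaussian mutation is
    used as a placeholder when none is supplied. For comma selection lam >= mu.
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    if mutate is None:
        mutate = lambda x, s: (x + s * rng.standard_normal(x.size), s)
    # parent population: (solution vector, strategy vector, fitness)
    parents = []
    for _ in range(mu):
        x = rng.uniform(lower, upper)
        parents.append((x, 0.01, fitness(x)))
    evals = mu
    while evals < budget:
        offspring = []
        for _ in range(lam):
            x, s, _ = parents[rng.integers(mu)]        # uniform parent selection
            x_new, s_new = mutate(x.copy(), s)
            offspring.append((x_new, s_new, fitness(x_new)))
            evals += 1
        pool = parents + offspring if plus else offspring   # (mu+lam) or (mu,lam) selection
        parents = sorted(pool, key=lambda ind: ind[2])[:mu]
    return parents[0]
```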
3 TEST PROBLEM STUDY
In this section a study of popular ESs for the test prob-
lem considered in (Oliveto et al., 2010) is performed.
Throughout the whole section (unless stated other-
wise) each algorithm is run 100 times allowing 1000
fitness function evaluations in each run.
3.1 Experimental Setup: No Adaptation
The study begins by applying the simple (1+1)-ES
and using a strategy vector consisting of only one
strategy parameter σ (i.e. $s \in \mathbb{R}^1$). The solution vector x of the parent individual ($x \in \mathbb{R}^6$, σ) is initialised uniformly at random in the bounded search space. At each generation a new candidate solution is obtained by applying $\tilde{x} := x + z$ where $z := \sigma (N_1(0,1), \dots, N_N(0,1))$ and $N_i(0,1)$ are independent random samples from the standard normal distribution.
The only parameter of the algorithm is the stan-
dard deviation σ of the normal distribution used to
generate the offspring solution vector. The value of
σ does not change throughout the run of the algo-
rithm. Given the “tight” bounds on the feasible search
space provided in Section 2.2, the best fixed σ-values
are expected to be small. By generating 100 random
search points a minimum fitness (i.e. the error func-
tion value) of 0.0075 and an average fitness of 0.3950
are obtained. These values help to understand how
well the algorithms are fine-tuning the initial solution.
The algorithm is tested for standard deviations σ
distributed in the range $[10^{-6}, 5 \cdot 10^{-2}]$. The results are shown in Table 2 (a). The best solutions, which are obtained for σ = 0.001, show average and minimum fitness of 0.0255 and of $7.74 \cdot 10^{-5}$ respectively.
Given the different value ranges for the bounds of
each parameter, the solution is normalised according
Table 3: Effects of initial standard deviation σ0 and 1/5 adaptation rule on Average (Avg), Minimum (Min) and Median (Med) fitness on 100 algorithm runs. (a) (1+1)-ES with 1/5 rule (α = 0.85 and G = 5); (b) (1+1)-ES with 1/5 rule (α = 0.95 and G = 5); (c) (1+1)-ES with “local” 1/5-rule. New best results are highlighted in bold.

         (a) (α = 0.85)                     (b) (α = 0.95)                     (c) “local”-1/5
σ0       Avg       Min        Med           Avg       Min       Med            Avg       Min       Med
10^-4    0.0007    1.750E-06  1.52E-04      0.0029    2.21E-06  2.23E-04       3.15E-04  2.41E-06  2.20E-04
10^-3    1.36E-04  9.45E-07   1.02E-04      1.54E-04  1.19E-06  1.28E-04       3.47E-04  2.60E-06  1.87E-04
10^-2    0.0028    2.10E-07   8.87E-05      1.20E-04  2.08E-07  7.97E-05       0.0028    2.64E-06  1.72E-04
10^-1    0.0026    1.90E-07   8.06E-05      0.0047    1.00E-06  7.01E-05       0.0027    4.88E-06  2.04E-04
0.5      1.62E-04  2.56E-06   1.18E-04      1.25E-04  7.24E-07  8.00E-05       2.52E-04  3.62E-06  1.90E-04
1        1.57E-04  1.47E-06   1.16E-04      1.35E-04  1.07E-06  1.41E-04       0.0027    2.01E-06  1.74E-04
Figure 4: Performance of the three 1/5 success rule strategies, the isotropic (sSA) and the ellipsoidal (SA) on Sphere (left)
and on the test problem (right) when σ0 = 0.01. Median values out of 100 runs are plotted.
to $x_n = (x - \ell)/(u - \ell)$ where $x_n$ is the normalised solution and $u$, $\ell$ are the upper and lower bound vectors on the solution space.
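In code this normalisation is a min-max rescaling of each component to [0, 1] using the bounds of Table 1; the sketch below is a minimal version of that assumption (the mutation then operates on the normalised vector and the result is mapped back before evaluating the fitness).

```python
import numpy as np

def normalise(x, lower, upper):
    """Map a solution vector to the unit hypercube defined by the parameter bounds."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return (np.asarray(x, float) - lower) / (upper - lower)

def denormalise(x_n, lower, upper):
    """Inverse mapping from the unit hypercube back to physical parameter values."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return lower + np.asarray(x_n, float) * (upper - lower)
```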
In Table 2 (b) the results for the (1+1)-ES using
the normalised values are shown. If σ is too small the
starting point does not get improved much whether
the solution is normalised or not. However, for large
enough σ values the normalised version outperforms
the default one.
Examination of Table 2 (a) and (b) reveals a large
difference between the average and the best solution.
This indicates that the quality of the final solution may
depend considerably on the random starting point.
Fig. 3 shows the 100 runs for small, medium and large
σ values. From the graphs it can be noticed how hard
it is to escape from the penalty point (i.e. fitness=1)
corresponding to a different number of branches be-
tween candidate and optimal solutions. A large σ is
needed in order to escape. Table 2 (c) shows the re-
sults when such a point is rejected as a starting search
point. If σ is not too small, the mean values are very
close to the median ones, as desired.
In the following sections adaptation techniques
are applied to remove the dependence of the results on
the initial value of the standard deviation and achieve
solutions of improved quality. For the rest of the pa-
per, all the experiments are performed with the nor-
malisation of the vector bounds and with rejection of
starting points with different number of branches.
3.2 1/5 Rule Adaptation
One of the first methods proposed to control the mu-
tation rate in an ES was the 1/5-rule adaptation strat-
egy (Beyer and Schwefel, 2002). The idea is to tune
σ in a way that the success rate (i.e. the measured
ratio between the number of steps when the offspring
is retained and that when it is discarded) is about 1/5.
For the sphere function the 1/5-rule has been proved
to lead the (1+1)-ES to optimal mutation rates, hence
optimal performance (Beyer and Schwefel, 2002).
The strategy works as follows. After a given num-
ber of steps G, the mutation strength (i.e. the standard
deviation σ) is reduced by α if the rate of successful mutations $P_s := G_s/G$ is less than 1/5. On the other hand, if $P_s > 1/5$, the mutation rate is increased by α. Otherwise it remains unchanged. Recommended values are G = N, if the dimensionality of the search space N is sufficiently large, and $0.85 \le \alpha < 1$ (Beyer and Schwefel, 2002).
The 1/5-rule strategy recently proposed by Kern et al. (Kern et al., 2003), which allows the step size to be updated after each generation rather than waiting until the end of the “window phase” G, is also applied. In this simpler implementation, at each generation the step size is updated according to:

$\sigma_{t+1} = \sigma_t \cdot \begin{cases} \alpha & \text{if } f(x_{t+1}) \le f(x_t) \\ \alpha^{-(1/4)} & \text{otherwise.} \end{cases}$
Here α = 1/3 and the (1/4) in the exponent corre-
sponds to the success rate of 1/5.
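Both variants of the rule translate into a few lines each. The sketch below is a minimal rendering under the stated parameters; the window-based rule follows the text above (α < 1, shrink when the measured success rate falls below 1/5), while for the per-generation rule the multiplicative factor is set to a value greater than one purely for illustration (any such factor leaves the step size stationary at a 1/5 success rate thanks to the -1/4 exponent) and is an assumption of this sketch, not a value taken from the paper.

```python
def windowed_one_fifth(sigma, successes, G=5, alpha=0.95):
    """Classical 1/5 rule: adjust sigma once every G generations."""
    p_s = successes / G
    if p_s < 1.0 / 5.0:
        return sigma * alpha          # too few successes: shrink the step size
    if p_s > 1.0 / 5.0:
        return sigma / alpha          # too many successes: enlarge the step size
    return sigma

def local_one_fifth(sigma, success, alpha=1.4):
    """Per-generation rule of Kern et al. (2003): grow on success, shrink otherwise.

    With the exponent -1/4 the step size is stationary at a success rate of 1/5.
    """
    return sigma * alpha if success else sigma * alpha ** (-0.25)
```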
The same (1+1)-ES of the previous section with the 1/5-rule adaptation schemes are run with standard parameters. Table 3 shows the results for a wide range of initial standard deviations $\sigma_0$. The algorithm achieves mean fitness values and best solutions of the order of $10^{-4}$ and $10^{-7}$ respectively, whatever the initial standard deviation $\sigma_0$, provided that it is not too small. Also the median results are considerably improved, with values of the order of $10^{-5}$.
Fig. 4 shows the performance of the different 1/5-
rule algorithms considered on the Sphere function and
on the test function respectively. While on the Sphere,
the “local” 1/5 success rule is optimal, this does not
seem to be the case for the test problem at hand. The
larger “window” of G = 5 leads to a better adaptation
of the step size.
The step size adaptation is shown in Fig. 5 for
the test problem and the runs which provide median
results. It appears that the decrease of the step size in
the “local” 1/5 rule is too fast while it is too slow with
the “ellipsoidal” self adaptation technique introduced
and discussed in the next section.
3.3 Self-adaptation
Self-adaptation has been introduced as a mecha-
nism for the ES to automatically adjust the mutation
strength, by evolving not only the solution vector but
also the strategy parameters. The strategy vector is
also mutated such that standard deviations producing
fitter solutions have higher probabilities of survival,
hence are evolved implicitly.
By still considering only one strategy parameter, a
mutation with self-adaptation of individual (x,σ) in-
volves first generating a new σ-value and then apply-
ing it to the object vector x. This is done by setting $\tilde{\sigma} := \sigma \exp(\tau N(0,1))$, $z := \tilde{\sigma} (N_1(0,1), \dots, N_N(0,1))$ and $\tilde{x} := x + z$. Here $\tau = 1/\sqrt{2N}$ is generally recommended as standard deviation for $\tilde{\sigma}$ (Beyer and Schwefel, 2002). By using only one strategy parameter σ, the mutation distribution is isotropic, i.e. surfaces of equal probability densities are hyper-spheres (or circles for N = 2).
If N strategy parameters are used, individual step
sizes for each dimension are obtained leading to el-
lipsoidal surfaces of constant probability density as
the standard deviations evolve. With a strategy vector $s := (\sigma_1, \dots, \sigma_N)$ a new individual is generated by setting:

$\tilde{s} := \exp(\tau_0 N_0(0,1)) \cdot \big(\sigma_1 \exp(\tau N_1(0,1)), \dots, \sigma_N \exp(\tau N_N(0,1))\big)$
Figure 5: Step size evolution of the median fitness run for
the three 1/5 success rule strategies, the isotropic (sSA) and
the ellipsoidal (SA) on the test problem when σ0 = 0.01.
and $z := (\sigma_1 N_1(0,1), \dots, \sigma_N N_N(0,1))$. Recommended values for the parameters are $\tau_0 = 1/\sqrt{2N}$ and $\tau = 1/\sqrt{2\sqrt{N}}$ (Beyer and Schwefel, 2002).
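The two self-adaptive mutations can be written compactly as follows; this is a sketch directly following the formulas above, with the learning rates τ0 and τ set to the recommended values (function and variable names are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng()

def mutate_isotropic(x, sigma, N):
    """Single step size: mutate sigma log-normally, then perturb x isotropically."""
    tau = 1.0 / np.sqrt(2.0 * N)
    sigma_new = sigma * np.exp(tau * rng.standard_normal())
    x_new = x + sigma_new * rng.standard_normal(N)
    return x_new, sigma_new

def mutate_ellipsoidal(x, sigmas, N):
    """One step size per coordinate: global and coordinate-wise log-normal factors."""
    tau0 = 1.0 / np.sqrt(2.0 * N)
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(N))
    global_factor = np.exp(tau0 * rng.standard_normal())
    sigmas_new = global_factor * sigmas * np.exp(tau * rng.standard_normal(N))
    x_new = x + sigmas_new * rng.standard_normal(N)
    return x_new, sigmas_new
```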
The results for both strategies, isotropic (sSA) and
ellipsoidal (SA), are shown in Table 4, for the (1+10)-
ES. The results are very similar for other offspring
population sizes (from λ = 3 to λ = 20 were tested)
both with plus and comma selection strategies. Con-
cerning comma selection, bad average fitness values
are achieved if the initial σ is too high (i.e. σ > 0.5)
even though a penalty proportional to the distance
from the bounds is applied for comma strategies.
Overall, comparable average performance to that
of the (1+1)-ES with the “local” 1/5-Rule is obtained
independently of the initial values of σ as long as they
are not too high or too low. Hence, not as much in-
dependence from the initial σ as with the 1/5-rule is
obtained. The typical evolution rate of σ is shown in
Fig. 5 where it is compared with that of the 1/5-rule.
No significant improvement is achieved with the ellip-
soidal distributions with axes parallel to the reference
frame.
3.4 CMA-ES
Since the success of the described self-adaptation
technique relies on one-step improvements it is often
referred to as a local adaptation approach. By intro-
ducing correlations between the components of z the
ellipsoid may be arbitrarily rotated in the search space
and evolved to point in the direction of optimal solu-
tions. Another step towards more advanced parameter
adaptation techniques is to consider non-local infor-
mation gathered from more than one generation. Both
features are used by the (µ,λ)-CMA-ES considered in
this section.
The CMA-ES creates a multivariate normal distribution $N(m, C)$ determined by its mean vector $m \in \mathbb{R}^N$ and its covariance matrix $C \in \mathbb{R}^{N \times N}$. Instead of
keeping a population of µ individuals (as in the pre-
Table 4: Results for the (1+10)-ESs.

         Isotropic Mutations              Ellipsoidal Mutations
σ0       Avg       Min          σ0,i      Avg        Min
10^-4    0.0032    1.54E-05     10^-4     0.0179     7.08E-05
10^-3    4.23E-04  7.59E-06     10^-3     0.0086     6.74E-05
10^-2    0.0031    2.91E-06     10^-2     0.0013     2.45E-05
0.05     3.17E-04  3.68E-06     0.05      9.38E-04   8.74E-06
0.1      3.02E-04  9.35E-06     0.1       69.25E-04  1.25E-05
1        0.0033    1.45E-05     1         0.0116     5.67E-05
Table 5: (a) Summary of best results found for the Test function with 1000 fitness function evaluations; (b) 10000 fitness function evaluations.

ES         adaptation                                (a) Avg    Med       Min        (b) Avg    Med       Min
(1+1)-ES   No (σ0 = 0.01)                            0.0595     1.48E-04  4.51E-06   1.02E-05   8.35E-06  8.47E-07
(1+1)-ES   1/5 rule (α = 0.95, G = 5, σ0 = 0.01)     1.20E-04   7.97E-05  2.08E-07   1.32E-08   2.36E-09  1.39E-12
(1+λ)-ES   self ((1+5)-ES, σ0 = 1)                   0.0023     3.54E-04  2.30E-06   7.89E-05   5.69E-06  2.16E-08
CMA-ES     non-loc. rot. ellips. self.               2.51E-05   5.79E-06  2.35E-10   2.22E-15   5.66E-16  2.14E-16
viously considered ESs), the covariance matrix C and the mean vector m are evolved. At each step λ individuals are sampled from $N(m, \sigma^2 C)$ and the best µ are used to generate the new mean $\tilde{m}$ and covariance matrix $\tilde{C}$. For further details on the CMA-ES refer to (Hansen and Ostermeier, 2001).
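As an illustration only, a minimal ask-and-tell loop is sketched below using Hansen's third-party `cma` Python package; the package choice, the bound handling on the normalised unit hypercube and the option values are assumptions of this sketch, not a description of the implementation actually used for the experiments in this paper.

```python
import cma  # third-party package by N. Hansen: pip install cma

def identify(fitness, dim, sigma0=0.3, budget=100_000):
    """Minimise `fitness` over the normalised unit hypercube with CMA-ES."""
    x0 = [0.5] * dim                                   # start at the centre of the box
    es = cma.CMAEvolutionStrategy(x0, sigma0,
                                  {'bounds': [0.0, 1.0], 'maxfevals': budget})
    while not es.stop():
        candidates = es.ask()                          # sample lambda points from N(m, sigma^2 C)
        es.tell(candidates, [fitness(c) for c in candidates])
    return es.result.xbest, es.result.fbest
```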
In the present experiments, all the algorithmic parameters are set as recommended in (Hansen and Ostermeier, 2001), including $\lambda = 4 + \lfloor 3 \ln N \rfloor$ and $\mu = \lfloor \lambda/2 \rfloor$. In the experiment set on the test problem as previously described the CMA-ES provided a set of solutions with mean $2.51 \cdot 10^{-5}$, median $5.79 \cdot 10^{-6}$ and minimum $2.35 \cdot 10^{-10}$, thus outperforming the other considered algorithms by an order of magnitude in mean and median and by more than three orders of magnitude concerning the minimum solutions. No improvements are obtained by increasing or diminishing the population size (i.e. values from (1,4)-CMA-ES through (3,6)-CMA-ES up to (8,16)-CMA-ES).
Finally, 100 runs of the best performing EAs each
of 10000 fitness function evaluations were done. The
results are shown in Table 5(b). On this set of runs the mean and best results scored by the CMA-ES are of the order of $10^{-15}$ and $10^{-17}$ respectively.
4 SOLARINO DATA
In July 2004, six free vibration tests under im-
posed displacement were performed on a four story
reinforced concrete building, seismically retrofitted
by base isolation. The nominal displacements var-
ied from a minimum of 4.06cm to a maximum of
13.29cm in the six dynamic tests. Unfortunately,
these may not be true displacements since they may
include residual displacement from tests performed
previously in the sequence. The main objective is the identification of the properties of the isolation system (i.e. the parameters of the previously described model) and of the initial displacement, as discussed above, using the recorded accelerations.
The results obtained in (Oliveto et al., 2010) using
an iterative least squares method are shown in Table
6. The procedure required starting from a reasonable
guess for the system parameters (for the identification
of the first of the six tests), and use the first branch
of the recorded acceleration for the identification of
a new set of improved parameter values. This new
set of parameters was then used for the identification
from the first two branches and so on until five of the
branches were considered at once. The last branch
was derived from the governing equations using the
parameters identified from the previous segments of
the signal. For the identification of the following tests
the result of the first test was used as the initial guess
(i.e. the correct solution should be the same for all
tests excluding damage of the building caused by the
tests and/or effects due to environmental changes).
4.1 Previous Model
In this section advantage is taken of the simplicity of
ES application and the ESs that performed best on the
test problem are used. The parameter bounds on the search space are shown in Table 6.
While the CMA-ES once again outperforms the
other strategies, the difficulty of the identification
problem is highlighted by the need to increase the
population size to a (18,36)-CMA-ES to obtain im-
proved results. Ten runs each are performed of (4,9),
(9,18) and (18,36) CMA-ES and of the three 1/5 rule
ESs allowing 100k fitness evaluations in each run.
Table 6: Identification of the Solarino building base isolation system. Used lower and upper bounds of system parameters are shown in lines starting with LB and UB respectively. The best identification results obtained in the literature by the least squares (LS) method from five tests are shown first. The corresponding results obtained by the CMA-ES are shown next. The number of times the best found solution was obtained in the runs is shown in brackets next to the test number. Quadratic errors and improvement achieved by CMA-ES are shown in the last two columns.

     Test    u0 (m)   ud (m)   uy (m)   ζ0             ω0 (Hz)  ω1 (Hz)  e²       improvement
     LB      0.0813   0.0018   0.0092   0              0.4832   0.2819   -        -
     UB      0.1333   0.0069   0.0366   0.1            0.5651   0.4417   -        -
LS   3       0.1108   0.0034   0.0181   4.5E-08        0.5235   0.3947   0.0234   -
     5       0.1169   0.0034   0.0167   0.0127         0.5117   0.4070   0.0105   -
     6       0.1228   0.0035   0.0179   3.6E-08        0.5269   0.3909   0.0129   -
     7       0.0927   0.0033   0.0173   3.87E-08       0.5222   0.3964   0.0122   -
     8       0.0965   0.0025   0.0118   0.0306         0.5402   0.4242   0.0055   -
ES   3 (4)   0.1063   0.0026   0.0132   1E-10          0.5507   0.4014   0.0147   37%
     5 (2)   0.1153   0.0025   0.0127   0.0153         0.5384   0.4108   0.0049   53%
     6 (2)   0.1122   0.0027   0.0130   1E-10          0.5559   0.4036   0.0096   25%
     7 (3)   0.0856   0.0030   0.0132   1E-10          0.5455   0.4130   0.0106   13%
     8 (4)   0.0976   0.0026   0.0109   0.0307493278   0.5382   0.4235   0.0052   5%
Starting from parent population sizes of µ = 9 the
CMA-ES converges more than once to the same so-
lution.
Table 6 shows the best found solutions and the
number of times they were repeated. The other solutions do not differ significantly from the best ones, apart from slightly larger errors and differences in the final parameter digits.
Compared to the results from previous work, closer
bounds for the likely real values of the parameters are
now established.
The nominal values of the initial displacements,
in all likelihood affected by measurement errors, are
given for each test in (Oliveto et al., 2010) together
with the identified ones. The estimated displacements
were always smaller than the nominal ones; here they
are found even smaller, except for test 8.
In three tests out of five, damping ($\zeta_0$) is practically zero, in line with what was found in (Oliveto et al., 2010), while in the remaining two $\zeta_0$ is small but not negligible and a little larger than in (Oliveto et al.,
2010). This presence of damping in tests 5 and 8
could point to an incapacity of the optimization proce-
dures to identify the absolute minimum, producing in-
stead local minima. Alternatively, data inconsistency
due to measurement noise and/or inadequate signal
treatment could be responsible for the discrepancy.
The remaining physical parameters show less dis-
persion in the identified values than before, with a co-
efficient of variation nearly halved in each case. Actual values change from 0.13 to 0.07 for $u_d$, from 0.16 to 0.08 for $u_y$, from 0.02 to 0.01 for $\omega_0$ and from 0.03 to 0.01 for $\omega_1$. The situation could improve even
further if the inconsistency highlighted by damping
could be solved.
From the results shown in Table 6 and as com-
Figure 6: Model 2 friction force-displacement relationship.
mented above, a considerable improvement in the
identification procedure has been achieved by using
ES with the Solarino test data. In particular the results
are improved up to 53% compared to those previously
available in literature. The performance discrepancy
of ES between the test problem and the real problem
seems to suggest that further improvements might be
required in the formulation of the mechanical model.
Some preliminary investigations in that direction are
pursued in the next section.
4.2 New Models
Two small changes to the described model are investigated herein. They affect only the description of the friction damper of Fig. 1 and will reflect in changes to the diagram of Fig. 1(c). The changes stem from the experimental observation that the friction force is not constant during the motion. The diagram of Fig. 6 assumes that the friction force is constant within a half-
Figure 7: Model 3 friction force-displacement relationship.
cycle of motion but it can change from one half-cycle
to the next. The considered change does not affect the
equations of the mechanical model as it only changes
one parameter value, but not the structure of the prob-
lem. However the dimension of the system parameter
vector is affected because an additional parameter is
required for each half-cycle of motion as compared to
the single one required earlier.
The considered model, for convenience denoted model 2, provides only slightly improved results when the CMA-ES is run on it (i.e. quadratic error reduced from 0.0147 to 0.0145 on test 3). Contrary to the previous model 1, the CMA-ES does not seem to succeed in repeating results in more than one run on model 2. This may be due to the greatly increased di-
mension of the problem (number of parameters nearly
doubled) and to having maintained the same number
of function evaluations (100k). The feeble success with this model may be due to two factors: the first is the computational expensiveness due to the higher
dimension of the problem, the second is the insignifi-
cant mechanical advantage because the friction force
changes more within a half-cycle than it does from
half-cycle to half-cycle. The latter aspect will be
clearer with the introduction of model 3.
An analytical solution for the mechanical model shown in Fig. 1 with the friction law sketched in Fig. 7 has just been given in (Athanasiou and Oliveto, 2011). This third model helps to shed some light on the low efficiency of model 2. While in model 3 the fric-
tion force varies linearly over a half-cycle, in model
2 the friction force is constant over the same half-
cycle. It can be expected therefore that model 3 can
fine tune the friction force much better than model 2
and for that matter much better than model 1, where
the friction force is of constant amplitude. All this
is achieved at the expense of only one additional parameter, the slope $k_d$ in Fig. 7. The application of the (18,36)-CMA-ES with Model 3 to Test 3 converges 4 times out of 10 to a solution with quadratic error $e^2 = 0.0139$, which is an extra improvement of more than 5% on the final fitness.
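To make the difference between the three friction descriptions concrete, the sketch below writes each one as a function of the displacement within a half-cycle. It is only an illustrative reading of the models as described in the text: the constant force of model 1, one constant value per half-cycle for model 2, and a linear variation with slope k_d for model 3; the exact parametrisation adopted in (Athanasiou and Oliveto, 2011) may differ.

```python
def friction_model1(fd):
    """Model 1: a single constant friction force for the whole motion."""
    return lambda u: fd

def friction_model2(fd_per_half_cycle, j):
    """Model 2: constant within each half-cycle, with a separate value fd_j for half-cycle j."""
    return lambda u: fd_per_half_cycle[j]

def friction_model3(fd_start, kd, u_start):
    """Model 3: varies linearly with displacement over a half-cycle, with slope kd."""
    return lambda u: fd_start + kd * (u - u_start)
```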
5 CONCLUSIONS
A study of ESs for the identification of base isolation
systems in buildings designed for earthquake action
has been presented. The results clearly show that ESs
are highly effective for the optimisation of the test
problem defined in previous work for methodology
validation. All the considered ESs perform at least
as well as previous methods and the quality of so-
lutions improves with more sophisticated adaptation
strategies. The CMA-ES considerably outperforms previous algorithms by several orders of magnitude.
Having established the good performance of ESs
for the system identification, the same ESs are applied
to the real data recorded on the Solarino building in
July 2004 with the model used previously in litera-
ture. Although the CMA-ES converges to more pre-
cise solutions, these do not allow the sought system
parameter values to be established with the highest
confidence level. Two possible causes have been con-
sidered: the presence of noise in the recorded data
(i.e. the function to be optimised) and the adequacy of the model to properly simulate the system response.
To investigate the latter problem two new mechan-
ical models with higher number of parameters have
been developed. Both models, in conjunction with the application of the CMA-ES, enable improved results to be obtained. Model 3 allows fine tuning of the friction damping
and provides a 5% improvement in the overall solu-
tion quality on Test 3.
Thus, in this paper ESs are shown to be a very
powerful tool for the dynamic identification of struc-
tural systems. In particular, the CMA-ES combines
application simplicity with convergence reliability. In
view of the effectiveness shown in the applications
considered in the present work, the authors believe
that CMA-ES could be used advantageously with
more complex models and various excitation sources
including environmental ones. Presently, improved models are under development as a replacement for the bi-linear description of rubber isolators.
REFERENCES
Athanasiou, A. and Oliveto, G. (2011). Modelling hybrid
base isolation systems for free vibration simulations.
In Proc. of 8CUEE, pages 1293–1302, Tokyo, Japan.
Beyer, H.-G. and Schwefel, H.-P. (2002). Evolution strategies, a comprehensive introduction. Natural Computing, 1:3–52.
Hansen, N. and Ostermeier, A. (2001). Completely deran-
domized self-adaptation in evolution strategies. Evo-
lutionary Computation, 9(2):159–195.
Kern, S., Müller, S. D., Hansen, N., Büche, D., Ocenasek, J., and Koumoutsakos, P. (2003). Learning probability distributions in continuous evolutionary algorithms - a comparative review. Natural Computing, 3:77–112.
Oliveto, N. D., Scalia, G., and Oliveto, G. (2010). Time
domain identification of hybrid base isolation systems
using free vibration tests. Earthquake engineering and
structural dynamics, 39(9):1015–1038.
Schwefel, H. P., editor (1993). Evolution and Optimum
Seeking: The Sixth Generation. Wiley & Sons, NY,
USA.