Figure 8: Comparison of sequential masking and adaptive
hill climbing optimization procedures.
6.4 Benchmarks and Conclusions
In conclusion, our approach has proved effective in tuning a complex flight simulation model, finding optimal values for 50 of its parameters. The entire process requires less than 2 days of machine time on a single desktop computer, with just a few hours actually dedicated to finding those values and most of the time devoted to generating the database for parameter ranking. The main benchmark against which this result must be compared is manual tuning, which is still the state of the art in industrial applications: Our_company's experienced engineers would require from 10 to 20 days to accomplish the same result. Concerning previous attempts at automatic tuning, little work has been done on tuning industrial-level computer simulators and, to the best of our knowledge, none in the area of flight simulation. The closest related work is in a medical context (Vidal et al., 2013): that paper presents an evolutionary strategy for tuning, but the approach is applied only to a lower-dimensional problem with just 15 parameters.
Thanks to an integrated approach combining screening and optimization (tightly coupled especially in the sequential masking algorithm), our methodology significantly expands the range of application of automatic parameter-tuning techniques. When comparing with other attempts at automatic tuning, it is important to notice that combining screening and optimization is crucial not only to achieve fast convergence to a very low simulation error, but also to avoid an issue anticipated in previous sections of the paper: the introduction of peculiar side effects that can make simulations look unrealistic to a human eye (such as odd small oscillations and vibrations that are difficult to control). The reason for such side effects is that parameters with a low impact on the performance metrics, and thus on the global simulation error, are free to deviate randomly from their default values, because there is no selective pressure capable of limiting their erratic wandering. Our methodology solves the issue by restricting tuning to the set of parameters with a direct impact on the performance metrics, so that, during optimization, all non-fixed parameters are directed towards their optimal values instead of being free to roam around (see the sketch below).
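As an illustration, the following minimal Python sketch (not the code used in this work; the names defaults, impactful, bounds and simulation_error are hypothetical) tunes only the parameters selected by the screening step with a simple hill climber, while every masked, low-impact parameter stays pinned at its default value:

import random

# Hypothetical sketch, not the paper's actual code: low-impact parameters
# stay fixed at their defaults, so they cannot drift and introduce
# unrealistic artefacts; only the impactful subset is tuned.
def hill_climb_masked(defaults, impactful, bounds, simulation_error,
                      iters=1000, step=0.05, seed=0):
    """Hill climbing restricted to the parameters selected by screening.

    defaults         : dict name -> default value (all parameters)
    impactful        : iterable of parameter names kept by the screening step
    bounds           : dict name -> (low, high) admissible range
    simulation_error : callable(dict) -> float, global simulation error
    """
    rng = random.Random(seed)
    tuned = sorted(impactful)
    current = dict(defaults)              # masked parameters never change
    best_err = simulation_error(current)

    for _ in range(iters):
        name = rng.choice(tuned)          # perturb one tuned parameter at a time
        low, high = bounds[name]
        candidate = dict(current)
        candidate[name] = min(high, max(low,
            current[name] + rng.uniform(-step, step) * (high - low)))

        err = simulation_error(candidate)
        if err < best_err:                # accept only improving moves
            current, best_err = candidate, err

    return current, best_err

In the actual methodology the optimizer and the masking set are tightly coupled (sequential masking), but the sketch captures why masked parameters cannot wander away from their default values.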
The proposed methodology is therefore the first real alternative to manual tuning, allowing an impressive speed-up of the tuning process while preserving high-quality results. Having applied the machine learning algorithms without exploiting any prior domain knowledge, we also believe the methodology is fully general; as future research, it would therefore be interesting to apply the proposed technique to other application domains.
REFERENCES
Fisher R.A., 1935, The Design of Experiments. Oliver & Boyd, Oxford, England. xi 251 pp.
Rosenblatt F., 1958, The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65: 386–408.
Nelder J., Wedderburn R., 1972, Generalized Linear Models, Journal of the Royal Statistical Society, Series A (General) 135 (3): 370–384.
Rumelhart D.E., Hinton G.E., Williams R.J., 1986, Learning representations by back-propagating errors. Nature 323 (6088): 533–536. doi:10.1038/323533a0.
Cybenko G., 1989, Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals, and Systems 2 (4): 303–314.
Le Cessie S., Van Houwelingen J.C., 1992, Ridge estimators in logistic regression. Applied Statistics.
Bettonvil B., Kleijnen J.P.C., 1997, Searching for important factors in simulation models with many factors: sequential bifurcation, European Journal of Operational Research 96 (1): 180–194.
Haykin S., 1998, Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice Hall. ISBN 0-13-273350-1.
Harrell F., 2001, Regression Modeling Strategies, Springer-Verlag.
Kern S., Müller S.D., Hansen N., Büche D., Ocenasek J., Koumoutsakos P., 2004, Learning probability distributions in continuous evolutionary algorithms – a comparative review, Natural Computing 3 (1): 77–112.
Bishop C., 2006, Pattern Recognition and Machine Learning, Springer Science+Business Media, LLC, pp