question its use for more general problem feature detection or future algorithm selection. Previous work has suggested that a normalization procedure should be applied to the function values before ELA calculation (Prager and Trautmann, 2023a), but this merely shifts the question to, e.g., logarithmic transformations of the function value.
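To illustrate the point, consider a minimal sketch assuming a NumPy/SciPy setup; the sample generator and the feature choice are illustrative and not taken from the cited work. A distribution-based, ELA-style feature such as the skewness of the sampled function values is unchanged by a linear min-max normalization, but it does change under a logarithmic transformation of the same values:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)

# Illustrative design of experiments on a simple test function
X = rng.uniform(-5, 5, size=(500, 2))
y = np.sum(X**2, axis=1) + 1.0  # shifted sphere, strictly positive values

# Linear min-max normalization leaves distribution-shape features unchanged
y_norm = (y - y.min()) / (y.max() - y.min())

# A logarithmic transformation of the function values changes them
y_log = np.log(y)

print(f"skewness raw:        {skew(y):.3f}")
print(f"skewness normalized: {skew(y_norm):.3f}")  # equal to the raw value
print(f"skewness log-scaled: {skew(y_log):.3f}")   # different
```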
An overarching question we identify here is how robust, in practice, the intuitive link is between low-level landscape features, such as ELA, and the high-level properties they aim to capture. Many studies using ELA are rather limited in scope: while they show strong performance within benchmarking suites, their generalizability to other setups appears rather poor (Vermetten et al., 2023; Kostovska et al., 2022). More research into the link between high-level landscape properties, ELA features and algorithm behaviour is required to better understand how we can move towards more generalizable results in automated algorithm selection studies.
With the introduction of new alternatives to ELA, such as DoE2Vec and Deep-ELA (Seiler et al., 2024), the question of whether low-level features should indeed be invariant to search-space transformations becomes even more relevant. While we observe that DoE2Vec is still impacted by most transformations, these features rely on trained neural networks, so their training data could be augmented to include, for example, different transformations of the used samples, which should result in more stable features (as sketched below). However, it is not certain that the corresponding invariances are present in the algorithms themselves; if this is not accounted for in the feature space, information relevant to algorithm behaviour may be lost. The goals of landscape features are often inherently linked to algorithmic behaviour, and this should not be forgotten when designing or generating new sets of features.
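As an illustration of what such augmentation could look like, the following is a minimal sketch assuming a generic NumPy-based pipeline; the function augment_doe_sample and the chosen transformation set (random rotations, translations and linear rescalings of the function values) are hypothetical and not taken from the DoE2Vec or Deep-ELA implementations.

```python
import numpy as np

def augment_doe_sample(X, y, rng, n_variants=4):
    """Generate transformed copies of a DoE sample (X, y).

    Hypothetical augmentation step: each variant applies a random
    orthogonal rotation and translation to the search points and a
    random positive scaling plus offset to the function values.
    """
    dim = X.shape[1]
    variants = []
    for _ in range(n_variants):
        # Random rotation matrix via QR decomposition of a Gaussian matrix
        Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
        shift = rng.uniform(-1.0, 1.0, size=dim)
        X_t = X @ Q + shift
        # Random linear rescaling of the function values
        y_t = rng.uniform(0.5, 2.0) * y + rng.uniform(-1.0, 1.0)
        variants.append((X_t, y_t))
    return variants

# Usage: augment one sample before feeding it to the feature model
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(256, 5))
y = np.sum(X**2, axis=1)
augmented = augment_doe_sample(X, y, rng)
print(len(augmented), augmented[0][0].shape)
```

A feature model trained on such variants alongside the original sample would be pushed towards representations that are stable under exactly these transformations; whether that stability is desirable depends, as argued above, on whether the targeted algorithms share the same invariances.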
REFERENCES
Ahrari, A., Elsayed, S., Sarker, R., Essam, D., and Coello, C. A. C. (2022). Problem definition and evaluation criteria for the CEC'2022 competition on dynamic multimodal optimization. EvOpt Report 2022001.
Anonymous (2024). Impact of spatial transformations on landscape features of CEC 2022 basic benchmark problems.
Bäck, T. H., Kononova, A. V., van Stein, B., Wang, H., Antonov, K. A., Kalkreuth, R. T., de Nobel, J., Vermetten, D., de Winter, R., and Ye, F. (2023). Evolutionary algorithms for parameter optimization—thirty years later. Evolutionary Computation, 31(2):81–122.
Bartz-Beielstein, T., Doerr, C., van den Berg, D., Bossek, J., Chandrasekaran, S., Eftimov, T., Fischbach, A., Kerschke, P., Cava, W. L., Lopez-Ibanez, M., Malan, K. M., Moore, J. H., Naujoks, B., Orzechowski, P., Volz, V., Wagner, M., and Weise, T. (2020). Benchmarking in optimization: Best practice and open issues.
Bartz-Beielstein, T., Lasarczyk, C., and Preuss, M. (2010). The sequential parameter optimization toolbox. In Experimental methods for the analysis of optimization algorithms, pages 337–362. Springer.
Beyer, H.-G. and Schwefel, H.-P. (2002). Evolution
strategies–a comprehensive introduction. Natural
computing, 1:3–52.
de Nobel, J., Ye, F., Vermetten, D., Wang, H., Doerr, C., and Bäck, T. (2023). IOHexperimenter: Benchmarking platform for iterative optimization heuristics. Evolutionary Computation, pages 1–6.
Eglajs, V. and Audze, P. (1977). New approach to the design of multifactor experiments. Problems of Dynamics and Strengths, 35(1):104–107.
Floudas, C. A., Pardalos, P. M., Adjiman, C., Esposito, W. R., Gümüs, Z. H., Harding, S. T., Klepeis, J. L., Meyer, C. A., and Schweiger, C. A. (2013). Handbook of test problems in local and global optimization, volume 33. Springer Science & Business Media.
Hall, B. C. (2013). Lie groups, Lie algebras, and representations. Springer.
Hansen, N., Auger, A., Ros, R., Mersmann, O., Tušar, T., and Brockhoff, D. (2021). COCO: A platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software, 36(1):114–144.
Hansen, N., Finck, S., Ros, R., and Auger, A. (2009). Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions. Technical Report RR-6829, INRIA.
Hansen, N., Ros, R., Mauny, N., Schoenauer, M., and Auger, A. (2011). Impacts of invariance in search: When CMA-ES and PSO face ill-conditioned and non-separable problems. Applied Soft Computing, 11(8):5755–5769.
He, J., Reeves, C., Witt, C., and Yao, X. (2007). A note on problem difficulty measures in black-box optimization: Classification, realizations and predictability. Evolutionary Computation, 15(4):435–443.
Janković, A. and Doerr, C. (2019). Adaptive landscape analysis. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 2032–2035.
Kantorovich, L. V. (1960). Mathematical methods of organizing and planning production. Management Science, 6(4):366–422.
Kerschke, P. and Trautmann, H. (2019). Comprehensive feature-based landscape analysis of continuous and constrained optimization problems using the R-package flacco. In Bauer, N., Ickstadt, K., Lübke, K., Szepannek, G., Trautmann, H., and Vichi, M., editors, Applications in Statistical Computing – From Music Data Analysis to Industrial Quality Improvement, Studies in Classification, Data Analysis, and Knowledge Organization, pages 93–123. Springer.
Kolmogorov, A. N. (1933). Sulla determinazione empirica