
Fenton, M., McDermott, J., Fagan, D., Forstenlechner, S., Hemberg, E., and O’Neill, M. (2017). PonyGE2: Grammatical evolution in Python. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO ’17. ACM.
Horváth, S., Klein, A., Richtárik, P., and Archambeau, C. (2021). Hyperparameter transfer learning with adaptive complexity. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS).
Jamieson, K. G. and Talwalkar, A. (2015). Non-stochastic best arm identification and hyperparameter optimization. CoRR, abs/1502.07943.
LeCun, Y., Denker, J., and Solla, S. (1989). Optimal brain damage. In Touretzky, D., editor, Advances in Neural Information Processing Systems, volume 2. Morgan-Kaufmann.
Lee, K. and Yim, J. (2022). Hyperparameter optimization with neural network pruning.
Lee, N., Ajanthan, T., and Torr, P. (2019). SNIP: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations.
Li, L., Jamieson, K. G., DeSalvo, G., Rostamizadeh, A., and Talwalkar, A. (2016). Efficient hyperparameter optimization and infinitely many armed bandits. CoRR, abs/1603.06560.
Mallik, N., Bergman, E., Hvarfner, C., Stoll, D., Janowski, M., Lindauer, M., Nardi, L., and Hutter, F. (2023). PriorBand: Practical hyperparameter optimization in the age of deep learning. In Advances in Neural Information Processing Systems.
Perrone, V., Shen, H., Seeger, M., Archambeau, C., and Jenatton, R. (2019). Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning. In Advances in Neural Information Processing Systems.
Ryan, C., Collins, J., and O’Neill, M. (1998). Grammatical evolution: Evolving programs for an arbitrary language. In Banzhaf, W., Poli, R., Schoenauer, M., and Fogarty, T. C., editors, Genetic Programming, pages 83–96, Berlin, Heidelberg. Springer Berlin Heidelberg.
Simonyan, K. and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations.
Strubell, E., Ganesh, A., and McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Vaidya, G., Ilg, L., Kshirsagar, M., Naredo, E., and Ryan, C. (2022). HyperEstimator: Evolving computationally efficient CNN models with grammatical evolution. In Proceedings of the 19th International Conference on Smart Business Technologies. SCITEPRESS - Science and Technology Publications.
Vaidya, G., Kshirsagar, M., and Ryan, C. (2023). Grammatical evolution-driven algorithm for efficient and automatic hyperparameter optimisation of neural networks. Algorithms, 16(7).
Wistuba, M., Schilling, N., and Schmidt-Thieme, L. (2015). Hyperparameter search space pruning – a new component for sequential model-based hyperparameter optimization. In Appice, A., Rodrigues, P. P., Santos Costa, V., Gama, J., Jorge, A., and Soares, C., editors, Machine Learning and Knowledge Discovery in Databases, pages 104–119, Cham. Springer International Publishing.
Yang, L. and Shami, A. (2020). On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing, 415:295–316.
Yu, T. and Zhu, H. (2020). Hyper-parameter optimization: A review of algorithms and applications.