• Transfer Learning: Exploring transfer-learning
techniques to adapt models trained on one set
of problem instances so that they handle others
effectively could contribute to a general-purpose AAS.
• Interpretable AI Models: Enhancing the inter-
pretability of the deep learning models used in AAS,
to provide insight into why certain algorithms are
preferred for specific instances, could help refine
the models further and build user trust.
• Benchmarks and Datasets: Applying our frame-
work to other domains, possibly including opti-
mization problems whose domain metrics m̂ in-
volve the values of the objective function, could
further demonstrate its generality.
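The transfer-learning direction above can be sketched in code. The following is a minimal illustration only, not the paper's implementation: it assumes a pretrained feature-free selector whose encoder is frozen (here stood in for by a fixed random projection) while a new selection head is fine-tuned on labeled instances from a new domain. All names, shapes, and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: raw instance encoding, embedding, portfolio of algorithms.
D_IN, D_EMB, N_ALGOS = 32, 16, 3

# Frozen "backbone": stands in for the pretrained deep encoder.
W_frozen = rng.normal(size=(D_IN, D_EMB))

def encode(x):
    """Frozen embedding of a raw instance encoding (no gradient taken here)."""
    return np.tanh(x @ W_frozen)

# New selection head for the target domain, trained from scratch.
W_head = np.zeros((D_EMB, N_ALGOS))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def finetune(X, y, lr=0.5, steps=200):
    """Cross-entropy gradient descent on the head only; backbone stays fixed."""
    global W_head
    H = encode(X)                    # backbone outputs, computed once
    Y = np.eye(N_ALGOS)[y]           # one-hot best-algorithm labels
    for _ in range(steps):
        P = softmax(H @ W_head)
        W_head -= lr * H.T @ (P - Y) / len(X)
    return -np.mean(np.log(P[np.arange(len(y)), y]))

# Tiny synthetic "new domain": labels depend on the frozen embedding,
# so the pretrained features transfer to the new task.
X = rng.normal(size=(200, D_IN))
y = encode(X)[:, :N_ALGOS].argmax(axis=1)
final_loss = finetune(X, y)
accuracy = (softmax(encode(X) @ W_head).argmax(axis=1) == y).mean()
```

Because only the small head is updated, adaptation needs far fewer labeled instances than training the full model from scratch, which is the practical appeal of this direction for a general-purpose AAS.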
In conclusion, the research presented in this paper sets
a new benchmark in the field of feature-free AAS and
opens up numerous possibilities for the evolution of
more intelligent and autonomous algorithm selection
systems. Our future efforts will focus on expanding
the capabilities of our framework and exploring these
promising directions to further enhance the field of
algorithm selection.
ACKNOWLEDGEMENTS
The first, second, third, fourth, and sixth authors are
supported in part by NSF AI Institute award 2112533.
KDIR 2024 - 16th International Conference on Knowledge Discovery and Information Retrieval