
Table 2: Performance of the best 5 models on the Fashion MNIST dataset.

Best Model   Accuracy   F1-Score   Memory Footprint (MB)   Inference Time
M1           0.895      0.850      28                      10:22:35
M2           0.906      0.887      25                      6:53:43
M3           0.914      0.890      21                      7:10:12
M4           0.946      0.902      25.7                    24:21:52
M5           0.961      0.941      22.3                    17:15:25
Table 3: Comparison of the performance of MOCA and baseline algorithms.

Algorithm       Accuracy / Memory Footprint (MB)
SH-EMOA         ≈ 0.92 / 15.3
MO-BOHB         ≈ 0.93 / 32.0
MS-EHVI         ≈ 0.90 / 9.5
MO-BANANAS-SH   ≈ 0.93 / 19.3
BULK & CUT      ≈ 0.94 / 15.3
Random Search   ≈ 0.92 / 38.1
MOCA            0.96 / 22.3
techniques into the MOCA algorithm could expedite
model convergence and boost performance.
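Because Table 3 reports two objectives per algorithm (accuracy to maximize, memory footprint in MB to minimize), Pareto dominance is the natural way to read the comparison. The following Python sketch is purely illustrative, not part of the MOCA implementation: it takes the approximate values reported in Table 3 and lists the non-dominated algorithms.

```python
# Pareto comparison of the Table 3 results: maximize accuracy, minimize memory (MB).
# Values are the approximate figures reported in Table 3; the two-objective
# reading (accuracy / memory footprint) follows the M5 row of Table 2.
results = {
    "SH-EMOA":       (0.92, 15.3),
    "MO-BOHB":       (0.93, 32.0),
    "MS-EHVI":       (0.90,  9.5),
    "MO-BANANAS-SH": (0.93, 19.3),
    "BULK & CUT":    (0.94, 15.3),
    "Random Search": (0.92, 38.1),
    "MOCA":          (0.96, 22.3),
}

def dominates(a, b):
    """True if a = (acc, mem) dominates b: no worse on both objectives, strictly better on one."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

pareto = [name for name, p in results.items()
          if not any(dominates(q, p) for other, q in results.items() if other != name)]
print(pareto)  # ['MS-EHVI', 'BULK & CUT', 'MOCA']
```

Under this reading, MS-EHVI, BULK & CUT, and MOCA are mutually non-dominated, with MOCA attaining the highest accuracy of the three.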