6 CONCLUSION
We performed a fitness landscape analysis of the NATS-Bench topology search space, analyzing and visualizing its density of states, fitness distance correlation (FDC) and fitness distance rank correlation, local optima, basins of attraction, neutral networks, and autocorrelation. Our analyses indicated that the difficulty of searching the NATS-Bench topology search space for architectures that perform well on the CIFAR-10, CIFAR-100, and ImageNet16-120 datasets may correspond to the difficulty of the underlying image classification problems themselves. We also demonstrated the importance of using multiple metrics, and of nuance, in the interpretation of an NAS fitness landscape.
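To make these measurements concrete, the sketch below computes the classic FDC of Jones and Forrest (1995) and a Spearman rank variant over a sample of architectures. It is a minimal sketch, not our exact procedure: the six-slot, five-operation encoding does match the NATS-Bench topology space, but the `fitness` table holds random placeholder values rather than benchmark accuracies, and the one-slot `edit_distance` is merely one common choice of genotype distance.

```python
# Minimal FDC sketch on a placeholder tabular space. The encoding mirrors
# the NATS-Bench topology space (6 edges, 5 candidate operations); the
# fitness values are random stand-ins for benchmark accuracies.
import random
from itertools import product

import numpy as np
from scipy.stats import pearsonr, spearmanr

OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]
ARCHS = list(product(OPS, repeat=6))           # 5^6 = 15,625 cell encodings
fitness = {a: random.random() for a in ARCHS}  # hypothetical accuracy lookup

def edit_distance(a, b):
    """Number of operation slots in which two encodings differ."""
    return sum(x != y for x, y in zip(a, b))

best = max(fitness, key=fitness.get)           # global optimum of the table
sample = random.sample(ARCHS, 1000)
f = np.array([fitness[a] for a in sample])
d = np.array([edit_distance(a, best) for a in sample])

fdc, _ = pearsonr(f, d)      # fitness distance correlation (Pearson)
fdrc, _ = spearmanr(f, d)    # rank-based variant
print(f"FDC = {fdc:.3f}, rank FDC = {fdrc:.3f}")
```

On real benchmark data, a strongly negative correlation between fitness and distance to the optimum would indicate a landscape in which fitness improves toward the optimum, i.e., an easier problem for distance-guided search.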
While these metrics help to characterize the fitness landscape, they are ultimately approximations. Future work may include comparing different search algorithms on NATS-Bench to discern how well these metrics describe its true fitness landscape; a minimal sketch of such a comparison is given below. As our current understanding of NAS search spaces is limited, future work may also include fitness landscape analyses of other NAS search spaces, such as non-tabular spaces (Siems et al., 2020) or spaces for problems other than image classification (Klyuchnikov et al., 2022; Mehrotra et al., 2020). Another possible direction is to investigate which properties of the architectures themselves give rise to these landscape features.
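As one illustration of what such a comparison might look like, the hedged sketch below pits uniform random search against best-improvement local search under an equal query budget. All names here (`OPS`, `ARCHS`, `fitness`, `neighbors`) are hypothetical placeholders: the encoding again mirrors the NATS-Bench topology space, but the fitness table is random noise, so the printed numbers say nothing about the real benchmark.

```python
# Hedged sketch of an algorithm comparison on a placeholder tabular space:
# random search versus best-improvement local search with restarts, under
# an equal (approximate) query budget.
import random
from itertools import product

OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]
ARCHS = list(product(OPS, repeat=6))
fitness = {a: random.random() for a in ARCHS}  # hypothetical accuracy lookup

def neighbors(arch):
    """All encodings differing from `arch` in exactly one operation slot."""
    for i, cur in enumerate(arch):
        for op in OPS:
            if op != cur:
                yield arch[:i] + (op,) + arch[i + 1:]

def random_search(budget):
    """Best fitness among `budget` architectures sampled uniformly."""
    return max(fitness[a] for a in random.sample(ARCHS, budget))

def local_search(budget):
    """Best-improvement hill climbing, restarting until the budget is spent."""
    queries, best = 0, 0.0
    while queries < budget:
        cur = random.choice(ARCHS)
        queries += 1
        while True:
            nbrs = list(neighbors(cur))
            queries += len(nbrs)
            step = max(nbrs, key=fitness.get)
            if fitness[step] <= fitness[cur]:
                break                  # cur is a local optimum
            cur = step
        best = max(best, fitness[cur])
    return best

print(f"random search: {random_search(500):.3f}")
print(f"local search:  {local_search(500):.3f}")
```

Substituting real benchmark accuracies for the placeholder table turns this into the comparison described above; the margin by which local search beats random search would then reflect how exploitable the landscape's structure actually is.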
REFERENCES
Abdel-Hamid, O., Mohamed, A.-r., Jiang, H., Deng, L., Penn, G., and Yu, D. (2014). Convolutional neural networks for speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(10):1533–1545.
Bacardit, J., Brownlee, A. E., Cagnoni, S., Iacca, G., McCall, J., and Walker, D. (2022). The intersection of evolutionary computation and explainable AI. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 1757–1762.
Chen, W., Gong, X., and Wang, Z. (2021). Neural architecture search on ImageNet in four GPU hours: A theoretically inspired perspective. arXiv preprint arXiv:2102.11535.
Chitty-Venkata, K. T., Emani, M., Vishwanath, V., and So-
mani, A. K. (2023). Neural architecture search bench-
marks: Insights and survey. IEEE Access, 11:25217–
25236.
Chrabaszcz, P., Loshchilov, I., and Hutter, F. (2017). A downsampled variant of ImageNet as an alternative to the CIFAR datasets. arXiv preprint arXiv:1707.08819.
Dong, X., Liu, L., Musial, K., and Gabrys, B. (2021). NATS-Bench: Benchmarking NAS algorithms for architecture topology and size. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7):3634–3646.
Dong, X. and Yang, Y. (2019). One-shot neural architecture
search via self-evaluated template network. In Pro-
ceedings of the IEEE/CVF International Conference
on Computer Vision, pages 3681–3690.
Dong, X. and Yang, Y. (2020). NAS-Bench-201: Extending the scope of reproducible neural architecture search. arXiv preprint arXiv:2001.00326.
Guo, Z., Zhang, X., Mu, H., Heng, W., Liu, Z., Wei, Y., and Sun, J. (2020). Single path one-shot neural architecture search with uniform sampling. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVI, pages 544–560. Springer.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Jones, T. and Forrest, S. (1995). Fitness distance correlation as a measure of problem difficulty for genetic algorithms. In ICGA, volume 95, pages 184–192.
Klyuchnikov, N., Trofimov, I., Artemova, E., Salnikov, M., Fedorov, M., Filippov, A., and Burnaev, E. (2022). NAS-Bench-NLP: Neural architecture search benchmark for natural language processing. IEEE Access, 10:45736–45747.
Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto, Toronto, Ontario.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
Li, L. and Talwalkar, A. (2020). Random search and reproducibility for neural architecture search. In Uncertainty in Artificial Intelligence, pages 367–377. PMLR.
Liu, H., Simonyan, K., and Yang, Y. (2018). DARTS: Differentiable architecture search. arXiv preprint arXiv:1806.09055.
Mehrotra, A., Ramos, A. G. C., Bhattacharya, S., Dudziak, Ł., Vipperla, R., Chau, T., Abdelfattah, M. S., Ishtiaq, S., and Lane, N. D. (2020). NAS-Bench-ASR: Reproducible neural architecture search for speech recognition. In International Conference on Learning Representations.
Mellor, J., Turner, J., Storkey, A., and Crowley, E. J. (2021).
Neural architecture search without training. In In-
ternational Conference on Machine Learning, pages
7588–7598. PMLR.
Merz, P. and Freisleben, B. (2000). Fitness landscape anal-
ysis and memetic algorithms for the quadratic assign-
ment problem. IEEE Transactions on Evolutionary
Computation, 4(4):337–352.
Nunes, M., Fraga, P. M., and Pappa, G. L. (2021). Fitness
landscape analysis of graph neural network architec-
ture search spaces. In Proceedings of the Genetic and
Evolutionary Computation Conference, pages 876–
884.