An important aspect to highlight is that specialized hardware such as GPUs or TPUs is not required to carry out the experiments. Although such hardware can significantly speed up neural network training, our approach has been designed to be efficient and accessible, allowing experiments to be performed effectively using only a CPU. This is possible because the search space we have defined contains only 400 possible architectures. This restriction not only reduces the computational complexity but also keeps the optimization process manageable even with limited hardware resources. Furthermore, it improves reproducibility and scalability, since experiments can be run on common, affordable hardware. This facilitates the adoption of our approach in academic and research institutions where computational resources may be limited.
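For illustration, the sketch below enumerates a search space of exactly this size, under the assumption that each architecture is encoded by two integer genes, each in the range 1 to 20 (the range our variables use); the exact encoding in our code may differ.

```python
from itertools import product

# Assumed encoding: two integer genes, each in [1, 20],
# giving 20 * 20 = 400 candidate architectures.
GENE_VALUES = range(1, 21)

search_space = list(product(GENE_VALUES, repeat=2))
assert len(search_space) == 400

# A space this small can even be swept exhaustively on a CPU;
# train_and_evaluate stands in for the per-candidate training step.
for n_layers, n_neurons in search_space:
    pass  # fitness = train_and_evaluate(n_layers, n_neurons)
```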
On the other hand, our work has several limitations. First, our code has been developed in MATLAB, which, although widely used in research, requires a commercial license and is therefore not freely accessible. For this reason, future work will translate the code to Python using libraries such as PyTorch or TensorFlow. This will allow the code to run in open-source environments, expanding its accessibility to a broader community of students, researchers, and professionals.
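As an illustration of this direction, a candidate architecture from our search space might be expressed in PyTorch as follows; the layer types and gene semantics shown here are assumptions made for the sketch, not a description of the existing MATLAB implementation.

```python
import torch.nn as nn

def build_candidate(n_layers: int, n_neurons: int,
                    n_inputs: int = 1) -> nn.Sequential:
    """Build a fully connected forecaster from two integer genes.

    The gene semantics (depth and width) are assumed for illustration;
    the MATLAB code may encode architectures differently.
    """
    layers, width = [], n_inputs
    for _ in range(n_layers):
        layers += [nn.Linear(width, n_neurons), nn.ReLU()]
        width = n_neurons
    layers.append(nn.Linear(width, 1))  # one-step-ahead forecast
    return nn.Sequential(*layers)
```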
In addition, it is important to note that our current approach is designed specifically for time series. To achieve a fully modular approach adaptable to a wide range of applications, we propose the development of a modular NAS. This new approach would allow users to select the specific task they want to tackle, such as image restoration, image classification, or natural language processing.
By adopting a modular approach, users could exchange datasets and search spaces in a flexible and customizable manner. This would not only increase the versatility of our approach but also make it applicable to a variety of domains and specific needs.
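A minimal sketch of such a modular interface could pair each task with its dataset loader and search space, as shown below; the task names, fields, and loader are hypothetical and intended only to illustrate the idea.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class TaskConfig:
    """Couples a task with its dataset loader and its search space."""
    load_data: Callable[[], Any]   # returns the training data for the task
    search_space: Sequence[Any]    # candidate architecture encodings

def _load_time_series():
    # Placeholder loader; a real implementation would read the dataset.
    raise NotImplementedError

# Hypothetical registry: the evolutionary loop only interacts with a
# TaskConfig, so new tasks are added by registering new entries here.
TASKS = {
    "time_series_forecasting": TaskConfig(
        load_data=_load_time_series,
        search_space=[(layers, units)
                      for layers in range(1, 21) for units in range(1, 21)],
    ),
}
```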
Finally, more efficient evaluation methods could be incorporated into our approach. For example, implementing a population memory (Xie et al., 2023; Liu et al., 2021) would prevent retraining of the same models. In the current state of our code, once the evolutionary algorithm converges it may keep retraining the same architectures until the specified number of generations is completed. This is a consequence of the limited size of the search space and can, in some cases, even result in overflows, where variable values exceed the defined range of 1 to 20.
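A minimal sketch of such a population memory, combined with a simple guard against the overflow just described, could look as follows; the genome is assumed to be a tuple of integer genes in [1, 20].

```python
fitness_memory = {}  # genome -> fitness; repeated genomes are not retrained

def clip_gene(value, low=1, high=20):
    """Keep a mutated gene inside the defined range, avoiding overflow."""
    return max(low, min(high, value))

def evaluate(genome, train_fn):
    """Train each distinct genome only once; repeats hit the memory."""
    genome = tuple(clip_gene(g) for g in genome)
    if genome not in fitness_memory:
        fitness_memory[genome] = train_fn(genome)  # expensive training step
    return fitness_memory[genome]
```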
ACKNOWLEDGEMENTS
The authors would like to acknowledge the financial
support provided by the National Council of Human-
ities, Sciences and Technologies (CONAHCyT).
REFERENCES
Abhishek, K., Singh, M. P., Ghosh, S., and Anand, A.
(2012). Weather forecasting model using artificial
neural network. Procedia Technology, 4:311–318.
Ahn, C. W. and Ramakrishna, R. S. (2003). Elitism-based
compact genetic algorithms. IEEE Transactions on
Evolutionary Computation, 7(4):367–385.
Box, G. E., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M.
(2015). Time series analysis: forecasting and control.
John Wiley & Sons.
Chong, E., Han, C., and Park, F. C. (2017). Deep learn-
ing networks for stock market analysis and prediction:
Methodology, data representations, and case studies.
Expert Systems with Applications, 83:187–205.
De Jong, K. (2017). Evolutionary computation: a unified
approach. In Proceedings of the Genetic and Evolu-
tionary Computation Conference Companion, pages
373–388.
Eiben, A. E. and Smith, J. E. (2015). Introduction to evolu-
tionary computing. Springer.
Elliott, G. and Timmermann, A. (2016). Forecasting in eco-
nomics and finance. Annual Review of Economics,
8:81–110.
Elsken, T., Metzen, J. H., and Hutter, F. (2019). Neural
architecture search: A survey. Journal of Machine
Learning Research, 20(55):1–21.
Fadlalla, A. and Lin, C.-H. (2001). An analysis of the ap-
plications of neural networks in finance. Interfaces,
31(4):112–122.
Fogel, D. B. (1995). Evolutionary computation: Toward a new philosophy of machine intelligence. IEEE Press.
Gneiting, T. and Raftery, A. E. (2005). Weather forecast-
ing with ensemble methods. Science, 310(5746):248–
249.
Hamilton, J. D. (2020). Time series analysis. Princeton University Press.
Hyndman, R. J. and Athanasopoulos, G. (2018). Forecast-
ing: principles and practice. OTexts.
Li, L. and Talwalkar, A. (2020). Random search and reproducibility for neural architecture search. In Uncertainty in Artificial Intelligence, pages 367–377. PMLR.
Liang, Z. and Sun, Y. (2024). Evolutionary neural architec-
ture search for multivariate time series forecasting. In
Asian Conference on Machine Learning, pages 771–
786. PMLR.
Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748–1764.