
REFERENCES
Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D. C., Rangapuram, S., Salinas, D., Schulz, J., et al. (2020). GluonTS: Probabilistic and neural time series modeling in Python. The Journal of Machine Learning Research, 21(1):4629–4634.
Alsharef, A., Aggarwal, K., Sonia, Kumar, M., and Mishra, A. (2022). Review of ML and AutoML solutions to forecast time-series data. Archives of Computational Methods in Engineering, 29(7):5297–5311.
Bai, S., Kolter, J. Z., and Koltun, V. (2018). An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, Y., Maddix, D., Turkmen, C., Gasthaus, J., Bohlke-Schneider, M., Salinas, D., Stella, L., et al. (2022). Deep learning for time series forecasting: Tutorial and literature survey. ACM Computing Surveys, 55(6):1–36.
Breiman, L. (2001). Random forests. Machine Learning, 45:5–32.
Chen, T. and Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794.
Herbold, S. (2020). Autorank: A Python package for automated ranking of classifiers. Journal of Open Source Software, 5(48):2173.
Herzen, J., Lässig, F., Piazzetta, S. G., Neuer, T., Tafti, L., Raille, G., Van Pottelbergh, T., Pasieka, M., Skrodzki, A., Huguenin, N., et al. (2022). Darts: User-friendly modern machine learning for time series. The Journal of Machine Learning Research, 23(1):5442–5447.
Hyndman, R. J. and Athanasopoulos, G. (2018). Forecasting: Principles and Practice. OTexts.
Kaya, G. O., Sahin, M., and Demirel, O. F. (2020). Intermittent demand forecasting: A guideline for method selection. Sādhanā, 45:1–7.
Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30.
Killick, R., Fearnhead, P., and Eckley, I. A. (2012). Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500):1590–1598.
Liboschik, T., Fokianos, K., and Fried, R. (2017). tscount: An R package for analysis of count time series following generalized linear models. Journal of Statistical Software, 82:1–51.
Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748–1764.
Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2022). M5 accuracy competition: Results, findings, and conclusions. International Journal of Forecasting, 38(4):1346–1364.
O’Leary, C., Toosi, F. G., and Lynch, C. (2023). A review of AutoML software tools for time series forecasting and anomaly detection. In ICAART (3), pages 421–433.
Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. (2018). CatBoost: Unbiased boosting with categorical features. Advances in Neural Information Processing Systems, 31.
Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. (2020). DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191.
Syntetos, A. A. and Boylan, J. E. (2005). The accuracy of intermittent demand estimates. International Journal of Forecasting, 21(2):303–314.
Waring, J., Lindvall, C., and Umeton, R. (2020). Automated machine learning: Review of the state-of-the-art and opportunities for healthcare. Artificial Intelligence in Medicine, 104:101822.
Sales Forecasting for Pricing Strategies Based on Time Series and Learning Techniques