into an RMSE series. PyTorch, on the other hand, is the most time-efficient DL framework among the three, enabling NP-Free to deliver near-instant processing and RMSE generation. However, like TensorFlow-Keras, it produces unstable RMSE series that fail to preserve the characteristics of the original time series.
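For context, NP-Free represents a time series as a series of root-mean-square errors between the original values and a model's predictions. The sketch below is a minimal, framework-free illustration of that idea in plain Python (the window size and data are hypothetical, and a simple paired-series RMSE stands in for an RNN's per-window prediction error):

```python
import math

def rmse_series(actual, predicted, window=3):
    """Compute one RMSE value per sliding window over two paired series."""
    values = []
    for i in range(len(actual) - window + 1):
        errors = [(a - p) ** 2
                  for a, p in zip(actual[i:i + window], predicted[i:i + window])]
        values.append(math.sqrt(sum(errors) / window))
    return values

# Accurate predictions give a flat RMSE series; erratic predictions produce
# the fluctuating, "unstable" RMSE series described above.
flat = rmse_series([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0], window=2)
```

A representation is only useful downstream (e.g., for k-means classification) if such an RMSE series tracks the shape of the original series rather than fluctuating arbitrarily.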
Deeplearning4j proved the most suitable DL framework of the three. Although it supports only LSTM rather than the other RNN variants, the DL4J-LSTM combination preserves the characteristics of the original time series in a time-efficient manner, leading to satisfactory classification accuracy, especially when early stopping is enabled. We therefore recommend DL4J-LSTM with early stopping: it best preserves the characteristics of the original time series, processes them time-efficiently, and enables the k-means algorithm to achieve high classification accuracy. Our study offers valuable guidelines for future research on real-time time series representation using deep learning.
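The early-stopping behaviour recommended above can be sketched framework-agnostically: training halts once the validation loss has stopped improving for a given number of consecutive epochs, saving the remaining training time. The function and loss values below are hypothetical, a sketch of the general patience-based mechanism rather than any framework's actual API:

```python
def stopping_epoch(val_losses, patience=2):
    """Return the epoch index at which patience-based early stopping halts.

    Training stops once the validation loss has failed to improve on the
    best value seen so far for `patience` consecutive epochs.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1  # never triggered: train to the last epoch

# Loss improves, then plateaus: stopping early skips the remaining epochs.
halt = stopping_epoch([0.9, 0.7, 0.72, 0.71, 0.5], patience=2)
```

This is why early stopping improves NP-Free's time efficiency without hurting representation quality: epochs past the plateau add cost but little improvement.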
In future work, we plan to improve the time efficiency of NP-Free by, for example, reducing the number of hidden units and training epochs. We also intend to release the source code for all the implementations studied in this paper to advance research in this area.
ACKNOWLEDGEMENTS
The authors would like to thank the anonymous editors and reviewers for their valuable comments and suggestions on this article. This work received funding from the Research Council of Norway through the SFI Norwegian Centre for Cybersecurity in Critical Sectors (NORCICS), project no. 310105.
Comparative Analysis of Real-Time Time Series Representation Across RNNs, Deep Learning Frameworks, and Early Stopping