
Ahmed, M., Seraj, R., and Islam, S. M. S. (2020). The k-means algorithm: A comprehensive survey and performance evaluation. Electronics, 9(8):1295.
Codecademy Team (2022). Normalization. https://www.codecademy.com/article/normalization. [Online; accessed 14-September-2023].
Dau, H. A., Bagnall, A., Kamgar, K., Yeh, C.-C. M., Zhu, Y., Gharghabi, S., Ratanamahatana, C. A., and Keogh, E. (2019). The UCR time series archive. IEEE/CAA Journal of Automatica Sinica, 6(6):1293–1305.
Deeplearning4j (2023). Introduction to core Deeplearning4j concepts. https://deeplearning4j.konduit.ai/. [Online; accessed 14-September-2023].
Dhillon, I. S., Guan, Y., and Kulis, B. (2004). Kernel k-means: Spectral clustering and normalized cuts. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 551–556.
Gupta, M. K. and Chandra, P. (2020). An empirical evaluation of k-means clustering algorithm using different distance/similarity metrics. In Proceedings of ICETIT 2019: Emerging Trends in Information Technology, pages 884–892. Springer.
Höppner, F. (2014). Less is more: Similarity of time series under linear transformations. In Proceedings of the 2014 SIAM International Conference on Data Mining, pages 560–568. SIAM.
Ikotun, A. M., Ezugwu, A. E., Abualigah, L., Abuhaija, B., and Heming, J. (2023). K-means clustering algorithms: A comprehensive review, variants analysis, and advances in the era of big data. Information Sciences, 622:178–210.
Kapil, S. and Chawla, M. (2016). Performance evaluation of k-means clustering algorithm with various distance metrics. In 2016 IEEE 1st International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), pages 1–4. IEEE.
Kaufman, L. and Rousseeuw, P. J. (2009). Finding groups in data: An introduction to cluster analysis. John Wiley & Sons.
Kuncheva, L. I. and Vetrov, D. P. (2006). Evaluation of stability of k-means cluster ensembles with respect to random initialization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(11):1798–1808.
Lee, M.-C. and Lin, J.-C. (2023). RePAD2: Real-time, lightweight, and adaptive anomaly detection for open-ended time series. In Proceedings of the 8th International Conference on Internet of Things, Big Data and Security - IoTBDS, pages 208–217. INSTICC, SciTePress. arXiv preprint arXiv:2303.00409.
Lee, M.-C., Lin, J.-C., and Gran, E. G. (2020). RePAD: Real-time proactive anomaly detection for time series. In Advanced Information Networking and Applications: Proceedings of the 34th International Conference on Advanced Information Networking and Applications (AINA-2020), pages 1291–1302. Springer. arXiv preprint arXiv:2001.08922.
Lee, M.-C., Lin, J.-C., and Gran, E. G. (2021). How far should we look back to achieve effective real-time time-series anomaly detection? In Advanced Information Networking and Applications: Proceedings of the 35th International Conference on Advanced Information Networking and Applications (AINA-2021), Volume 1, pages 136–148. Springer. arXiv preprint arXiv:2102.06560.
Lee, M.-C., Lin, J.-C., and Stolz, V. (2023a). NP-Free: A real-time normalization-free and parameter-tuning-free representation approach for open-ended time series. https://arxiv.org/pdf/2304.06168.pdf. [Online; accessed 14-September-2023].
Lee, M.-C., Lin, J.-C., and Stolz, V. (2023b). NP-Free: A real-time normalization-free and parameter-tuning-free representation approach for open-ended time series. In 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), pages 334–339.
Lloyd, S. (1982). Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137.
MacQueen, J. et al. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. Oakland, CA, USA.
Paepe, D. D., Avendano, D. N., and Hoecke, S. V. (2019). Implications of z-normalization in the matrix profile. In International Conference on Pattern Recognition Applications and Methods, pages 95–118. Springer.
Paparrizos, J. and Gravano, L. (2015). k-Shape: Efficient and accurate clustering of time series. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 1855–1870.
Rousseeuw, P. J. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65.
Ruiz, L. G. B., Pegalajar, M., Arcucci, R., and Molina-Solana, M. (2020). A time-series clustering methodology for knowledge extraction in energy consumption data. Expert Systems with Applications, 160:113731.
Senin, P. (2016). Z-normalization of time series. https://jmotif.github.io/sax-vsm_site/morea/algorithm/znorm.html. [Online; accessed 14-September-2023].
Tavenard, R., Faouzi, J., Vandewiele, G., Divo, F., Androz, G., Holtz, C., Payne, M., Yurchak, R., Rußwurm, M., Kolar, K., and Woods, E. (2020). Tslearn, a machine learning toolkit for time series data. Journal of Machine Learning Research, 21(118):1–6.
Vats, S. and Sagar, B. (2019). Performance evaluation of k-means clustering on Hadoop infrastructure. Journal of Discrete Mathematical Sciences and Cryptography, 22(8):1349–1363.
Evaluation of K-Means Time Series Clustering Based on Z-Normalization and NP-Free