Anowar, F. and Sadaoui, S. (2021a). Incremental learning framework for real-world fraud detection environment. Computational Intelligence, 37(1):635–656.
Anowar, F. and Sadaoui, S. (2021b). Incremental learning
with self-labeling of incoming high-dimensional data.
In The 34th Canadian Conference on Artificial Intel-
ligence, pages 1–12.
Anowar, F., Sadaoui, S., and Selim, B. (2021). Conceptual and empirical comparison of dimensionality reduction algorithms (PCA, KPCA, LDA, MDS, SVD, LLE, Isomap, LE, ICA, t-SNE). Computer Science Review, 40:1–13.
Chormunge, S. and Jena, S. (2018). Correlation based
feature selection with clustering for high dimensional
data. Journal of Electrical Systems and Information
Technology, 5(3):542–549.
Dash, B., Mishra, D., Rath, A., and Acharya, M. (2010). A
hybridized k-means clustering approach for high di-
mensional dataset. International Journal of Engineer-
ing, Science and Technology, 2(2):59–66.
Huang, X., Ye, Y., Xiong, L., Lau, R. Y., Jiang, N., and
Wang, S. (2016). Time series k-means: A new k-
means type smooth subspace clustering for time series
data. Information Sciences, 367:1–13.
Jadhav, A., Pramod, D., and Ramanathan, K. (2019). Com-
parison of performance of data imputation methods
for numeric dataset. Applied Artificial Intelligence,
33(10):913–933.
Jindal, P. and Kumar, D. (2017). A review on dimensionality reduction techniques. International Journal of Computer Applications, 173(2):42–46.
Kaoungku, N., Suksut, K., Chanklan, R., Kerdprasop, K., and Kerdprasop, N. (2018). The silhouette width criterion for clustering and association mining to select image features. International Journal of Machine Learning and Computing, 8(1):1–5.
Lawton, G. (2020). Autoencoders’ example uses augment data for machine learning. https://searchenterpriseai.techtarget.com/feature/Autoencoders-example-uses-augment-data-for-machine-learning. Last accessed 15 November 2021.
McInnes, L. and Healy, J. (2017). Accelerated hierarchical
density based clustering. In 2017 IEEE International
Conference on Data Mining Workshops (ICDMW),
pages 33–42. IEEE.
Messaoud, T. A., Smiti, A., and Louati, A. (2019). A
novel density-based clustering approach for outlier
detection in high-dimensional data. In International
Conference on Hybrid Artificial Intelligence Systems,
pages 322–331. Springer.
Niennattrakul, V. and Ratanamahatana, C. A. (2007). On
clustering multimedia time series data using k-means
and dynamic time warping. In 2007 International
Conference on Multimedia and Ubiquitous Engineer-
ing (MUE’07), pages 733–738. IEEE.
Paparrizos, J. and Gravano, L. (2015). k-Shape: Efficient and accurate clustering of time series. In 2015 ACM SIGMOD International Conference on Management of Data, pages 1855–1870.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V.,
Thirion, B., Grisel, O., Blondel, M., Prettenhofer,
P., Weiss, R., Dubourg, V., Vanderplas, J., Passos,
A., Cournapeau, D., Brucher, M., Perrot, M., and
Duchesnay, E. (2011). Scikit-learn: Machine learning
in Python. Journal of Machine Learning Research,
12:2825–2830.
Prabhu, P. and Anbazhagan, N. (2011). Improving the performance of k-means clustering for high dimensional data set. International Journal on Computer Science and Engineering, 3(6):2317–2322.
Prometheus (2021). From metrics to insight. https://prometheus.io/docs/concepts/metric_types/. Last accessed 15 November 2021.
Rani, S. and Sikka, G. (2012). Recent techniques of cluster-
ing of time series data: a survey. International Journal
of Computer Applications, 52(15):1–9.
Saul, N. (2017). How HDBSCAN works. https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html. Last accessed 15 November 2021.
Song, Q., Ni, J., and Wang, G. (2011). A fast clustering-based feature subset selection algorithm for high-dimensional data. IEEE Transactions on Knowledge and Data Engineering, 25(1):1–14.
Stekhoven, D. J. and Bühlmann, P. (2012). MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1):112–118.
Tavenard, R., Faouzi, J., Vandewiele, G., Divo, F., Androz,
G., Holtz, C., Payne, M., Yurchak, R., Rußwurm, M.,
Kolar, K., et al. (2020). Tslearn, a machine learning
toolkit for time series data. Journal of Machine Learn-
ing Research, 21(118):1–6.
Wang, X. and Xu, Y. (2019). An improved index for clustering validation based on Silhouette index and Calinski-Harabasz index. IOP Conference Series: Materials Science and Engineering, 569(5):1–7.
Wang, Y., Yao, H., and Zhao, S. (2016). Auto-encoder
based dimensionality reduction. Neurocomputing,
184:232–242.
Wu, W., Xu, Z., Kou, G., and Shi, Y. (2020). Decision-making support for the evaluation of clustering algorithms based on MCDM. Complexity, 2020:1–17.
Yuan, C. and Yang, H. (2019). Research on k-value se-
lection method of k-means clustering algorithm. J,
2(2):226–235.
Zhang, Y. and Li, D. (2013). Cluster analysis by vari-
ance ratio criterion and firefly algorithm. International
Journal of Digital Content Technology and its Appli-
cations, 7(3):689–697.