land Systems, Applications and Technology Confer-
ence (LISAT), pages 1–5. IEEE.
Anowar, F. and Sadaoui, S. (2021). Incremental learn-
ing framework for real-world fraud detection environ-
ment. Computational Intelligence, 37(1):635–656.
Anowar, F., Sadaoui, S., and Dalal, H. (2022). Cluster-
ing quality of a high-dimensional service monitoring
time-series dataset. In 14th International Conference
on Agents and Artificial Intelligence (ICAART), pages
1–11.
Anowar, F., Sadaoui, S., and Selim, B. (2021). Conceptual
and empirical comparison of dimensionality reduction
algorithms (pca, kpca, lda, mds, svd, lle, isomap, le,
ica, t-sne). Computer Science Review, 40:1–13.
Chin, T.-J. and Suter, D. (2007). Incremental kernel princi-
pal component analysis. IEEE transactions on image
processing, 16(6):1662–1674.
Fan, Z., Wang, J., Xu, B., and Tang, P. (2014). An
efficient kpca algorithm based on feature correla-
tion evaluation. Neural Computing and Applications,
24(7):1795–1806.
Goel, A. and Vishwakarma, V. P. (2016). Gender classifica-
tion using kpca and svm. In 2016 IEEE International
Conference on Recent Trends in Electronics, Informa-
tion & Communication Technology (RTEICT), pages
291–295. IEEE.
Hallgren, F. and Northrop, P. (2018). Incremental ker-
nel pca and the nystrom method. arXiv preprint
arXiv:1802.00043, pages 1–9.
Hawkins, D. M. (2004). The problem of overfitting. Jour-
nal of chemical information and computer sciences,
44(1):1–12.
Hoffmann, H. (2007). Kernel pca for novelty detection. Pat-
tern recognition, 40(3):863–874.
Jindal, P. and Kumar, D. (2017). A review on dimension-
ality reduction techniques. International journal of
computer applications, 173(2):42–46.
Joseph, A. A., Tokumoto, T., and Ozawa, S. (2016). Online
feature extraction based on accelerated kernel princi-
pal component analysis for data stream. Evolving Sys-
tems, 7(1):15–27.
Kim, K. I., Franz, M. O., and Scholkopf, B. (2005). Iter-
ative kernel principal component analysis for image
modeling. IEEE transactions on pattern analysis and
machine intelligence, 27(9):1351–1366.
Kumar, A. (2020). Pca explained variance concepts with
python example. https://vitalflux.com/pca-explained-
variance-concept-python-example/. Last accessed 12
March 2022.
Lawton, G. (2020). Autoencoders’ example
uses augment data for machine learning.
https://searchenterpriseai.techtarget.com/feature
/Autoencoders-example-uses-augment-data-for-
machine-learning. Last accessed 12 March 2022.
Lim, J., Ross, D. A., Lin, R.-S., and Yang, M.-H. (2004).
Incremental learning for visual tracking. In Advances
in neural information processing systems, pages 793–
800. Citeseer.
Losing, V., Hammer, B., and Wersing, H. (2018). Incre-
mental on-line learning: A review and comparison of
state of the art algorithms. Neurocomputing, Elsevier,
275(1):1261–1274.
Oja, E. (1992). Principal components, minor compo-
nents, and linear neural networks. Neural networks,
5(6):927–935.
Paparrizos, J. and Gravano, L. (2015). k-shape: Efficient
and accurate clustering of time series. In The 2015
ACM SIGMOD International Conference on Manage-
ment of Data, pages 1855–1870.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V.,
Thirion, B., Grisel, O., Blondel, M., Prettenhofer,
P., Weiss, R., Dubourg, V., Vanderplas, J., Passos,
A., Cournapeau, D., Brucher, M., Perrot, M., and
Duchesnay, E. (2011). Scikit-learn: Machine learning
in Python. Journal of Machine Learning Research,
12:2825–2830.
Sanger, T. D. (1989). Optimal unsupervised learning in a
single-layer linear feedforward neural network. Neu-
ral networks, 2(6):459–473.
Spruyt, V. (2014). The curse of dimensionality in classifi-
cation. Computer vision for dummies, 21(3):35–40.
Takeuchi, Y., Ozawa, S., and Abe, S. (2007). An efficient
incremental kernel principal component analysis for
online feature selection. In 2007 International Joint
Conference on Neural Networks, pages 2346–2351.
IEEE.
Tokumoto, T. and Ozawa, S. (2011). A fast incremental ker-
nel principal component analysis for learning stream
of data chunks. In The 2011 International Joint Con-
ference on Neural Networks, pages 2881–2888. IEEE.
Van Der Maaten, L., Postma, E., and Van den Herik, J.
(2009). Dimensionality reduction: A comparative re-
view. J Mach Learn Res, 10(66-71):13.
Verleysen, M. and Franc¸ois, D. (2005). The curse of dimen-
sionality in data mining and time series prediction.
In International work-conference on artificial neural
networks, pages 758–770. Springer.
Wang, Y., Yao, H., and Zhao, S. (2016). Auto-encoder
based dimensionality reduction. Neurocomputing,
184:232–242.
Zhang, Y. and Li, D. (2013). Cluster analysis by vari-
ance ratio criterion and firefly algorithm. International
Journal of Digital Content Technology and its Appli-
cations, 7(3):689–697.
DeLTA 2022 - 3rd International Conference on Deep Learning Theory and Applications
124