
underlying structure of data distributions when selecting dimensionality reduction techniques, particularly in complex scenarios where traditional methods may struggle.
6 CONCLUSION
In this study, we explored several dimensionality reduction techniques for Symmetric Positive Definite (SPD) matrices, covering both linear and non-linear approaches. The results highlight the lack of robustness of existing methods when handling overlapping distributions in a classification context. Interestingly, linear and non-linear methods showed similar performance on SPD matrices. Two possible explanations are the convexity of the SPD space and the numerical issues raised by the matrix logarithm computation. In future work, a deeper analysis of these methods with respect to the local geometry of the SPD space is needed to validate or discard these hypotheses. Investigating dimensionality reduction in non-convex spaces is also highly relevant. Finally, we aim to extend dimensionality reduction methods for SPD matrices to more complex configurations, such as highly overlapping distributions.
ICAART 2025 - 17th International Conference on Agents and Artificial Intelligence