Authors:
Carlos Eduardo Thomaz 1 and Gilson Antonio Giraldi 2
Affiliations:
1 Centro Universitário da FEI, Brazil; 2 National Laboratory for Scientific Computing, Brazil
Keyword(s):
Non-linear discriminant analysis, Limited sample size problems, Face recognition.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics; Feature Extraction; Features Extraction; Image and Video Analysis; Informatics in Control, Automation and Robotics; Signal Processing, Sensors, Systems Modeling and Control; Statistical Approach
Abstract:
In this paper, we extend the Maximum uncertainty Linear Discriminant Analysis (MLDA), recently proposed for limited sample size problems, to its kernel version. The new Kernel Maximum uncertainty Discriminant Analysis (KMDA) is a two-stage method composed of Kernel Principal Component Analysis (KPCA) followed by the standard MLDA. To evaluate its effectiveness, we carried out face recognition experiments on the well-known ORL and FERET face databases and compared KMDA with other existing kernel discriminant methods, such as Generalized Discriminant Analysis (GDA) and Regularized Kernel Discriminant Analysis (RKDA). The classification results indicate that KMDA performs as well as GDA and RKDA, with the advantage of being a straightforward stabilization approach for the within-class scatter matrix that uses higher-order features for further classification improvements.
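The abstract describes KMDA as a two-stage pipeline: a KPCA projection followed by the standard MLDA in the reduced space. The sketch below is a minimal illustration of that structure, not the authors' implementation: it assumes an RBF kernel, uses scikit-learn's KernelPCA for the first stage, and implements the MLDA stabilization as replacing the less reliable (small) eigenvalues of the within-class scatter matrix with their average; the function names and parameter choices (e.g. n_kpca_components, gamma) are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import KernelPCA


def mlda(X, y):
    """Maximum-uncertainty-style LDA on features X of shape (n_samples, n_features)."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    n_feat = X.shape[1]
    Sw = np.zeros((n_feat, n_feat))  # within-class scatter
    Sb = np.zeros((n_feat, n_feat))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - grand_mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)

    # Stabilize Sw: eigenvalues below the average eigenvalue are replaced by
    # the average itself (the "maximum uncertainty" regularization idea).
    evals, evecs = eigh(Sw)
    avg = evals.mean()
    Sw_star = evecs @ np.diag(np.maximum(evals, avg)) @ evecs.T

    # Discriminant directions from the generalized eigenproblem Sb w = lambda Sw* w,
    # keeping at most (number of classes - 1) components.
    g_evals, g_evecs = eigh(Sb, Sw_star)
    order = np.argsort(g_evals)[::-1]
    return g_evecs[:, order[: len(classes) - 1]]


def kmda_fit_transform(X, y, n_kpca_components=30, gamma=None):
    """Stage 1: KPCA projection; Stage 2: MLDA on the kernel principal components."""
    kpca = KernelPCA(n_components=n_kpca_components, kernel="rbf", gamma=gamma)
    Z = kpca.fit_transform(X)
    W = mlda(Z, y)
    return Z @ W
```

In this reading, the nonlinearity comes entirely from the kernel mapping in the first stage, while the second stage is the unchanged linear MLDA, which is why the stabilization of the within-class scatter carries over directly to the kernel setting.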