Authors: Jing Peng ¹ and Alex J. Aved ²

Affiliations: ¹ Department of Computer Science, Montclair State University, Montclair, NJ 07043, U.S.A.; ² Information Directorate, AFRL, Rome, NY 13441, U.S.A.
Keyword(s):
Classification, Dimensionality Reduction, Feature Selection.
Abstract:
In classification, a large number of features often makes the design of a classifier difficult and degrades its performance. This is particularly pronounced when the number of examples is small relative to the number of features, a consequence of the curse of dimensionality. There are many dimensionality reduction techniques in the literature. However, most of these techniques are either informative (i.e., minimum information loss), as in principal component analysis (PCA), or discriminant, as in linear discriminant analysis (LDA). Each type of technique has its strengths and weaknesses. Motivated by Gaussian Process Latent Variable Models, we propose a simple linear projection technique that exploits the characteristics of both PCA and LDA in latent representations. The proposed technique optimizes a regularized information-preserving objective, where the regularizer is an LDA-based criterion. As such, it prefers a latent space that is both informative and discriminant, thereby providing better generalization performance. Experimental results on a variety of data sets are provided to validate the proposed technique.
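The abstract does not spell out the exact objective, but a minimal sketch of the general idea might combine an information-preserving term (total scatter, as in PCA) with a discriminant regularizer (between-class scatter, as in LDA). The function name informative_discriminant_projection, the trade-off weight lam, and the specific choice of scatter matrices below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def informative_discriminant_projection(X, y, n_components=2, lam=1.0):
    """Illustrative sketch: linear projection combining a PCA-like total-scatter
    term with an LDA-like between-class scatter regularizer.

    Maximizes trace(W^T (S_t + lam * S_b) W) subject to W^T W = I,
    solved by taking the top eigenvectors of S_t + lam * S_b.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    mean = X.mean(axis=0)
    Xc = X - mean

    # Total scatter: the information-preserving (PCA-like) term
    S_t = Xc.T @ Xc / X.shape[0]

    # Between-class scatter: the discriminant (LDA-like) regularizer
    S_b = np.zeros_like(S_t)
    for c in np.unique(y):
        Xk = X[y == c]
        diff = (Xk.mean(axis=0) - mean)[:, None]
        S_b += (Xk.shape[0] / X.shape[0]) * (diff @ diff.T)

    # Top eigenvectors of the combined criterion give the projection
    eigvals, eigvecs = np.linalg.eigh(S_t + lam * S_b)
    order = np.argsort(eigvals)[::-1][:n_components]
    W = eigvecs[:, order]
    return W, mean

# Example usage on synthetic two-class data
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, size=(50, 10)),
                   rng.normal(1.5, size=(50, 10))])
    y = np.array([0] * 50 + [1] * 50)
    W, mean = informative_discriminant_projection(X, y, n_components=2, lam=0.5)
    Z = (X - mean) @ W   # latent representation
    print(Z.shape)       # (100, 2)
```

Setting lam = 0 recovers a purely PCA-like projection in this sketch, while larger values of lam push the latent space toward class separation.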