Authors: Kanishka Tyagi 1; Nojun Kwak 2 and Michael Manry 1
Affiliations: 1 The University of Texas at Arlington, United States; 2 Seoul National University, Korea, Republic of
Keyword(s):
Linear Discriminant Analysis, L1 Norm, Dimension Reduction, Conjugate Gradient, Learning Factor.
Related Ontology Subjects/Areas/Topics: Classification; Feature Selection and Extraction; ICA, PCA, CCA and other Linear Models; Pattern Recognition; Theory and Methods
Abstract:
This paper analyzes a linear discriminant subspace technique from an L1 point of view. We propose an efficient and optimal algorithm that addresses several major issues in prior work on both the L1-based LDA algorithm and its L2 counterpart, including algorithm implementation, the effect of outliers, and the optimality of the parameters used. The key idea is to use conjugate gradient to optimize the L1 cost function and to find an optimal learning factor for the update of the weight vector in the subspace. Experimental results on UCI datasets show that the proposed method is a significant improvement over previous work. The mathematical treatment of the proposed algorithm and the calculation of the learning factor are the main subjects of this paper.
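For intuition, the sketch below shows one plausible reading of the idea the abstract describes, not the paper's actual algorithm: a single discriminant direction is found by maximizing the ratio of L1 between-class dispersion to L1 within-class dispersion with subgradient/conjugate-gradient ascent. The function name l1_lda_direction, the Polak-Ribiere direction update, and the logarithmic grid of candidate step sizes are illustrative assumptions; in particular, the coarse line search stands in for the closed-form learning factor the abstract refers to.

```python
import numpy as np

def l1_lda_direction(X, y, n_iter=100, tol=1e-6, seed=0):
    """Sketch: one L1-LDA direction via conjugate-gradient ascent.

    Maximizes an L1 Fisher-style ratio (between-class over
    within-class L1 dispersion). Illustrative only; the paper's
    exact cost, conjugate update, and learning-factor derivation
    may differ.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    classes = np.unique(y)
    m = X.mean(axis=0)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    counts = {c: int((y == c).sum()) for c in classes}

    def cost_and_grad(w):
        B, gB = 0.0, np.zeros_like(w)   # between-class L1 term
        W, gW = 0.0, np.zeros_like(w)   # within-class L1 term
        for c in classes:
            diff = means[c] - m
            B += counts[c] * abs(w @ diff)
            gB += counts[c] * np.sign(w @ diff) * diff
            D = X[y == c] - means[c]
            p = D @ w
            W += np.abs(p).sum()
            gW += D.T @ np.sign(p)
        # quotient-rule subgradient of the ratio B / W
        return B / W, (gB * W - B * gW) / W**2

    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    f, g = cost_and_grad(w)
    d = g.copy()                        # initial search direction
    for _ in range(n_iter):
        # coarse line search over candidate learning factors
        # (a stand-in for a derived optimal learning factor)
        f_new, eta = max((cost_and_grad(w + e * d)[0], e)
                         for e in np.logspace(-4, 0, 20))
        if f_new <= f + tol:
            break
        w = w + eta * d
        w /= np.linalg.norm(w)
        f, g_new = cost_and_grad(w)
        # Polak-Ribiere style conjugate-direction update
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = g_new + beta * d
        g = g_new
    return w
```

Because both the numerator and denominator of the L1 ratio scale linearly with the norm of w, the cost is scale-invariant, so the direction can be renormalized after each update without changing the objective.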