would suffer from overfitting if the training data are insufficient or uninformative.
Hence, we propose an iterative solution based on a virtual
Mahalanobis distance in the original high-dimensional
input space that carries over the low-dimensional pairwise
distance relations. With this virtual Mahalanobis
distance, the pairwise distance relations of the data in
the input space can be adjusted, and the classifier-oriented
subspace can then be updated. Ongoing work
aims to circumvent overfitting by adding
mechanisms for selecting effective training data or
by applying regularization.
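The idea behind such a virtual Mahalanobis distance can be sketched as follows. This is a minimal illustration, not the paper's exact update rule: it assumes a projection matrix A mapping the input space to the subspace, and shows that the induced metric M = A Aᵀ reproduces in the input space exactly the pairwise Euclidean distances of the low-dimensional projections. All names are illustrative.

```python
import numpy as np

def virtual_mahalanobis_dist(x, y, A):
    """Mahalanobis distance in input space induced by projection A (d x k)."""
    M = A @ A.T                      # d x d metric matrix, PSD by construction
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

def low_dim_dist(x, y, A):
    """Euclidean distance between the k-dimensional projections A.T @ x."""
    return float(np.linalg.norm(A.T @ x - A.T @ y))

rng = np.random.default_rng(0)
d, k = 10, 3                         # input and subspace dimensions (illustrative)
A = rng.standard_normal((d, k))
x, y = rng.standard_normal(d), rng.standard_normal(d)

# The two distances coincide, so pairwise distance relations computed in the
# subspace can be passed back to the input space through the metric M.
assert np.isclose(virtual_mahalanobis_dist(x, y, A), low_dim_dist(x, y, A))
```

In the iterative scheme described above, one would alternate between adjusting pairwise distances under this metric and recomputing the subspace projection A.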
VISAPP 2010 - International Conference on Computer Vision Theory and Applications