Inverse of Lorentzian Mixture for Simultaneous Training of Prototypes and Weights
Atsushi Sato, Masato Ishii
2013
Abstract
This paper presents a novel distance-based classifier based on the multiplicative inverse of a Lorentzian mixture, which can be regarded as a natural extension of the conventional nearest neighbor rule. We show that prototypes and weights can be trained simultaneously by General Loss Minimization, a generalized version of the supervised learning framework used in Generalized Learning Vector Quantization. Experimental results on datasets from the UCI machine learning repository reveal that the proposed method achieves classification accuracy comparable to or higher than that of the Support Vector Machine, with far fewer prototypes than support vectors.
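To illustrate the core idea, the following is a minimal sketch of a classifier whose per-class distance is the multiplicative inverse of a Lorentzian mixture over that class's prototypes. The exact parameterization and the General Loss Minimization training procedure are defined in the paper; the kernel width `gamma`, the mixture weights, and the function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lorentzian_mixture(x, prototypes, weights, gamma):
    """Weighted sum of Lorentzian kernels centered on the prototypes.

    Each kernel has the form 1 / (1 + ||x - m||^2 / gamma^2); the width
    gamma and the weight vector are assumed parameters for this sketch.
    """
    sq = np.sum((prototypes - x) ** 2, axis=1)        # squared distance to each prototype
    return np.sum(weights / (1.0 + sq / gamma ** 2))  # mixture value (large near prototypes)

def class_distance(x, prototypes, weights, gamma):
    """Multiplicative inverse of the mixture: small when x lies near
    heavily weighted prototypes of the class."""
    return 1.0 / lorentzian_mixture(x, prototypes, weights, gamma)

def classify(x, class_models):
    """Assign x to the class with the smallest inverse-mixture distance."""
    return min(class_models,
               key=lambda c: class_distance(x, *class_models[c]))

# Toy usage: one equally weighted prototype per class. In this degenerate
# case, ranking by the inverse-mixture distance reproduces the conventional
# nearest-prototype rule, which is the sense in which the classifier
# extends the nearest neighbor rule.
models = {
    0: (np.array([[0.0, 0.0]]), np.array([1.0]), 1.0),
    1: (np.array([[3.0, 3.0]]), np.array([1.0]), 1.0),
}
print(classify(np.array([0.2, -0.1]), models))
```

With multiple prototypes and trained weights per class, the mixture lets several prototypes jointly lower the class distance, which is what allows the method to match SVM-level accuracy with far fewer prototypes.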
References
- Blake, C. and Merz, C. (1998). UCI repository of machine learning databases. University of California, Irvine, Dept. of Information and Computer Sciences.
- Cortes, C. and Vapnik, V. (1995). Support vector networks. Machine Learning, 20:273-297.
- Crammer, K., Gilad-Bachrach, R., Navot, A., and Tishby, N. (2003). Margin analysis of the LVQ algorithm. In Advances in Neural Information Processing Systems, volume 15, pages 462-469. MIT Press.
- Giraud, B. G., Lapedes, A. S., Liu, L. C., and Lemm, J. C. (1995). Lorentzian neural nets. Neural Networks, 8(5):757-767.
- Grbovic, M. and Vucetic, S. (2009). Learning vector quantization with adaptive prototype addition and removal. In International Conference on Neural Networks, pages 994-1001.
- Hammer, B. and Villmann, T. (2002). Generalized relevance learning vector quantization. Neural Networks, 15(8-9):1059-1068.
- Karayiannis, N. (1996). Weighted fuzzy learning vector quantization and weighted generalized fuzzy c-means algorithm. In IEEE International Conference on Fuzzy Systems, pages 773-779.
- Kohonen, T. (1995). Self-Organizing Maps. Springer-Verlag.
- Meyer, D., Leisch, F., and Hornik, K. (2003). The support vector machine under test. Neurocomputing, 55(1-2):169-186.
- Qin, A. K. and Suganthan, P. N. (2004). A novel kernel prototype-based learning algorithm. In International Conference on Pattern Recognition (ICPR), pages 621-624.
- Sato, A. (1998). A formulation of learning vector quantization using a new misclassification measure. In the 14th International Conference on Pattern Recognition, volume 1, pages 322-325.
- Sato, A. (2010). A new learning formulation for kernel classifier design. In International Conference on Pattern Recognition (ICPR), pages 2897-2900.
- Sato, A. and Yamada, K. (1996). Generalized learning vector quantization. In Advances in Neural Information Processing Systems, volume 8, pages 423-429. MIT Press.
- Schneider, P., Biehl, M., and Hammer, B. (2009). Adaptive relevance matrices in learning vector quantization. Neural Computation, 21(12):3532-3561.
- Villmann, T. and Haase, S. (2011). Divergence-based vector quantization. Neural Computation, 23(5):1343-1392.
Paper Citation
in Harvard Style
Sato A. and Ishii M. (2013). Inverse of Lorentzian Mixture for Simultaneous Training of Prototypes and Weights. In Proceedings of the 2nd International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM, ISBN 978-989-8565-41-9, pages 151-158. DOI: 10.5220/0004240201510158
in Bibtex Style
@conference{icpram13,
author={Atsushi Sato and Masato Ishii},
title={Inverse of Lorentzian Mixture for Simultaneous Training of Prototypes and Weights},
booktitle={Proceedings of the 2nd International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM},
year={2013},
pages={151-158},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004240201510158},
isbn={978-989-8565-41-9},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 2nd International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM
TI - Inverse of Lorentzian Mixture for Simultaneous Training of Prototypes and Weights
SN - 978-989-8565-41-9
AU - Sato A.
AU - Ishii M.
PY - 2013
SP - 151
EP - 158
DO - 10.5220/0004240201510158