ENSEMBLE RANDOM-SUBSET SVM

Kenji Nishida, Jun Fujiki, Takio Kurita


In this paper, the ensemble random-subset SVM algorithm is proposed. In a random-subset SVM, multiple SVMs are used, each serving as a weak classifier: a subset of the training samples is randomly selected for each weak classifier, together with randomly set kernel parameters, and the weak classifiers are combined for classification with optimal weights. A linear SVM is adopted to determine the optimal combining weights; an ensemble random-subset SVM is therefore based on a hierarchical SVM model. The ensemble random-subset SVM outperforms a single SVM even when each weak classifier uses only a small number of samples (10 or 100 out of 20,000 training samples), whereas a single SVM requires more than 4,000 support vectors.
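The training scheme described above can be sketched as follows. This is a minimal illustration assuming scikit-learn, not the authors' implementation; the subset size, number of weak classifiers, and the range of the random kernel parameter `gamma` are illustrative choices, and the synthetic data stands in for a real training set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for a large training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each weak classifier: an RBF-kernel SVM trained on a small random
# subset of the training samples, with a randomly drawn kernel parameter.
n_weak, subset_size = 20, 100
weak = []
for _ in range(n_weak):
    idx = rng.choice(len(X_tr), size=subset_size, replace=False)
    gamma = 10.0 ** rng.uniform(-3, 1)  # randomly set kernel parameter
    weak.append(SVC(kernel="rbf", gamma=gamma).fit(X_tr[idx], y_tr[idx]))

def weak_outputs(X):
    # Stack the decision values of all weak classifiers column-wise.
    return np.column_stack([clf.decision_function(X) for clf in weak])

# A linear SVM on the weak classifiers' outputs learns the combining
# weights, giving the hierarchical (two-level) SVM structure.
combiner = LinearSVC().fit(weak_outputs(X_tr), y_tr)
acc = combiner.score(weak_outputs(X_te), y_te)
```

Note that each weak classifier sees only `subset_size` samples, so its number of support vectors is bounded by the subset size; the ensemble trades many small kernel problems for one linear combining problem.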



Paper Citation

in Harvard Style

Nishida K., Fujiki J. and Kurita T. (2011). ENSEMBLE RANDOM-SUBSET SVM. In Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2011), ISBN 978-989-8425-84-3, pages 334-339. DOI: 10.5220/0003668903340339

in Bibtex Style

@inproceedings{nishida2011ensemble,
author={Kenji Nishida and Jun Fujiki and Takio Kurita},
title={ENSEMBLE RANDOM-SUBSET SVM},
booktitle={Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2011)},
year={2011},
pages={334-339},
doi={10.5220/0003668903340339},
isbn={978-989-8425-84-3}
}

in EndNote Style

TY - CONF
TI - ENSEMBLE RANDOM-SUBSET SVM
AU - Nishida K.
AU - Fujiki J.
AU - Kurita T.
PY - 2011
JO - Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2011)
SP - 334
EP - 339
SN - 978-989-8425-84-3
DO - 10.5220/0003668903340339