Authors:
Kenji Nishida (1); Jun Fujiki (1) and Takio Kurita (2)
Affiliations:
(1) National Institute of Advanced Industrial Science and Technology (AIST), Japan; (2) Hiroshima University, Japan
Keyword(s):
Ensemble learning, Bagging, Boosting, Generalization performance, Support vector machine.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Data Manipulation; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Support Vector Machines and Applications; Theory and Methods
Abstract:
In this paper, the Ensemble Random-Subset SVM algorithm is proposed. In a random-subset SVM, multiple SVMs are used, each treated as a weak classifier: a subset of training samples is randomly selected for each weak classifier, its parameters are set randomly, and the weak classifiers are combined with optimal weights for classification. A linear SVM is adopted to determine the optimal kernel weights; an ensemble random-subset SVM is therefore based on a hierarchical SVM model. An ensemble random-subset SVM outperforms a single SVM even when each weak classifier uses only a small number of samples (10 or 100 out of 20,000 training samples); in contrast, a single SVM requires more than 4,000 support vectors.
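The abstract describes the method only at a high level. As a minimal sketch of one plausible reading (Python with scikit-learn; the class name EnsembleRandomSubsetSVM, the parameter ranges for C and gamma, and the subset sizes are illustrative assumptions, not the paper's actual settings), each weak kernel SVM is trained on a random subset of the training samples with randomly drawn parameters, and a linear SVM is then fit on the weak classifiers' outputs to learn the combination weights:

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

class EnsembleRandomSubsetSVM:
    """Sketch of an ensemble random-subset SVM: kernel SVMs trained on
    random subsets, combined by a linear SVM (hierarchical SVM model).
    Assumes a binary classification problem."""

    def __init__(self, n_weak=50, subset_size=100, random_state=0):
        self.n_weak = n_weak            # number of weak classifiers
        self.subset_size = subset_size  # samples per weak classifier
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, y):
        self.weak_ = []
        for _ in range(self.n_weak):
            # Randomly select a subset of training samples; resample
            # until both classes are present so SVC can be fitted.
            idx = self.rng.choice(len(X), size=self.subset_size, replace=False)
            while len(np.unique(y[idx])) < 2:
                idx = self.rng.choice(len(X), size=self.subset_size, replace=False)
            # Randomly set the weak classifier's parameters
            # (log-uniform ranges here are an assumption).
            clf = SVC(kernel="rbf",
                      C=10.0 ** self.rng.uniform(-1, 2),
                      gamma=10.0 ** self.rng.uniform(-3, 0))
            clf.fit(X[idx], y[idx])
            self.weak_.append(clf)
        # Second-level linear SVM learns the optimal combination
        # weights over the weak classifiers' decision values.
        self.combiner_ = LinearSVC()
        self.combiner_.fit(self._weak_outputs(X), y)
        return self

    def _weak_outputs(self, X):
        # One feature per weak classifier: its signed decision value.
        return np.column_stack([clf.decision_function(X) for clf in self.weak_])

    def predict(self, X):
        return self.combiner_.predict(self._weak_outputs(X))

# Hypothetical usage on a synthetic binary problem:
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=2000, random_state=0)
model = EnsembleRandomSubsetSVM(n_weak=50, subset_size=100).fit(X, y)
```

Training the linear combiner on the weak classifiers' decision values is what makes the model hierarchical: the second-level SVM plays the role of the optimal kernel-weight estimator mentioned in the abstract.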