CLASSIFIER AGGREGATION USING LOCAL CLASSIFICATION CONFIDENCE

David Štefka, Martin Holeňa

Abstract

Classifier aggregation is a method for improving the quality of classification. Instead of using just one classifier, a team of classifiers is created, and the outputs of the individual classifiers are aggregated into the final prediction. In this paper, we study the potential of using measures of local classification confidence in classifier aggregation methods. We introduce four measures of local classification confidence and study their suitability for classifier aggregation. We develop two novel classifier aggregation methods which utilize local classification confidence, and we compare them to two commonly used classifier aggregation methods. The results on four artificial and five real-world benchmark datasets show that incorporating local classification confidence into classifier aggregation methods can yield a significant improvement in classification quality.
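The general idea of the abstract can be sketched in a few lines: each classifier in the team outputs a class distribution for the query sample, and instead of a plain average, each output is weighted by that classifier's local confidence (e.g. its accuracy on validation points near the query). This is a minimal illustrative sketch, not the paper's actual methods; the function name and the fallback rule are assumptions.

```python
import numpy as np

def aggregate_with_confidence(probas, confidences):
    """Confidence-weighted average of per-classifier class distributions.

    probas      -- shape (n_classifiers, n_classes); each row is one
                   classifier's predicted class distribution for the
                   same query sample.
    confidences -- shape (n_classifiers,); each classifier's local
                   confidence at that sample (e.g. accuracy on nearby
                   validation points). Hypothetical measure; the paper
                   studies four concrete confidence measures.
    """
    probas = np.asarray(probas, dtype=float)
    w = np.asarray(confidences, dtype=float)
    if w.sum() <= 0.0:
        # Degenerate case: no classifier is confident -> plain averaging.
        w = np.ones_like(w)
    # Weight each row by its confidence, then renormalize.
    return (w[:, None] * probas).sum(axis=0) / w.sum()

# Example: three classifiers, two classes; the locally confident
# classifier dominates the aggregated distribution.
p = [[0.9, 0.1], [0.4, 0.6], [0.5, 0.5]]
c = [0.8, 0.1, 0.1]
print(aggregate_with_confidence(p, c))  # -> [0.81 0.19]
```

The weighted average keeps the result a valid class distribution, and setting all confidences equal recovers simple (mean-rule) aggregation, which is one of the standard baselines such methods are compared against.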



Paper Citation


in Harvard Style

Štefka D. and Holeňa M. (2009). CLASSIFIER AGGREGATION USING LOCAL CLASSIFICATION CONFIDENCE. In Proceedings of the International Conference on Agents and Artificial Intelligence - Volume 1: ICAART, ISBN 978-989-8111-66-1, pages 173-178. DOI: 10.5220/0001545101730178


in Bibtex Style

@conference{icaart09,
author={David Štefka and Martin Holeňa},
title={CLASSIFIER AGGREGATION USING LOCAL CLASSIFICATION CONFIDENCE},
booktitle={Proceedings of the International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2009},
pages={173-178},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0001545101730178},
isbn={978-989-8111-66-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - CLASSIFIER AGGREGATION USING LOCAL CLASSIFICATION CONFIDENCE
SN - 978-989-8111-66-1
AU - Štefka D.
AU - Holeňa M.
PY - 2009
SP - 173
EP - 178
DO - 10.5220/0001545101730178