On a New Measure of Classifier Competence Applied to the Design of Multiclassifier Systems

  • Tomasz Woloszynski
  • Marek Kurzynski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5716)

Abstract

This paper presents a new method for calculating the competence of a classifier in the feature space. The idea is to relate the response of the classifier to the response obtained by random guessing; the measure of competence reflects this relation and rates the classifier against random guessing in a continuous manner. Two multiclassifier systems, representing the fusion and selection strategies, were developed using the proposed measure of competence. The performance of the multiclassifiers was evaluated on five benchmark databases from the UCI Machine Learning Repository and the Ludmila Kuncheva Collection. Classification results obtained with three simple fusion methods and one multiclassifier system with a selection strategy were used for comparison. The experimental results showed that, regardless of the strategy used by the multiclassifier system, classification accuracy increased when the measure of competence was employed. The improvement was most significant for the simple fusion methods (sum, product and majority vote). For all databases, the two developed multiclassifier systems produced the best classification scores.

Keywords

Feature Space, Local Accuracy, Fusion Method, Validation Dataset, Correct Class

References

  1. Asuncion, A., Newman, D.: UCI Machine Learning Repository. University of California, Department of Information and Computer Science, Irvine, CA (2007), http://www.ics.uci.edu/~mlearn/MLRepository.html
  2. Didaci, L., Giacinto, G., Roli, F., Marcialis, G.L.: A study on the performance of dynamic classifier selection based on local accuracy estimation. Pattern Recognition 38, 2188–2191 (2005)
  3. Duda, R., Hart, P., Stork, D.: Pattern Classification. Wiley-Interscience, Hoboken (2001)
  4. Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Machine Learning: Proceedings of the Thirteenth International Conference, pp. 148–156 (1996)
  5. Giacinto, G., Roli, F.: Design of effective neural network ensembles for image classification processes. Image and Vision Computing 19, 699–707 (2001)
  6. Ko, A., Sabourin, R., Britto, A.: From dynamic classifier selection to dynamic ensemble selection. Pattern Recognition 41, 1718–1733 (2008)
  7. Kuncheva, L.: Combining Pattern Classifiers: Methods and Algorithms. Wiley-Interscience, New Jersey (2004)
  8.
  9. Rastrigin, L.A., Erenstein, R.H.: Method of Collective Recognition. Energoizdat, Moscow (1981)
  10. Woods, K., Kegelmeyer, W.P., Bowyer, K.: Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 405–410 (1997)
  11. Woloszynski, T., Kurzynski, M.: On a new measure of classifier competence in the feature space. Computer Recognition Systems 3 (2009) (in press)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Tomasz Woloszynski (1)
  • Marek Kurzynski (1)

  1. Chair of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
