Ensemble of Classifiers Based on Hard Instances

  • Isis Bonet
  • Abdel Rodríguez
  • Ricardo Grau
  • María M. García
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6718)

Abstract

Many classification problems are difficult to solve with a single classifier because of the complexity of the decision boundary, and a wide variety of multiple classifier systems have been built to improve the recognition process; however, no single method performs best universally. The aim of this paper is to present another model for combining classifiers, based on the use of different classifier models. It divides the dataset into clusters according to the performance of the base classifiers on each instance. From these groups, a meta-classifier learns to decide which base classifiers are best suited to a given pattern. To compare the new model with well-known classifier ensembles, we carried out experiments on several international databases. The results show that this new model can achieve similar or better performance than the classic ensembles.
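
The abstract only outlines the mechanism, so the following minimal Python sketch illustrates one plausible reading of it, assuming scikit-learn. The class name, the choice of k-means for clustering, the random-forest meta-classifier, and all parameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a "hard instances" ensemble: group training instances by which
# base classifiers get them right, then train a meta-classifier to route a
# new pattern to the group (and hence the classifier) that suits it best.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier  # meta-classifier: our assumption
from sklearn.model_selection import cross_val_predict

class HardInstanceEnsemble:
    def __init__(self, base_classifiers, n_clusters=4):
        self.base_classifiers = base_classifiers
        self.n_clusters = n_clusters

    def fit(self, X, y):
        # 1. Per-instance performance profile: 1 if a base classifier
        #    predicts the instance correctly under cross-validation, else 0.
        correct = np.column_stack([
            cross_val_predict(clf, X, y, cv=5) == y
            for clf in self.base_classifiers
        ]).astype(float)
        # 2. Cluster instances by that profile, so each group collects the
        #    patterns that the same subset of classifiers handles well
        #    (k-means is an illustrative choice; empty clusters are not handled).
        self.clusterer = KMeans(n_clusters=self.n_clusters, n_init=10)
        groups = self.clusterer.fit_predict(correct)
        # 3. For each group, record the base classifier with the highest
        #    mean accuracy on the instances of that group.
        self.best_per_group = [
            int(np.argmax(correct[groups == g].mean(axis=0)))
            for g in range(self.n_clusters)
        ]
        # 4. The meta-classifier learns to map raw features to a group.
        self.meta = RandomForestClassifier().fit(X, groups)
        # 5. Finally, train every base classifier on the full training set.
        for clf in self.base_classifiers:
            clf.fit(X, y)
        return self

    def predict(self, X):
        # Route each pattern to its predicted group's best base classifier.
        groups = self.meta.predict(X)
        return np.array([
            self.base_classifiers[self.best_per_group[g]]
                .predict(x.reshape(1, -1))[0]
            for x, g in zip(X, groups)
        ])
```

The key design point argued for in the paper is visible in step 2: instances are grouped by which base classifiers handle them correctly, so patterns that are "hard" for one model can be routed to another that covers that region of the decision boundary.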

Keywords

multiple classifiers, ensemble classifiers, classification, pattern recognition


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Isis Bonet¹
  • Abdel Rodríguez¹
  • Ricardo Grau¹
  • María M. García¹

  1. Center of Studies on Informatics, Central University of Las Villas, Santa Clara, Cuba
