Dynamic Classifier Systems and Their Applications to Random Forest Ensembles

  • Conference paper
Adaptive and Natural Computing Algorithms (ICANNGA 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5495)

Abstract

Classifier combining is a popular method for improving the quality of classification: instead of relying on a single classifier, several classifiers are organized into a classifier system and their results are aggregated into a final prediction. However, most of the commonly used aggregation methods are static, i.e., they do not adapt to the pattern currently being classified. In this paper, we provide a general framework for dynamic classifier systems, which use dynamic confidence measures to adapt to a particular pattern. Our experiments with random forests on 5 artificial and 11 real-world benchmark datasets show that dynamic classifier systems can significantly outperform both confidence-free and static classifier systems.
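The core idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: for brevity it replaces the forest's decision trees with bootstrap-trained threshold stumps on 1-D data, and uses one common dynamic confidence measure, each member's accuracy on the k training points nearest the query, to weight its vote. The static baseline is a plain, confidence-free majority vote. All names here (`train_stump`, `local_confidence`, `dynamic_predict`) are illustrative.

```python
import random

# Toy 1-D two-class data: class 0 clusters near 0.0, class 1 near 1.0,
# with an overlap region where individual ensemble members disagree.
random.seed(0)
X = [random.gauss(0.0, 0.4) for _ in range(50)] + \
    [random.gauss(1.0, 0.4) for _ in range(50)]
y = [0] * 50 + [1] * 50

def train_stump(Xs, ys):
    """A 'tree' reduced to a threshold rule, fit on a bootstrap sample."""
    idx = [random.randrange(len(Xs)) for _ in range(len(Xs))]
    thr = sum(Xs[i] for i in idx) / len(idx)      # split at the sample mean
    return lambda x, t=thr: 0 if x < t else 1

ensemble = [train_stump(X, y) for _ in range(25)]

def local_confidence(clf, x, k=7):
    """Dynamic confidence: clf's accuracy on the k training points nearest x."""
    nearest = sorted(range(len(X)), key=lambda i: abs(X[i] - x))[:k]
    return sum(clf(X[i]) == y[i] for i in nearest) / k

def dynamic_predict(x):
    """Aggregate votes weighted by each member's pattern-specific confidence."""
    votes = [0.0, 0.0]
    for clf in ensemble:
        votes[clf(x)] += local_confidence(clf, x)
    return 0 if votes[0] >= votes[1] else 1

def static_predict(x):
    """Confidence-free baseline: plain majority vote over the ensemble."""
    ones = sum(clf(x) for clf in ensemble)
    return 1 if 2 * ones > len(ensemble) else 0
```

Because `local_confidence` is recomputed per query point, a member that is unreliable in the region around x is down-weighted only there, which is exactly the adaptivity that distinguishes dynamic from static aggregation.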

The research presented in this paper was partially supported by the Program “Information Society” under project 1ET100300517 (D. Štefka), by grant No. 201/08/0802 of the Grant Agency of the Czech Republic, and by the Institutional Research Plan AV0Z10300504 (M. Holeňa).





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Štefka, D., Holeňa, M. (2009). Dynamic Classifier Systems and Their Applications to Random Forest Ensembles. In: Kolehmainen, M., Toivanen, P., Beliczynski, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2009. Lecture Notes in Computer Science, vol 5495. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04921-7_47

  • DOI: https://doi.org/10.1007/978-3-642-04921-7_47

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04920-0

  • Online ISBN: 978-3-642-04921-7

  • eBook Packages: Computer Science (R0)
