Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu


  • Zhi-Hua Zhou
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_768


Synonyms

Classifier combination; Committee-based learning; Multiple classifier system


Definition

Ensemble learning is a paradigm in which multiple learners are trained to solve the same problem. In contrast to ordinary learning approaches, which try to learn one hypothesis from the training data, ensemble methods construct a set of hypotheses and combine them.
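The construct-and-combine idea can be sketched as a small bagging-style ensemble with majority voting. The decision-stump base learner and the bootstrap-plus-vote scheme below are one common instantiation chosen for illustration, not a prescribed implementation; all names and data are hypothetical.

```python
import random
from collections import Counter

def train_stump(data):
    """Fit a decision stump: the (feature, threshold, sign) triple
    with the fewest training errors on labels in {-1, +1}."""
    best, best_err = None, float("inf")
    for f in range(len(data[0][0])):          # each feature
        for x, _ in data:                     # each candidate threshold
            t = x[f]
            for sign in (1, -1):
                err = sum(1 for xi, yi in data
                          if (1 if sign * (xi[f] - t) > 0 else -1) != yi)
                if err < best_err:
                    best_err, best = err, (f, t, sign)
    return best

def predict_stump(stump, x):
    f, t, sign = stump
    return 1 if sign * (x[f] - t) > 0 else -1

def bagged_ensemble(data, n_learners=11, seed=0):
    """Train each hypothesis on a bootstrap sample of the data."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_learners)]

def predict_ensemble(stumps, x):
    """Combine the hypotheses by majority vote."""
    votes = Counter(predict_stump(s, x) for s in stumps)
    return votes.most_common(1)[0][0]
```

Any single stump may fit its bootstrap sample poorly, but the majority vote over an odd number of stumps tends to be more accurate than a typical individual stump, which is the point of the paradigm.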

Historical Background

It is difficult to trace the starting point of the history of ensemble methods, since the basic idea of deploying multiple models has been in use for a long time. It is clear, however, that the wave of research on ensemble methods since the 1990s owes much to two works. The first is an empirical study by Hansen and Salamon in the late 1980s [1], who found that the predictions made by combining a set of neural networks are often more accurate than those of the best single network. The second is a theoretical study from 1990, in which Schapire proved that weak...


Recommended Reading

  1. Hansen LK, Salamon P. Neural network ensembles. IEEE Trans Pattern Anal Mach Intell. 1990;12(10):993–1001.
  2. Schapire RE. The boosting approach to machine learning: an overview. In: Denison DD, Hansen MH, Holmes C, Mallick B, Yu B, editors. Nonlinear estimation and classification. Berlin: Springer; 2003.
  3. Breiman L. Bagging predictors. Mach Learn. 1996;24(2):123–40.
  4. Wolpert DH. Stacked generalization. Neural Netw. 1992;5(2):241–60.
  5. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
  6. Ho TK. The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell. 1998;20(8):832–44.
  7. Strehl A, Ghosh J. Cluster ensembles – a knowledge reuse framework for combining multiple partitionings. J Mach Learn Res. 2002;3:583–617.
  8. Zhou Z-H. Ensemble methods: foundations and algorithms. Boca Raton: CRC Press; 2012.
  9. Dietterich TG. Machine learning research: four current directions. AI Mag. 1997;18(4):97–136.
  10. Bauer E, Kohavi R. An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Mach Learn. 1999;36(1–2):105–39.
  11. Gao W, Zhou Z-H. On the doubt about margin explanation of boosting. Artif Intell. 2013;203:1–18.
  12. Krogh A, Vedelsby J. Neural network ensembles, cross validation, and active learning. In: Tesauro G, Touretzky DS, Leen TK, editors. Advances in neural information processing systems 7. Cambridge, MA: MIT Press; 1995. p. 231–8.
  13. Kuncheva LI, Whitaker CJ. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach Learn. 2003;51(2):181–207.
  14. Opitz D, Maclin R. Popular ensemble methods: an empirical study. J Artif Intell Res. 1999;11:169–98.
  15. Ting KM, Witten IH. Issues in stacked generalization. J Artif Intell Res. 1999;10:271–89.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. National Key Lab for Novel Software Technology, Nanjing University, Nanjing, China

Section editors and affiliations

  • Kyuseok Shim
  1. School of Elec. Eng. and Computer Science, Seoul National Univ., Seoul, Republic of Korea