Synonyms
Classifier combination; Committee-based learning; Multiple classifier system
Definition
An ensemble is a learning paradigm in which multiple learners are trained to solve the same problem. In contrast to ordinary learning approaches, which try to learn one hypothesis from the training data, ensemble methods construct a set of hypotheses and combine them.
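As a minimal, hedged sketch of this idea (the base learners, data, and combination rule here are illustrative assumptions, not part of the definition): a set of decision trees is trained on bootstrap samples of the training data, in the style of Bagging, and their predictions are combined by majority voting.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data, for illustration only.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Construct a set of hypotheses: each tree is trained on a bootstrap
# sample of the training set (the resampling idea behind Bagging).
rng = np.random.default_rng(0)
learners = []
for _ in range(25):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    learners.append(DecisionTreeClassifier().fit(X_train[idx], y_train[idx]))

# Combine the hypotheses by majority voting over the 0/1 predictions.
votes = np.stack([tree.predict(X_test) for tree in learners])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)

single_acc = learners[0].score(X_test, y_test)
ensemble_acc = (ensemble_pred == y_test).mean()
print(f"single tree: {single_acc:.3f}  ensemble: {ensemble_acc:.3f}")

On such data the voted ensemble typically beats any individual tree, which is the practical point of combining hypotheses rather than picking one.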
Historical Background
It is difficult to trace the starting point of the history of ensemble methods, since the basic idea of deploying multiple models has been in use for a long time. However, it is clear that the wave of research on ensemble methods since the 1990s owes much to two works. The first is an empirical study conducted by Hansen and Salamon at the end of the 1980s [1], in which they found that predictions made by a combination of a set of neural networks are often more accurate than predictions made by the best single neural network. The second is a theoretical study conducted in 1990, in which Schapire proved that weak learners can be boosted into strong learners.
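That boosting result can be illustrated with a short, hedged sketch (an assumption-laden stand-in, not Schapire's original construction): scikit-learn's AdaBoostClassifier, which by default boosts depth-1 decision stumps, turns a barely-better-than-chance weak learner into a much stronger one on synthetic data.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data, for illustration only.
X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A weak learner: a single depth-1 decision stump.
stump = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)

# Boosting combines many reweighted stumps into a far stronger learner.
booster = AdaBoostClassifier(n_estimators=100, random_state=1)
booster.fit(X_train, y_train)

print(f"stump: {stump.score(X_test, y_test):.3f}  "
      f"boosted: {booster.score(X_test, y_test):.3f}")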
Recommended Reading
Hansen LK, Salamon P. Neural network ensembles. IEEE Trans Pattern Anal Mach Intell. 1990;12(10):993–1001.
Schapire RE. The boosting approach to machine learning: an overview. In: Denison DD, Hansen MH, Holmes C, Mallick B, Yu B, editors. Nonlinear Estimation and Classification. Berlin: Springer; 2003.
Breiman L. Bagging predictors. Mach Learn. 1996;24(2):123–40.
Wolpert DH. Stacked generalization. Neural Netw. 1992;5(2):241–60.
Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
Ho TK. The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell. 1998;20(8):832–44.
Strehl A, Ghosh J. Cluster ensembles – a knowledge reuse framework for combining multiple partitionings. J Mach Learn Res. 2002;3(3):583–617.
Zhou Z-H. Ensemble Methods: Foundations and Algorithms. Boca Raton: CRC Press; 2012.
Dietterich TG. Machine learning research: four current directions. AI Mag. 1997;18(4):97–136.
Bauer E, Kohavi R. An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Mach Learn. 1999;36(1–2):105–39.
Gao W, Zhou Z-H. On the doubt about margin explanation of boosting. Artif Intell. 2013;203:1–18.
Krogh A. Neural network ensembles, cross validation, and active learning. In: Tesauro G, Touretzky DS, Leen TK, editors. Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press; 1995. p. 231–8.
Kuncheva LI, Whitaker CJ. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach Learn. 2003;51(2):181–207.
Opitz D, Maclin R. Popular ensemble methods: an empirical study. J Artif Intell Res. 1999;11:169–98.
Ting KM, Witten IH. Issues in stacked generalization. J Artif Intell Res. 1999;10:271–89.