
Ensemble

  • Reference work entry in the Encyclopedia of Database Systems


Recommended Reading

  1. Hansen LK, Salamon P. Neural network ensembles. IEEE Trans Pattern Anal Mach Intell. 1990;12(10):993–1001.

  2. Schapire RE. The boosting approach to machine learning: an overview. In: Denison DD, Hansen MH, Holmes C, Mallick B, Yu B, editors. Nonlinear Estimation and Classification. Berlin: Springer; 2003.

  3. Breiman L. Bagging predictors. Mach Learn. 1996;24(2):123–40.

  4. Wolpert DH. Stacked generalization. Neural Netw. 1992;5(2):241–60.

  5. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.

  6. Ho TK. The random subspace method for constructing decision forests. IEEE Trans Pattern Anal Mach Intell. 1998;20(8):832–44.

  7. Strehl A, Ghosh J. Cluster ensembles – a knowledge reuse framework for combining multiple partitionings. J Mach Learn Res. 2002;3(3):583–617.

  8. Zhou Z-H. Ensemble Methods: Foundations and Algorithms. Boca Raton: CRC Press; 2012.

  9. Dietterich TG. Machine learning research: four current directions. AI Mag. 1997;18(4):97–136.

  10. Bauer E, Kohavi R. An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Mach Learn. 1999;36(1–2):105–39.

  11. Gao W, Zhou Z-H. On the doubt about margin explanation of boosting. Artif Intell. 2013;203:1–18.

  12. Krogh A, Vedelsby J. Neural network ensembles, cross validation, and active learning. In: Tesauro G, Touretzky DS, Leen TK, editors. Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press; 1995. p. 231–8.

  13. Kuncheva LI, Whitaker CJ. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach Learn. 2003;51(2):181–207.

  14. Opitz D, Maclin R. Popular ensemble methods: an empirical study. J Artif Intell Res. 1999;11:169–98.

  15. Ting KM, Witten IH. Issues in stacked generalization. J Artif Intell Res. 1999;10:271–89.
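The readings above cover the main ensemble-construction strategies; the simplest to illustrate is bagging (bootstrap aggregating, Breiman, 1996): train each base learner on a bootstrap resample of the data and combine predictions by majority vote. A minimal self-contained sketch, using a hypothetical one-dimensional decision stump as the base learner (any classifier with fit/predict would do):

```python
# Minimal sketch of bagging (Breiman, 1996). The Stump class and the toy
# data below are illustrative assumptions, not part of the entry.
import random
from collections import Counter

class Stump:
    """Threshold classifier on a single 1-D feature (illustrative only)."""
    def fit(self, xs, ys):
        best = None
        for t in sorted(set(xs)):          # try each observed value as threshold
            for sign in (1, -1):           # and both orientations
                acc = sum((sign * (x - t) > 0) == y for x, y in zip(xs, ys))
                if best is None or acc > best[0]:
                    best = (acc, t, sign)
        _, self.t, self.sign = best
        return self

    def predict(self, x):
        return self.sign * (x - self.t) > 0

def bagging(xs, ys, n_models=11, seed=0):
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # bootstrap resample: draw len(xs) indices with replacement
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        models.append(Stump().fit([xs[i] for i in idx], [ys[i] for i in idx]))
    def vote(x):
        # combine the base learners by unweighted majority vote
        return Counter(m.predict(x) for m in models).most_common(1)[0][0]
    return vote

# Toy data: the label is True when x > 5
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [x > 5 for x in xs]
clf = bagging(xs, ys)
```

Because each base learner sees a different resample, the stumps disagree slightly near the decision boundary, and the vote averages this variability away; this variance reduction is the effect Breiman's paper analyzes.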


Author information

Correspondence to Zhi-Hua Zhou.


Copyright information

© 2018 Springer Science+Business Media, LLC, part of Springer Nature

About this entry


Cite this entry

Zhou, ZH. (2018). Ensemble. In: Liu, L., Özsu, M.T. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8265-9_768
