Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu


  • Hwanjo Yu
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_566


Synonyms

Bootstrap estimation; Bootstrap sampling


Definition

The bootstrap is a statistical method for estimating the performance (e.g., accuracy) of classification or regression methods. It is based on the statistical procedure of sampling with replacement. Unlike other estimation methods such as cross-validation, the bootstrap can select the same object or tuple for the training set more than once. That is, each time a tuple is selected, it is equally likely to be selected again and re-added to the training set.
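The sampling scheme above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: `train_and_test` is a hypothetical callable (not part of the original entry) that fits a model on the training tuples and returns its accuracy on the test tuples. Tuples never drawn in a round (the "out-of-bag" tuples) serve as the test set.

```python
import random

def bootstrap_accuracy(data, train_and_test, n_rounds=10, seed=0):
    """Estimate predictive accuracy by bootstrap sampling.

    data           -- list of labeled tuples
    train_and_test -- hypothetical callable: (train, test) -> accuracy
    n_rounds       -- number of bootstrap rounds to average over
    """
    rng = random.Random(seed)
    n = len(data)
    scores = []
    for _ in range(n_rounds):
        # Draw n indices WITH replacement: the same tuple may be
        # selected (and re-added to the training set) more than once.
        drawn = [rng.randrange(n) for _ in range(n)]
        train = [data[i] for i in drawn]
        # Tuples never drawn form the test set; for large n, about
        # 36.8% (= 1/e) of the data is left out on average.
        drawn_set = set(drawn)
        test = [data[i] for i in range(n) if i not in drawn_set]
        if test:
            scores.append(train_and_test(train, test))
    return sum(scores) / len(scores)
```

Averaging over several rounds reduces the variance of the estimate; the commonly cited .632 bootstrap refines this by weighting training-set and out-of-bag accuracy.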

Historical Background

Bootstrap sampling was developed by Bradley Efron in 1979 and was mainly used for estimating statistical parameters such as the mean and standard errors [2]. A meta-classification method based on the bootstrap, called bootstrap aggregating (or bagging), was proposed by Leo Breiman in 1994; it improves classification accuracy by combining the classifiers trained on randomly generated training sets [1].


This section discusses a commonly used...


Recommended Reading

  1. Breiman L. Bagging predictors. Machine Learning; 1996.
  2. Efron B, Tibshirani RJ. An introduction to the bootstrap. Boca Raton: CRC Press; 1994.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. University of Iowa, Iowa City, USA

Section editors and affiliations

  • Kyuseok Shim
  1. School of Elec. Eng. and Computer Science, Seoul National Univ., Seoul, Republic of Korea