Targeted Bootstrap

  • Jeremy Coyle
  • Mark J. van der Laan
Part of the Springer Series in Statistics book series (SSS)


The bootstrap is used to obtain statistical inference (confidence intervals, hypothesis tests) in a wide variety of settings (Efron and Tibshirani 1993; Davison and Hinkley 1997). Bootstrap-based confidence intervals have been shown in some settings to have higher-order accuracy compared to Wald-style intervals based on the normal approximation (Hall 1988, 1992; DiCiccio and Romano 1988). For this reason, the bootstrap has been widely adopted as a method for generating inference in a range of contexts, not all of which have theoretical support. One setting in which the bootstrap, as typically applied, fails to work is the framework of targeted learning. We describe the reasons for this failure in detail and present a solution in the form of a targeted bootstrap, designed to be consistent for the first two moments of the sampling distribution.
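To fix ideas, the following is a minimal sketch of the standard nonparametric percentile bootstrap for a confidence interval, in the style described by Efron and Tibshirani (1993). It is illustrative only and is not the targeted bootstrap proposed in this chapter; the function name and sample data are invented for the example.

```python
import random
import statistics

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples the data with replacement n_boot times, computes the
    statistic on each resample, and returns the empirical
    (alpha/2, 1 - alpha/2) quantiles of the bootstrap distribution.
    """
    rng = random.Random(seed)
    n = len(data)
    boot_stats = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = boot_stats[int((alpha / 2) * n_boot)]
    hi = boot_stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Example: 95% percentile interval for the mean of a small sample.
sample = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]
lo, hi = percentile_bootstrap_ci(sample, statistics.mean)
```

The failure mode discussed in this chapter arises when the estimator plugged in as `stat` is a data-adaptive (machine-learning-based) estimator, whose behavior on bootstrap resamples need not mimic its behavior on fresh samples.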


  1. P.J. Bickel, F. Götze, W.R. van Zwet, Resampling fewer than n observations: gains, losses, and remedies for losses. Stat. Sin. 7(1), 1–31 (1997)
  2. A.C. Davison, D.V. Hinkley, Bootstrap Methods and Their Application. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 1 (Cambridge University Press, Cambridge, 1997)
  3. T.J. DiCiccio, J.P. Romano, A review of bootstrap confidence intervals. J. R. Stat. Soc. Ser. B (1988)
  4. T.J. DiCiccio, J.P. Romano, Nonparametric confidence limits by resampling methods and least favorable families. Int. Stat. Rev./Revue Internationale de Statistique 58(1), 59 (1990)
  5. S. Dudoit, M.J. van der Laan, Asymptotics of cross-validated risk estimation in estimator selection and performance assessment. Stat. Methodol. 2(2), 131–154 (2005)
  6. B. Efron, Better bootstrap confidence intervals. J. Am. Stat. Assoc. 82(397), 171–185 (1987)
  7. B. Efron, R.J. Tibshirani, An Introduction to the Bootstrap (Chapman & Hall, Boca Raton, 1993)
  8. P. Hall, Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 16, 927–953 (1988)
  9. P. Hall, The Bootstrap and Edgeworth Expansion. Springer Series in Statistics (Springer, New York, 1992)
  10. T.J. Hastie, R.J. Tibshirani, J.H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Springer, New York, 2001)
  11. M.J. van der Laan, S. Dudoit, Unified cross-validation methodology for selection among estimators and a general cross-validated adaptive epsilon-net estimator: finite sample oracle inequalities and examples. Technical Report, Division of Biostatistics, University of California, Berkeley (2003)
  12. M.J. van der Laan, J.M. Robins, Unified Methods for Censored Longitudinal Data and Causality (Springer, New York, 2003)
  13. A.W. van der Vaart, S. Dudoit, M.J. van der Laan, Oracle inequalities for multi-fold cross-validation. Stat. Decis. 24(3), 351–371 (2006)

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Division of Biostatistics, University of California, Berkeley, Berkeley, USA
  2. Division of Biostatistics and Department of Statistics, University of California, Berkeley, Berkeley, USA