Abstract
The bootstrap is used to obtain statistical inference (confidence intervals, hypothesis tests) in a wide variety of settings (Efron and Tibshirani 1993; Davison and Hinkley 1997). In some settings, bootstrap-based confidence intervals have been shown to achieve higher-order accuracy than Wald-style intervals based on the normal approximation (Hall 1988, 1992; DiCiccio and Romano 1988). For this reason the bootstrap has been widely adopted as a method for generating inference in a range of contexts, not all of which have theoretical support. One setting in which it fails as typically applied is the framework of targeted learning. We describe the reasons for this failure in detail and present a solution in the form of a targeted bootstrap, designed to be consistent for the first two moments of the sampling distribution.
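For orientation, the ordinary nonparametric bootstrap percentile interval that the abstract alludes to can be sketched as follows. This is a minimal illustration of the standard procedure, not the targeted bootstrap developed in the chapter; all function and variable names are illustrative.

```python
import random
import statistics

def bootstrap_percentile_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile interval for a statistic.

    Resamples the data with replacement n_boot times, recomputes the
    statistic on each resample, and returns the empirical alpha/2 and
    1 - alpha/2 quantiles of the bootstrap replicates.
    """
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy sample (illustrative data).
sample = [4.1, 5.0, 3.8, 6.2, 5.5, 4.7, 5.9, 4.4, 5.1, 4.9]
lo, hi = bootstrap_percentile_ci(sample)
```

As the chapter argues, this procedure is valid only when the resampled estimator tracks the sampling distribution of the original estimator, which fails for the data-adaptive estimators used in targeted learning.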
References
P.J. Bickel, F. Götze, W.R. van Zwet, Resampling fewer than n observations: gains, losses, and remedies for losses. Stat. Sin. 7(1), 1–31 (1997a)
A.C. Davison, D.V. Hinkley, Bootstrap Methods and Their Application. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 1 (Cambridge University Press, Cambridge, 1997)
T.J. DiCiccio, J.P. Romano, A review of bootstrap confidence intervals. J. R. Stat. Soc. Ser. B (1988)
T.J. DiCiccio, J.P. Romano, Nonparametric confidence limits by resampling methods and least favorable families. Int. Stat. Rev./Revue Internationale de Statistique 58(1), 59 (1990)
S. Dudoit, M.J. van der Laan, Asymptotics of cross-validated risk estimation in estimator selection and performance assessment. Stat. Methodol. 2(2), 131–154 (2005)
B. Efron, Better bootstrap confidence intervals. J. Am. Stat. Assoc. 82(397), 171–185 (1987)
B. Efron, R.J. Tibshirani, An Introduction to the Bootstrap (Chapman & Hall, Boca Raton, 1993)
P. Hall, Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 16, 927–953 (1988)
P. Hall, The Bootstrap and Edgeworth Expansion. Springer Series in Statistics (Springer, New York, NY, 1992)
T.J. Hastie, R.J. Tibshirani, J.H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Springer, New York, 2001)
M.J. van der Laan, S. Dudoit, Unified cross-validation methodology for selection among estimators and a general cross-validated adaptive epsilon-net estimator: finite sample oracle inequalities and examples. Technical Report, Division of Biostatistics, University of California, Berkeley (2003)
M.J. van der Laan, J.M. Robins, Unified Methods for Censored Longitudinal Data and Causality (Springer, New York, 2003)
A.W. van der Vaart, S. Dudoit, M.J. van der Laan, Oracle inequalities for multi-fold cross-validation. Stat. Decis. 24(3), 351–371 (2006)
© 2018 Springer International Publishing AG
Cite this chapter
Coyle, J., van der Laan, M.J. (2018). Targeted Bootstrap. In: Targeted Learning in Data Science. Springer Series in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-319-65304-4_28
Print ISBN: 978-3-319-65303-7
Online ISBN: 978-3-319-65304-4