Encyclopedia of Operations Research and Management Science

2001 Edition
| Editors: Saul I. Gass, Carl M. Harris

BOOTSTRAPPING: RESAMPLING METHODOLOGY

  • Linda Weiser Friedman
  • Hershey H. Friedman
Reference work entry
DOI: https://doi.org/10.1007/1-4020-0611-X_84

Researchers frequently encounter situations in which parametric statistical techniques are less than ideal. The t-statistic, for example, assumes that the data were sampled from a normal distribution, yet much real-world data follow distributions that are far from normal and may be quite skewed. Suppose a researcher is studying data known to follow an exponential distribution: it would take an extremely large sample, and a great deal of manipulation (e.g., averages of averages), before the central limit theorem could reasonably be invoked. In other cases there is no parametric test at all for the measure of interest, because its sampling distribution is unknown and no tractable analytic formula is available for estimating it; the difference between two medians is one such example (Mooney and Duval, 1993, p. 8).
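For concreteness, the basic resampling idea for a case like the difference between two medians can be sketched as follows: draw repeated samples, with replacement, from the observed data, recompute the statistic on each resample, and let the spread of the replicated values stand in for the unknown sampling distribution. The short Python sketch below illustrates this under assumed inputs; the data, sample sizes, replication count, and the function name bootstrap_median_difference are illustrative choices, not prescriptions from the literature cited here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: two small samples from skewed (exponential) populations,
    # the kind of setting described in the paragraph above.
    x = rng.exponential(scale=2.0, size=25)
    y = rng.exponential(scale=3.0, size=30)

    def bootstrap_median_difference(x, y, n_boot=5000, rng=rng):
        """Resample x and y independently (with replacement) and record
        median(x*) - median(y*) for each of n_boot replications."""
        diffs = np.empty(n_boot)
        for b in range(n_boot):
            xb = rng.choice(x, size=x.size, replace=True)
            yb = rng.choice(y, size=y.size, replace=True)
            diffs[b] = np.median(xb) - np.median(yb)
        return diffs

    diffs = bootstrap_median_difference(x, y)

    # The empirical distribution of the replicated statistic yields a standard
    # error and a simple percentile interval with no normality assumption.
    print("observed difference :", np.median(x) - np.median(y))
    print("bootstrap std. error:", diffs.std(ddof=1))
    print("95% percentile CI   :", np.percentile(diffs, [2.5, 97.5]))

The percentile interval shown is only the simplest of several bootstrap intervals; it is used here because it requires nothing beyond the replicated statistics themselves.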

There are a number of nonparametric statistical techniques that do not rely on distributional assumptions and often may be...


References

  1. Cho, K. (1997). “Performance assessment through bootstrap,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 19, 1185–1198.
  2. Efron, B. (1981). “Nonparametric estimates of standard error: The jackknife, the bootstrap and other methods,” Biometrika, 68, 589–599.
  3. Efron, B. (1982). The Jackknife, the Bootstrap and Other Resampling Plans. SIAM, Philadelphia.
  4. Efron, B. and Tibshirani, R. (1991). “Statistical Data Analysis in the Computer Age,” Science, 253, 390–395.
  5. Fan, X. and Jacoby, W.G. (1995). “BOOTSREG: An SAS matrix language program for bootstrapping linear regression models,” Educational and Psychological Measurement, 55, 764–768.
  6. Friedman, L.W. and Friedman, H.H. (1995). “Analyzing simulation output using the bootstrap method,” Simulation, 64, 95–100.
  7. Jeske, D.R. (1997). “Alternative prediction intervals for Pareto proportions,” J. of Quality Technology, 29, 317–326.
  8. Jochen, V.A. (1997). “Using the bootstrap method to obtain probabilistic reserves estimates from production data,” Petroleum Engineer International, 70, 55+.
  9. Kim, Y.B., Willemain, T.R., Haddock, J., and Runger, G.C. (1993). “The threshold bootstrap: A new approach to simulation output analysis,” Proceedings of the 1993 Winter Simulation Conference, 498–502.
  10. LeBaron, B. (1998). “A bootstrap evaluation of the effect of data splitting on financial time series,” IEEE Trans. on Neural Networks, 9, 213–220.
  11. Mooney, C.Z. and Duval, R.D. (1993). Bootstrapping: A Nonparametric Approach to Statistical Inference. Sage Publications, Newbury Park, California.
  12. Seppala, T. (1995). “Statistical process control via the subgroup Bootstrap,” J. of Quality Technology, 27, 139–153.
  13. Shimshoni, Y. (1998). “Classification of seismic signals by integrating ensembles of neural networks,” IEEE Trans. on Signal Processing, 46, 1194–1201.
  14. Simon, J.L. (1995). Resampling Stats User's Guide. Resampling Stats, Inc., Arlington, Virginia.
  15. Willemain, T.R. (1994). “Bootstrap on a shoestring: Resampling using spreadsheets,” The American Statistician, 48, 40–42.

Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Linda Weiser Friedman (1)
  • Hershey H. Friedman (2)
  1. Baruch College, City University of New York, New York, USA
  2. Brooklyn College, City University of New York, New York, USA