Statistical Power

  • Chester L. Britt
  • David Weisburd


Researchers in criminology and criminal justice have placed far more emphasis on the statistical significance of a study than on its statistical power. The lack of attention given to statistical power by criminologists likely reflects a lack of familiarity with the techniques used to assess it. The purpose of this chapter is therefore to present the key components of an assessment of statistical power, so that criminologists have a basic understanding of how to estimate the statistical power of a research design or the sample size necessary to achieve a given level of statistical power. Our discussion presents the basic conceptual and statistical background, with an emphasis on the three key components of statistical power: the significance level, the effect size, and the sample size. We illustrate the computation of power estimates, as well as estimates of sample size, for some of the most common (and basic) types of statistical tests researchers will confront. We conclude by highlighting recent developments in assessing statistical power for more complex multivariate models and future directions for estimating statistical power in criminology and criminal justice.
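
As a concrete illustration (not taken from the chapter itself), the Python sketch below shows one way to compute the power of a two-sided, two-sample t-test from its three components and to search for the per-group sample size needed to reach a target power. It assumes the scipy library is available; the function names and the inputs (a standardized effect size d = 0.5, alpha = .05, 80% power) are illustrative choices, not values prescribed by the authors.

```python
# Minimal sketch: power and sample size for a two-sided, two-sample t-test
# with equal group sizes, computed from the noncentral t distribution.
from scipy.stats import t, nct


def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test given Cohen's d and n per group."""
    df = 2 * n_per_group - 2                 # degrees of freedom
    ncp = d * (n_per_group / 2) ** 0.5       # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)        # two-sided critical value
    # Probability that |T| exceeds the critical value when the effect is real
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)


def sample_size_for_power(d, target_power=0.80, alpha=0.05):
    """Smallest per-group n that achieves the target power (simple search)."""
    n = 2
    while power_two_sample_t(d, n, alpha) < target_power:
        n += 1
    return n


if __name__ == "__main__":
    # A "medium" effect (d = 0.5) with 64 cases per group gives roughly
    # 80% power at alpha = .05, matching standard power tables.
    print(round(power_two_sample_t(0.5, 64), 3))
    print(sample_size_for_power(0.5, target_power=0.80))
```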


Keywords: Null Hypothesis · Criminal Justice · Sampling Distribution · Research Hypothesis · Standardized Effect Size



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Chester L. Britt (1)
  • David Weisburd (2, 3)

  1. College of Criminal Justice, Northeastern University, Boston, USA
  2. Administration of Justice, George Mason University, Manassas, USA
  3. Institute of Criminology, Hebrew University of Jerusalem, Jerusalem, Israel