Power Calculations for Statistical Design

  • Larry R. Muenz
Part of The Springer Series in Behavioral Psychophysiology and Medicine book series (SSBP)


Power is the probability that a statistical analysis of experimental data will detect a true effect. Experiments have high or low power because of decisions made at the planning stage. Although a carefully chosen method of analysis is more likely to find interesting results than a routine or thoughtlessly chosen one, nothing can be done to increase the power of a particular analysis once the data have been collected. Sufficiently high power—there is no universal definition of “sufficient”—gives the experimenter good reason to hope that, if an experimental effect exists, the analysis will find it. Conversely, low power makes negative results impossible to interpret: was no effect found because none exists, or because the experiment was unlikely to find one? Because this question cannot be answered, statisticians hold that low-power experiments should not be conducted.
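The planning-stage calculation described above can be sketched numerically. The snippet below is an illustrative example, not taken from the chapter: it approximates the power of a two-sided, two-sample z-test for a difference in population means, assuming a known common standard deviation. The function name `two_sample_power` and the choice of a z- rather than t-approximation are the author of this sketch's assumptions.

```python
from statistics import NormalDist


def two_sample_power(delta: float, sigma: float, n_per_group: int,
                     alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample z-test.

    delta: true difference between the two population means
    sigma: common within-group standard deviation (assumed known)
    n_per_group: planned number of observations in each group
    alpha: two-sided significance level
    """
    z = NormalDist()
    # Noncentrality parameter: expected value of the test statistic
    # when the true difference is delta.
    lam = delta / (sigma * (2.0 / n_per_group) ** 0.5)
    # Critical value for the two-sided test at level alpha.
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)
    # Probability of rejecting in either tail under the alternative.
    return z.cdf(lam - z_crit) + z.cdf(-lam - z_crit)


# Example: a half-standard-deviation effect, 64 subjects per group.
print(round(two_sample_power(delta=0.5, sigma=1.0, n_per_group=64), 3))
```

With these inputs the power is roughly 0.81, which is why "n of about 64 per group for a medium effect" is a familiar rule of thumb; halving the sample size at the planning stage would leave the study underpowered in exactly the sense the paragraph warns against.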







Copyright information

© Springer Science+Business Media New York 1989

Authors and Affiliations

  • Larry R. Muenz
    1. SRA Technologies, Inc., Alexandria, USA
