Reporting Effect Sizes: The New Star System

  • Bruno Lecoutre
  • Jacques Poitevineau

Part of the SpringerBriefs in Statistics book series (BRIEFSSTATIST)


This chapter demonstrates the shortcomings of the widespread practice of simply reporting effect size (ES) indicators alongside NHST, without interval estimates. It also questions the consequences of restricting ES to standardized measures, as is common in psychology and related fields.
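The contrast the chapter draws — between a simple (unstandardized) effect size reported with an interval estimate and a bare standardized indicator — can be illustrated with a short sketch. The data below are made up for illustration, and the interval uses a normal approximation; this is not the authors' own procedure, only a minimal example of the distinction:

```python
from statistics import mean, stdev

# Hypothetical data: scores for two independent groups (illustrative only).
group_a = [12.1, 9.8, 11.5, 10.9, 12.4, 10.2, 11.8, 10.5]
group_b = [9.4, 8.7, 10.1, 9.0, 9.9, 8.5, 10.3, 9.2]

n_a, n_b = len(group_a), len(group_b)
diff = mean(group_a) - mean(group_b)  # simple (unstandardized) effect size

# Pooled standard deviation, used for the standardized effect size (Cohen's d).
s_pooled = (((n_a - 1) * stdev(group_a) ** 2 +
             (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)) ** 0.5
d = diff / s_pooled

# 95% interval for the raw mean difference (normal approximation; a Student t
# quantile with n_a + n_b - 2 degrees of freedom would be more appropriate
# for samples this small).
se = s_pooled * (1 / n_a + 1 / n_b) ** 0.5
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"mean difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"Cohen's d = {d:.2f}")
```

The raw mean difference keeps the original measurement units and, with its interval, conveys both magnitude and uncertainty; a standardized value such as d reported alone conveys neither the scale nor the precision of the estimate.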


Keywords: Effect size indicators · Phi coefficient · Sample and population effect sizes · Shortcomings of standardized effect sizes · Simple effect sizes · The new star system



Copyright information

© The Author(s) 2014

Authors and Affiliations

  1. ERIS, Laboratoire de Mathématiques Raphaël Salem, UMR 6085, CNRS, Université de Rouen, Saint-Étienne-du-Rouvray, France
  2. ERIS, IJLRA, UMR 7190, CNRS, Université Pierre et Marie Curie, Paris, France
