Abstract

The research literature of behavioral medicine, like that of many other areas of scientific research, is growing dramatically. Many important research areas now have several, or even dozens, of studies or clinical trials addressing similar questions. These circumstances pose new problems for the appraisal, evaluation, and use of the collective body of evidence on a given question. Quantitative procedures for combining research results have therefore become important in biostatistics as well as in the social, behavioral, and physical sciences. Such procedures can make the synthesis of research findings more rigorous and potentially more valid, while providing feasible means for addressing ever larger and more complex collections of findings.
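To make the phrase "quantitative procedures for combining research results" concrete, the sketch below shows one of the simplest such procedures: a fixed-effect, inverse-variance-weighted average of study effect sizes together with a homogeneity (omnibus) statistic, in the spirit of standard meta-analytic methods such as Hedges and Olkin (1985). The effect sizes and variances are hypothetical, and the code illustrates only the general idea, not the specific methods developed in this chapter.

    # Minimal illustrative sketch: fixed-effect, inverse-variance-weighted
    # combination of study effect sizes. All numbers are hypothetical.
    import math

    effects = [0.30, 0.45, 0.12, 0.25]     # hypothetical per-study effect-size estimates
    variances = [0.04, 0.09, 0.02, 0.05]   # hypothetical sampling variances

    weights = [1.0 / v for v in variances]  # weight each study by its precision
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))      # standard error of the pooled estimate

    # Homogeneity (omnibus) statistic Q: approximately chi-square with k - 1
    # degrees of freedom if all studies estimate a common effect.
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))

    print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}, Q = {q:.2f}")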

Keywords

Behavioral Medicine; Effect Magnitude; Omnibus Test; Statistical Concern; Construct Definition



Copyright information

© Springer Science+Business Media New York 1989

Authors and Affiliations

  • Larry V. Hedges
  1. Department of Education, University of Chicago, Chicago, USA
