Meta-analysis of Related Research
Abstract
The research literature of behavioral medicine, along with that of many other areas of scientific research, is experiencing dramatic growth. Many important research areas now have several or even dozens of research studies or clinical trials that address similar questions. These circumstances pose new problems in the appraisal, evaluation, and use of the collective body of research evidence on important questions. The use of quantitative procedures for combining research results has become important as a response to increasingly complex research literatures in biostatistics as well as in the social, behavioral, and physical sciences. Such procedures can make the synthesis of research findings more rigorous and potentially more valid, while providing feasible means for addressing ever larger and more complex collections of research findings.
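To make the idea of quantitatively combining results concrete, the following is a minimal illustrative sketch, not drawn from the chapter itself, of one common procedure: fixed-effect inverse-variance pooling of study effect estimates, with Cochran's Q as an omnibus test of homogeneity. All effect sizes and standard errors in the sketch are hypothetical.

```python
# Illustrative sketch only: fixed-effect meta-analysis via inverse-variance
# weighting. The per-study numbers below are hypothetical.
import math

# Hypothetical effect estimates (e.g., standardized mean differences)
# and their standard errors from four studies.
effects = [0.30, 0.45, 0.12, 0.60]
std_errors = [0.15, 0.20, 0.10, 0.25]

# Inverse-variance weights: more precise studies receive more weight.
weights = [1.0 / se ** 2 for se in std_errors]

# Pooled (weighted mean) effect and its standard error.
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Cochran's Q: an omnibus statistic for heterogeneity across studies,
# conventionally referred to a chi-square distribution with k - 1 df.
q_stat = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f}), Q = {q_stat:.2f}")
```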
Keywords
Behavioral Medicine · Effect Magnitude · Omnibus Test · Statistical Concern · Construct Definition