Treatment Effect Norms

  • William H. Yeaton

Abstract

The wide diversity of problems in educational settings poses a formidable challenge to behavior therapists. Indeed, the multitude of behavior change strategies in the second part of this volume probably reflects the degree of creativity required to confront such a diverse set of problems. At some point, however, the clinician and the researcher may wish to look beyond these multiple and seemingly separate problems and strategies to gain a more unified perspective on the field of behavior therapy. One fundamental purpose of this chapter is to create such a unified perspective by discussing the notion of treatment effect norms. Although the concept of a norm necessarily implies a more catholic view than would otherwise emerge from one's own research or experience, it also offers an opportunity to integrate and to focus while still casting a broad net.

Keywords

Placebo, Cholesterol, Obesity, Aspirin, Stratification

Copyright information

© Plenum Press, New York 1988

Authors and Affiliations

  • William H. Yeaton
    1. ISR/SRC, University of Michigan, Ann Arbor, USA