Reporting Outcome-Based Evaluation Results

  • Robert L. Schalock

Overview

Credibility and communication are the focus of this chapter. Just as contextual variables must be addressed in any program evaluation, program evaluators must also deal frequently with two realities: (1) the nature of their outcome-based analysis results, which are frequently equivocal, and (2) a skeptical audience. It would be nice if the analysis were always unequivocal and the audience always friendly, but that is not the general case. Thus, evaluators must work hard to establish and maintain their credibility, and to communicate clearly the results of their outcome-based evaluations.

The importance of these two realities is reflected in a recent medication study that I was asked to conduct by a mental health program administrator who wanted to know the effects of reducing the psychotropic medication levels used with clientele. The study found that medication reduction was not associated with significant behavioral changes, increased use of restraints, or increased injuries to staff. But a skeptical audience came into play when the results were presented to the nursing and ward personnel, who were reasonably certain beforehand that medication reduction had deleterious effects. Thus, when it came time to interpret the findings, many staff were convinced that the study’s results were equivocal at best and wrong at worst.

Keywords

Contextual Variable · Medication Reduction · Attrition Analysis · Community Mental Health Journal · Significant Behavioral Change


Additional Readings

  1. Greene, J. C. (1988). Communication of results and utilization in participatory program evaluation. Evaluation and Program Planning, 11, 341–351.
  2. Hendricks, M., & Handley, E. A. (1990). Improving the recommendations from evaluation studies. Evaluation and Program Planning, 13, 109–117.
  3. Lester, J. P., & Wilds, L. J. (1990). The utilization of public policy analysis: A conceptual framework. Evaluation and Program Planning, 13, 313–319.
  4. Moskowitz, J. M. (1993). Why reports of outcome evaluation are often biased or uninterpretable. Evaluation and Program Planning, 16, 1–9.
  5. Palumbo, D. J. (Ed.). (1987). The politics of program evaluation. Beverly Hills, CA: Sage.
  6. Rapp, C. A., Gowdy, E., Sullivan, W. P., & Winterstein, R. (1988). Client outcome reporting: The status method. Community Mental Health Journal, 24(2), 118–133.

Copyright information

© Springer Science+Business Media New York 1995

Authors and Affiliations

  • Robert L. Schalock
    1. Hastings College, Hastings, USA
