Factors Influencing the Future of Outcome-Based Evaluation

  • Robert L. Schalock


The future is now. The trends that we discussed in Chapter 1, including the quality revolution, consumer empowerment, accountability defined on the basis of outcomes, the support paradigm, and the pragmatic evaluation paradigm, will continue to challenge educational and social programs and those of us involved in outcome-based evaluation. A final truism: educational and social programs in the future will be fundamentally different from what they are now. The parameters of what they will look like are well summarized in the following principles, described by Osborne and Gaebler (1993), around which entrepreneurial public organizations are being built:
  • Steer more than row.

  • Empower communities rather than simply deliver services.

  • Encourage competition rather than monopoly.

  • Be driven by a mission, not rules.

  • Fund outcomes rather than inputs.

  • Meet the needs of the customer, not the bureaucracy.

  • Concentrate on earning, not just spending.

  • Invest in prevention rather than cure.

  • Decentralize authority.

  • Solve problems by leveraging the marketplace, rather than simply creating public programs.

These are powerful principles, and many of us already see them influencing educational and social programs. I am convinced that they will result in profound changes in service provision, in funding streams and patterns, and in the locus of decision making. They will also, obviously, have a profound impact on outcome-based evaluation.

Additionally, my crystal ball forecasts other trends that will have a profound impact on program evaluation. Six of these trends form the content of this chapter: a noncategorical approach to services and supports, accountability defined on the basis of outcomes, multiple evaluation designs, service-provider networks, consumer-oriented evaluation, and the linking of program evaluation and forecasting.



Copyright information

© Springer Science+Business Media New York 1995

Authors and Affiliations

  • Robert L. Schalock, Hastings College, Hastings, USA