Overview
This chapter assumes that you are an OBE producer and want to know where to begin in evaluating your program and how to proceed in a logical way. Most of my program administrator friends either want or need to evaluate their programs, but are often lost as to what this actually means. In this regard, there are two questions that you will want to ask yourself. First, “For what purpose will I use the outcome evaluation data?” Answering this question requires that you be proactive and know where you are going with your evaluation. As we saw in Table 1.2, the three primary purposes of OBE are effectiveness, impact, and benefit-cost analysis.
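To make the third of these purposes concrete, here is a minimal sketch in Python of the basic arithmetic involved: comparing a program’s total benefits to its total costs. The figures and function name are invented for illustration only; they are not drawn from the book’s own analyses.

```python
# Hypothetical illustration: a simple benefit-cost summary for a program.
# All figures and names are invented for this sketch.

def benefit_cost_summary(total_benefits: float, total_costs: float) -> dict:
    """Return the net benefit and benefit-cost ratio for a program."""
    return {
        "net_benefit": total_benefits - total_costs,         # benefits minus costs
        "benefit_cost_ratio": total_benefits / total_costs,  # > 1.0 favors the program
    }

# Example: a program costing $250,000 per year that produces an estimated
# $310,000 in participant earnings gains and reduced service use.
print(benefit_cost_summary(total_benefits=310_000, total_costs=250_000))
# {'net_benefit': 60000, 'benefit_cost_ratio': 1.24}
```

A ratio above 1.0 indicates that estimated benefits exceed costs; answering whether a program is cost-beneficial presupposes that such cost and outcome data are actually being collected, which is the point of the second question below.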
Second, “What data will I need for the intended use?” Answering this question requires logical thinking. Here the tendency is to ask questions that the education or social program and its data system are unable to answer. For example, many program administrators want to know whether their program is better than an alternative or is cost-beneficial, without the capability to form comparison conditions or the cost and outcome data required to answer the question. If, on the other hand, you want to use the data on an ongoing basis for reporting outcomes, longitudinal evaluations, program change, or policy evaluations, then you will need to be sure that your data management system can support such efforts. And that is one of the primary purposes of this chapter: to familiarize you with the relationship between outcome-based evaluation data use (Question 1) and data requirements (Question 2).
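As a rough sketch of what “capability for such efforts” implies, consider the kind of record a data management system would need in order to track person-referenced outcomes over time. The field names below are assumptions for illustration, not the book’s prescribed schema.

```python
# A minimal sketch of an outcome record supporting ongoing reporting and
# longitudinal evaluation. Field names are illustrative assumptions only.

from dataclasses import dataclass
from datetime import date

@dataclass
class OutcomeRecord:
    person_id: str        # stable identifier, so outcomes can be tracked over time
    service: str          # service received, tied to the program's mission and goals
    outcome_measure: str  # valued, person-referenced outcome (e.g., employment status)
    value: float          # measured value at this point in time
    measured_on: date     # measurement date, enabling longitudinal comparison

# Repeated measurements for the same person_id are what make longitudinal
# and program-change analyses possible.
records = [
    OutcomeRecord("P-001", "job coaching", "hours employed per week", 0.0, date(1994, 1, 15)),
    OutcomeRecord("P-001", "job coaching", "hours employed per week", 22.5, date(1994, 7, 15)),
]
```

The design point is that each outcome is tied both to a person and to a date: without those two fields, the data can answer “what happened?” but not “what changed, and for whom?”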
The chapter begins with a brief overview of what I propose as “OBE guiding principles.” The chapter is then organized around the OBE model presented in Figure 1.2, with its major components of mission/goals, services, person-referenced outcomes, formative feedback, and uses of OBE data. These model components translate into four action steps that compose the major sections of the chapter: (1) develop the program’s mission and goals; (2) provide services that are consistent with the program’s mission and goals; (3) select valued, person-referenced outcomes; and (4) establish a data management system.