Education and social programs do not operate in a vacuum. The context within which these programs operate can best be described by a number of metaphors. At one level, a program's context is like an umbrella: It shields the program from unwanted intrusions. It is also like a personality assessment: It explains many of the program's characteristics, including its development and dynamics. And a program's context is like a balloon: It surrounds the program and shapes its mission, processes, and outcomes. Our task in this chapter is to analyze a program's context and to understand the most significant contextual variables that influence outcome-based evaluation efforts.
A basic question that you might be asking at this point is "How much contextual information is really required?" Indeed, entire books and monographs are devoted to this topic. A general guideline that I use is reflected in Harold Hill's statement in The Music Man: One needs to "know the territory." The extent of knowing depends on one's role and current level of understanding. For example, if the program evaluator is external to the program being evaluated (which is not recommended), then it is crucial to spend time in the program and understand its culture, clientele, and organizational structure and operation before undertaking any analysis efforts. In contrast, if the analyst is part of the program (the preferred way), then he or she should already be familiar with most of the contextual variables discussed in this chapter. Thus, how much contextual information do you really need?
Guiding Principle 19: Be able to describe the program and its contextual variables in enough detail so that it can be replicated.
This chapter is organized into two major sections. The first summarizes a number of key contextual variables that I have found to affect program processes and products. The second presents two levels of contextual analysis: descriptive and inferential. Throughout the chapter, I stress the critical importance of contextual analysis at both the planning and reporting stages of an outcome-based evaluation. Critical guidelines for both stages include working with important stakeholders to:
Frame the questions on which the analysis is based.
Determine the organization’s capability regarding OBE analysis.
Agree on the use of the analysis results.
Select the data sets that will be used in the analysis.
Collect and analyze the data.
Describe how best to report the analysis results.
Determine how best to utilize the data for data-based management and program change.
© Springer Science+Business Media New York 1995