
The CIPP Model for Program Evaluation

Chapter
Part of the Evaluation in Education and Human Services book series (EEHS, volume 6)

Abstract

This chapter is a review and update of the so-called CIPP Model for evaluation. That model (Stufflebeam, 1966) was developed in the late 1960s as one alternative to the views about evaluation that were most prevalent at that time: those oriented to objectives, testing, and experimental design. It emerged with other new conceptualizations, especially those developed by Scriven (1966) and Stake (1967). (For a discussion of these historical developments, see Chapter 1 of this book.) The CIPP approach was applied in many institutions; for example, the Southwest Regional Educational Laboratory in Austin, Texas; the National Center for Vocational and Technical Education; the U.S. Office of Education; and the school districts in Columbus, Toledo, and Cincinnati, Ohio; Dallas, Fort Worth, Houston, and Austin, Texas; and Saginaw, Detroit, and Lansing, Michigan. It was the subject of research and development by Adams (1971), Findlay (1979), Nevo (1974), Reinhard (1972), Root (1971), Webster (1975), and others. It was the central topic of the International Conference on the Evaluation of Physical Education held in Jyväskylä, Finland, in 1976, and it served as the advance organizer for grouping the evaluations presented and discussed during that week-long conference. It was also the central topic of the Eleventh National Phi Delta Kappa Symposium on Educational Research, and, throughout the 1970s, it was referenced in many conferences and publications. It was most fully explicated in the Phi Delta Kappa book, Educational Evaluation and Decision Making (Stufflebeam et al., 1971), and most fully implemented in the Dallas Independent School District. Its conceptual and operational forms have evolved in response to critiques, applications, research, and parallel developments, and it continues to be referenced and applied in education and other fields.

Keywords

Product Evaluation, School District, Input Evaluation, Project Staff, American Educational Research Association

References

  1. Adams, James A. “A Study of the Status, Scope, and Nature of Educational Evaluation in Michigan’s Public K-12 School Districts.” Unpublished doctoral dissertation, Ohio State University, 1971.
  2. Brickell, Henry M. Needed: Instruments as Good as Our Eyes. Occasional Paper Series, no. 7. Kalamazoo, Michigan: Western Michigan University Evaluation Center, July 1976.
  3. Cronbach, Lee J., and Associates. Toward Reform of Program Evaluation. San Francisco: Jossey-Bass Publishers, 1980.
  4. Findlay, Donald. “Working Paper for Planning an Evaluation System.” Unpublished, The Center for Vocational and Technical Education, Ohio State University, 1979.
  5. Guba, Egon G. “The Failure of Educational Evaluation.” Educational Technology, 9 (1969): 29–38.
  6. Nevo, David. “Evaluation Priorities of Students, Teachers, and Principals.” Unpublished doctoral dissertation, Ohio State University, 1974.
  7. Patton, Michael Quinn. Utilization-Focused Evaluation. Beverly Hills: Sage Publications, 1978.
  8. Reinhard, Diane L. “Methodology Development for Input Evaluation Using Advocate and Design Teams.” Unpublished doctoral dissertation, Ohio State University, 1972.
  9. Root, Darrell. “The Evaluation Training Needs of Superintendents of Schools.” Doctoral dissertation, Ohio State University, 1971.
  10. Sanders, James R., and Sachse, T.P. “Applied Performance Testing in the Classroom.” Journal of Research and Development in Education, 10 (Spring 1977): 92–104.
  11. Scriven, Michael. “The Methodology of Evaluation.” Publication no. 110. Lafayette, Indiana: The Social Science Education Consortium, Purdue University, 1966.
  12. Scriven, Michael. “Critique of the PDK Book, Educational Evaluation and Decision Making.” Presentation at the annual meeting of the American Educational Research Association, New York City, 1970.
  13. Stake, Robert. “The Countenance of Educational Evaluation.” Teachers College Record, 68, no. 7 (April 1967).
  14. Stufflebeam, Daniel L. “Evaluation as Enlightenment for Decision Making.” In Walcott H. Beatty (ed.), Improving Educational Assessment and an Inventory of Measures of Affective Behavior. Washington, D.C.: Association for Supervision and Curriculum Development, 1969.
  15. Stufflebeam, Daniel L. “The Relevance of the CIPP Evaluation Model for Educational Accountability.” Journal of Research and Development in Education (Fall 1971).
  16. Tyler, R.W. “General Statement on Evaluation.” Journal of Educational Research, 35 (1942): 492–501.
  17. Webster, W.J. “The Organization and Functions of Research and Evaluation in Large Urban School Districts.” Paper presented at the annual meeting of the American Educational Research Association, Washington, D.C., March 1975.
  18. Wolf, R.L. “The Application of Select Legal Concepts to Educational Evaluation.” Unpublished doctoral dissertation, University of Illinois, 1974.

Copyright information

© Kluwer-Nijhoff Publishing 1983
