Abstract

Examining the assumptions that hold a program theory together is a vital part of evaluating program outcomes. Scrutinizing implicit or explicit program assumptions helps explain program results, both intended and unintended. This chapter outlines evaluation approaches for testing program assumptions. The best place to start integrating assumptions into an evaluation is the conceptualization stage, when evaluation questions are being formulated, not the data collection or methodology stage. Tools serve methods, methods serve questions, and questions should serve the evaluation's objectives and purpose. Ideally, well-framed questions lead to methods, tools, and data that produce highly useful answers and solutions; examining assumptions is, without doubt, a necessary element in that process.
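To make the idea of integrating assumptions at the conceptualization stage concrete, the sketch below is a minimal, purely illustrative example (not taken from the chapter; all names, fields, and data are hypothetical). It shows one way an evaluator might record each link in a program theory together with the assumption behind it and the evaluation question formulated to examine that link, before any methods or tools are chosen.

```python
# Illustrative sketch only: hypothetical data structures for pairing program
# theory links, their assumptions, and the evaluation questions that test them.
from dataclasses import dataclass, field

@dataclass
class Assumption:
    statement: str               # the causal link being taken for granted
    explicit: bool               # was it stated by program designers?
    evaluation_question: str     # question formulated to examine the link

@dataclass
class ProgramTheoryStep:
    name: str                    # one link in the program theory
    assumptions: list[Assumption] = field(default_factory=list)

theory = [
    ProgramTheoryStep(
        name="training delivered -> participants gain skills",
        assumptions=[
            Assumption(
                statement="participants attend enough sessions to learn",
                explicit=False,
                evaluation_question="What attendance levels were achieved, "
                                    "and did skill gains vary with attendance?",
            )
        ],
    ),
]

# At the conceptualization stage, the evaluation design simply gathers the
# questions attached to each assumption; methods and tools come later.
questions = [a.evaluation_question for step in theory for a in step.assumptions]
for q in questions:
    print(q)
```

The design choice here mirrors the abstract's ordering: questions are derived from the program theory's assumptions first, and only then do they drive the choice of methods, tools, and data.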




Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Cite this chapter

Nkwake, A. M. (2013). Evaluating Assumptions. In: Working with Assumptions in International Development Program Evaluation. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-4797-9_11
