Abstract

Evaluation is a form of applied social science research that uses a set of skills and tools to determine the success of interventions. Five constituents of evaluation practice—behavioral, competence, utilization, industrial, and methodological—are identified based on the common attributes of professions. Methodology includes the application of methods, procedures, and tools in research. Competence refers to the capacity of evaluators (micro-level), organizations (meso-level), and society (macro-level). The behavioral constituent of evaluation practice concerns appropriate conduct, ethical guidelines, and professional culture. The industrial (supply) constituent concerns the exercise of professional authority to provide services to further client interests. The utilization (demand) constituent concerns the use of research results, including a demand for evidence to guide policy. Evaluators are preoccupied more with methodology than with the other evaluation constituents.

References

  • American Evaluation Association (AEA). (2003). Evaluation 2003, presidential welcome. http://www.eval.org/eval2003/aea03.program3.pdf. Accessed 13 Dec 2012.

  • Baizerman, M., Compton, D. W., & Stockdill, S. H. (2002). Editors’ notes: The art, craft, and science of evaluation capacity building. New Directions for Evaluation, 93, 1–6.

  • Bamberger, M. (2006). Enhancing the utilization of evaluations for evidence-based policy making. In M. Segone (Ed.), Bridging the gap: The role of monitoring and evaluation in evidence-based policy making (pp. 120–142). New York: UNICEF.

  • Barbour, R. (2001). Checklists for improving rigor in qualitative research: A case of the tail wagging the dog? British Medical Journal, 322, 1115–1117.

  • Campbell, D. T. (1991). Methods for the experimenting society. American Journal of Evaluation, 12(3), 223–260.

  • Chapple, A., & Rogers, A. (1998). Explicit guidelines for qualitative research: A step in the right direction, a defence of the ‘soft’ option, or a form of sociological imperialism? Family Practice, 15, 556–561.

  • Chelimsky, E. (2012). Valuing, evaluation methods, and the politicization of the evaluation process. In G. Julnes (Ed.), Promoting valuation in the public interest: Informing policies for judging value in evaluation. New Directions for Evaluation, 133, 77–83.

  • Christie, C. A., & Alkin, M. C. (2008). Evaluation theory tree re-examined. Studies in Educational Evaluation, 34, 131–135.

  • Conner, R. F., & Dickman, F. B. (1979). Professionalization of evaluative research: Conflict as a sign of health. Evaluation and Program Planning, 2(2), 103–109.

  • Coryn, C. L. S., & Hattie, J. A. (2006). The transdisciplinary model of evaluation. Journal of MultiDisciplinary Evaluation, 3(4), 107–114.

  • Creswell, J. W., Hanson, W. E., Plano Clark, V. L., & Morales, A. (2007). Qualitative research designs: Selection and implementation. The Counseling Psychologist, 35(2), 236–264.

  • Cruess, S. R., & Cruess, R. L. (1997). Professionalism must be taught. BMJ, 315, 1674.

  • Davies, H., Nutley, S., & Smith, P. (2000). Introducing evidence-based policy and practice in public services. In H. T. O. Davies, S. M. Nutley, & P. C. Smith (Eds.), What works? Evidence-based policy and practice in public services (pp. 1–41). Bristol: The Policy Press.

  • Dyer, A. R. (1985). Ethics, advertising and the definition of a profession. Journal of Medical Ethics, 11, 72–78.

  • Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38, 215–229.

  • European Commission (EC). (2008). What is evaluation capacity? http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/guide/evaluation_capacity/definition_en.htm. Accessed 24 Dec 2012.

  • Everett, H. C. (1963). Professions. Daedalus, 92(4), 655–668.

  • Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). Linking effective professional development and program evaluator competencies. American Journal of Evaluation, 27(1), 108–123.

  • Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. New Directions for Evaluation, 74(Summer), 5–17.

  • Hall, J. N., Ahn, J., & Greene, J. C. (2012). Values engagement in evaluation: Ideas, illustrations, and implications. American Journal of Evaluation, 33(2), 195–207.

  • Halliday, T. C. (1985). Knowledge mandates: Collective influence by scientific, normative and syncretic professions. The British Journal of Sociology, 36(3), 421–447.

  • Hawes, J. M., Rich, A. K., & Widmier, S. M. (2004). Assessing the development of the sales profession. The Journal of Personal Selling and Sales Management, 24(1), 27–37.

  • Hawkins, D. F. (1978). Applied research and social theory. Evaluation Quarterly, 2(1), 141–152.

  • Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314.

  • Hopson, R. (2009). Reclaiming knowledge at the margins: Culturally responsive evaluation in the current evaluation moment. In K. E. Ryan & J. B. Cousins (Eds.), The SAGE international handbook of educational evaluation (pp. 429–446). Thousand Oaks: Sage.

  • House, E. R. (1995). Principled evaluation: A critique of the AEA guiding principles. New Directions for Evaluation, 66(Summer), 27–35.

  • Hughes, E. C. (1960). The professions in society. Canadian Journal of Economics and Political Science, 26, 54–61.

  • International Development Evaluation Association (IDEAS). (2012). Competencies for development evaluators. www.ideas-global.org. Accessed 22 May 2013.

  • International Organisation for Cooperation in Evaluation (IOCE). (2012). Newsletter, Issue No. 5, September 2012.

  • Jones, H. (2009). Policy-making as discourse: A review of recent knowledge-to-policy literature. A Joint IKM Emergent–ODI Working Paper No. 5, August 2009. Bonn: IKM Emergent Research Programme, European Association of Development Research and Training Institutes (EADI).

  • Ketchum, M. D. (1967). Is financial analysis a profession? Financial Analysts Journal, 23(6), 33–37.

  • King, J. A., & Volkov, B. (2005). A framework for building evaluation capacity based on the experiences of three organizations. CURA Reporter, 35(3), 10–16.

  • King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229–247.

  • Kirkhart, K. E. (2005). Through a cultural lens: Reflections on validity and theory in evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), The role of culture and cultural context in evaluation: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 21–39). Greenwich: Information Age.

  • Kirkhart, K. E. (2010). Eyes on the prize: Multicultural validity and evaluation theory. American Journal of Evaluation, 31(3), 400–413.

  • Kultgen, J. (1998). Ethics and professionalism. Philadelphia: University of Pennsylvania Press.

  • Lawrenz, F., Keiser, N., & Lavoie, B. (2003). Evaluative site visits: A methodological review. American Journal of Evaluation, 24(3), 341–352.

  • Mabry, L. (2010). Critical social theory evaluation: Slaying the dragon. New Directions for Evaluation, 127, 83–98.

  • Mackay, K. (2002). The World Bank’s ECB experience. New Directions for Evaluation, 93, 81–99.

  • Mackenzie, N., & Knipe, S. (2006). Research dilemmas: Paradigms, methods and methodology. Issues in Educational Research, 16. http://www.iier.org.au/iier16/mackenzie.html. Accessed 12 Dec 2012.

  • Merriam-Webster. (2002). Webster’s third new international dictionary of the English language, unabridged version. Springfield: Merriam-Webster Publishers.

  • Mertens, D. (1998). Research methods in education and psychology: Integrating diversity with quantitative and qualitative approaches. Thousand Oaks: Sage.

  • Mertens, D. M. (2008). Stakeholder representation in culturally complex communities: Insights from the transformative paradigm. In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation (pp. 41–56). New York: Guilford.

  • Merwin, J. C., & Wiener, P. H. (1985). Evaluation: A profession? Educational Evaluation and Policy Analysis, 7(3), 253–259.

  • Morell, J. A., & Flaherty, E. W. (1978). The development of evaluation as a profession: Current status and some predictions. Evaluation and Program Planning, 1(1), 11–17.

  • Organisation for Economic Co-operation and Development, Development Assistance Committee (OECD DAC). (2002). Glossary of key terms in evaluation and results based management. Paris: OECD DAC.

  • Patton, M. Q. (1990). The challenge of being a profession. Evaluation Practice, 11(1), 45–51.

  • Peck, L. R., Kim, Y., & Lucio, J. (2012). An empirical examination of validity in evaluation. American Journal of Evaluation, 00(0), 1–16.

  • Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459.

  • Purvis, J. R. (1973). School teaching as a professional career. The British Journal of Sociology, 24(1), 43–57.

  • Reicher, S. (2000). Against methodolatry: Some comments on Elliott, Fischer, and Rennie. British Journal of Clinical Psychology, 39, 1–6.

  • Schott, R. L. (1976). Public administration as a profession: Problems and prospects. Public Administration Review, 36(3), 253–259.

  • Schwandt, T. A. (2001). Dictionary of qualitative inquiry (2nd ed.). Thousand Oaks: Sage.

  • Schwandt, T. A. (2005). The centrality of practice to evaluation. American Journal of Evaluation, 26(1), 95–105.

  • Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park: Sage.

  • Scriven, M. (2003). Evaluation in the new millennium: The transdisciplinary vision. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 19–42). Mahwah: Lawrence Erlbaum.

  • Scriven, M. (2008). The concept of a transdiscipline: And of evaluation as a transdiscipline. Journal of MultiDisciplinary Evaluation, 5(10), 65–66.

  • Segone, M. (Ed.). (2006). Bridging the gap: The role of monitoring and evaluation in evidence-based policy making. New York: UNICEF.

  • Shadish, W., Cook, T., & Leviton, L. (1991). Foundations of program evaluation: Theories of practice. Newbury Park: Sage.

  • Smith, H. L. (1958). Contingencies of professional differentiation. American Journal of Sociology, 63(4), 410–414.

  • Smith, M. F. (2001). Evaluation: Preview of the future #2. American Journal of Evaluation, 22, 281–300.

  • Somekh, B., & Lewin, C. (2005). Research methods in the social sciences. London: Sage.

  • Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59.

  • Stiles, W. B. (1993). Quality control in qualitative research. Clinical Psychology Review, 13, 593–618.

  • Sussman, M. R. (1969). Professional autonomy and the revolt of the client. Social Problems, 17(2), 153–161.

  • Taut, S. (2007). Studying self-evaluation capacity building in a large international development organization. American Journal of Evaluation, 28(1), 45–59.

  • Trow, W. C. (1945). Four professional attributes: And education. The Phi Delta Kappan, 27(4), 118–119.

  • Turpin, G., Barley, V., Beail, N., Scaife, J., Slade, P., Smith, J. A., et al. (1997). Standards for research projects and theses involving qualitative methods: Suggested guidelines for trainees and courses. Clinical Psychology Forum, 108, 3–7.

  • Walter, M. (2006). Social science methods: An Australian perspective. Oxford: Oxford University Press.

  • Weiss, C. H. (1979). The many meanings of research utilisation. Public Administration Review, 39(5), 426–431.

  • Wiener, A. (1979). The development of evaluation as a concession. Evaluation and Program Planning, 2(3), 231–234.

  • Wilensky, H. L. (1964). The professionalization of everyone? American Journal of Sociology, 70(2), 137–158.

  • Worthen, B. R. (1994). Is evaluation a mature profession that warrants the preparation of evaluation professionals? New Directions for Program Evaluation, 62(Summer), 3–15.

Author information

Correspondence to Apollo M. Nkwake.

Appendices

Appendix 1.1

AEA guiding principles for evaluators (shaded boxes indicate the constituent with which each guiding principle appears to be most aligned).

Figure a

Appendix 1.2

Australasian Evaluation Society (AES) Guidelines for the Ethical Conduct of Evaluations (shaded boxes indicate the constituent with which each guideline appears to be most aligned).

Figure b

Appendix 1.3

IDEAS competencies for international development evaluators (shaded boxes indicate the constituent with which each competency appears to be most aligned).

Figure c

Copyright information

© 2015 Springer International Publishing Switzerland

Cite this chapter

Nkwake, A. (2015). Constituents of Evaluation Practice. In: Credibility, Validity, and Assumptions in Program Evaluation Methodology. Springer, Cham. https://doi.org/10.1007/978-3-319-19021-1_1
