Abstract
Evaluation is a form of applied social science research that uses a set of skills and tools to determine the success of interventions. Drawing on the common attributes of professions, five constituents of evaluation practice are identified: methodological, competence, behavioral, industrial, and utilization. The methodological constituent concerns the application of methods, procedures, and tools in research. Competence refers to the capacity of evaluators (micro level), organizations (meso level), and society (macro level). The behavioral constituent concerns appropriate conduct, ethical guidelines, and professional culture. The industrial (supply) constituent concerns the exercise of professional authority to provide services that further client interests. The utilization (demand) constituent concerns the use of research results, including the demand for evidence to guide policy. Evaluators tend to be preoccupied with the methodological constituent far more than with the others.
References
American Evaluation Association (AEA). (2003). Evaluation 2003, presidential welcome. http://www.eval.org/eval2003/aea03.program3.pdf. Accessed 13th Dec 2012.
Baizerman, M., Compton, D. W., & Stockdill, S. H. (2002). Editors’ notes: The art, craft, and science of evaluation capacity building. New Directions for Evaluation, 93, 1–6.
Bamberger, M. (2006). Enhancing the utilization of evaluations for evidence-based policy making. In M. Segone (Ed.), Bridging the gap: The role of monitoring and evaluation in evidence-based policy making (pp. 120–142). New York: UNICEF.
Barbour, R. (2001). Checklists for improving rigor in qualitative research: A case of the tail wagging the dog? British Medical Journal, 322, 1115–1117.
Campbell, D. T. (1991). Methods for the experimenting society. American Journal of Evaluation, 12(3), 223–260.
Chapple, A., & Rogers, A. (1998). Explicit guidelines for qualitative research: A step in the right direction, a defence of the ‘soft’ option, or a form of sociological imperialism? Family Practice, 15, 556–561.
Chelimsky, E. (2012). Valuing, evaluation methods, and the politicization of the evaluation process. In G. Julnes (Ed.), Promoting valuation in the public interest: Informing policies for judging value in evaluation. New Directions for Evaluation, 133, 77–83.
Christie, C. A., & Alkin, M. C. (2008). Evaluation theory tree re-examined. Studies in Educational Evaluation, 34, 131–135.
Conner, R. F., & Dickman, F. B. (1979). Professionalization of evaluative research: Conflict as a sign of health. Evaluation and Program Planning, 2(2), 103–109.
Coryn, C. L. S., & Hattie, J. A. (2006). The transdisciplinary model of evaluation. Journal of MultiDisciplinary Evaluation, 3(4), 107–114.
Creswell, J. W., Hanson, W. E., Plano Clark, V. L., & Morales, A. (2007). Qualitative research designs: Selection and implementation. The Counseling Psychologist, 35(2), 236–264.
Cruess, S. R., & Cruess, R. L. (1997). Professionalism must be taught. BMJ, 315, 1674.
Davies, H. T. O., Nutley, S. M., & Smith, P. C. (2000). Introducing evidence-based policy and practice in public services. In H. T. O. Davies, S. M. Nutley, & P. C. Smith (Eds.), What works? Evidence-based policy and practice in public services (pp. 1–41). Bristol: The Policy Press.
Dyer, A. R. (1985). Ethics, advertising and the definition of a profession. Journal of Medical Ethics, 11, 72–78.
Elliott, R., Fischer, C. T., & Rennie D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38, 215–229.
European Commission (EC). (2008). What is evaluation capacity? http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/guide/evaluation_capacity/definition_en.htm. Accessed 24th Dec 2012.
Hughes, E. C. (1963). Professions. Daedalus, 92(4), 655–668.
Ghere, G., King, J. A., Stevahn, L., & Minnema, J. (2006). Linking effective professional development and program evaluator competencies. American Journal of Evaluation, 27(1), 108–123.
Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. New Directions for Evaluation, 74(Summer), 5–17.
Hall, J. N., Ahn, J., & Greene, J. C. (2012). Values engagement in evaluation: Ideas, illustrations, and implications. American Journal of Evaluation, 33(2), 195–207.
Halliday, T. C. (1985). Knowledge mandates: Collective influence by scientific, normative and syncretic professions. The British Journal of Sociology, 36(3), 421–447.
Hawes, J. M., Rich, A. K., & Widmier, S. M. (2004). Assessing the development of the sales profession. The Journal of Personal Selling and Sales Management, 24(1), 27–37.
Hawkins, D. F. (1978). Applied research and social theory. Evaluation Quarterly, 2(1), 141–152.
Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314.
Hopson, R. (2009). Reclaiming knowledge at the margins: Culturally responsive evaluation in the current evaluation moment. In K. E. Ryan & J. B. Cousins (Eds.), The sage international handbook of educational evaluation (pp. 429–446). Thousand Oaks: Sage.
House, E. R. (1995). Principled evaluation: A critique of the AEA guiding principles. New Directions for Evaluation, 66(Summer), 27–35.
Hughes, E. C. (1960). The professions in society. Canadian Journal of Economics and Political Science, 26, 54–61.
International Development Evaluation Association (IDEAS). (2012). Competencies for development evaluators. www.ideas-global.org. Accessed 22 May 2013.
International Organisation for Cooperation in Evaluation (IOCE). (2012). Newsletter Issue No. 5. September 2012.
Jones, H. (2009). Policy-making as discourse: A review of recent knowledge-to-policy literature. A Joint IKM Emergent–ODI Working Paper No. 5, August 2009. Bonn: IKM Emergent Research Programme, European Association of Development Research and Training Institutes (EADI).
Ketchum, M. D. (1967). Is financial analysis a profession? Financial Analysts Journal, 23(6), 33–37.
King, J. A., & Volkov, B. (2005). A framework for building evaluation capacity based on the experiences of three organizations. CURA Reporter, 35(3), 10–16.
King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229–247.
Kirkhart, K. E. (2005). Through a cultural lens: Reflections on validity and theory in evaluation. In S. Hood, R. Hopson, & H. Frierson (Eds.), The role of culture and cultural context in evaluation: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 21–39). Greenwich: Information Age.
Kirkhart, K. E. (2010). Eyes on the prize: Multicultural validity and evaluation theory. American Journal of Evaluation, 31(3), 400–413.
Kultgen, J. (1998). Ethics and professionalism. Philadelphia: University of Pennsylvania Press.
Lawrenz, F., Keiser, N., & Lavoie, B. (2003). Evaluative site visits: A methodological review. American Journal of Evaluation, 24(3), 341–352.
Mabry, L. (2010). Critical social theory evaluation: Slaying the dragon. New Directions for Evaluation, 127, 83–98.
Mackay, K. (2002). The World Bank’s ECB experience. New Directions for Evaluation, 93, 81–99.
Mackenzie, N., & Knipe, S. (2006). Research dilemmas: Paradigms, methods and methodology. Issues in Educational Research, 16. http://www.iier.org.au/iier16/mackenzie.html. Accessed 12th Dec 2012.
Merriam-Webster. (2002). Webster’s third new international dictionary of the English language, unabridged. Springfield: Merriam-Webster Publishers.
Mertens, D. (1998). Research methods in education and psychology: Integrating diversity with quantitative and qualitative approaches. Thousand Oaks: Sage.
Mertens, D. M. (2008). Stakeholder representation in culturally complex communities: Insights from the transformative paradigm. In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation (pp. 41–56). New York: Guilford.
Merwin, J. C., & Wiener, P. H. (1985). Evaluation: A profession? Educational Evaluation and Policy Analysis, 7(3), 253–259.
Morell, J. A., & Flaherty, E. W. (1978). The development of evaluation as a profession: Current status and some predictions. Evaluation and Program Planning, 1(1), 11–17.
Organisation for Economic Co-operation and Development, Development Assistance Committee (OECD DAC). (2002). Glossary of key terms in evaluation and results based management. Paris: OECD DAC.
Patton, M. Q. (1990). The challenge of being a profession. Evaluation Practice, 11(1), 45–51.
Peck, L. R., Kim, Y., & Lucio, J. (2012). An empirical examination of validity in evaluation. American Journal of Evaluation, 00(0), 1–16.
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459.
Purvis, J. R. (1973). School teaching as a professional career. The British Journal of Sociology, 24(1), 43–57.
Reicher, S. (2000). Against methodolatry: Some comments on Elliott, Fischer, and Rennie. British Journal of Clinical Psychology, 39, 1–6.
Schott, R. L. (1976). Public administration as a profession: Problems and prospects. Public Administration Review, 36(3), 253–259.
Schwandt, T. A. (2001). Dictionary of qualitative inquiry (2nd edn.). Thousand Oaks: Sage.
Schwandt, T. A. (2005). The centrality of practice to evaluation. American Journal of Evaluation, 26(1), 95–105.
Scriven, M. (1991). Evaluation thesaurus (4th edn.). Newbury Park: Sage.
Scriven, M. (2003). Evaluation in the new millennium: The transdisciplinary vision. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 19–42). Mahwah: Lawrence Erlbaum.
Scriven, M. (2008). The concept of a transdiscipline: And of evaluation as a transdiscipline. Journal of MultiDisciplinary Evaluation, 5(10), 65–66.
Segone, M. (Ed.). (2006). Bridging the gap: The role of monitoring and evaluation in evidence-based policy making. New York: UNICEF.
Shadish, W., Cook, T., & Leviton, L. (1991). Foundations of program evaluation: Theories of practice. Newbury Park: Sage.
Smith, H. L. (1958). Contingencies of professional differentiation. American Journal of Sociology, 63(4), 410–414.
Smith, M. F. (2001). Evaluation: Preview of the future #2. American Journal of Evaluation, 22, 281–300.
Somekh, B., & Lewin, C. (2005). Research methods in social sciences. London: Sage.
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43–59.
Stiles, W. B. (1993). Quality control in qualitative research. Clinical Psychology Review, 13, 593–618.
Sussman, M. R. (1969). Professional autonomy and the revolt of the client. Social Problems, 17(2), 153–161.
Taut, S. (2007). Studying self-evaluation capacity building in a large international development organization. American Journal of Evaluation, 28(1), 45–59.
Trow, W. C. (1945). Four professional attributes: And education. The Phi Delta Kappan, 27(4), 118–119.
Turpin, G., Barley, V., Beail, N., Scaife, J., Slade, P., Smith, J. A. et al. (1997). Standards for research projects and theses involving qualitative methods: Suggested guidelines for trainees and courses. Clinical Psychology Forum, 108, 3–7.
Walter, M. (2006). Social science methods: An Australian perspective. Oxford: Oxford University Press.
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426–431.
Wiener, A. (1979). The development of evaluation as a concession. Evaluation and Program Planning, 2(3), 231–234.
Wilensky, H. L. (1964). The professionalization of everyone? American Journal of Sociology, 70(2), 137–158.
Worthen, B. R. (1994). Is evaluation a mature profession that warrants the preparation of evaluation professionals? New Directions for Program Evaluation, 62(Summer), 3–15.
Appendices
Appendix 1.1
AEA guiding principles for evaluators (shaded boxes indicate the constituent with which a guiding principle appears to be most aligned).
Appendix 1.2
Australasian Evaluation Society Guidelines for the Ethical Conduct of Evaluations (shaded boxes indicate the constituent with which a guideline appears to be most aligned).
Appendix 1.3
IDEAS competencies for international development evaluators (shaded boxes indicate the constituent with which a competency appears to be most aligned).
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this chapter
Nkwake, A. (2015). Constituents of Evaluation Practice. In: Credibility, Validity, and Assumptions in Program Evaluation Methodology. Springer, Cham. https://doi.org/10.1007/978-3-319-19021-1_1
Print ISBN: 978-3-319-19020-4
Online ISBN: 978-3-319-19021-1