Abstract
In peace programming in humanitarian contexts, evaluations are essential for learning what worked or did not work and for strengthening the practice of peacebuilding. Despite their importance, evaluations are seldom conducted. Worse yet, even when evaluations have been conducted, they may not be peaceful and may actively undermine or limit the impact of peacebuilding programs. The objectives of this chapter are to raise awareness of how evaluations are frequently not peaceful, increase understanding of why non-peaceful evaluations are prevalent, and provide principles and exemplars that support the use of peaceful approaches to evaluation. In examining how evaluations are frequently not peaceful, the first part of the chapter shows how evaluations are typically designed and implemented with technical considerations such as validity, reliability, and robustness in mind. Although these considerations are important, an exclusive focus on them often leads to evaluations that are driven by outsiders, take an extractive approach, discriminate against people who had not participated in the program, and marginalize the categories that guide the thinking of local people about peace and peacebuilding. When this happens, the evaluation process itself can cause unintended harm and actively set back the cause of peace in the local area. The evaluation process evokes fear, assumes colonial dimensions, and becomes an imposition that leaves local people feeling objectified, marginalized, and exploited. In this manner, the evaluation process becomes part of a system of social injustice that is antithetical to peace. The second part of the chapter will explore why non-peaceful evaluations are so widespread. An understanding of the “why” is critical to the efforts to prevent unintended harm and change the institutions, policies, and practices that enable non-peaceful evaluations. 
Emphasizing the structure of the humanitarian system and the power asymmetries it embodies, this section will examine the power gap that exists between communities and implementing agencies, and also between donors and implementing agencies; the donor-driven nature of many interventions; the pressures for quick results and for implementing agencies to demonstrate positive results; the culture of technical experts; and the use of prespecified questions, surveys, and scales that have been selected primarily on the basis of scientific merit or the desire to influence policy leaders. The third part of the chapter will examine how to make evaluations more peaceful through processes of relationship building, power sharing, deep community engagement and ownership, giving voice to the voiceless, giving constructive feedback, and inclusivity at all stages of evaluation design, implementation, analysis, and use of the data and findings. It shows how the use of qualitative data in a mixed methods approach can simultaneously strengthen both the technical quality and the process aspects of the evaluation. Two practical exemplars of peaceful approaches to program evaluation illustrate key points, provide practical tips on how to make evaluations more peaceful, and encourage readers to adopt peaceful evaluation processes.
© 2015 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Wessells, M. (2015). Program Evaluation: Why Process Matters. In: Bretherton, D., Law, S. (eds) Methodologies in Peace Psychology. Peace Psychology Book Series, vol 26. Springer, Cham. https://doi.org/10.1007/978-3-319-18395-4_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-18394-7
Online ISBN: 978-3-319-18395-4
eBook Packages: Behavioral Science, Behavioral Science and Psychology (R0)