Knowledge, Framing, and Ethics in Programme Design and Evaluation

  • Suraj Jacob
Chapter

Abstract

This chapter explores ethical issues in the design and evaluation of public health programmes. On programme design, it argues that programme choice often occurs with solutions already in mind, and that these solutions reflect “off-the-shelf” thinking (for instance, the ubiquitous “training workshop”), leaving little real “choice” in programme design. At a broader level, programme choice is shaped by implicit ideological and epistemological positions that may be ethically dubious, especially when they are not problematised and made transparent. On programme evaluation, the chapter focuses on the ethical aspects of three key elements: participatory evaluation, the use of evaluation results, and the place of impact evaluation. The chapter concludes with a discussion of the role of ethics in relation to epistemology. While it may be relatively uncontroversial to note the problematic ethics of research that falls short when benchmarked against its own research/methodological paradigm, it is worth asking to what extent the choice of research, methodological, or epistemological paradigm is itself an ethical one.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Suraj Jacob
  1. Vidya Bhawan Society, Udaipur, India
  2. Visiting Faculty, Azim Premji University, Bengaluru, India