Abstract
This chapter explores ethical issues surrounding the design and evaluation of public health programmes. On programme design, the chapter argues that programme choice often occurs with solutions already in mind and that these solutions reflect “off-the-shelf” thinking (for instance, ubiquitous “training workshops”), implying little real “choice” in programme design. Further, at a broader level, programme choice is influenced by implicit ideological and epistemological positions that may be ethically dubious, especially if they are not problematised and made transparent. On programme evaluation, the chapter focuses on ethical aspects of three key elements: participatory evaluation, the use of evaluation results, and the place of impact evaluation. The chapter concludes with a discussion of the role of ethics in relation to epistemology. While it may be relatively uncontroversial to note the problematic ethics of research that comes up short when benchmarked against its own research/methodological paradigm, it is worth asking to what extent the choice of research/methodological/epistemological paradigm is itself an ethical one.
Notes
1. This went against earlier attempts at an inclusive model emphasising community engagement and environmental hygiene for positive health and wellbeing (for instance, see the ideas of the Bhore Committee). There were also occasional (but failed) attempts at an integrated approach later on, for instance, the National Health Policy of 1983.
2. Further, where needs assessment is attempted, it is often reduced to a cursory “baseline survey” generating descriptive statistics rather than deeper probing of the situation to construct an understanding of the “how” and the “why” of the need/problem.
3. “All Delhi police stations to have women officers: Shinde”, Times of India, December 29, 2012; “Only 442 women police stations across India: Police research data”, The Hindu, December 25, 2012.
4. More recently, the Health Ministry launched a “high-octane campaign with a three-in-one message of family planning, child spacing, and safe sex practices” (“What’s the family plan”, The Hindu, April 17, 2016).
5. That the programme was explicitly designed for supplementary rather than regular feeding cannot seriously be considered an ethical problem: feeding was not designed to directly redress diet or calorie deficits, and this choice was carefully justified. Further, in practice, programme personnel did not withhold food from children who expressed hunger even when they did not qualify based on the threshold (Sridhar 2008).
6. Govinda (2012) notes several: “analysing gender inequality”, “promoting gender equity”, “mainstreaming gender”, “engendering development”, and “gender sensitisation”.
7. Here, I am not taking up the more obvious ethical issue of doing harm, even unintentionally, as in the famous Stanford prison experiment (Zimbardo 1973).
8. Freedman (1987: 141) notes that equipoise is “a state of genuine uncertainty … regarding the comparative therapeutic merits of each arm in a trial”. For him, “clinical equipoise” is when there is “genuine uncertainty” on the part of the “expert medical community – not necessarily on the part of the individual investigator – about the preferred treatment”. However, this is not without its critics. For instance, Miller and Brody (2003: 20) critique equipoise for viewing “clinical trials through a therapeutic lens”. Relatedly, Clayton (1982) distinguishes between an “individual ethic” (avoid harm, provide equal benefit to each individual) and a “collective ethic” (acquire new knowledge so that individuals may benefit in the future). In this rendering, RCTs can be justified ethically when the latter outweighs the former.
While equipoise concerns uncertainty regarding knowledge, there is also the related matter of clinical trials with methodological failures that have ethical consequences. May (1975: 25) notes that “one of the most serious ethical problems in clinical research is that of placing subjects at risk of injury, discomfort, or inconvenience in experiments where there are too few subjects for valid results, too many subjects for the point to be established, or an improperly designed random or double-blind procedure”.
9. The study had other ethical violations as well, for instance, deliberate deception of participants. Subsequently, a larger literature and consensus have developed around ethical dos and don’ts regarding research on human subjects, and these have been institutionalised in specific research contexts (for instance, in Institutional Review Boards).
10. However, this has come in for criticism. For instance, Glewwe et al. (2012) report an RCT gauging the extent to which students with eyesight problems do better at school if they wear corrective eyeglasses. For a critique from the perspective of clinical equipoise, see Ziliak and Teather-Posadas (2016).
11. Interestingly, participatory and group-sensitive evaluation approaches naturally take process and context more in their stride than do conventional approaches: “Equity-focused evaluations pay particular attention to process and contextual analysis, while conventional impact evaluation designs use a pre-test/post-test comparison group design, which does not study the processes through which interventions are implemented nor the context in which they operate” (UNICEF 2011: 9–10). See also Batliwala and Pittman (2010).
12. In the context of ethics, it is also worth asking to whom the evaluator should be answerable. In practice, accountability is typically to evaluation sponsors and programme funders alone rather than to intended beneficiaries and specific marginalised groups, and this is particularly problematic where evaluation is merely “ritualistic”.
13. Although in the text I do not discuss the problem of poor-quality evaluation reports, this is also an important reality in the Indian context, particularly when “[e]valuations are typically carried out by professionals who have neither an evaluation background nor a good understanding of how governments function”, so that evaluation reports merely “contain generalised statements” rather than contextually relevant recommendations based on real-world processes and pragmatic judgements (Kumar 2010: 239).
14. Of the three types of “evaluation orientation” distinguished by Carden and Alkin (2012) – use-oriented approaches, values-oriented approaches, and methods-oriented approaches – donor-driven evaluation focuses more on the third and is particularly weak on the second (for instance, genuinely participatory methodologies, as discussed earlier), whereas both the second and the first are likely of greater relevance for the programme beneficiaries and the programme itself.
References
Alkin, M. C. (2010). Evaluation essentials: From A to Z. New York: Guilford Press.
Altman, D. (1980). Statistics and ethics in medical research: Misuse of statistics is unethical. British Medical Journal, 281.
Amrith, S. (2007, January 13–19). Political culture of health in India: A historical perspective. Economic and Political Weekly, 42(2), 114–121.
Angell, M. (1997). The ethics of clinical research in the third world. New England Journal of Medicine, 337(12), 847–849.
Banerjee, A. V., Duflo, E., Glennerster, R., & Kothari, D. (2010). Improving immunisation coverage in rural India: clustered randomised controlled evaluation of immunisation campaigns with and without incentives. BMJ, 340, c2220.
Batliwala, S., & Pittman, A. (2010). Capturing change in women’s realities: A critical overview of current monitoring and evaluation frameworks. Toronto: Association for Women in Development.
Caplan, A. L. (2001). Twenty years after: The legacy of the Tuskegee syphilis study. In Bioethics, justice and health care (pp. 225–231). Belmont: Wadsworth-Thomson Learning.
Carden, F. (2010). Introduction to the forum on evaluation field building in South Asia. American Journal of Evaluation, 31(2), 219–221.
Carden, F., & Alkin, M. C. (2012). Evaluation roots: An international perspective. Journal of MultiDisciplinary Evaluation, 8(17), 102–118.
Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. New York: Oxford University Press.
Chambers, R. (2007). From PRA to PLA and pluralism: Practice and theory. In The SAGE handbook of action research: Participative inquiry and practice (p. 297). London: Sage.
Chigateri, S., & Saha, S. (2016). Gender transformative evaluations.
Chouinard, J. A. (2013). The case for participatory evaluation in an era of accountability. American Journal of Evaluation, 34(2), 237–253.
CIOMS (Council for International Organizations of Medical Sciences). (2002). International ethical guidelines for biomedical research involving human subjects. Bulletin of Medical Ethics, 182, 17.
Clayton, D. G. (1982). Ethically optimised designs. British Journal of Clinical Pharmacology, 13(4), 469–480.
Cook, T. D., & Campbell, D. T. (1979). Quasi experimentation: Design and analytical issues for field settings. Chicago: Rand McNally.
Crishna, B. (2006). Participatory evaluation (I)–sharing lessons from fieldwork in Asia. Child Care, Health and Development, 33(3), 217–223.
Dalkin, S. M., Greenhalgh, J., Jones, D., Cunningham, B., & Lhussier, M. (2015). What’s in a mechanism? Development of a key concept in realist evaluation. Implementation Science, 10, 49.
Deaton, A. (2010, June). Instruments, randomization, and learning about development. Journal of Economic Literature, 48, 424–455.
Duflo, E., Glennerster, R., & Kremer, M. (2007). Using randomization in development economics research: A toolkit. In Handbook of development economics.
Dunning, T. (2012). Natural experiments in the Social Sciences: A design-based approach. New York: Cambridge University Press.
Engber, D. (2011, November 15). The mouse trap (part I): The dangers of using one lab animal to study every disease. Slate. Available at http://www.slate.com/articles/health_and_science/the_mouse_trap/2011/11/lab_mice_are_they_limiting_our_understanding_of_human_disease_.html
Esteves, A. M., Franks, D., & Vanclay, F. (2012). Social impact assessment: The state of the art. Impact Assessment and Project Appraisal, 30(1), 34–42.
Freedman, B. (1987). Equipoise and the ethics of clinical research. New England Journal of Medicine, 317(3), 141–145.
George, S. M., Latham, M. C., Frongillo, E. A., Abel, R., & Ethirajan, N. (1993). Evaluation of effectiveness of good growth monitoring in south Indian villages. The Lancet, 342(8867), 348–352.
Glewwe, P., Park, A., & Zhao, M. (2012). Visualizing development: Eyeglasses and academic performance in primary schools in China. Center for International Food and Agricultural Policy Research, University of Minnesota, Working Paper WP12-2 (Jan.)
Govinda, R. (2012). Mapping ‘gender evaluation’ in South Asia. Indian Journal of Gender Studies, 19(2), 187–209.
Habermas, J. (1971). Knowledge and human interests (J. J. Shapiro, Trans.). Boston: Beacon Press.
Heaver, R. (2002). India’s Tamil Nadu nutrition program: Lessons and issues in management and capacity development. HNP discussion paper series. World Bank, Washington, DC. https://openknowledge.worldbank.org/handle/10986/13787 license: CC BY 3.0 IGO.
Jacob, S., Natrajan, B., & Patil, I. (2015). Explaining village-level development trajectories through schooling in Karnataka. Economic & Political Weekly, L(52), 54–64.
Joseph, M. (2004, August 23). What if ‘Hum do, Hamare Do’ had worked? Outlook India.
Kaplan, S. A., & Garrett, K. E. (2005). The use of logic models by community-based initiatives. Evaluation and Program Planning, 28, 167–172.
Kemp, D., & Vanclay, F. (2013). Human rights and impact assessment: clarifying the connections in practice. Impact Assessment and Project Appraisal, 31(2), 86–96.
Khanna, R. (2013). Ethical issues in community based monitoring of health programmes: Reflections from India. Paper 3 in COPASAH Series on Social Accountability.
Konkipudi, K., & Jacob, S. (2017). Political pressures and bureaucratic consequences: Vignettes of Janmabhoomi implementation in Andhra Pradesh. Studies in Indian Politics, 5(1), 1–17.
Kumar, A. K. S. (2010). A comment on ‘evaluation field building in South Asia: Reflections, anecdotes, and questions’. American Journal of Evaluation, 31(2), 238–240.
Lim, S. S., Dandona, L., Hoisington, J. A., James, S. L., Hogan, M. C., & Gakidou, E. (2010, June 5). India’s Janani Suraksha Yojana, a conditional cash transfer programme to increase births in health facilities: An impact evaluation. Lancet, 375, 2009–2023.
May, W. W. (1975). Composition and function of ethical committees. Journal of Medical Ethics, 1(1), 23–29.
Mehrotra, S. (2013). The government monitoring and evaluation system in India: A work in progress. ECD working paper series No. 28.
Miller, F. G., & Brody, H. (2003). A critique of clinical equipoise: Therapeutic misconception in the ethics of clinical trials. Hastings Center Report, 33(3), 19–28.
Mishra, A. (2014). ‘Trust and teamwork matter’: Community health workers’ experiences in integrated service delivery in India. Global Public Health, 9(8), 960–974.
Morgan, R. K. (2012). Environmental impact assessment: The state of the art. Impact Assessment and Project Appraisal, 30(1), 5–14.
Pal, S. P., & Chakrabarti, M. Reforming India’s evaluation architecture: The role of the Independent Evaluation Office (unpublished work).
Prashanth, N. S., Marchal, B., & Criel, B. (2013). Evaluating healthcare interventions: Answering the ‘how’ question. Indian Anthropologist, 43(1), 35–50.
Rao, K. S. (2017). Do we care: India’s health system. Delhi: Oxford University Press.
Reddy, S. G. (2012). Randomise this! On poor economics. Review of Agrarian Studies, 2(2), 60–73.
Riis, P. (2003). Thirty years of bioethics: The Helsinki declaration 1964-2003. New Review of Bioethics, 1(1), 15–25.
Rogers, P. J. (2012). Introduction to impact evaluation. Impact Evaluation Notes. Retrieved from http://interaction.org/impact-evaluation-notes.
Rossi, P., Freeman, H., & Lipsey, M. (2004). Monitoring program process and performance. In Evaluation: A systematic approach.
Shukla, A., Khanna, R., & Jadhav, N. (2014). Using community-based evidence for decentralized health planning: insights from Maharashtra, India. Health Policy and Planning, czu099. https://doi.org/10.1093/heapol/czu099.
Smith, M. J., Clarke, R. V., & Pease, K. (2002). Anticipatory benefits in crime prevention. In N. Tilley (Ed.), Analysis for crime prevention: Crime prevention studies (Vol. 13, pp. 71–88). Monsey: Criminal Justice Press.
Sridhar, D. (2008). The battle against hunger: Choice, circumstance, and the World Bank. Oxford: Oxford University Press.
Sridhar, D. (2010). Addressing undernutrition in India: Do ‘rational’ approaches work? In H. Margetts & C. Hood (Eds.), Paradoxes of modernization: Unintended consequences of public policy reform (pp. 119–137). New York: Oxford University Press.
St. Pierre, E. A. (2006). Scientifically based research in education: Epistemology and ethics. Adult Education Quarterly, 56(4), 239–266.
UN Women. (2015). How to manage gender-responsive evaluation, Independent Evaluation Office. Retrieved from: http://genderevaluation.unwomen.org/en/evaluation-handbook
UNICEF. (2011). How to design and manage equity-focused evaluations (Michael Bamberger and Marco Segone).
van Hollen, C. (2003). Birth on the threshold: Childbirth and modernity in South India. University of California Press, 272 pp.
Watkins, R., Meiers, M. W., & Visser, Y. L. (2012). Needs assessment: Frequently asked questions. In A guide to assessing needs: Tools for collecting information, making decisions, and achieving development results.
White, H., & Masset, E. (2007). Assessing interventions to improve child nutrition: A theory-based impact evaluation of the Bangladesh integrated nutrition project. Journal of International Development, 19(5), 627–652.
Woodward, J. (2017). Scientific explanation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2017 edn). https://plato.stanford.edu/archives/fall2017/entries/scientific-explanation/
Woolcock, M. (2013). Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation, 19(3), 229–248.
World Bank. (1994). Impact evaluation report. India. Tamil Nadu integrated nutrition project. Washington: World Bank, Operations Evaluation Department. Internal report. Processed.
World Bank. (2005, December). The Bangladesh Integrated Nutrition Project: Effectiveness and lessons. Bangladesh Development Series – paper no. 8.
Ziliak, S., & Teather-Posadas, E. R. (2016). The unprincipled randomization principle in economics and medicine. In G. DeMartino & D. McCloskey (Eds.), The Oxford handbook of professional economic ethics. New York: Oxford University Press.
Zimbardo, P. G. (1973). On the ethics of intervention in human psychological research: With special reference to the Stanford prison experiment. Cognition, 2(2), 243–256.
© 2018 Springer Nature Singapore Pte Ltd.
Cite this chapter
Jacob, S. (2018). Knowledge, Framing, and Ethics in Programme Design and Evaluation. In: Mishra, A., Subbiah, K. (eds) Ethics in Public Health Practice in India. Springer, Singapore. https://doi.org/10.1007/978-981-13-2450-5_3
Print ISBN: 978-981-13-2449-9
Online ISBN: 978-981-13-2450-5