
An Evidence-Hierarchical Decision Aid for Ranking in Evidence-Based Medicine

Chapter in the book Uncertainty in Pharmacology

Part of the book series: Boston Studies in the Philosophy and History of Science (BSPS, volume 338)

Abstract

This chapter addresses the problem of ranking available drugs during guideline development, in order to support clinicians in their work. Based on a pragmatic approach to the notion of evidence and a hierarchical view of different kinds of evidence, this chapter introduces a decision aid, HiDAD, which draws on the multi-criteria decision-making literature. The decision aid implements the widespread intuition that there are different kinds of evidence with varying degrees of importance by relying on a strict ordinal ordering of kinds of evidence. To construct a ranking, every pair of drugs is first compared separately on every kind of evidence. These comparisons are then aggregated into an overall comparison between drugs, based on all the available evidence, in a way that prevents evidence of lesser importance from simply being trumped by evidence of the higher levels. Finally, these overall comparisons are used to determine the final ranking of drugs, which then informs the process of guideline writing. Properties, modifications and applicability of the decision aid HiDAD are discussed and assessed.
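To make the aggregation step concrete, here is a toy Python sketch of a purely lexicographic rule, in which the most important kind of evidence that discriminates between two drugs settles their comparison, followed by a Copeland-style final ranking. All names (the evidence kinds, the example data) and the Copeland scoring are my illustrative assumptions, not the chapter's definitions; note that HiDAD is designed precisely to avoid the trumping behaviour this lexicographic baseline exhibits.

```python
# Illustrative sketch only: a lexicographic pairwise comparison across
# ordered kinds of evidence, plus a Copeland-style ranking. The evidence
# kinds and example data below are hypothetical.

from itertools import combinations

# Kinds of evidence, listed from most to least important.
EVIDENCE_KINDS = ["systematic_reviews", "rcts", "observational", "expert_judgement"]

def overall_comparison(marginals):
    """Aggregate per-kind verdicts lexicographically: the most important
    kind of evidence that discriminates (verdict +1 or -1) decides; less
    important kinds matter only when all higher kinds are silent (0)."""
    for kind in EVIDENCE_KINDS:
        verdict = marginals.get(kind, 0)
        if verdict != 0:
            return verdict
    return 0  # no kind of evidence discriminates between the pair

def rank(drugs, pairwise):
    """Rank drugs by Copeland score: wins minus losses over all pairs."""
    score = {d: 0 for d in drugs}
    for a, b in combinations(drugs, 2):
        v = overall_comparison(pairwise[(a, b)])  # +1: a beats b; -1: b beats a
        score[a] += v
        score[b] -= v
    return sorted(drugs, key=lambda d: score[d], reverse=True)

# Hypothetical marginal comparisons for three drugs:
pairwise = {
    ("A", "B"): {"systematic_reviews": 0, "rcts": 1},
    ("A", "C"): {"systematic_reviews": -1},
    ("B", "C"): {"rcts": 0, "observational": -1},
}
print(rank(["A", "B", "C"], pairwise))  # → ['C', 'A', 'B']
```

Under this baseline, the RCT evidence favouring A over B is irrelevant the moment a systematic review discriminates; the chapter's contribution lies in an aggregation that keeps such lower-level evidence in play.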


Notes

  1.

    Information known to be false or irrelevant is thus ignored. Which information is deemed relevant and which is deemed irrelevant is a complicated question outside the scope of this contribution. The answers will depend on the epistemic state, as well as cognitive limitations and the exact framing of the decision problem.

  2.

    In the more applied sciences, the term information fusion rather than evidence amalgamation or evidence aggregation is often used. Definitions of the term information fusion are surveyed in Boström et al. (2007). Further often-used terms are “research synthesis” and “evidence synthesis”, see also Sect. 11.2.1.3.

  3.

    These recommendations are intended to guide doctors in their daily work. I emphatically do not want to suggest that a recommendation of a regulatory body ought to be followed at all times. There are good reasons to deviate from general medical guidelines when it comes to the treatment of individual patients. Patients have individual circumstances such as co-morbidities, known or suspected (drug) intolerances, and treatment as well as outcome preferences. When deciding on a treatment for an individual patient at a particular time, these patient-specific circumstances ought to matter, too.

  4.

    There is no principled reason for which I could not construe the decision problem as a multi-outcome problem. For migraines, these outcomes might be: hours with headache, headache severity, days of sick leave and adverse events. In order to keep the complexity of the problem and of the presentation manageable, I abstain from doing so.

  5.

    The GRADE approach is discussed in more detail in Sect. 11.2.1.2.

  6.

    Recently, it was alleged that it is impossible to amalgamate evidence of different kinds (Stegenga 2013); an appropriate response was provided in Lehtinen (2013).

  7.

    Without going into details here, GRADE and HiDAD use similar language to refer to different concepts and techniques.

  8.

    The ever-present difficulties of passing from a continuum to a discretisation (of judgements) add another layer of complexity (Guyatt et al. 2013, pp. 154–155), one which applies equally to GRADE and to HiDAD.

  9.

    Under the construal of evidence offered here, expert clinical judgement is evidence, too.

  10.

    There is no suggestion here that even such limited comparisons are always feasible; I refer the reader to Footnote 14 for further discussion. To help determine marginal comparisons, the DM may choose to avail herself of further (medical) decision aids. For example, (a) to assess (systematic reviews of) RCTs, the DM may use the decision support systems put forward in the medical decision literature, discussed in Sect. 11.2.1.3; (b) means of making sense of multiple, possibly conflicting, expert opinions are put forward in the literature on judgement aggregation.

  11.

    The term "multi-criteria decision analysis" is also often found in the literature and is, at times, used interchangeably.

  12.

    A reluctance to use precise numbers has manifested itself not only in the analysis of decision problems but also in the related, but by no means equivalent, epistemological problem of determining rational degrees of belief. This reluctance has given rise (among others) to the framework of imprecise probabilities, see Troffaes and de Cooman (2014) for a very recent treatment; Dempster-Shafer theory, see Shafer (1976); and fuzzy logic as championed by Dubois and Prade, see Dubois et al. (1997). In Shafer and Srivastava (1990, p. 129), Shafer and Srivastava argued in favor of qualitative approaches [those with "fewer inputs" in their terminology] thusly: "When fewer inputs are required, we have a better chance of finding reasonably solid evidence on which to base these inputs, and thus, we have a better chance of producing an overall argument based on evidence rather than mere fancy."

  13.

    An ideal rational agent, the protagonist of many a philosophical piece, may be in a position to give meaningful precise quantitative assessments. A (group of) human decision makers is in a significantly different epistemic situation. The applicability of HiDAD depending on the DM’s situation is discussed in Sect. 11.6. Section 11.6.1 focuses on applications of HiDAD to other problems, while Sect. 11.6.2 provides conditions under which HiDAD should not be applied.

  14.

    Clearly, the DM may not always be able, or comfortable enough, to do so. This does not mean that HiDAD is wrong; it simply means that it should not be applied in such a case. Mutatis mutandis, the same holds for the further assumptions I make: if they do not hold in another concrete decision problem, then HiDAD should not be applied there, see also Footnote 10.

  15.

    Note that all other marginal comparisons lead to a change of the overall ranking of A_k and A_i, and hence all possible cases have been considered here.

  16.

    I think that such cases are *very* rare. However, should the DM assess the evidence thusly, then there have to be good reasons for doing so.

  17.

    I do think that this definition of evidence could be of use much more generally. I shall here be content with keeping the focus on the discussed ranking problem.

  18.

    If precise quantification were possible, then I would indeed recommend using these numbers. However, I have supposed that precise quantification is not possible and have hence gone down a qualitative path. For cases in which precise quantification of the importance of criteria is feasible, the reader is referred to Mussen et al. (2009) and Tervonen et al. (2011).

References

  • Alonso-Coello, P., Schünemann, H. J., Moberg, J., Brignardello-Petersen, R., Akl, E. A., Davoli, M., Treweek, S., Mustafa, R. A., Rada, G., Rosenbaum, S., Morelli, A., Guyatt, G. H., & Oxman, A. D. (2016). GRADE Evidence to Decision (EtD) frameworks: A systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ, 353. https://doi.org/10.1136/bmj.i2016.

  • Aumann, R. J. (1962). Utility theory without the completeness axiom. Econometrica, 30(3), 445–462.

  • Belton, V. (2009). Matters of weight. Journal of Multi-Criteria Decision Analysis, 15, 109–110.

  • Belton, V., & Stewart, T. J. (2002). Multiple criteria decision analysis: An integrated approach. Boston: Springer.

  • Bertamini, M., & Munafó, M. R. (2012). Bite-size science and its undesired side effects. Perspectives on Psychological Science, 7(1), 67–71.

  • Bes-Rastrollo, M., Schulze, M. B., Ruiz-Canela, M., & Martinez-Gonzalez, M. A. (2013). Financial conflicts of interest and reporting bias regarding the association between sugar-sweetened beverages and weight gain: A systematic review of systematic reviews. PLoS Medicine, 10(12), 1–9.

  • Billaut, J.-C., Bouyssou, D., & Vincke, P. (2010). Should you believe in the Shanghai ranking? An MCDM view. Scientometrics, 84, 237–263.

  • Borm, G. F., Lemmers, O., Fransen, J., & Donders, R. (2009). The evidence provided by a single trial is less reliable than its statistical analysis suggests. Journal of Clinical Epidemiology, 62(7), 711–715.

  • Boström, H., Andler, S. F., Brohede, M., & Johansson, R. (2007). On the definition of information fusion as a field of research. Technical report, University of Skövde.

  • Boutilier, C. (1994). Toward a logic for qualitative decision theory. In Proceedings of KR (Vol. 94, pp. 75–86). San Francisco: Morgan Kaufmann.

  • Bouyssou, D., Marchant, T., Pirlot, M., Tsoukiàs, A., & Vincke, P. (2006). Evaluation and decision models with multiple criteria. New York: Springer.

  • Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafo, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.

  • Canadian Task Force on the Periodic Health Examination. (1979). The periodic health examination. Canadian Medical Association Journal, 121(9), 1193–1254.

  • Carnap, R. (1947). On the application of inductive logic. Philosophy and Phenomenological Research, 8(1), 133–148.

  • Cartwright, N. (2007). Are RCTs the gold standard? Biosocieties, 2(1), 11–20.

  • Cartwright, N., & Munro, E. (2010). The limitations of randomized controlled trials in predicting effectiveness. Journal of Evaluation in Clinical Practice, 16(2), 260–266.

  • Chalmers, I., Hedges, L. V., & Cooper, H. (2002). A brief history of research synthesis. Evaluation & the Health Professions, 25(1), 12–37.

  • Chan, A.-W., & Altman, D. G. (2005). Epidemiology and reporting of randomised trials published in PubMed journals. The Lancet, 365(9465), 1159–1162.

  • Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2013). The evidence that evidence-based medicine omits. Preventive Medicine, 57(6), 745–747.

  • Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2014). Mechanisms and the evidence hierarchy. Topoi, 33, 339–360.

  • Cooper, N., Coyle, D., Abrams, K., Mugford, M., & Sutton, A. (2005). Use of evidence in decision models: An appraisal of health technology assessments in the UK since 1997. Journal of Health Services Research & Policy, 10(4), 245–250.

  • Doll, R., & Peto, R. (1980). Randomised controlled trials and retrospective controls. British Medical Journal, 280, 44.

  • Doyle, J., & Thomason, R. H. (1999). Background to qualitative decision theory. AI Magazine, 20(2), 55–68.

  • Dubois, D., Fargier, H., & Perny, P. (2002). On the limitations of ordinal approaches to decision-making. In D. Fensel, F. Giunchiglia, D. L. McGuinness, & M.-A. Williams (Eds.), Proceedings of KR (pp. 133–146). San Francisco: Morgan Kaufmann.

  • Dubois, D., Fargier, H., & Prade, H. (1997). Decision-making under ordinal preferences and comparative uncertainty. In Proceedings of UAI (pp. 157–164).

  • Dubois, D., Fargier, H., Prade, H., & Sabadin, R. (2009). A survey of qualitative decision rules under uncertainty. In Decision-making process (Chapter 11, pp. 435–473). London: Wiley Online Library.

  • Earman, J. (1992). Bayes or bust? Cambridge, MA: MIT Press.

  • Etz, A., & Vandekerckhove, J. (2016). A Bayesian perspective on the reproducibility project: Psychology. PLoS ONE, 11(2).

  • Evans, D. (2003). Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing, 12(1), 77–84.

  • Every-Palmer, S., & Howick, J. (2014). How evidence-based medicine is failing due to biased trials and selective publication. Journal of Evaluation in Clinical Practice, 20(6), 908–914.

  • Figueira, J., Greco, S., & Ehrgott, M. (2005). Multiple criteria decision analysis: State of the art surveys. Springer.

  • Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62(1), 451–482.

  • GRADE Working Group. (2004). Grading quality of evidence and strength of recommendations. British Medical Journal, 328(7454), 1490–1494.

  • Guyatt, G., Oxman, A. D., Sultan, S., Brozek, J., Glasziou, P., Alonso-Coello, P., Atkins, D., Kunz, R., Montori, V., Jaeschke, R., Rind, D., Dahm, P., Akl, E. A., Meerpohl, J., Vist, G., Berliner, E., Norris, S., Falck-Ytter, Y., & Schünemann, H. J. (2013). GRADE guidelines: 11. Making an overall rating of confidence in effect estimates for a single outcome and for all outcomes. Journal of Clinical Epidemiology, 66(2), 151–157.

  • Guyatt, G. H., Oxman, A. D., Schünemann, H. J., Tugwell, P., & Knottnerus, A. (2011). GRADE guidelines: A new series of articles in the Journal of Clinical Epidemiology. Journal of Clinical Epidemiology, 64, 380–382.

  • Guyatt, G. H., Oxman, A. D., Vist, G. E., Kunz, R., Falck-Ytter, Y., Alonso-Coello, P., & Schünemann, H. J. (2008). GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. British Medical Journal, 336(7650), 924–926.

  • Horton, R. (2004). Vioxx, the implosion of Merck, and aftershocks at the FDA. The Lancet, 364(9450), 1995–1996.

  • Howick, J. H. (2011). The philosophy of evidence-based medicine. Chichester: Blackwell.

  • Howick, J., et al. (2011). The Oxford 2011 levels of evidence. Oxford: Oxford Centre for Evidence Based Medicine. http://www.cebm.net/wp-content/uploads/2014/06/CEBM-Levels-of-Evidence-2.1.pdf.

  • Jüni, P., Nartey, L., Reichenbach, S., Sterchi, R., Dieppe, P. A., & Egger, M. (2004). Risk of cardiovascular events and rofecoxib: Cumulative meta-analysis. The Lancet, 364(9450), 2021–2029.

  • Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives. Cambridge: Cambridge University Press.

  • Kelly, M. P., & Moore, T. A. (2012). The judgement process in evidence-based medicine and health technology assessment. Social Theory & Health, 10(1), 1–19.

  • Kelly, T. (2015). Evidence. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2014 ed.). Stanford: Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=evidence

  • Khangura, S., Polisena, J., Clifford, T. J., Farrah, K., & Kamel, C. (2014). Rapid review: An emerging approach to evidence synthesis in health technology assessment. International Journal of Technology Assessment in Health Care, 30(1), 1–8.

  • Krumholz, H. M., Ross, J. S., Presler, A. H., & Egilman, D. S. (2007). What have we learnt from Vioxx? British Medical Journal, 334(7585), 120–123.

  • LaCaze, A. (2009). Evidence-based medicine must be …. Journal of Medicine and Philosophy, 34(5), 509–527.

  • Landes, J., Osimani, B., & Poellinger, R. (2018). Epistemology of causal inference in pharmacology. European Journal for Philosophy of Science, 8, 3–49.

  • Lehtinen, A. (2013). On the impossibility of amalgamating evidence. Journal for General Philosophy of Science, 44(1), 101–110.

  • Marewski, J. N., & Gigerenzer, G. (2012). Heuristic decision making in medicine. Dialogues in Clinical Neuroscience, 14, 77–89.

  • McGauran, N., Wieseler, B., Kreis, J., Schuler, Y.-B., Kolsch, H., & Kaiser, T. (2010). Reporting bias in medical research – A narrative review. Trials, 11(37), 1–15.

  • Mussen, F., Salek, S., & Walker, S. (2009). Benefit-risk appraisal of medicines. Chichester: Wiley.

  • Onakpoya, I. J., Heneghan, C. J., & Aronson, J. K. (2016). Worldwide withdrawal of medicinal products because of adverse drug reactions: A systematic review and analysis. Critical Reviews in Toxicology, 46, 477–489.

  • Osimani, B. (2014a). Hunting side effects and explaining them: Should we reverse evidence hierarchies upside down? Topoi, 33(2), 295–312.

  • Osimani, B. (2014b). Safety vs. efficacy assessment of pharmaceuticals: Epistemological rationales and methods. Preventive Medicine Reports, 1, 9–13.

  • Price, K. L., Amy Xia, H., Lakshminarayanan, M., Madigan, D., Manner, D., Scott, J., Stamey, J. D., & Thompson, L. (2014). Bayesian methods for design and analysis of safety trials. Pharmaceutical Statistics, 13(1), 13–24.

  • Reiss, J. (2015). A pragmatist theory of evidence. Philosophy of Science, 82(3), 341–362.

  • Revicki, D. A., & Frank, L. (1999). Pharmacoeconomic evaluation in the real world. PharmacoEconomics, 15(5), 423–434.

  • Russo, F., & Williamson, J. (2007). Interpreting causality in the health sciences. International Studies in the Philosophy of Science, 21(2), 157–170.

  • Sackett, D. L., Straus, S. E., Richardson, W. S., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). New York: Churchill Livingstone.

  • Savage, L. J. (1954). The foundations of statistics. New York: Dover Publications.

  • Shafer, G. (1976). A mathematical theory of evidence. Princeton: Princeton University Press.

  • Shafer, G., & Srivastava, R. (1990). The Bayesian and belief-function formalisms: A general perspective for auditing. Auditing: A Journal of Practice & Theory, 9, 110–137.

  • Skyrms, B. (1990). The dynamics of rational deliberation. Cambridge, MA: Harvard University Press.

  • Solomon, M. (2011). Just a paradigm: Evidence-based medicine in epistemological context. European Journal for Philosophy of Science, 1(3), 451–466.

  • Stegenga, J. (2013). An impossibility theorem for amalgamating evidence. Synthese, 190(12), 2391–2411.

  • Stegenga, J. (2014). Down with the hierarchies. Topoi, 33(2), 313–322.

  • Tan, S.-W., & Pearl, J. (1994). Qualitative decision theory. In Proceedings of AAAI (Vol. 2, pp. 928–933). Palo Alto: AAAI Press.

  • Tervonen, T., van Valkenhoef, G., Buskens, E., Hillege, H. L., & Postmus, D. (2011). A stochastic multicriteria model for evidence-based decision making in drug benefit-risk analysis. Statistics in Medicine, 30(12), 1419–1428.

  • Thomas, J., Newman, M., & Oliver, S. (2013). Rapid evidence assessments of research to inform social policy: Taking stock and moving forward. Evidence & Policy: A Journal of Research, Debate and Practice, 9(1), 5–27.

  • Troffaes, M. C. M., & de Cooman, G. (2014). Lower previsions. Chichester: Wiley.

  • Twining, W. (2011). Moving beyond law: Interdisciplinarity and the study of evidence. In P. Dawid, W. Twining, & M. Vasilaki (Eds.), Evidence, inference and enquiry (Chapter 4, pp. 73–118). Oxford: Oxford University Press.

  • Upshur, R. (1995). Looking for rules in a world of exceptions: Reflections on evidence-based practice. Perspectives in Biology and Medicine, 48(4), 477–489.

  • Urfalino, P. (2012). Reasons and preferences in medicine evaluation committees. In H. Landemore & J. Elster (Eds.), Collective wisdom (pp. 173–202). Cambridge: Cambridge University Press.

  • van Valkenhoef, G., Tervonen, T., Zwinkels, T., de Brock, B., & Hillege, H. (2013). ADDIS: A decision support system for evidence-based medicine. Decision Support Systems, 55(2), 459–475.

  • Vandenbroucke, J. P. (2008). Observational research, randomised trials, and two views of medical science. PLoS Medicine, 5(3), e67.

  • Williamson, J. (2015). Deliberation, judgement and the nature of evidence. Economics and Philosophy, 31, 27–65.

  • Worrall, J. (2002). What evidence in evidence-based medicine? Philosophy of Science, 69(3), 316–330.

  • Worrall, J. (2007a). Evidence in medicine and evidence-based medicine. Philosophy Compass, 2(6), 981–1022.

  • Worrall, J. (2007b). Why there’s no cause to randomize. British Journal for the Philosophy of Science, 58(3), 451–488.

  • Worrall, J. (2010). Evidence: Philosophy of science meets medicine. Journal of Evaluation in Clinical Practice, 16(2), 356–362.


Acknowledgements

The idea for this chapter arose when the author was a research assistant on a project on the “Optimal design of biofuel production by microalgae” at INRA, UR0050, Laboratoire de Biotechnologie de l’Environnement (2009–2010). Progress all but ceased when the author joined the “From objective Bayesian epistemology to inductive logic” AHRC-funded project at the University of Kent. The great majority of the work was carried out after the author joined the ERC-funded project “Philosophy of Pharmacology: Safety, Statistical Standards, and Evidence Amalgamation” (grant 639276) at the LMU Munich. Currently, the author is the principal investigator of the project Evidence and Objective Bayesian Epistemology funded by the German Research Council. Regarding this chapter, the author benefited from a number of discussions with Seamus Bradley, Ricardo Büttner, Teddy Groves, Adam LaCaze, Laurent Lardon, Barbara Osimani, Roland Poellinger, David Teira and Jon Williamson as well as the members of the Environmental Lifecycle and Sustainability Assessment group. He would also like to thank an anonymous referee for the European Journal of Operational Research and three anonymous reviewers for this volume as well as the editors of this volume for their thoughtful comments and insights.

Author information

Correspondence to Jürgen Landes.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Landes, J. (2020). An Evidence-Hierarchical Decision Aid for Ranking in Evidence-Based Medicine. In: LaCaze, A., Osimani, B. (eds) Uncertainty in Pharmacology. Boston Studies in the Philosophy and History of Science, vol 338. Springer, Cham. https://doi.org/10.1007/978-3-030-29179-2_11
