
Accounting for Possibilities in Decision Making

  • Chapter in:
The Argumentative Turn in Policy Analysis

Part of the book series: Logic, Argumentation & Reasoning (LARI, volume 10)

Abstract

Intended as a practical guide for decision analysts, this chapter provides an introduction to reasoning under great uncertainty. It seeks to incorporate standard methods of risk analysis in a broader argumentative framework by re-interpreting them as specific (consequentialist) arguments that may inform a policy debate—side by side with further (possibly non-consequentialist) arguments which standard economic analysis does not account for. The first part of the chapter reviews arguments that can be advanced in a policy debate despite deep uncertainty about policy outcomes, i.e. arguments which assume that uncertainties surrounding policy outcomes cannot be (probabilistically) quantified. The second part of the chapter discusses the epistemic challenge of reasoning under great uncertainty, which consists in identifying all possible outcomes of the alternative policy options. It is argued that our possibilistic foreknowledge should be cast in nuanced terms and that future surprises—triggered by major flaws in one’s possibilistic outlook—should be anticipated in policy deliberation.


Notes

  1. Like, for example, Heal and Millner (2013), I use “deep uncertainty” to refer to decision situations where the outcomes of alternative options cannot be predicted probabilistically. Hansson and Hirsch Hadorn (2016) use “great uncertainty” for situations where, among other things, predictive uncertainties cannot be quantified. See Hansson and Hirsch Hadorn (2016) also for alternative terminologies and further terminological clarifications.

  2. This chapter complements Brun and Betz (2016) in this volume on argument analysis; readers with no background in argumentation theory will profit from studying both in conjunction.

  3. I try, however, to pinpoint substantial dissent in footnotes.

  4. For an up-to-date decision-theoretic review of decision making under deep uncertainty, see Etner et al. (2012).

  5. Terminologically, I follow Clarke (2006), who criticizes probabilism on the basis of extensive case studies. A succinct statement of probabilism is due to O’Hagan and Oakley (2004:239): “In principle, probability is uniquely appropriate for the representation and quantification of all forms of uncertainty; it is in this sense that we claim that ‘probability is perfect’.” The formal decision theory that inspires probabilism was developed by Savage (1954) and Jeffrey (1965).

  6. In the context of climate policy making, Schneider (2001) is a prominent defence of this view; compare also Jenkins et al. (2009:23) for a more recent example. A (self-)critical review by someone who has pioneered uncertainty quantification in climate science is Morgan (2011).

  7. Morgan et al. (1990) spell out this view in detail (see for example p. 49 for a very clear statement).

  8. This view is echoed in various contributions to this book, e.g. Hansson (2016, esp. fallacies), Shrader-Frechette (2016, p. 12) and Doorn (2016, beginning). Compare Gilboa et al. (2009) as well as Heal and Millner (2013) for a decision-theoretic defence.

  9. See again Shrader-Frechette (2016).

  10. The illustrative analogy is inspired by Ellsberg (1961), whose “Ellsberg Paradox” is an important argument against probabilism.

  11. It has been suggested that decision-makers can non-arbitrarily assume allegedly “un-informative” or “objective” probability distributions (e.g. a uniform distribution) in the absence of any relevant data. However, most Bayesian statisticians seem to concede that there are no non-subjective prior probabilities (e.g. Bernardo 1979:123). Van Fraassen (1989:293–317) thoroughly discusses the problems of assuming “objective priors”; Williamson (2010) is a recent defence of doing so.

  12. For a state-of-the-art explication of the concept of real possibility, using branching-space-time theory, see Müller (2012).

  13. Or, more precisely, “knowledge claims.” In the remainder of this chapter, I will refer to fallible knowledge claims, relative to which hypotheses are assessed, as “(background) knowledge” simpliciter.

  14. There is a vast philosophical literature on whether this explication fully accommodates our linguistic intuitions (the “data”), cf. Egan and Weatherson (2009). Still, it is unclear whether that philosophical controversy is also of decision-theoretic relevance.

  15. Moreover, that is a question we cannot answer anyway: every judgement about whether some state of affairs S is a real possibility is based on an assessment of S in terms of epistemic possibility. To assert that S is really possible is simply to say that S represents an epistemic possibility (relative to background knowledge K) and that K is in a specific way “complete”, i.e. includes everything that can be known about S. Likewise, to assert that S does not represent a real possibility is to say that S is not an epistemic possibility (relative to background knowledge K) and that K is objectively correct.

  16. Brun and Betz (2016), this volume, which nicely complements this chapter, provides practical guidance for analyzing and evaluating argumentation.

  17. On prerequisites of sound decision making under uncertainty, see also Steele (2006).

  18. The symmetry arguments Hansson (2016) discusses are another case in point. Suppose a proponent argues that option A′ should be preferred to option A on the grounds that A possibly leads to the disastrous effect E. An opponent counters the argument by showing that A′ may lead to an equally disastrous effect E′. Now, both arguments only draw on some possible effects of A and A′, respectively. They are weak and preliminary in the sense that more elaborate considerations will make them obsolete. Maybe we can construe them as heuristic reasoning which serves the piecemeal construction of more complex and robust practical arguments.

  19. Nordhaus and Boyer (2000) is an (influential) case in point.

  20. For a more detailed discussion of the implications of representation theorems, see Briggs (2014: especially Sect. 2.2) and the references therein.

  21. Cf. Luce and Raiffa (1957:278), Resnik (1987:26).

  22. E.g. Elliott (2010).

  23. The lexicographically refined maximin criterion is called “leximin.”
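To illustrate the refinement, here is a minimal sketch of a leximin comparison over hypothetical outcome-utility profiles (the option names and values are invented for illustration, not taken from the chapter):

```python
def leximin_better(profile_a, profile_b):
    """True if outcome profile `profile_a` is leximin-preferred to `profile_b`.

    Leximin refines maximin: compare the worst outcomes first; on a tie,
    compare the second-worst outcomes, and so on. Profiles are lists of
    (hypothetical) utilities of the possible outcomes of an option.
    """
    for x, y in zip(sorted(profile_a), sorted(profile_b)):
        if x != y:
            return x > y
    return False  # profiles are leximin-indifferent

# Both options share the same worst case (0), so plain maximin cannot
# decide between them; leximin prefers A for its better second-worst case.
option_a = [0, 5, 9]
option_b = [0, 2, 9]
```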

  24. Moreover, the general premiss (2) can be understood as an implementation of Hansson’s symmetry tests (cf. Hansson 2016).

  25. Gardiner (2006:47); see also Sunstein (2005), who argues for a weaker set of conditions. The general strategy of identifying specific conditions under which the various decision principles may be applied is also favored by Resnik (1987:40).

  26. In case the (dis)value of the best case and worst case is quantifiable, their beta-balance is simply a weighted mean (where the parameter \( 0 \le \beta \le 1 \) determines the relative weight of best versus worst case in the argumentation): \( \beta \times \text{value-of-best-case} + (1-\beta) \times \text{disvalue-of-worst-case} \). The corresponding decision principle is called the “Hurwicz criterion” in decision theory (Resnik 1987:32, Luce and Raiffa 1957:282). Hansson (2001:102–113) investigates the formal properties of “extremal” preferences which only take best and worst possible cases into account.
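A minimal sketch of this beta-balance (Hurwicz) computation, under the simplifying assumption that best-case value and worst-case (dis)value have been placed on a single numerical scale; the example values are hypothetical:

```python
def hurwicz(best_value, worst_value, beta):
    """Beta-balance of best and worst case (the Hurwicz criterion).

    `beta` in [0, 1] is the relative weight of the best case:
    beta = 0 recovers pure worst-case (maximin-style) reasoning,
    beta = 1 pure best-case reasoning. Both arguments are assumed to
    be (dis)values expressed on one common numerical scale.
    """
    if not 0.0 <= beta <= 1.0:
        raise ValueError("beta must lie in [0, 1]")
    return beta * best_value + (1 - beta) * worst_value

# A moderately pessimistic weighting of a policy option whose best
# case is worth 10 and whose worst case is worth -2:
balance = hurwicz(10, -2, 0.5)  # -> 4.0
```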

  27. This is a version of the dominance principle (Resnik 1987:9).
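A minimal sketch of a (weak) dominance test, comparing two options state by state; the outcome values are hypothetical utilities, one per possible state:

```python
def dominates(option_a, option_b):
    """True if option_a weakly dominates option_b: option_a is at least
    as good as option_b in every possible state, and strictly better in
    at least one. Options are lists of (hypothetical) state-by-state
    outcome utilities, indexed by the same states."""
    assert len(option_a) == len(option_b), "options must cover the same states"
    at_least_as_good = all(x >= y for x, y in zip(option_a, option_b))
    strictly_better = any(x > y for x, y in zip(option_a, option_b))
    return at_least_as_good and strictly_better
```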

  28. In the context of climate policy making, an analogous line of reasoning, which focuses on the probability of attaining climate targets, is discussed under the title “cost-risk analysis”; see the decision-theoretic analyses by Schmidt et al. (2011) and Neubersch et al. (2014). Peterson (2006) shows that decision-making which seeks to minimize the probability of some harm runs into problems as soon as various harmful outcomes with different disvalue are distinguished.

  29. Robust decision analysis à la Lempert et al. is hence a systematic form of “hypothetical retrospection” (see Hansson 2016, Sect. 6).

  30. These different arguments and the coherent position (cf. Brun and Betz 2016: Sect. 4.2) one adopts with regard to them can be understood as an operationalization of Hansson’s degrees of unacceptability (cf. Hansson 2013:69–70).

  31. For a detailed discussion of risk imposition and the problems standard moral theories face in coping with risks, see Hansson (2003).

  32. Brun and Betz (2016), this volume, discuss how such principles and the corresponding arguments can be analyzed. See also Hansson (2013:97–101).

  33. Thus, Hansson (1997) stresses that in decision-making under deep uncertainty the demarcation of the possible from the impossible involves as influential a choice as the selection of a decision principle.

  34. In speaking of “verified” and “falsified” conceptual possibilities, I follow a terminological suggestion by Betz (2010). To “verify” a conceptual possibility in this sense does not imply showing that the corresponding hypothesis is true; what is shown to be true (in possibilistic verification) is the claim that the hypothesis is consistent with background knowledge. To “falsify” a conceptual possibility, however, involves showing that the corresponding hypothesis is false (given background knowledge).

  35. For this very reason, it is a non-trivial assumption that a dynamic model of a complex system (e.g. a climate model) is adequate for verifying possibilities about that system (cf. Betz 2015).

  36. See also the “epistemic defaults” discussed by Hansson (2016: Sect. 5).

  37. For a discussion of narrower bounds for future sea level rise, see Church et al. (2013:1185–6).

  38. See Ellis et al. (2008) and Blaizot et al. (2003).

  39. Compare the EU Energy Roadmap 2050 (European Commission 2011).

  40. Cf. Church et al. (2013:1186–9).

  41. Hansen et al. (2013) distinguish different “run-away greenhouse” scenarios and discuss whether they can be robustly ruled out—which, according to the authors, is the case for the most extreme ones (p. 24).

  42. See Betz (2011), especially the discussion of Popper’s argument against predicting scientific progress (pp. 650–651).

  43. See Rescher (1984, 2009) for a discussion of limits of science and their various (conceptual or empirical) reasons.

  44. Brun and Betz (2016: especially Sect. 4.2) explain how argument analysis, and especially argument mapping techniques, help to balance conflicting normative reasons in general.

  45. Basili and Zappia (2009) discuss the role of surprise in modern decision theory and its anticipation in the works of George L. S. Shackle.

  46. So, to give an example, it may be that in a specific debate, say about geoengineering, one cannot coherently accept at the same time (i) the precautionary principle, (ii) sustainability goals and (iii) a general ban on risk technologies. Whoever takes a stance in this debate has to strike a balance between these normative ideas.

Recommended Readings

  • Betz, G. (2010). What’s the worst case? The methodology of possibilistic prediction. Analyse und Kritik, 32, 87–106.

  • Etner, J., Jeleva, M., & Tallon, J.-M. (2012). Decision theory under ambiguity. Journal of Economic Surveys, 26, 234–270.

  • Lempert, R. J., Popper, S. W., & Bankes, S. C. (2003). Shaping the next one hundred years: New methods for quantitative, long-term policy analysis. Santa Monica: RAND.

  • Resnik, M. D. (1987). Choices: An introduction to decision theory. Minneapolis: University of Minnesota Press.

References

  • Basili, M., & Zappia, C. (2009). Shackle and modern decision theory. Metroeconomica, 60, 245–282.

  • Bernardo, J. M. (1979). Reference posterior distributions for Bayesian inference. Journal of the Royal Statistical Society. Series B (Methodological), 41, 113–147.

  • Betz, G. (2010). What’s the worst case? The methodology of possibilistic prediction. Analyse und Kritik, 32, 87–106.

  • Betz, G. (2011). Prediction. In I. C. Jarvie & J. Zamora-Bonilla (Eds.), The SAGE handbook of the philosophy of social sciences (pp. 647–664). Thousand Oaks: SAGE Publications.

  • Betz, G. (2015). Are climate models credible worlds? Prospects and limitations of possibilistic climate prediction. European Journal for Philosophy of Science, 5, 191–215.

  • Blaizot, J.-P., Iliopoulos, J., Madsen, J., Ross, G. G., Sonderegger, P., & Specht, H. J. (2003). Study of potentially dangerous events during heavy-ion collisions at the LHC: Report of the LHC Safety Study Group. https://cds.cern.ch/record/613175/files/CERN-2003-001.pdf. Accessed 12 Aug 2015.

  • Briggs, R. (2014). Normative theories of rational choice: Expected utility. The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/rationality-normative-utility/.

  • Brun, G., & Betz, G. (2016). Analysing practical argumentation. In S. O. Hansson & G. Hirsch Hadorn (Eds.), The argumentative turn in policy analysis. Reasoning about uncertainty (pp. 39–77). Cham: Springer. doi:10.1007/978-3-319-30549-3_3.

  • Church, J. A., Clark, P. U., Cazenave, A., Gregory, J. M., Jevrejeva, S., Levermann, A., Merrifield, M. A., et al. (2013). Sea level change. In T. F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S. K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, & P. M. Midgley (Eds.), Climate change 2013: The physical science basis. Contribution of Working Group I to the fifth assessment report of the Intergovernmental Panel on Climate Change (pp. 1137–1216). Cambridge: Cambridge University Press.

  • Clarke, L. B. (2006). Worst cases: Terror and catastrophe in the popular imagination. Chicago: University of Chicago Press.

  • Doorn, N. (2016). Reasoning about uncertainty in flood risk governance. In S. O. Hansson & G. Hirsch Hadorn (Eds.), The argumentative turn in policy analysis. Reasoning about uncertainty (pp. 245–263). Cham: Springer. doi:10.1007/978-3-319-30549-3_10.

  • Egan, A., & Weatherson, B. (2009). Epistemic modality. Oxford: Oxford University Press.

  • Elliott, K. C. (2010). Geoengineering and the precautionary principle. International Journal of Applied Philosophy, 24, 237–253.

  • Elliott, K. C. (2016). Climate geoengineering. In S. O. Hansson & G. Hirsch Hadorn (Eds.), The argumentative turn in policy analysis. Reasoning about uncertainty (pp. 305–324). Cham: Springer. doi:10.1007/978-3-319-30549-3_13.

  • Ellis, J., Giudice, G., Mangano, M., Tkachev, I., & Wiedemann, U. (2008). Review of the safety of LHC collisions. http://www.cern.ch/lsag/LSAG-Report.pdf. Accessed 10 Nov 2012.

  • Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75, 643–669.

  • Etner, J., Jeleva, M., & Tallon, J.-M. (2012). Decision theory under ambiguity. Journal of Economic Surveys, 26, 234–270.

  • European Commission. (2011). Commission staff working paper. Impact assessment accompanying the document Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions: Energy Roadmap 2050. COM(2011)885. http://ec.europa.eu/smart-regulation/impact/ia_carried_out/docs/ia_2011/sec_2011_1565_en.pdf. Accessed 12 Aug 2015.

  • Gardiner, S. M. (2006). A core precautionary principle. The Journal of Political Philosophy, 14, 33–60.

  • Gilboa, I., Postlewaite, A., & Schmeidler, D. (2009). Is it always rational to satisfy Savage’s axioms? Economics and Philosophy, 25(Special Issue 03), 285–296.

  • Graßl, H., Kokott, J., Kulessa, M., Luther, J., Nuscheler, F., Sauerborn, R., Schellnhuber, H.-J., Schubert, R., & Schulze, E.-D. (2003). World in transition: Towards sustainable energy systems. German Advisory Council on Global Change Flagship Report. http://www.wbgu.de/fileadmin/templates/dateien/veroeffentlichungen/hauptgutachten/jg2003/wbgu_jg2003_engl.pdf. Accessed 12 Aug 2015.

  • Hansen, J., Sato, M., Russell, G., & Kharecha, P. (2013). Climate sensitivity, sea level and atmospheric carbon dioxide. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(20120294).

  • Hansson, S. O. (1997). The limits of precaution. Foundations of Science, 2, 293–306.

  • Hansson, S. O. (2001). The structure of values and norms. Cambridge studies in probability, induction, and decision theory. Cambridge: Cambridge University Press.

  • Hansson, S. O. (2003). Ethical criteria of risk acceptance. Erkenntnis, 59, 291–309.

  • Hansson, S. O. (2013). The ethics of risk: Ethical analysis in an uncertain world. New York: Palgrave Macmillan.

  • Hansson, S. O. (2016). Evaluating the uncertainties. In S. O. Hansson & G. Hirsch Hadorn (Eds.), The argumentative turn in policy analysis. Reasoning about uncertainty (pp. 79–104). Cham: Springer. doi:10.1007/978-3-319-30549-3_4.

  • Heal, G., & Millner, A. (2013). Uncertainty and decision in climate change economics. NBER working paper No. 18929. http://www.nber.org/papers/w18929.pdf. Accessed 12 Aug 2015.

  • Jeffrey, R. (1965). The logic of decision. Chicago: University of Chicago Press.

  • Jenkins, G. J., Murphy, J. M., Sexton, D. M. H., Lowe, J. A., Jones, P., & Kilsby, C. G. (2009). UK climate projections: Briefing report. Exeter: Met Office Hadley Centre.

  • Lempert, R. J., Popper, S. W., & Bankes, S. C. (2002). Confronting surprise. Social Science Computer Review, 20, 420–440.

  • Lempert, R. J., Popper, S. W., & Bankes, S. C. (2003). Shaping the next one hundred years: New methods for quantitative, long-term policy analysis. Santa Monica: RAND.

  • Luce, R. D., & Raiffa, H. (1957). Games and decisions: Introduction and critical survey. New York: Wiley.

  • Möller, N. (2016). Value uncertainty. In S. O. Hansson & G. Hirsch Hadorn (Eds.), The argumentative turn in policy analysis. Reasoning about uncertainty (pp. 105–133). Cham: Springer. doi:10.1007/978-3-319-30549-3_5.

  • Morgan, M. G. (2011). Certainty, uncertainty, and climate change. Climatic Change, 108, 707–721.

  • Morgan, M. G., Henrion, M., & Small, M. (1990). Uncertainty: A guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge: Cambridge University Press.

  • Müller, T. (2012). Branching in the landscape of possibilities. Synthese, 188, 41–65.

  • Neubersch, D., Held, H., & Otto, A. (2014). Operationalizing climate targets under learning: An application of cost-risk analysis. Climatic Change, 126, 305–318.

  • Nordhaus, W. D., & Boyer, J. (2000). Warming the world: Economic models of climate change. Cambridge, MA: MIT Press.

  • O’Hagan, A., & Oakley, J. E. (2004). Probability is perfect, but we can’t elicit it perfectly. Reliability Engineering & System Safety, 85, 239–248.

  • Peterson, M. (2006). The precautionary principle is incoherent. Risk Analysis, 26, 595–601.

  • Rawls, J. (1971). A theory of justice. Cambridge: Harvard University Press.

  • Rescher, N. (1984). The limits of science. Pittsburgh series in philosophy and history of science. Berkeley: University of California Press.

  • Rescher, N. (2009). Ignorance: On the wider implications of deficient knowledge. Pittsburgh: University of Pittsburgh Press.

  • Resnik, M. D. (1987). Choices: An introduction to decision theory. Minneapolis: University of Minnesota Press.

  • Savage, L. J. (1954). The foundations of statistics. New York: Wiley.

  • Schefczyk, M. (2016). Financial markets: The stabilisation task. In S. O. Hansson & G. Hirsch Hadorn (Eds.), The argumentative turn in policy analysis. Reasoning about uncertainty (pp. 265–290). Cham: Springer. doi:10.1007/978-3-319-30549-3_11.

  • Schmidt, M. G. W., Lorenz, A., Held, H., & Kriegler, E. (2011). Climate targets under uncertainty: Challenges and remedies. Climatic Change, 104, 783–791.

  • Schneider, S. H. (2001). What is ‘dangerous’ climate change? Nature, 411, 17–19.

  • Shrader-Frechette, K. (2016). Uncertainty analysis, nuclear waste, and million-year predictions. In S. O. Hansson & G. Hirsch Hadorn (Eds.), The argumentative turn in policy analysis. Reasoning about uncertainty (pp. 291–303). Cham: Springer. doi:10.1007/978-3-319-30549-3_12.

  • Steele, K. (2006). The precautionary principle: A new approach to public decision-making? Law, Probability, and Risk, 5, 19–31.

  • Sunstein, C. R. (2005). Laws of fear: Beyond the precautionary principle. Cambridge: Cambridge University Press.

  • Toth, F. L. (2003). Climate policy in light of climate science: The ICLIPS project. Climatic Change, 56, 7–36.

  • van Fraassen, B. C. (1989). Laws and symmetry. Oxford: Oxford University Press.

  • Williamson, J. (2010). In defence of objective Bayesianism. Oxford: Oxford University Press.


Author information

Correspondence to Gregor Betz.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Betz, G. (2016). Accounting for Possibilities in Decision Making. In: Hansson, S., Hirsch Hadorn, G. (eds) The Argumentative Turn in Policy Analysis. Logic, Argumentation & Reasoning, vol 10. Springer, Cham. https://doi.org/10.1007/978-3-319-30549-3_6
