Improving and Measuring the Effectiveness of Decision Analysis: Linking Decision Analysis and Behavioral Decision Research

  • Robert T. Clemen
Part of the Springer Optimization and Its Applications book series (SOIA, volume 21)

Although behavioral research and decision analysis began with a close connection, that connection appears to have diminished over time. This chapter discusses how to re-establish the connection between the disciplines in two distinct ways. First, theoretical and empirical results in behavioral research in many cases provide a basis for crafting improved prescriptive decision analysis methods. Several productive applications of behavioral results to decision analysis are reviewed, and suggestions are made for additional areas in which behavioral results can be brought to bear on decision analysis methods in precise ways. Pursuing behaviorally based improvements in prescriptive techniques will go a long way toward re-establishing the link between the two fields.

The second way to reconnect behavioral research and decision analysis involves the development of new empirical methods for evaluating the effectiveness of prescriptive techniques. New techniques, including behaviorally based ones such as those proposed above, will undoubtedly be subjected to validation studies as part of the development process. However, validation studies typically focus on specific aspects of the decision-making process and do not answer a more fundamental question: are the proposed methods effective in helping people achieve their objectives? More generally, if we use decision analysis techniques, will we do a better job of getting what we want over the long run than we would if we used some other decision-making method? To answer these questions, we must develop methods that allow us to measure the effectiveness of decision-making methods. In our framework, we identify two types of effectiveness. We begin with the idea that individuals typically make choices based on their own preferences and often before all uncertainties are resolved. A decision-making method is said to be weakly effective if it leads to choices that can be shown to be preferred (in a way that we make precise) before consequences are experienced. In contrast, when the decision maker actually experiences his or her consequences, the question is whether decision analysis helps individuals do a better job of achieving their objectives in the long run. A decision-making method that does so is called strongly effective. We propose some methods for measuring effectiveness, discuss potential research paradigms, and suggest possible research projects. The chapter concludes with a discussion of the beneficial interplay between research on specific prescriptive methods and effectiveness studies.
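As a purely illustrative sketch, and not the chapter's proposed methodology, the Python snippet below shows one way the two notions might be operationalized in a toy setting: agreement with an ex-ante expected-utility benchmark stands in for the "weak" sense of effectiveness, while average realized utility over many resolved gambles stands in for the "strong" sense. The utility function, the "best-case" heuristic, and all parameters are assumptions invented for the example.

```python
import random

# Illustrative sketch only (not the chapter's proposed methodology): compare a
# simple expected-utility rule against a naive "best-case" heuristic on
# randomly generated two-outcome gambles, scoring them in two ways loosely
# analogous to the chapter's notions of weak and strong effectiveness.

random.seed(0)

def utility(x):
    # Assumed risk-averse utility function for the simulated decision maker.
    return x ** 0.5

def random_gamble():
    # A gamble pays `high` with probability p, otherwise `low`.
    low, high = sorted(random.uniform(0, 100) for _ in range(2))
    return {"p": random.random(), "low": low, "high": high}

def expected_utility(g):
    return g["p"] * utility(g["high"]) + (1 - g["p"]) * utility(g["low"])

def eu_choice(a, b):
    # Ex-ante benchmark: pick the gamble with the higher expected utility.
    return a if expected_utility(a) >= expected_utility(b) else b

def best_case_choice(a, b):
    # Naive heuristic: pick the gamble with the larger best-case payoff.
    return a if a["high"] >= b["high"] else b

def resolve(g):
    # Resolve the uncertainty and return the realized payoff.
    return g["high"] if random.random() < g["p"] else g["low"]

n = 10_000
weak_agreement = 0          # heuristic choices matching the ex-ante EU benchmark
realized = {"eu": 0.0, "heuristic": 0.0}

for _ in range(n):
    a, b = random_gamble(), random_gamble()
    preferred = eu_choice(a, b)            # benchmark choice, before resolution
    heuristic_pick = best_case_choice(a, b)
    weak_agreement += heuristic_pick is preferred
    realized["eu"] += utility(resolve(preferred))
    realized["heuristic"] += utility(resolve(heuristic_pick))

print(f"heuristic agrees with EU benchmark (weak sense): {weak_agreement / n:.2%}")
print(f"mean realized utility, EU rule (strong sense):        {realized['eu'] / n:.2f}")
print(f"mean realized utility, heuristic rule (strong sense): {realized['heuristic'] / n:.2f}")
```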


Keywords: Decision Maker · Analytic Hierarchy Process · Decision Analysis · Subjective Probability · Prospect Theory





Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Robert T. Clemen, Fuqua School of Business, Duke University, Durham, USA
