Learning Preferences in a Cognitive Decision Model

  • Taher Rahgooy
  • K. Brent Venable
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1072)

Abstract

Understanding human decision processes has been a topic of intense study in different disciplines, including psychology, economics, and artificial intelligence. Indeed, modeling human decision making plays a fundamental role in the design of intelligent systems capable of rich interactions. Decision Field Theory (DFT) [3] provides a cognitive model of the deliberation process that precedes the selection of an option. DFT is grounded in psychological principles and has been shown to be effective in modeling several behavioral effects involving uncertainty and interactions among alternatives. In this paper, we address the problem of learning the internal DFT model of a decision maker by observing only their final choices. In our setting, choices are made among several options, each evaluated according to different attributes. Our approach, based on Recurrent Neural Networks, extracts underlying preferences compatible with the observed choice behavior; it thus provides a method for learning a rich preference model of an individual, one that encompasses psychological aspects and can serve as a more realistic predictor of future behavior.
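
Since the paper builds directly on the DFT deliberation process, a short simulation may help fix ideas. In multialternative DFT [4, 16], a preference state P over the options evolves as P(t+1) = S P(t) + C M W(t+1), where M holds the options' subjective attribute values, W(t) is a stochastic attention vector over the attributes, C contrasts each option against the average of the others, and S encodes memory decay and lateral inhibition. The NumPy sketch below only illustrates this recurrence; it is not the authors' code, and every concrete value in it (M, w, S, theta) is a made-up example.

```python
import numpy as np

def simulate_mdft(M, w, S, theta=1.0, max_steps=1000, rng=None):
    """Simulate one multialternative DFT deliberation episode.

    M     : (n_options, n_attrs) subjective attribute values
    w     : (n_attrs,) attention probabilities over attributes (sums to 1)
    S     : (n_options, n_options) feedback matrix (decay + lateral inhibition)
    theta : decision threshold on the preference states
    Returns the index of the chosen option.
    """
    rng = rng or np.random.default_rng()
    n = M.shape[0]
    # Contrast matrix: each option is compared to the average of the others.
    C = np.eye(n) - (np.ones((n, n)) - np.eye(n)) / (n - 1)
    P = np.zeros(n)  # preference state, starts neutral
    for _ in range(max_steps):
        # Attention stochastically fixates on one attribute at a time.
        W = np.zeros(len(w))
        W[rng.choice(len(w), p=w)] = 1.0
        # Core DFT recurrence: P(t+1) = S P(t) + C M W(t+1)
        P = S @ P + C @ M @ W
        if P.max() >= theta:  # first option to cross the threshold wins
            break
    return int(P.argmax())

# Illustrative example: three options rated on two attributes.
M = np.array([[1.0, 3.0],
              [3.0, 1.0],
              [2.0, 2.0]])
w = np.array([0.5, 0.5])  # equal attention to both attributes
S = 0.95 * np.eye(3) - 0.02 * (np.ones((3, 3)) - np.eye(3))
choices = [simulate_mdft(M, w, S) for _ in range(1000)]
print(np.bincount(choices, minlength=3) / 1000)  # empirical choice distribution
```

Running many simulated deliberations yields a distribution over the options, which is how DFT accounts for probabilistic choice behavior.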
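The learning problem described in the abstract can then be sketched, again purely as an illustration: unroll the DFT recurrence for a fixed number of deliberation steps, treat it as an RNN, and fit its parameters by gradient descent so that the final preference states explain the observed choices. The PyTorch [14] sketch below makes several simplifying assumptions that are not taken from the paper: stochastic attention is replaced by its expectation (to keep the computation differentiable), the feedback matrix S is reduced to a learned scalar decay, and the data are random placeholders.

```python
import torch
import torch.nn.functional as F

# Placeholder data (illustrative only): option descriptions and observed choices.
n_trials, n_opts, n_attrs, T = 200, 3, 2, 50
trials = torch.rand(n_trials, n_opts, n_attrs)   # an M matrix per observed decision
chosen = torch.randint(0, n_opts, (n_trials,))   # index of the option picked

# Learnable DFT parameters: attention logits over attributes and a scalar
# decay standing in for the full feedback matrix S (a simplification).
att_logits = torch.zeros(n_attrs, requires_grad=True)
s = torch.tensor(0.9, requires_grad=True)
C = torch.eye(n_opts) - (torch.ones(n_opts, n_opts) - torch.eye(n_opts)) / (n_opts - 1)

optim = torch.optim.RMSprop([att_logits, s], lr=0.01)  # RMSProp, cf. [7]
for epoch in range(100):
    w = torch.softmax(att_logits, dim=0)  # expected attention weights
    P = torch.zeros(n_trials, n_opts)     # preference states, all trials at once
    for _ in range(T):                    # unroll deliberation like an RNN
        # Mean-field version of P(t+1) = S P(t) + C M W(t+1), with W replaced
        # by its expectation w so the unrolled computation is differentiable.
        P = s * P + torch.einsum('oq,bqa,a->bo', C, trials, w)
    loss = F.cross_entropy(P, chosen)     # final preferences should explain choices
    optim.zero_grad()
    loss.backward()
    optim.step()
```

The paper's actual parameterization, loss, and training details are its contribution and are not reproduced here; the sketch only shows why the unrolled deliberation is naturally treated as a recurrent network.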

References

  1. Bengio, Y., Boulanger-Lewandowski, N., Pascanu, R.: Advances in optimizing recurrent networks. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8624–8628. IEEE (2013)
  2. Bergen, L., Evans, O., Tenenbaum, J.: Learning structured preferences. In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 32 (2010)
  3. Busemeyer, J.R., Diederich, A.: Survey of decision field theory. Math. Soc. Sci. 43(3), 345–370 (2002)
  4. Busemeyer, J.R., Townsend, J.T.: Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. Psychol. Rev. 100(3), 432 (1993)
  5. De Soete, G., Feger, H., Klauer, K.C.: New Developments in Probabilistic Choice Modeling. North-Holland, Amsterdam (1989)
  6. Fürnkranz, J., Hüllermeier, E.: Preference Learning. Springer, Boston (2010). https://doi.org/10.1007/978-0-387-30164-8
  7. Hinton, G., Srivastava, N., Swersky, K.: Neural networks for machine learning, lecture 6a: overview of mini-batch gradient descent
  8. Hotaling, J.M., Busemeyer, J.R., Li, J.: Theoretical developments in decision field theory: comment on Tsetsos, Usher, and Chater (2010)
  9. Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York (1976)
  10. Kendall, M.G.: A new measure of rank correlation. Biometrika 30(1/2), 81–93 (1938)
  11. Koriche, F., Zanuttini, B.: Learning conditional preference networks. Artif. Intell. 174(11), 685–703 (2010)
  12. Kullback, S.: Information Theory and Statistics. Courier Corporation, North Chelmsford (1997)
  13. Lin, J.: Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 37(1), 145–151 (1991)
  14. Paszke, A., et al.: Automatic differentiation in PyTorch. In: NIPS-W (2017)
  15. Raedt, L.D., Passerini, A., Teso, S.: Learning constraints from examples. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), pp. 7965–7970. AAAI (2018)
  16. Roe, R.M., Busemeyer, J.R., Townsend, J.T.: Multialternative decision field theory: a dynamic connectionist model of decision making. Psychol. Rev. 108(2), 370 (2001)
  17. Rossi, F., Venable, K., Walsh, T.: A Short Introduction to Preferences: Between Artificial Intelligence and Social Choice. Morgan and Claypool, San Rafael (2011)
  18. Rossi, F., Sperduti, A.: Learning solution preferences in constraint problems. J. Exp. Theor. Artif. Intell. 10(1), 103–116 (1998)
  19. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323(6088), 533 (1986)
  20. Thurstone, L.L.: The Measurement of Values. University of Chicago Press, Chicago (1959)

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Department of Computer Science, Tulane University, New Orleans, USA
  2. IHMC, Pensacola, USA
