Abstract
Understanding human decision processes has been a topic of intense study in disciplines including psychology, economics, and artificial intelligence. Indeed, modeling human decision making plays a fundamental role in the design of intelligent systems capable of rich interactions. Decision Field Theory (DFT) [3] provides a cognitive model of the deliberation process that precedes the selection of an option. DFT is grounded in psychological principles and has been shown to be effective in modeling several behavioral effects involving uncertainty and interactions among alternatives. In this paper, we address the problem of learning the internal DFT model of a decision maker by observing only their final choices. In our setting, choices are made among several options that are evaluated according to different attributes. Our approach, based on Recurrent Neural Networks, extracts underlying preferences compatible with the observed choice behavior and thus provides a method for learning a rich preference model of an individual, one which encompasses psychological aspects and can be used as a more realistic predictor of future behavior.
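The deliberation process the abstract refers to can be illustrated with a minimal simulation of multialternative DFT dynamics in the style of Roe, Busemeyer, and Townsend [22]: a preference state accumulates attribute-weighted valences under stochastic attention until an option crosses a threshold. The value matrix, contrast/feedback parameters, and threshold below are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical value matrix M: 3 options evaluated on 2 attributes
# (e.g., price and quality); not taken from the paper.
M = np.array([[1.0, 3.0],
              [3.0, 1.0],
              [2.0, 2.0]])
n_options, n_attrs = M.shape

# Contrast matrix C: each option is compared against the mean of the others.
ones_off = np.ones((n_options, n_options)) - np.eye(n_options)
C = np.eye(n_options) - ones_off / (n_options - 1)

# Feedback matrix S: self-recurrence on the diagonal,
# lateral inhibition on the off-diagonal (illustrative values).
S = 0.9 * np.eye(n_options) - 0.02 * ones_off

attr_probs = np.array([0.5, 0.5])  # attention probabilities per attribute
P = np.zeros(n_options)            # preference state
threshold = 2.0                    # illustrative stopping criterion

for t in range(1000):
    # Momentary attention fixates on a single attribute at each step.
    w = np.zeros(n_attrs)
    w[rng.choice(n_attrs, p=attr_probs)] = 1.0
    V = C @ M @ w        # valence input: contrasted attribute values
    P = S @ P + V        # preference accumulation with decay and inhibition
    if P.max() >= threshold:
        break

choice = int(np.argmax(P))  # option selected at deliberation's end
```

Learning the internal model, as the paper proposes, amounts to recovering quantities such as the attention probabilities and feedback parameters from observed `choice` outcomes alone, which is natural to cast as training a recurrent network whose unrolled dynamics mirror this accumulation loop.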
References
Bengio, Y., Boulanger-Lewandowski, N., Pascanu, R.: Advances in optimizing recurrent networks. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8624–8628. IEEE (2013)
Bergen, L., Evans, O., Tenenbaum, J.: Learning structured preferences. In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 32 (2010)
Busemeyer, J.R., Diederich, A.: Survey of decision field theory. Math. Soc. Sci. 43(3), 345–370 (2002)
Busemeyer, J.R., Townsend, J.T.: Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. Psychol. Rev. 100(3), 432 (1993)
De Soete, G., Feger, H., Klauer, K.C.: New Developments in Probabilistic Choice Modeling. North-Holland, Amsterdam (1989)
Fürnkranz, J., Hüllermeier, E.: Preference Learning. Springer, Boston (2010). https://doi.org/10.1007/978-0-387-30164-8
Hinton, G., Srivastava, N., Swersky, K.: Neural networks for machine learning, lecture 6a: overview of mini-batch gradient descent
Hotaling, J.M., Busemeyer, J.R., Li, J.: Theoretical developments in decision field theory: comment on Tsetsos, Usher, and Chater (2010)
Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preference and Value Tradeoffs. Wiley, New York (1976)
Kendall, M.G.: A new measure of rank correlation. Biometrika 30(1/2), 81–93 (1938)
Koriche, F., Zanuttini, B.: Learning conditional preference networks. Artif. Intell. 174(11), 685–703 (2010)
Kullback, S.: Information Theory and Statistics. Courier Corporation, North Chelmsford (1997)
Lin, J.: Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 37(1), 145–151 (1991)
Paszke, A., et al.: Automatic differentiation in PyTorch. In: NIPS-W (2017)
Raedt, L.D., Passerini, A., Teso, S.: Learning constraints from examples. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI 2018), pp. 7965–7970. AAAI (2018)
Roe, R.M., Busemeyer, J.R., Townsend, J.T.: Multialternative decision field theory: a dynamic connectionist model of decision making. Psychol. Rev. 108(2), 370 (2001)
Rossi, F., Venable, K., Walsh, T.: A Short Introduction to Preferences: Between Artificial Intelligence and Social Choice. Morgan and Claypool, San Rafael (2011)
Rossi, F., Sperduti, A.: Learning solution preferences in constraint problems. J. Exp. Theor. Artif. Intell. 10(1), 103–116 (1998)
Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323(6088), 533 (1986)
Thurstone, L.L.: The Measurement of Values. University of Chicago Press, Chicago (1959)
© 2019 Springer Nature Singapore Pte Ltd.
Rahgooy, T., Venable, K.B. (2019). Learning Preferences in a Cognitive Decision Model. In: Zeng, A., Pan, D., Hao, T., Zhang, D., Shi, Y., Song, X. (eds) Human Brain and Artificial Intelligence. HBAI 2019. Communications in Computer and Information Science, vol 1072. Springer, Singapore. https://doi.org/10.1007/978-981-15-1398-5_13
Print ISBN: 978-981-15-1397-8
Online ISBN: 978-981-15-1398-5