
Why Bad Coffee? Explaining Agent Plans with Valuings

  • Michael Winikoff
  • Virginia Dignum
  • Frank Dignum
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11094)

Abstract

An important issue in deploying an autonomous system is how to enable human users and stakeholders to develop an appropriate level of trust in the system. It has been argued that a crucial mechanism to enable appropriate trust is the ability of a system to explain its behaviour. Obviously, such explanations need to be comprehensible to humans. We argue that it makes sense to build on the results of extensive research in social sciences that explores how humans explain their behaviour. Using similar concepts for explanation is argued to help with comprehensibility, since the concepts are familiar. Following work in the social sciences, we propose the use of a folk-psychological model that utilises beliefs, desires, and “valuings”. We propose a formal framework for constructing explanations of the behaviour of an autonomous system, present an (implemented) algorithm for giving explanations, and present evaluation results.
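The abstract describes explaining an agent's plan choices in terms of beliefs, desires, and valuings. The paper's formal framework and algorithm are not reproduced on this page; the following is only an illustrative sketch of the general idea, in which an agent scores applicable plans by how strongly they promote its valuings and then phrases its choice folk-psychologically. All names here (`Plan`, `choose_and_explain`, the coffee scenario) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only (not the paper's actual algorithm): a plan is
# applicable if its precondition is believed; among applicable plans, the
# agent picks the one whose promoted valuings score highest, then explains
# the choice using a desire, a belief, and the dominant valuing.
from dataclasses import dataclass, field


@dataclass
class Plan:
    name: str
    precondition: str               # belief required for the plan to apply
    promotes: dict = field(default_factory=dict)  # valuing -> degree promoted


def choose_and_explain(goal, beliefs, valuings, plans):
    """Return the best applicable plan and a folk-psychological explanation."""
    applicable = [p for p in plans if p.precondition in beliefs]
    best = max(applicable,
               key=lambda p: sum(valuings.get(v, 0.0) * d
                                 for v, d in p.promotes.items()))
    # The valuing that contributed most to the chosen plan's score.
    top_valuing = max(best.promotes,
                      key=lambda v: valuings.get(v, 0.0) * best.promotes[v])
    explanation = (f"I chose '{best.name}' because I desire '{goal}', "
                   f"I believe '{best.precondition}', "
                   f"and I value '{top_valuing}'.")
    return best, explanation


# Toy "bad coffee" scenario: the agent buys worse coffee because it
# values frugality more than taste.
plans = [
    Plan("buy gourmet coffee", "cafe is open",
         {"taste": 0.9, "frugality": 0.1}),
    Plan("buy vending-machine coffee", "machine works",
         {"taste": 0.2, "frugality": 0.9}),
]
beliefs = {"cafe is open", "machine works"}
valuings = {"frugality": 0.8, "taste": 0.3}
plan, why = choose_and_explain("have coffee", beliefs, valuings, plans)
```

In this toy run the vending-machine plan wins (0.2·0.3 + 0.9·0.8 = 0.78 versus 0.35), so the explanation cites frugality, mirroring the "why bad coffee?" motif: the surprising choice is made intelligible by surfacing the valuing that drove it.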

References

1. Bratman, M.E., Israel, D.J., Pollack, M.E.: Plans and resource-bounded practical reasoning. Comput. Intell. 4, 349–355 (1988)
2. Bratman, M.E.: Intentions, Plans, and Practical Reason. Harvard University Press, Cambridge (1987)
3. Burmeister, B., Arnold, M., Copaciu, F., Rimassa, G.: BDI-agents for agile goal-oriented business processes. In: Proceedings of the Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS) [Industry Track], pp. 37–44. IFAAMAS (2008)
4. Chakraborti, T., Sreedharan, S., Zhang, Y., Kambhampati, S.: Plan explanations as model reconciliation: moving beyond explanation as soliloquy. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, pp. 156–163 (2017). https://doi.org/10.24963/ijcai.2017/23
5. Cranefield, S., Winikoff, M., Dignum, V., Dignum, F.: No pizza for you: value-based plan selection in BDI agents. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, pp. 178–184 (2017). https://doi.org/10.24963/ijcai.2017/26
6. EU: EU General Data Protection Regulation, April 2016. http://tinyurl.com/GDPREU2016 (see articles 13–15 and 22)
7. Grice, H.P.: Logic and conversation. In: Cole, P., Morgan, J. (eds.) Syntax and Semantics, Volume 3: Speech Acts. Academic Press, New York (1975)
8. Gunning, D.: Explainable Artificial Intelligence (XAI) (2018). https://www.darpa.mil/program/explainable-artificial-intelligence
9. Harbers, M.: Explaining Agent Behavior in Virtual Training. SIKS dissertation series no. 2011-35, SIKS (Dutch Research School for Information and Knowledge Systems) (2011)
10. Lombrozo, T.: Explanation and abductive inference. In: Oxford Handbook of Thinking and Reasoning, pp. 260–276 (2012)
11. Malle, B.F.: How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction. The MIT Press, Cambridge (2004). ISBN 0-262-13445-4
12. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. CoRR abs/1706.07269 (2017). http://arxiv.org/abs/1706.07269
13. Rao, A.S., Georgeff, M.P.: An abstract architecture for rational agents. In: Rich, C., Swartout, W., Nebel, B. (eds.) Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, pp. 439–449. Morgan Kaufmann Publishers, San Mateo (1992)
14. Schwartz, S.: An overview of the Schwartz theory of basic values. Online Read. Psychol. Cult. 2(1) (2012). https://doi.org/10.9707/2307-0919.1116
15. Thangarajah, J., Padgham, L., Winikoff, M.: Detecting and avoiding interference between goals in intelligent agents. In: Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI), pp. 721–726 (2003)
16. Thangarajah, J., Padgham, L., Winikoff, M.: Detecting and exploiting positive goal interaction in intelligent agents. In: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 401–408. ACM Press (2003)
17. Visser, S., Thangarajah, J., Harland, J., Dignum, F.: Preference-based reasoning in BDI agent systems. Auton. Agents Multi-Agent Syst. 30(2), 291–330 (2016). https://doi.org/10.1007/s10458-015-9288-2
18. van der Weide, T.: Arguing to motivate decisions. Dissertation, Utrecht University (2011). https://dspace.library.uu.nl/handle/1874/210788
19. Winikoff, M.: Towards trusting autonomous systems. In: Fifth Workshop on Engineering Multi-Agent Systems (EMAS) (2017)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Michael Winikoff (1)
  • Virginia Dignum (2)
  • Frank Dignum (3)

  1. University of Otago, Dunedin, New Zealand
  2. Delft University of Technology, Delft, The Netherlands
  3. Utrecht University, Utrecht, The Netherlands
