Empathic Autonomous Agents

  • Timotheus Kampik
  • Juan Carlos Nieves
  • Helena Lindgren
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11375)

Abstract

Identifying and resolving conflicts of interests is a key challenge when designing autonomous agents. For example, such conflicts often occur when complex information systems interact persuasively with humans, and they are likely to arise in non-human agent-to-agent interaction as well. We introduce a theoretical framework for an empathic autonomous agent that proactively identifies potential conflicts of interests in interactions with other agents (and humans) by considering their utility functions and comparing them with its own preferences, using a system of shared values to find a solution all agents consider acceptable. To illustrate how empathic autonomous agents work, we provide running examples and a simple prototype implementation in a general-purpose programming language. To give a high-level overview of our work, we propose a reasoning-loop architecture for our empathic agent.
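The decision rule sketched in the abstract can be illustrated with a minimal example. The following Python snippet is a sketch under assumptions, not the paper's formalization: the function name `empathic_choice`, the acceptability threshold, and the fallback rule are illustrative choices. The agent compares the other agents' utility functions with its own and picks, among the actions every agent finds acceptable, the one that maximizes its own utility.

```python
def empathic_choice(actions, own_utility, other_utilities, threshold=0):
    """Illustrative empathic decision rule (names and criteria assumed).

    Among the actions that every other agent rates at or above the
    acceptability threshold, return the one maximizing the agent's own
    utility; if no such action exists, fall back to maximizing the
    joint (summed) utility of all agents.
    """
    # Actions all other agents consider acceptable.
    acceptable = [a for a in actions
                  if all(u(a) >= threshold for u in other_utilities)]
    if acceptable:
        return max(acceptable, key=own_utility)
    # No conflict-free action exists: maximize total utility instead.
    return max(actions,
               key=lambda a: own_utility(a) + sum(u(a) for u in other_utilities))


# Hypothetical example: two agents approaching an intersection.
own = {"drive": 2, "wait": 1}          # the empathic agent prefers to drive
other = {"drive": -1, "wait": 1}       # the other agent would be harmed

chosen = empathic_choice(["drive", "wait"], own.get, [other.get])
# The agent yields, since "drive" is unacceptable to the other agent.
```

In this toy scenario the agent selects `"wait"`: although driving has the higher self-utility, it would impose a negative utility on the other agent, so the empathic rule rejects it.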

Keywords

Multi-agent systems · Utility theory · Conflicts of interests

Notes

Acknowledgments

We thank the anonymous reviewers for their constructive critical feedback. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Umeå University, Umeå, Sweden