
Empathic Autonomous Agents

  • Conference paper
  • Engineering Multi-Agent Systems (EMAS 2018)

Abstract

Identifying and resolving conflicts of interest is a key challenge when designing autonomous agents. For example, such conflicts often occur when complex information systems interact persuasively with humans, and they are likely to arise in non-human agent-to-agent interaction in the future. We introduce a theoretical framework for an empathic autonomous agent that proactively identifies potential conflicts of interest in interactions with other agents (and humans). It does so by considering their utility functions, comparing them with its own preferences, and applying a system of shared values to find a solution that all agents consider acceptable. To illustrate how empathic autonomous agents work, we provide running examples and a simple prototype implementation in a general-purpose programming language. To give a high-level overview of our work, we propose a reasoning-loop architecture for our empathic agent.
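
The core decision logic can be sketched as follows: a minimal Python sketch under our own assumptions, with illustrative names and interfaces rather than the prototype's actual code. The agent first checks whether the action tuples maximizing its own utility overlap with those maximizing the other agent's, and only falls back to maximizing an aggregated shared utility when they do not:

```python
def arg_max(utility, action_tuples):
    """Return the set of action tuples maximizing `utility`, ignoring
    tuples valued None (the "null" marker for impossible tuples)."""
    valued = [(t, utility(t)) for t in action_tuples]
    valued = [(t, u) for t, u in valued if u is not None]
    best = max(u for _, u in valued)
    return {t for t, u in valued if u == best}


def decide(own_utility, other_utility, action_tuples, aggregate):
    """If the agents' individually optimal tuples overlap, there is no
    conflict and the overlap is acceptable as-is; otherwise, return the
    tuples that maximize the aggregated (shared) utility."""
    overlap = arg_max(own_utility, action_tuples) & arg_max(other_utility, action_tuples)
    if overlap:
        return overlap

    def shared(t):
        u_own, u_other = own_utility(t), other_utility(t)
        if u_own is None or u_other is None:
            return None
        return aggregate(u_own, u_other)

    return arg_max(shared, action_tuples)
```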


Notes

  1. E.g., research provides evidence that contextual advertising influences how users process online news [25]; social network applications have been employed effectively for political persuasion (see, e.g., [4]).

  2. As we will explain later, the scenario and the resulting specification can be gradually extended to allow for better real-world applicability.

  3. We allow utility functions to return a null value for action tuples that are considered impossible, e.g., when some actions are mutually exclusive. While we concede that the elegance of this approach is up for debate, we opted for it because of its simplicity.

  4. The \({{\,\mathrm{arg\,max}\,}}\) operator takes the function it precedes and returns all argument tuples that maximize the function.

  5. I.e., for the same actions, an agent should receive a different utility outcome than another agent only if the impact on the two agents is distinguishable in its consequences. We again allow null values to be returned for impossible action tuples.

  6. Because different aggregation approaches are possible (for example, sum or product) for determining the maximal shared utility, we introduce an aggregation function \(aggregate(u_{0}, ..., u_{n})\) that we do not further specify. In our running examples (see Sect. 3), we use the product of the individual utility function outcomes to introduce some notion of fairness; inequality should not be in the interest of the empathic agent. However, this design choice is an implementation detail that can be debated.
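
To make this design choice concrete, the following minimal sketch (our illustration; this `aggregate` is an assumption, not the prototype's code) implements product aggregation. The product favors balanced outcomes: \(0.5 \cdot 0.5 > 0.9 \cdot 0.1\), even though both pairs sum to 1.

```python
import math

def aggregate(*utilities):
    """Product aggregation of individual utility outcomes. None ("null")
    propagates, so impossible action tuples stay impossible."""
    if any(u is None for u in utilities):
        return None
    return math.prod(utilities)

# Balanced outcomes beat unequal ones with the same sum:
assert aggregate(0.5, 0.5) > aggregate(0.9, 0.1)
```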

  7. To facilitate readability, we switch to a pseudo-code notation for the following algorithms.

  8. We already use null to denote impossible action tuples, which implies that an acceptable action tuple should always exist. To distinguish unacceptable from impossible tuples, a value of \(- \infty \) could be assigned instead.

  9. See the Nash equilibrium definition provided by Osborne and Rubinstein [19, p. 11 et sqq.].

  10. As stated above, we assume that the first function sorts the action tuples in a deterministic order before returning the first element.

  11. \(drive_A\) and \(wait_A\), as well as \(drive_B\) and \(wait_B\), are mutually exclusive (\(\{drive_A \oplus wait_A, drive_B \oplus wait_B\}\), with \(A \oplus B := (A \vee B) \wedge \lnot (A \wedge B)\)). I.e., the functions return null if \(drive_A \wedge wait_A \vee drive_B \wedge wait_B\) holds.
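
For illustration, such exclusion constraints can be encoded directly in a utility function. The sketch below is our own, with assumed payoff values; the collision penalty and the preference for driving are hypothetical, not the paper's numbers:

```python
def utility_a(drive_a, wait_a, drive_b, wait_b):
    """Agent A's utility over a joint action tuple. Returns None ("null")
    for impossible tuples, i.e. when drive and wait co-occur for an agent."""
    if (drive_a and wait_a) or (drive_b and wait_b):
        return None          # mutually exclusive actions: impossible tuple
    if drive_a and drive_b:
        return -10           # assumed penalty: both agents drive at once
    return 1 if drive_a else 0  # assumed: A mildly prefers to proceed
```

Agent B's utility function would mirror this definition with the roles of the two agents swapped.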

  12. The scenario is an adjusted and extended version of the “Bach or Stravinsky? (BoS)” example presented by Osborne and Rubinstein [19, pp. 15–16].

  13. Note that the if-condition that triggers the return of a null value simply defines that \(Bach_A\), \(Stravinsky_A\), and \(Mozart_A\) are mutually exclusive, as are \(Bach_B\), \(Stravinsky_B\), and \(Mozart_B\).

  14. The code, as well as documentation and tests, is available at http://s.cs.umu.se/qxgbfi.

  15. However, the same can be achieved with temporal and probabilistic logic.

References

  1. Albrecht, S.V., Stone, P.: Autonomous agents modelling other agents: a comprehensive survey and open problems. Artif. Intell. 258, 66–95 (2018)

  2. Alshabi, W., Ramaswamy, S., Itmi, M., Abdulrab, H.: Coordination, cooperation and conflict resolution in multi-agent systems. In: Sobh, T. (ed.) Innovations and Advanced Techniques in Computer and Information Sciences and Engineering, pp. 495–500. Springer, Dordrecht (2007). https://doi.org/10.1007/978-1-4020-6268-1_87

  3. Amgoud, L., Dimopoulos, Y., Moraitis, P.: A unified and general framework for argumentation-based negotiation. In: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, pp. 158:1–158:8. ACM, New York (2007)

  4. Berinsky, A.J.: Rumors and health care reform: experiments in political misinformation. Br. J. Polit. Sci. 47(2), 241–262 (2017)

  5. Black, E., Atkinson, K.: Choosing persuasive arguments for action. In: The 10th International Conference on Autonomous Agents and Multiagent Systems, vol. 3, pp. 905–912. International Foundation for Autonomous Agents and Multiagent Systems (2011)

  6. Bordini, R.H., Hübner, J.F.: BDI agent programming in AgentSpeak using Jason. In: Toni, F., Torroni, P. (eds.) CLIMA 2005. LNCS (LNAI), vol. 3900, pp. 143–164. Springer, Heidelberg (2006). https://doi.org/10.1007/11750734_9

  7. Bratman, M.: Intention, Plans, and Practical Reason. Center for the Study of Language and Information, Stanford (1987)

  8. Bruccoleri, M., Nigro, G.L., Perrone, G., Renna, P., Diega, S.N.L.: Production planning in reconfigurable enterprises and reconfigurable production systems. CIRP Ann. 54(1), 433–436 (2005)

  9. Busoniu, L., Babuska, R., De Schutter, B.: A comprehensive survey of multiagent reinforcement learning. IEEE Trans. Syst. Man Cybern. Part C 38(2), 156–172 (2008)

  10. Chajewska, U., Koller, D., Ormoneit, D.: Learning an agent’s utility function by observing behavior. In: ICML, pp. 35–42 (2001)

  11. Conroy, D.E., Yang, C.H., Maher, J.P.: Behavior change techniques in top-ranked mobile apps for physical activity. Am. J. Prev. Med. 46(6), 649–652 (2014)

  12. Coplan, A.: Will the real empathy please stand up? A case for a narrow conceptualization. South. J. Philos. 49(s1), 40–65 (2011)

  13. Dautenhahn, K.: The art of designing socially intelligent agents: science, fiction, and the human in the loop. Appl. Artif. Intell. 12(7–8), 573–617 (1998)

  14. Hors-Fraile, S., et al.: Analyzing recommender systems for health promotion using a multidisciplinary taxonomy: a scoping review. Int. J. Med. Inform. 114, 143–155 (2018)

  15. Marey, O., Bentahar, J., Khosrowshahi-Asl, E., Sultan, K., Dssouli, R.: Decision making under subjective uncertainty in argumentation-based agent negotiation. J. Ambient Intell. Humanized Comput. 6(3), 307–323 (2015)

  16. Monostori, L., Váncza, J., Kumara, S.: Agent-based systems for manufacturing. CIRP Ann. 55(2), 697–720 (2006)

  17. Ng, A.Y., Russell, S.J., et al.: Algorithms for inverse reinforcement learning. In: ICML, pp. 663–670 (2000)

  18. Oinas-Kukkonen, H., Harjumaa, M.: Towards deeper understanding of persuasion in software and information systems. In: 2008 First International Conference on Advances in Computer-Human Interaction, pp. 200–205. IEEE (2008)

  19. Osborne, M.J., Rubinstein, A.: A Course in Game Theory. MIT Press, Cambridge (1994)

  20. Verduyn, P., Ybarra, O., Résibois, M., Jonides, J., Kross, E.: Do social network sites enhance or undermine subjective well-being? A critical review. Soc. Issues Policy Rev. 11(1), 274–302 (2017)

  21. Ricci, F., Rokach, L., Shapira, B.: Recommender systems: introduction and challenges. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, pp. 1–34. Springer, Boston (2015). https://doi.org/10.1007/978-1-4899-7637-6_1

  22. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach. Pearson Education Limited (2016)

  23. Sikkenk, M., Terken, J.: Rules of conduct for autonomous vehicles. In: Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2015, pp. 19–22. ACM, New York (2015)

  24. Von Neumann, J., Morgenstern, O.: Theory of games and economic behavior. Bull. Amer. Math. Soc. 51(7), 498–504 (1945)

  25. Wojdynski, B.W., Bang, H.: Distraction effects of contextual advertising on online news processing: an eye-tracking study. Behav. Inf. Technol. 35(8), 654–664 (2016)

  26. Yusof, N.M., Karjanto, J., Terken, J., Delbressine, F., Hassan, M.Z., Rauterberg, M.: The exploration of autonomous vehicle driving styles: preferred longitudinal, lateral, and vertical accelerations. In: Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2016, pp. 245–252. ACM, New York (2016)


Acknowledgments

We thank the anonymous reviewers for their constructive critical feedback. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Author information

Corresponding author

Correspondence to Timotheus Kampik.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Kampik, T., Nieves, J.C., Lindgren, H. (2019). Empathic Autonomous Agents. In: Weyns, D., Mascardi, V., Ricci, A. (eds) Engineering Multi-Agent Systems. EMAS 2018. Lecture Notes in Computer Science, vol. 11375. Springer, Cham. https://doi.org/10.1007/978-3-030-25693-7_10


  • DOI: https://doi.org/10.1007/978-3-030-25693-7_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-25692-0

  • Online ISBN: 978-3-030-25693-7

  • eBook Packages: Computer Science (R0)
