Abstract
In a previous publication, we introduced the core concepts of empathic agents as agents that use a combination of utility-based and rule-based approaches to resolve conflicts when interacting with other agents in their environment. In this work, we implement proof-of-concept prototypes of empathic agents with the multi-agent systems development framework Jason and apply argumentation theory to extend the previously introduced concepts to account for inconsistencies between the beliefs of different agents. We then analyze the feasibility of different admissible set-based argumentation semantics to resolve these inconsistencies. As a result of the analysis, we identify the maximal ideal extension as the most feasible argumentation semantics for the problem in focus.
Notes
- 1.
We based our empathic agent on a rationality-oriented definition of empathy, to avoid the technical ambiguity that definitions focusing on emotional empathy imply. A comprehensive discussion of definitions of empathy is beyond the scope of this work.
- 2.
Note that \(\mathop {\text {arg max}}u_{A_i}\) returns a set of sets.
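The following minimal sketch illustrates why the arg max returns a set of sets: the utility function is defined over sets of actions, and several action sets can be tied for the maximum. The action names and the utility function are purely hypothetical, not taken from the paper's scenarios.

```python
# Hypothetical illustration: when the utility function's domain is sets of
# actions, arg max can yield several maximizers, i.e. a set of sets.
from itertools import chain, combinations

actions = ["a1", "a2"]

def powerset(xs):
    """All subsets of xs, as frozensets (so they can be set members)."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def u(action_set):
    # Assumed utility: indifferent between the two singleton action sets.
    return 1 if len(action_set) == 1 else 0

best = max(u(s) for s in powerset(actions))
arg_max = {s for s in powerset(actions) if u(s) == best}
print(arg_max)  # two singleton maximizers -- a set of sets
```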
- 3.
Note that a single acceptability rule does not necessarily consider all to-be-executed actions, i.e. it might ignore some of its input arguments.
- 4.
As the simple examples we implement in this paper feature only one acting agent (a second agent is merely approving or disapproving of the actions), such game-theoretical considerations are beyond scope. Hence, we will not elaborate further on them.
- 5.
For now, we assume all agents in a given scenario have the same implementation variant. Empathic agents that are capable of effectively interacting with empathic agents of other implementation variants, or with non-empathic agents, are, although interesting, beyond the scope of this work.
- 6.
The implementation of our empathic agents with Jason (including the Jason extension we introduce below, as well as a technical report that documents the implementation) is available at https://github.com/TimKam/empathic-jason.
- 7.
The mappings are end-user specific. In a scenario with multiple end-users, the persuader would have one set of mappings per user.
- 8.
Note that in Jason terminology, acceptability rules are beliefs and not rules.
- 9.
If, at any step of the decision process, several actions provide the same utility, the agents always pick the first one in the corresponding list to ensure a deterministic result.
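The tie-breaking rule above can be sketched in a few lines. The action names and utilities here are invented for illustration; only the selection rule (first maximizer in list order) reflects the note.

```python
# Sketch of the deterministic tie-breaking described above, with
# hypothetical actions and utilities: among equally good actions,
# always pick the first one in the list.
actions = ["greet", "wave", "nod"]
utility = {"greet": 2, "wave": 2, "nod": 1}  # "greet" and "wave" tie

best = max(utility[a] for a in actions)
chosen = next(a for a in actions if utility[a] == best)
print(chosen)  # greet -- the first maximizer in list order
```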
- 10.
Note that we compare different argumentation semantics in Sect. 5.
- 11.
However, the provided example code implements only one argumentation cycle.
- 12.
In Example 2, we illustrate an empathic agent argumentation scenario, in which grounded semantics are overly strict.
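To make the strictness of grounded semantics concrete, the sketch below computes the grounded extension as the least fixed point of Dung's characteristic function on a hypothetical two-argument framework in which each agent's claim attacks the other's; the scenario and argument names are invented, not Example 2 from the paper. In such a mutual-attack cycle, grounded semantics accepts nothing.

```python
# Grounded extension via the least fixed point of the characteristic
# function F(S) = {a | every attacker of a is attacked by S}.
# Hypothetical framework: arguments "a" and "b" attack each other.
args = {"a", "b"}
attacks = {("a", "b"), ("b", "a")}

def defended(s):
    """Arguments all of whose attackers are counter-attacked by s."""
    out = set()
    for a in args:
        attackers = {x for (x, y) in attacks if y == a}
        if all(any((z, x) in attacks for z in s) for x in attackers):
            out.add(a)
    return out

grounded = set()
while True:
    nxt = defended(grounded)
    if nxt == grounded:
        break
    grounded = nxt

print(grounded)  # set() -- grounded semantics accepts neither argument
```

Less skeptical semantics (e.g. preferred, or the maximal ideal extension used in the paper) can accept arguments that grounded semantics rejects in cases like this.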
- 13.
For the sake of simplicity, we use a wildcard (\(*\)) to denote that the acceptability rule applies regardless of the mitigator agent's preference. Note that this syntax is not supported by our implementation.
Acknowledgements
We thank the anonymous reviewers for their constructive feedback. This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Kampik, T., Nieves, J.C., Lindgren, H. (2019). Implementing Argumentation-Enabled Empathic Agents. In: Slavkovik, M. (eds) Multi-Agent Systems. EUMAS 2018. Lecture Notes in Computer Science(), vol 11450. Springer, Cham. https://doi.org/10.1007/978-3-030-14174-5_10
Print ISBN: 978-3-030-14173-8
Online ISBN: 978-3-030-14174-5