Abstract
Planning is a classic problem in Artificial Intelligence (AI). Recently, many researchers have recognised and voiced the need for “Explainable AI”. Leveraging the strengths of argumentation, in particular the Related Admissible semantics for generating explanations, this work takes an initial step towards “explainable planning”. We illustrate (1) how plan generation can be equated to constructing acceptable arguments and (2) how explanations for both “planning solutions” and “invalid plans” can be obtained by extracting information from the arguing process. We present an argumentation-based model that takes plans written in a STRIPS-like language as input and returns Assumption-based Argumentation (ABA) frameworks as output. The presented plan construction mapping is both sound and complete: the planning problem has a solution if and only if its corresponding ABA framework has a set of Related Admissible arguments with the planning goal as its topic. We use the classic Tower of Hanoi puzzle as our case study and demonstrate how ABA can be used to solve this planning puzzle while giving explanations.
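To make the planning side of the abstract concrete, the following is a minimal sketch of the Tower of Hanoi as a bounded planning problem, in the STRIPS spirit of actions with preconditions and effects. All names (`mv`, peg/disc encodings, the `plan` function) are illustrative assumptions for this sketch, not the paper's actual formalisation, and a plain breadth-first search stands in for the argumentation-based construction.

```python
from itertools import product
from collections import deque

def moves(state):
    """Generate (action, next_state) pairs for legal Hanoi moves.

    A state is a tuple of three pegs; each peg is a tuple of disc
    sizes with the top disc last (a hypothetical encoding).
    """
    for src, dst in product(range(3), repeat=2):
        if src == dst or not state[src]:
            continue
        disc = state[src][-1]
        # precondition: the moved disc must be smaller than the
        # destination peg's top disc (if any)
        if state[dst] and state[dst][-1] < disc:
            continue
        pegs = [list(p) for p in state]
        pegs[src].pop()
        pegs[dst].append(disc)
        yield (f"mv(d{disc},p{src},p{dst})",
               tuple(tuple(p) for p in pegs))

def plan(start, goal, bound):
    """Breadth-first search for a plan of at most `bound` steps;
    returns the action list, or None if no plan exists within bound."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, acts = frontier.popleft()
        if state == goal:
            return acts
        if len(acts) == bound:
            continue
        for act, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, acts + [act]))
    return None

start = ((2, 1), (), ())   # two discs on peg 0, largest at the bottom
goal = ((), (), (2, 1))    # both discs moved to peg 2
print(plan(start, goal, bound=3))
# → ['mv(d1,p0,p1)', 'mv(d2,p0,p2)', 'mv(d1,p1,p2)']
```

With a step bound of 2 the same call returns `None`, illustrating the “invalid plan” side of the abstract: no plan within the bound exists, and the paper's contribution is precisely to extract an explanation of such failures from the argumentation process.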
Notes
- 1.
\(\tau \notin \mathcal {L}\) represents “true” and stands for the empty body of rules.
- 2.
Here, a stands for ‘abstract’. Also, ‘proponent’ and ‘opponent’ should be seen as roles/fictitious participants in a debate rather than actual agents.
- 3.
In the original definition of abstract dispute tree [7], every \(\mathbf{O}\) node is required to have exactly one child. We incorporate this requirement into the definition of admissible dispute tree given later, so that our notion of admissible abstract dispute tree and the admissible abstract dispute trees of [7] coincide.
- 4.
This is a standard approach in planning as it allows the complete specification of the planning search space. Techniques have been developed to estimate the step bound, see e.g. [17].
- 5.
We use the convention that an over-line denotes a list of variables of unspecified length: \(\mathtt{\overline{A}}\) is such a list, whereas variables without over-lines are “normal” (single) variables.
- 6.
\(\mathtt {mv,cl}\) and \(\mathtt {sm}\) are short-hands for \(\mathtt {move, clear}\) and \(\mathtt {smaller}\), respectively.
- 7.
When defining ABA frameworks, we omit the language component, as it can easily be inferred from the other components (it is the set of all sentences occurring in rules, assumptions, and contraries). Also, we use rule schemata to simplify the notation; each rule schema represents the set of its ground instances.
- 8.
\(\mathtt {hA}\) is shorthand for \(\mathtt {hasAct}\).
- 9.
Throughout, \(\_\) stands for an anonymous variable.
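The notes above mention that each rule schema stands for the set of its ground instances. As a hedged illustration (using the \(\mathtt{sm}\) short-hand for \(\mathtt{smaller}\), with a hypothetical three-disc instance, not the paper's actual code), grounding simply enumerates every substitution of the schema's variables by constants:

```python
from itertools import product

# Ground the schema sm(D1, D2), read "disc D1 is smaller than disc D2",
# over the constants of an illustrative three-disc Hanoi instance.
discs = ["d1", "d2", "d3"]

grounded = [f"sm({a},{b})"
            for a, b in product(discs, repeat=2)
            if int(a[1:]) < int(b[1:])]  # keep only the true instances
print(grounded)
# → ['sm(d1,d2)', 'sm(d1,d3)', 'sm(d2,d3)']
```

One schema thus compactly represents a set of ground rules whose size grows with the number of constants in the planning problem.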
References
Amgoud, L., Cayrol, C.: On the use of an ATMS for handling conflicting desires. In: Proceedings of KR, pp. 194–202 (2004)
Amgoud, L., Devred, C., Lagasquie-Schiex, M.: Generating possible intentions with constrained argumentation systems. IJAR 52(9), 1363–1391 (2011)
Besnard, P., Garcia, A., Hunter, A., Modgil, S., Prakken, H., Simari, G., Toni, F.: Special issue: tutorials on structured argumentation. Argum. Comput. 5(1) (2014)
Black, E., Coles, A.J., Hampson, C.: Planning for persuasion. In: Proceedings of AAMAS, pp. 933–942 (2017)
Čyras, K., Fan, X., Schulz, C., Toni, F.: Assumption-based argumentation: disputes, explanations, preferences. IfCoLog JLTA 4(8) (2017)
Doran, D., Schulz, S., Besold, T.R.: What does explainable AI really mean? A new conceptualization of perspectives. CoRR, arXiv:1710.00794 (2017)
Dung, P.M., Kowalski, R.A., Toni, F.: Dialectic proof procedures for assumption-based, admissible argumentation. AIJ 170, 114–159 (2006)
Fan, X., Toni, F.: On computing explanations in argumentation. In: Proceedings of AAAI, pp. 1496–1502 (2015)
Ferrando, S.P., Onaindia, E.: Defeasible argumentation for multi-agent planning in ambient intelligence applications. In: Proceedings of AAMAS, pp. 509–516. IFAAMAS, Richland (2012)
Fox, M., Long, D., Magazzeni, D.: Explainable planning. CoRR, arXiv:1709.10256 (2017)
García, D.R., García, A.J., Simari, G.R.: Defeasible reasoning and partial order planning. In: Hartmann, S., Kern-Isberner, G. (eds.) FoIKS 2008. LNCS, vol. 4932, pp. 311–328. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-77684-0_21
Goldstein, H.A.: Planning as argumentation. Environ. Plan. B: Plan. Des. 11(3), 297–312 (1984)
Hulstijn, J., van der Torre, L.W.N.: Combining goal generation and planning in an argumentation framework. In: Proceedings of NMR, pp. 212–218 (2004)
Lapintie, K.: Analysing and evaluating argumentation in planning. Environ. Plan. B: Plan. Des. 25(2), 187–204 (1998)
Modgil, S.: The added value of argumentation. In: Ossowski, S. (ed.) Agreement Technologies, vol. 8, pp. 357–403. Springer, Heidelberg (2013). https://doi.org/10.1007/978-94-007-5583-3_21
Monteserin, A., Amandi, A.: Argumentation-based negotiation planning for autonomous agents. Decis. Support. Syst. 51(3), 532–548 (2011)
Nau, D., Ghallab, M., Traverso, P.: Automated Planning: Theory & Practice. Morgan Kaufmann Publishers Inc., San Francisco (2004)
Nilsson, N.J.: Principles of Artificial Intelligence. Morgan Kaufmann Publishers Inc., San Francisco (1980)
Panisson, A.R., Farias, G., Freitas, A., Meneguzzi, F., Vieira, R., Bordini, R.H.: Planning interactions for agents in argumentation-based negotiation. In: Proceedings of ArgMAS (2014)
Vallati, M., Chrpa, L., Grzes, M., McCluskey, T.L., Roberts, M., Sanner, S.: The 2014 international planning competition: progress and trends. AI Mag. 36(3), 90–98 (2015)
Wachter, S., Mittelstadt, B., Floridi, L.: Transparent, explainable, and accountable AI for robotics. Sci. Robot. 2(6) (2017)
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Fan, X. (2018). On Generating Explainable Plans with Assumption-Based Argumentation. In: Miller, T., Oren, N., Sakurai, Y., Noda, I., Savarimuthu, B.T.R., Cao Son, T. (eds.) PRIMA 2018: Principles and Practice of Multi-Agent Systems. Lecture Notes in Computer Science, vol. 11224. Springer, Cham. https://doi.org/10.1007/978-3-030-03098-8_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-03097-1
Online ISBN: 978-3-030-03098-8