On Generating Explainable Plans with Assumption-Based Argumentation

  • Conference paper
  • First Online:
PRIMA 2018: Principles and Practice of Multi-Agent Systems (PRIMA 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11224)

Abstract

Planning is a classic problem in Artificial Intelligence (AI). Recently, the need for “Explainable AI” has been recognised and voiced by many researchers. Leveraging the strengths of argumentation, in particular the Related Admissible semantics for generating explanations, this work takes an initial step towards “explainable planning”. We illustrate (1) how plan generation can be equated to constructing acceptable arguments and (2) how explanations for both “planning solutions” and “invalid plans” can be obtained by extracting information from the arguing process. We present an argumentation-based model that takes plans written in a STRIPS-like language as input and returns Assumption-based Argumentation (ABA) frameworks as output. The presented plan construction mapping is both sound and complete, in that a planning problem has a solution if and only if its corresponding ABA framework has a set of Related Admissible arguments with the planning goal as its topic. We use the classic Tower of Hanoi puzzle as our case study and demonstrate how ABA can be used to solve this planning puzzle while giving explanations.
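To make the abstract's idea concrete, the sketch below models an ABA-style framework as rules, assumptions, and a contrary map, and accepts a claim when it has a support whose assumptions have no derivable contrary. This is an illustrative toy of the general ABA flavour only, not the paper's construction or its Related Admissible semantics; the sentence names (`goal`, `do_move`, `blocked`) are our own hypothetical examples.

```python
# Toy ABA-flavoured acceptability check (illustrative only; the naive
# "no derivable contrary" test below is a simplification, not the
# Related Admissible semantics used in the paper).
from itertools import product

def supports(claim, rules, assumptions, seen=frozenset()):
    """Return a list of assumption sets from which `claim` is derivable."""
    if claim in assumptions:
        return [{claim}]
    results = []
    for head, body in rules:
        if head != claim or claim in seen:
            continue
        # every body sentence must itself be supported
        body_supports = [supports(b, rules, assumptions, seen | {claim})
                         for b in body]
        if all(body_supports):
            for combo in product(*body_supports):
                results.append(set().union(*combo) if combo else set())
    return results

def naively_acceptable(claim, rules, assumptions, contrary):
    """Accept `claim` if some support uses only unattacked assumptions."""
    for asms in supports(claim, rules, assumptions):
        if all(not supports(contrary[a], rules, assumptions) for a in asms):
            return True
    return False

# Planning flavour: the goal holds if a (hypothetical) move is not blocked.
rules = [("goal", ["do_move"])]
assumptions = {"do_move"}
contrary = {"do_move": "blocked"}
print(naively_acceptable("goal", rules, assumptions, contrary))  # True
```

Adding a fact `("blocked", [])` to `rules` makes the contrary of `do_move` derivable, so `goal` would no longer be accepted, mirroring how an attacked assumption invalidates a plan step.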


Notes

  1.

    \(\tau \notin \mathcal {L}\) represents “true” and stands for the empty body of rules.

  2.

    Here, a stands for ‘abstract’. Also, ‘proponent’ and ‘opponent’ should be seen as roles/fictitious participants in a debate rather than actual agents.

  3.

    In the original definition of abstract dispute tree [7], every \(\mathbf{O}\) node is required to have exactly one child. We incorporate this requirement into the definition of admissible dispute tree given later, so that our notion of admissible abstract dispute tree coincides with that of [7].

  4.

    This is a standard approach in planning as it allows the complete specification of the planning search space. Techniques have been developed to estimate the step bound, see e.g. [17].

  5.

    We use the convention that an over-line, as in \(\mathtt {\overline{A}}\), denotes a list of variables of unspecified length. Variables without over-lines are “normal” variables.

  6.

    \(\mathtt {mv,cl}\) and \(\mathtt {sm}\) are short-hands for \(\mathtt {move, clear}\) and \(\mathtt {smaller}\), respectively.

  7.

    When defining ABA frameworks, we omit the language component, as it can easily be inferred from the other components (it is the set of all sentences occurring in rules, assumptions, and contraries). Also, we use rule schemata to simplify the notation; each rule schema represents the set of its ground instances.

  8.

    \(\mathtt {hA}\) is shorthand for \(\mathtt {hasAct}\).

  9.

    Throughout, \(\_\) stands for an anonymous variable.
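Note 7's rule-schemata convention (a schema with variables stands for the set of its ground instances) can be illustrated with the \(\mathtt {sm}\) (smaller) predicate of note 6. The disk names below are hypothetical and the encoding is our own sketch, not taken from the paper.

```python
# Sketch of grounding a rule schema sm(X, Y) over a Tower of Hanoi
# disk ordering. Disk names d1..d3 (d1 smallest) are illustrative only;
# sm is shorthand for smaller (see note 6).

def ground_smaller(disks):
    """Enumerate ground sm/2 facts for disks listed smallest-first."""
    facts = []
    for i, x in enumerate(disks):
        for y in disks[i + 1:]:
            facts.append(f"sm({x},{y})")  # x precedes y, so x is smaller
    return facts

print(ground_smaller(["d1", "d2", "d3"]))
# ['sm(d1,d2)', 'sm(d1,d3)', 'sm(d2,d3)']
```

A single schema thus expands to quadratically many ground rules in the number of disks, which is why the schema notation keeps the framework definition compact.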

References

  1. Amgoud, L., Cayrol, C.: On the use of an ATMS for handling conflicting desires. In: Proceedings of KR, pp. 194–202 (2004)

  2. Amgoud, L., Devred, C., Lagasquie-Schiex, M.: Generating possible intentions with constrained argumentation systems. IJAR 52(9), 1363–1391 (2011)

  3. Besnard, P., Garcia, A., Hunter, A., Modgil, S., Prakken, H., Simari, G., Toni, F.: Special issue: tutorials on structured argumentation. Argum. Comput. 5(1) (2014)

  4. Black, E., Coles, A.J., Hampson, C.: Planning for persuasion. In: Proceedings of AAMAS, pp. 933–942 (2017)

  5. Čyras, K., Fan, X., Schulz, C., Toni, F.: Assumption-based argumentation: disputes, explanations, preferences. IfCoLog JLTA 4(8) (2017)

  6. Doran, D., Schulz, S., Besold, T.R.: What does explainable AI really mean? A new conceptualization of perspectives. CoRR, arXiv:1710.00794 (2017)

  7. Dung, P.M., Kowalski, R.A., Toni, F.: Dialectic proof procedures for assumption-based, admissible argumentation. AIJ 170, 114–159 (2006)

  8. Fan, X., Toni, F.: On computing explanations in argumentation. In: Proceedings of AAAI, pp. 1496–1502 (2015)

  9. Ferrando, S.P., Onaindia, E.: Defeasible argumentation for multi-agent planning in ambient intelligence applications. In: Proceedings of AAMAS, pp. 509–516. IFAAMAS, Richland (2012)

  10. Fox, M., Long, D., Magazzeni, D.: Explainable planning. CoRR, arXiv:1709.10256 (2017)

  11. García, D.R., García, A.J., Simari, G.R.: Defeasible reasoning and partial order planning. In: Hartmann, S., Kern-Isberner, G. (eds.) FoIKS 2008. LNCS, vol. 4932, pp. 311–328. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-77684-0_21

  12. Goldstein, H.A.: Planning as argumentation. Environ. Plan. B: Plan. Des. 11(3), 297–312 (1984)

  13. Hulstijn, J., van der Torre, L.W.N.: Combining goal generation and planning in an argumentation framework. In: Proceedings of NMR, pp. 212–218 (2004)

  14. Lapintie, K.: Analysing and evaluating argumentation in planning. Environ. Plan. B: Plan. Des. 25(2), 187–204 (1998)

  15. Modgil, S.: The added value of argumentation. In: Ossowski, S. (ed.) Agreement Technologies, vol. 8, pp. 357–403. Springer, Heidelberg (2013). https://doi.org/10.1007/978-94-007-5583-3_21

  16. Monteserin, A., Amandi, A.: Argumentation-based negotiation planning for autonomous agents. Decis. Support Syst. 51(3), 532–548 (2011)

  17. Nau, D., Ghallab, M., Traverso, P.: Automated Planning: Theory & Practice. Morgan Kaufmann Publishers Inc., San Francisco (2004)

  18. Nilsson, N.J.: Principles of Artificial Intelligence. Morgan Kaufmann Publishers Inc., San Francisco (1980)

  19. Panisson, A.R., Farias, G., Fraitas, A., Meneguzzi, F., Vieira, R., Bordini, R.H.: Planning interactions for agents in argumentation-based negotiation. In: Proceedings of ArgMAS (2014)

  20. Vallati, M., Chrpa, L., Grzes, M., McCluskey, T.L., Roberts, M., Sanner, S.: The 2014 international planning competition: progress and trends. AI Mag. 36(3), 90–98 (2015)

  21. Wachter, S., Mittelstadt, B., Floridi, L.: Transparent, explainable, and accountable AI for robotics. Sci. Robot. 2(6) (2017)

Author information

Correspondence to Xiuyi Fan.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Fan, X. (2018). On Generating Explainable Plans with Assumption-Based Argumentation. In: Miller, T., Oren, N., Sakurai, Y., Noda, I., Savarimuthu, B.T.R., Cao Son, T. (eds) PRIMA 2018: Principles and Practice of Multi-Agent Systems. PRIMA 2018. Lecture Notes in Computer Science(), vol 11224. Springer, Cham. https://doi.org/10.1007/978-3-030-03098-8_21

  • DOI: https://doi.org/10.1007/978-3-030-03098-8_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03097-1

  • Online ISBN: 978-3-030-03098-8

  • eBook Packages: Computer Science (R0)
