
Accountability for Practical Reasoning Agents

  • Conference paper
  • In: Agreement Technologies (AT 2018)

Abstract

Artificial intelligence has been increasing the autonomy of man-made artefacts such as software agents, self-driving vehicles and military drones. This increase in autonomy, together with the ubiquity and impact of such artefacts in our daily lives, has raised many concerns in society. Initiatives such as transparent and ethical AI aim to allay fears of a “free for all” future where amoral technology (or technology amorally designed) will replace humans with terrible consequences. We discuss the notion of accountable autonomy, and explore this concept within the context of practical reasoning agents. We survey literature from distinct fields such as management, healthcare, policy-making, and others, and differentiate and relate concepts connected to accountability. We present a list of justified requirements for accountable software agents and discuss research questions stemming from these requirements. We also propose a preliminary formalisation of one core aspect of accountability: responsibility.


Notes

  1. https://deepmind.com/applied/deepmind-ethics-society/.

  2. https://deepmind.com/research/alphago/.

  3. This formalism is based on dynamic logic, but it is beyond the scope of this paper to describe the semantics. Also, note that our purpose here is to specify the nature of the obligation implied by answerability. For implementing accountability processes, it is likely that agents can use less expressive, and possibly more specialised, representations of their obligations.
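
     As a purely illustrative aside (not the dynamic-logic formalisation used in the paper), the sketch below shows what one such less expressive, specialised representation of an answerability obligation might look like; the class, field and method names are assumptions introduced only for illustration.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative sketch only: a deliberately simple, less expressive stand-in for
# the dynamic-logic obligation discussed above, not the paper's formalisation.
# All names and fields are assumptions.

@dataclass
class AnswerabilityObligation:
    account_giver: str               # agent obliged to render an account
    forum: str                       # party entitled to demand the account
    about: str                       # the decision or outcome to be explained
    deadline: Optional[datetime] = None
    account: Optional[str] = None    # the explanation, once provided

    def discharge(self, account: str) -> None:
        """Record that the required account (explanation) has been given."""
        self.account = account

    def violated(self, now: datetime) -> bool:
        """True if the deadline has passed without an account being given."""
        return (self.account is None
                and self.deadline is not None
                and now > self.deadline)
```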

References

  1. Dubnick, M.J.: Accountability as a cultural keyword. In: Bovens et al. [56]

  2. Billingham, P., Colin, A.: The democratisation of accountability in the digital age: promise and pitfalls. In: Winner of Robert Davies Essay Competition 2016, Skoll Centre for Social Entrepreneurship, Saïd Business School, The University of Oxford, U.K. (2016). https://www.sbs.ox.ac.uk/sites/default/files/Skoll_Centre/Docs/Accountability_BillinghamColin-Jones.pdf

  3. Wachter, S.: Towards accountable A.I. in Europe? The Alan Turing Institute, U.K. https://www.turing.ac.uk/blog/towards-accountable-ai-europe. Accessed 25 July 2018

  4. Bostrom, N., Yudkowsky, E.: The ethics of artificial intelligence. In: Frankish, K., Ramsey, W.M. (eds.) The Cambridge Handbook of Artificial Intelligence, pp. 316–334. Cambridge University Press (2014)

  5. Dignum, V.: Ethics in artificial intelligence: introduction to the special issue. Ethics Inf. Technol. 20(1), 1–3 (2018)

  6. Simonite, T.: Tech firms move to put ethical guard rails around AI. Wired, May 2018. https://www.wired.com/story/tech-firms-move-to-put-ethical-guard-rails-around-ai/. Accessed 29 July 2018

  7. Zou, J., Schiebinger, L.: AI can be sexist and racist – it’s time to make it fair. Nature 559, 324–326 (2018)

  8. Georgeff, M., Pell, B., Pollack, M., Tambe, M., Wooldridge, M.: The belief-desire-intention model of agency. In: Müller, J.P., Rao, A.S., Singh, M.P. (eds.) ATAL 1998. LNCS, vol. 1555, pp. 1–10. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-49057-4_1

  9. Meneguzzi, F.R., Zorzo, A.F., da Costa Móra, M.: Propositional planning in BDI agents. In: Proceedings of the ACM Symposium on Applied Computing, pp. 58–63. ACM, New York (2004)

  10. Rao, A.S., Georgeff, M.P.: BDI agents: from theory to practice. In: Proceedings of the 1st International Conference on Multi-Agent Systems (ICMAS 1995), pp. 312–319. AAAI (1995). https://www.aaai.org/Papers/ICMAS/1995/ICMAS95-042.pdf

  11. Chopra, A.K., Singh, M.P.: The thing itself speaks: accountability as a foundation for requirements in sociotechnical systems. In: 2014 IEEE 7th International Workshop on Requirements Engineering and Law, p. 22. IEEE (2014)

  12. Dastani, M., van der Torre, L., Yorke-Smith, N.: Commitments and interaction norms in organisations. Auton. Agent. Multi-Agent Syst. 31(2), 207–249 (2017)

  13. Fornara, N., Colombetti, M.: Representation and monitoring of commitments and norms using OWL. AI Commun. 23(4), 341–356 (2010)

  14. Baldoni, M., Baroglio, C., May, K.M., Micalizio, R., Tedeschi, S.: Computational accountability. In: Proceedings of the AI*IA Workshop on Deep Understanding and Reasoning: A Challenge for Next-Generation Intelligent Agents, volume 1802 of CEUR Workshop Proceedings, pp. 56–62. CEUR-WS.org (2017)

  15. Baldoni, M., Baroglio, C., May, K.M., Micalizio, R., Tedeschi, S.: ADOPT JaCaMo: accountability-driven organization programming technique for JaCaMo. In: An, B., Bazzan, A., Leite, J., Villata, S., van der Torre, L. (eds.) PRIMA 2017. LNCS (LNAI), vol. 10621, pp. 295–312. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69131-2_18

  16. Baldoni, M., Baroglio, C., Micalizio, R.: The AThOS project: first steps towards computational accountability. In: Proceedings of the 1st Workshop on Computational Accountability and Responsibility in Multiagent Systems, volume 2051 of CEUR Workshop Proceedings, pp. 3–19. CEUR-WS.org (2018)

  17. Bovens, M., Schillemans, T., Goodin, R.E.: Public accountability. In: Bovens et al. [56]

  18. Dignum, V.: Responsible artificial intelligence: designing AI for human values. ITU J. ICT Discov. 1(1), 1–8 (2018)

  19. Fox, J.: The uncertain relationship between transparency and accountability. Dev. Pract. 17(4–5), 663–671 (2007)

  20. Schillemans, T.: The public accountability review: a meta-analysis of public accountability research in six academic disciplines. Working paper, Utrecht University School of Governance (2013). https://dspace.library.uu.nl/handle/1874/275784

  21. Emanuel, E.J., Emanuel, L.L.: What is accountability in health care? Ann. Intern. Med. 124(2), 229–239 (1996)

  22. Eshleman, A.: Moral responsibility. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2016 edn. (2016)

  23. PMI: A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 5th edn. Project Management Institute (2013)

  24. Jacka, J.M., Keller, P.J.: Business Process Mapping: Improving Customer Satisfaction, 2nd edn. Wiley, Hoboken (2009)

  25. Grossi, D., Dignum, F., Royakkers, L.M.M., Meyer, J.-J.C.: Collective obligations and agents: who gets the blame? In: Lomuscio, A., Nute, D. (eds.) DEON 2004. LNCS (LNAI), vol. 3065, pp. 129–145. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-25927-5_9

  26. Micalizio, R., Torasso, P., Torta, G.: On-line monitoring and diagnosis of multi-agent systems: a model based approach. In: Proceedings of the 16th European Conference on Artificial Intelligence, pp. 848–852. IOS Press (2004)

  27. Witteveen, C., Roos, N., van der Krogt, R., de Weerdt, M.: Diagnosis of single and multi-agent plans. In: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 805–812. ACM (2005)

  28. Grossi, D., Royakkers, L., Dignum, F.: Organizational structure and responsibility. Artif. Intell. Law 15(3), 223–249 (2007)

  29. de Jonge, F., Roos, N., Witteveen, C.: Primary and secondary diagnosis of multi-agent plan execution. Auton. Agent. Multi-Agent Syst. 18(2), 267–294 (2009)

  30. Mastop, R.: Characterising responsibility in organisational structures: the problem of many hands. In: Governatori, G., Sartor, G. (eds.) DEON 2010. LNCS (LNAI), vol. 6181, pp. 274–287. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14183-6_20

  31. De Lima, T., Royakkers, L.M.M., Dignum, F.: Modeling the problem of many hands in organisations. In: Proceedings of the 19th European Conference on Artificial Intelligence, volume 215 of Frontiers in Artificial Intelligence and Applications, pp. 79–84. IOS Press (2010)

  32. Bulling, N., Dastani, M.: Coalitional responsibility in strategic settings. In: Leite, J., Son, T.C., Torroni, P., van der Torre, L., Woltran, S. (eds.) CLIMA 2013. LNCS (LNAI), vol. 8143, pp. 172–189. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40624-9_11

  33. Micalizio, R., Torasso, P.: Cooperative monitoring to diagnose multiagent plans. J. Artif. Intell. Res. 51, 1–70 (2014)

  34. Lorini, E., Longin, D., Mayor, E.: A logical analysis of responsibility attribution: emotions, individuals and collectives. J. Log. Comput. 24(6), 1313–1339 (2014)

  35. Aldewereld, H., Dignum, V., Vasconcelos, W.W.: Group norms for multi-agent organisations. ACM Trans. Auton. Adapt. Syst. 11(2), 15:1–15:31 (2016)

  36. Alechina, N., Halpern, J.Y., Logan, B.: Causality, responsibility and blame in team plans. In: Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems, pp. 1091–1099. IFAAMAS (2017)

  37. Winikoff, M.: Towards trusting autonomous systems. In: El Fallah-Seghrouchni, A., Ricci, A., Son, T.C. (eds.) EMAS 2017. LNCS (LNAI), vol. 10738, pp. 3–20. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91899-0_1

  38. Bovens, M.: Analysing and assessing accountability: a conceptual framework. Eur. Law J. 13(4), 447–468 (2007)

  39. Mulgan, R.: ‘Accountability’: an ever-expanding concept? Public Adm. 78(3), 555–573 (2000)

  40. Anderson, M.L., Perlis, D.R.: Logic, self-awareness and self-improvement: the metacognitive loop and the problem of brittleness. J. Log. Comput. 15(1), 21–40 (2005)

  41. Cranefield, S., Winikoff, M., Dignum, V., Dignum, F.: No pizza for you: value-based plan selection in BDI agents. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pp. 178–184. ijcai.org (2017)

  42. Meneguzzi, F., Rodrigues, O., Oren, N., Vasconcelos, W.W., Luck, M.: BDI reasoning with normative considerations. Eng. Appl. Artif. Intell. 43, 127–146 (2015)

  43. Gatt, A., et al.: From data to text in the neonatal intensive care unit: using NLG technology for decision support and information management. AI Commun. 22(3), 153–186 (2009)

  44. Mulwa, C., Lawless, S., Sharp, M., Wade, V.: The evaluation of adaptive and personalised information retrieval systems: a review. Int. J. Knowl. Web Intell. 2(2/3), 138–156 (2011)

  45. Bex, F., Grasso, F., Green, N., Paglieri, F., Reed, C.: Argument Technologies: Theory, Analysis, and Applications. Studies in Logic and Argumentation. College Publications (2017)

  46. Alechina, N., Dastani, M., Logan, B., Meyer, J.-J.C.: Reasoning about plan revision in BDI agent programs. Theoret. Comput. Sci. 412(44), 6115–6134 (2011)

  47. Ma, J., Liu, W., Hong, J., Godo, L., Sierra, C.: Plan selection for probabilistic BDI agents. In: 2014 IEEE 26th International Conference on Tools with Artificial Intelligence, pp. 83–90, November 2014

  48. Winikoff, M.: An AgentSpeak meta-interpreter and its applications. In: Bordini, R.H., Dastani, M.M., Dix, J., El Fallah Seghrouchni, A. (eds.) ProMAS 2005. LNCS (LNAI), vol. 3862, pp. 123–138. Springer, Heidelberg (2006). https://doi.org/10.1007/11678823_8

  49. Winikoff, M.: Debugging agent programs with “why?” questions. In: Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems, pp. 251–259. IFAAMAS (2017)

  50. Atkinson, K., Bench-Capon, T.J.M.: Practical reasoning as presumptive argumentation using action based alternating transition systems. Artif. Intell. 171(10–15), 855–874 (2007)

  51. Andrighetto, G., Governatori, G., Noriega, P., van der Torre, L.W.N. (eds.): Normative Multi-Agent Systems, volume 4 of Dagstuhl Follow-Ups. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2013)

  52. Mallya, A.U., Singh, M.P.: An algebra for commitment protocols. Auton. Agent. Multi-Agent Syst. 14(2), 143–163 (2007)

  53. Dignum, F., Weigand, H., Verharen, E.: Meeting the deadline: on the formal specification of temporal deontic constraints. In: Raś, Z.W., Michalewicz, M. (eds.) ISMIS 1996. LNCS, vol. 1079, pp. 243–252. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-61286-6_149

  54. Searle, J.R.: The Construction of Social Reality. Free Press, New York (1995)

  55. Finkel, A., Iyer, S.P., Sutre, G.: Well-abstracted transition systems: application to FIFO automata. Inf. Comput. 181(1), 1–31 (2003)

  56. Bovens, M., Goodin, R.E., Schillemans, T. (eds.): The Oxford Handbook of Public Accountability. Oxford University Press, Oxford (2014)


Author information

Correspondence to Stephen Cranefield, Nir Oren or Wamberto W. Vasconcelos.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Cranefield, S., Oren, N., Vasconcelos, W.W. (2019). Accountability for Practical Reasoning Agents. In: Lujak, M. (ed.) Agreement Technologies. AT 2018. Lecture Notes in Computer Science, vol. 11327. Springer, Cham. https://doi.org/10.1007/978-3-030-17294-7_3

  • DOI: https://doi.org/10.1007/978-3-030-17294-7_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-17293-0

  • Online ISBN: 978-3-030-17294-7

  • eBook Packages: Computer Science, Computer Science (R0)
