What Is Acceptably Safe for Reinforcement Learning?

  • John Bragg
  • Ibrahim Habli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11094)

Abstract

Machine Learning algorithms are becoming increasingly prevalent in critical systems where dynamic decision making and efficiency are the goals. As with any complex or safety-critical system in which failures can lead to harm, we must proactively consider the safety assurance of systems that use Machine Learning. In this paper we explore the implications of using Reinforcement Learning in particular, considering the potential benefits it could bring to safety-critical systems and our ability to provide assurance of the safety of systems incorporating such technology. We propose a high-level argument that could form the basis of a safety case for Reinforcement Learning systems, in which the selection of ‘reward’ and ‘cost’ mechanisms has a critical effect on the decisions made. We conclude with the fundamental challenges that must be addressed to give the confidence necessary for deploying Reinforcement Learning within safety-critical applications.
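To make the abstract's central point concrete, the following sketch (not taken from the paper) shows how the choice of ‘reward’ and ‘cost’ terms steers a tabular Q-learning agent. The 4x4 gridworld, the hazard layout, the `hazard_cost` weight, and all hyperparameters are illustrative assumptions: with `hazard_cost = 0.0` the greedy policy cuts straight through the hazardous cells, while a large cost makes it learn the longer, safer detour.

```python
# Minimal sketch: reward/cost selection shaping an RL agent's decisions.
# The environment and parameters are hypothetical, for illustration only.
import random

SIZE = 4                                      # 4x4 gridworld, states are (row, col)
START, GOAL = (3, 0), (3, 3)                  # bottom-left to bottom-right
HAZARDS = {(3, 1), (3, 2)}                    # the direct route crosses these cells
ACTIONS = [(0, 1), (0, -1), (-1, 0), (1, 0)]  # right, left, up, down

def step(state, action, hazard_cost):
    """Apply an action; return (next_state, reward minus safety cost)."""
    nxt = (min(max(state[0] + action[0], 0), SIZE - 1),
           min(max(state[1] + action[1], 0), SIZE - 1))
    reward = 1.0 if nxt == GOAL else -0.01         # goal reward, small step penalty
    cost = hazard_cost if nxt in HAZARDS else 0.0  # the 'cost' mechanism
    return nxt, reward - cost

def train(hazard_cost, episodes=5000, alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning; the cost term decides which policy is learned."""
    q = {}
    for _ in range(episodes):
        s = START
        while s != GOAL:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q.get((s, x), 0.0)))
            nxt, r = step(s, a, hazard_cost)
            best_next = max(q.get((nxt, x), 0.0) for x in ACTIONS)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = nxt
    return q

def greedy_path(q, max_steps=20):
    """Roll out the learned greedy policy from the start state."""
    s, path = START, [START]
    while s != GOAL and len(path) < max_steps:
        a = max(ACTIONS, key=lambda x: q.get((s, x), 0.0))
        s, _ = step(s, a, hazard_cost=0.0)  # transition only; reward unused here
        path.append(s)
    return path

for c in (0.0, 5.0):
    random.seed(0)
    print(f"hazard_cost={c}: {greedy_path(train(c))}")
```

The same environment and learning rule yield qualitatively different behaviour depending solely on how the cost term is weighted, which is why the paper treats the selection of these mechanisms as safety-critical rather than a mere tuning detail.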

Keywords

Safety Assurance · Artificial Intelligence · Machine Learning · Reinforcement Learning

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. MBDA UK Ltd., Filton, Bristol, UK
  2. University of York, York, UK
