
Continuous Argument Engineering: Tackling Uncertainty in Machine Learning Based Systems

  • Fuyuki Ishikawa
  • Yutaka Matsuno
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11094)

Abstract

Components or systems implemented using machine learning techniques suffer from intrinsic difficulties caused by uncertainty. Specifically, it is impossible to logically or deductively conclude what they can or cannot do, or how they will behave on untested inputs. In addition, such systems are often deployed in the real world, where requirements and environments are themselves uncertain. In this paper, we discuss what becomes difficult or even impossible when arguments or assurance cases are applied to machine-learning-based systems. We then propose an approach for continuously analyzing, managing, and updating arguments while accepting uncertainty as intrinsic in nature.
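To make the continuous treatment of arguments concrete, the following is a minimal sketch in Python of a GSN-style goal tree whose supporting evidence is re-assessed as new operational data arrives. All names (ArgumentNode, Evidence, update_evidence) and the threshold-based assessment are hypothetical illustrations under our own assumptions, not notation or an API defined in the paper.

    # Minimal sketch: an assurance argument whose evidence confidence
    # is revisited continuously, rather than fixed at development time.
    # All names and the threshold rule are hypothetical illustrations.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Evidence:
        description: str
        confidence: float  # 0.0-1.0, e.g. accuracy on the latest field data

    @dataclass
    class ArgumentNode:
        goal: str
        children: List["ArgumentNode"] = field(default_factory=list)
        evidence: Optional[Evidence] = None

        def assess(self, threshold: float = 0.9) -> bool:
            """A leaf holds if its evidence clears the threshold;
            an inner goal holds only if every sub-goal holds."""
            if not self.children:
                return (self.evidence is not None
                        and self.evidence.confidence >= threshold)
            return all(c.assess(threshold) for c in self.children)

    def update_evidence(node: ArgumentNode, goal: str, confidence: float) -> None:
        """Revise the confidence behind one sub-goal, e.g. after
        retraining or after observing distribution shift in operation."""
        if node.goal == goal and node.evidence is not None:
            node.evidence.confidence = confidence
        for child in node.children:
            update_evidence(child, goal, confidence)

    # Example: a detection goal whose evidence degrades once field data
    # reveals inputs unlike the training distribution.
    root = ArgumentNode("Vehicle avoids pedestrians", children=[
        ArgumentNode("Detector is accurate on expected scenes",
                     evidence=Evidence("offline test set", 0.95)),
        ArgumentNode("Fallback engages on low-confidence detections",
                     evidence=Evidence("fault-injection tests", 0.97)),
    ])
    print(root.assess())  # True while all evidence holds
    update_evidence(root, "Detector is accurate on expected scenes", 0.80)
    print(root.assess())  # False: the argument must be revisited

The point of the sketch is the update loop: evidence behind an ML component is treated as a perishable quantity to be monitored and re-evaluated, so the argument's validity is a running judgment rather than a one-time certification.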

Keywords

Assurance cases · Arguments · Machine learning · Artificial intelligence · Cyber-physical systems

Acknowledgments

This work is partially supported by the ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603), JST. We are grateful to the industry researchers and engineers who shared deep insights into the difficulties of engineering cyber-physical systems and machine learning systems.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. National Institute of Informatics, Tokyo, Japan
  2. Nihon University, Funabashi, Japan
