Abstract
In contrast to established safety-critical software components, we can neither prove nor assume that the outcomes of components containing models based on artificial intelligence (AI) or machine learning (ML) will be correct in every situation. Uncertainty is thus an inherent part of decision-making based on the outcomes of data-driven models created by AI/ML algorithms. To deal with this, especially in the context of safety-related systems, we need to make uncertainty transparent through dependable statistical statements. This paper introduces both a conceptual model and the related mathematical foundation of an uncertainty wrapper solution for data-driven models. The wrapper enriches existing data-driven models, such as those provided by ML or other AI techniques, with case-individual and sound uncertainty estimates. The task of traffic sign recognition is used to illustrate the approach, which considers uncertainty not only in terms of model fit but also in terms of data quality and scope compliance.
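To make the wrapper idea concrete, the following minimal sketch shows one way such an enrichment could be structured. It is an illustrative assumption, not the authors' implementation: the class and parameter names (`UncertaintyWrapper`, `scope_check`, `quality_check`, the empirical error rate) are hypothetical, and the three factors named in the abstract (model fit, data quality, scope compliance) are mapped to deliberately simple checks.

```python
# Hypothetical sketch of an uncertainty wrapper; names and the way the
# three factors are combined are illustrative assumptions, not taken
# from the paper.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class WrappedOutcome:
    outcome: Any
    uncertainty: float  # estimated probability that the outcome is wrong


class UncertaintyWrapper:
    """Wraps a data-driven model and attaches a case-individual
    uncertainty estimate covering model fit, input data quality,
    and scope compliance."""

    def __init__(self,
                 model: Callable[[Any], Any],
                 error_rate_in_scope: float,
                 scope_check: Callable[[Any], bool],
                 quality_check: Callable[[Any], bool]):
        self.model = model
        self.err = error_rate_in_scope   # empirical error rate on representative test data
        self.in_scope = scope_check      # case -> bool: inside the target application scope?
        self.good_quality = quality_check  # case -> bool: input quality acceptable?

    def predict(self, x: Any) -> WrappedOutcome:
        y = self.model(x)
        if not self.in_scope(x):
            # Worst-case assumption outside the scope:
            # the outcome is treated as never correct.
            return WrappedOutcome(y, 1.0)
        # Degraded input quality inflates the uncertainty estimate.
        u = self.err if self.good_quality(x) else max(self.err, 0.5)
        return WrappedOutcome(y, u)
```

A caller would then act on `uncertainty` rather than trusting the raw outcome, e.g. handing control back to a human driver when the estimate exceeds a safety threshold.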
Notes
1. Since we cannot obtain representative samples for all \( case_{TAS} \), we make a worst-case approximation by assuming \( p(m(X) = O \mid case_{TAS}) = 0 \), i.e., outcomes are never correct.
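Dependable statistical statements of this kind can be backed by a binomial proportion interval over observed model outcomes. The sketch below is a minimal illustration under assumed names and defaults: it computes a conservative lower confidence bound on the probability that an outcome is correct using the Wilson score method, and returns 0 when no representative samples are available, matching the worst-case assumption in the footnote.

```python
# Illustrative sketch (not the paper's implementation): a conservative
# lower bound on the probability of a correct outcome, via the Wilson
# score interval for a binomial proportion.
import math


def correctness_lower_bound(correct: int, n: int, z: float = 1.96) -> float:
    """Lower confidence bound on the proportion of correct outcomes
    among n representative test cases (z = 1.96 ~ 95% confidence)."""
    if n == 0:
        # No representative samples: worst-case approximation,
        # assume outcomes are never correct.
        return 0.0
    p = correct / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, (center - half) / denom)
```

The bound tightens as more representative samples become available, so more test evidence directly translates into a stronger dependable statement.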
Acknowledgments
Parts of this work were funded by the German Federal Ministry of Education and Research (BMBF) under grant number 01IS16043E.
© 2019 Springer Nature Switzerland AG
Cite this paper
Kläs, M., Sembach, L. (2019). Uncertainty Wrappers for Data-Driven Models. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2019. Lecture Notes in Computer Science(), vol 11699. Springer, Cham. https://doi.org/10.1007/978-3-030-26250-1_29
Print ISBN: 978-3-030-26249-5
Online ISBN: 978-3-030-26250-1