
Formal Verification of Random Forests in Safety-Critical Applications

  • John Törnblom
  • Simin Nadjm-Tehrani
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1008)

Abstract

Recent advances in machine learning and artificial intelligence are now being applied in safety-critical autonomous systems, where software defects may cause severe harm to humans and the environment. Design organizations in these domains are currently unable to provide convincing arguments that systems whose software components are produced by machine learning algorithms are safe and correct.

In this paper, we present an efficient method to extract equivalence classes from decision trees and random forests, and to formally verify that their input/output mappings comply with requirements. We implement the method in our tool VoRF (Verifier of Random Forests) and evaluate its scalability on two case studies from the literature. We show that the method is practical for random forests trained on low-dimensional data with up to 25 decision trees, each with a tree depth of 20. We also demonstrate the limitations of the method on high-dimensional data and touch upon the trade-off between the number of trees and the time taken for verification.
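To make the idea of equivalence classes concrete, the sketch below shows, in Python, how the root-to-leaf paths of a single decision tree partition a bounded input domain into hyperrectangles on which the tree's output is constant, so that a requirement on the input/output mapping only has to be checked once per hyperrectangle. This is a minimal illustrative sketch, not the authors' VoRF implementation; the Node structure, the equivalence_classes function, and the toy requirement (the output never exceeding 1.0) are assumptions introduced here for demonstration.

    # Minimal sketch (assumed structure, not the authors' VoRF tool): each
    # root-to-leaf path of a decision tree constrains every tested feature to
    # an interval, so each leaf corresponds to a hyperrectangle of inputs on
    # which the tree's output is constant.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        feature: int = -1          # feature tested at this node (unused at leaves)
        threshold: float = 0.0     # go left if x[feature] <= threshold
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        value: float = 0.0         # prediction stored at a leaf

    def equivalence_classes(node, domain):
        """Enumerate (hyperrectangle, output) pairs for one tree.
        `domain` is a list with one (low, high) bound per feature."""
        if node.left is None and node.right is None:   # leaf: output is constant here
            return [(list(domain), node.value)]
        lo, hi = domain[node.feature]
        classes = []
        if lo <= node.threshold:                       # left branch is reachable
            left_dom = list(domain)
            left_dom[node.feature] = (lo, min(hi, node.threshold))
            classes += equivalence_classes(node.left, left_dom)
        if hi > node.threshold:                        # right branch is reachable
            right_dom = list(domain)
            right_dom[node.feature] = (max(lo, node.threshold), hi)
            classes += equivalence_classes(node.right, right_dom)
        return classes

    # Toy tree over two features on [0, 10] x [0, 10]; the hypothetical
    # requirement is that the predicted value never exceeds 1.0.
    tree = Node(feature=0, threshold=5.0,
                left=Node(value=0.2),
                right=Node(feature=1, threshold=3.0,
                           left=Node(value=0.9),
                           right=Node(value=0.4)))

    for box, out in equivalence_classes(tree, [(0.0, 10.0), (0.0, 10.0)]):
        assert out <= 1.0, f"requirement violated on {box}"
        print(box, "->", out)

Verifying a whole random forest would additionally require combining the per-tree classes, for example by intersecting their hyperrectangles; that combinatorial cost is the kind of trade-off between the number of trees and verification time noted above.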

Keywords

Machine learning · Formal verification · Random forest · Decision tree

Notes

Acknowledgements

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer and Information Science, Linköping University, Linköping, Sweden
