Output Range Analysis for Deep Feedforward Neural Networks

  • Conference paper
NASA Formal Methods (NFM 2018)

Abstract

Given a neural network (NN) and a set of possible inputs to the network described by polyhedral constraints, we aim to compute a safe over-approximation of the set of possible output values. This operation is a fundamental primitive enabling the formal analysis of neural networks that are extensively used in a variety of machine learning tasks such as perception and control of autonomous systems. Increasingly, they are deployed in high-assurance applications, leading to a compelling use case for formal verification approaches. In this paper, we present an efficient range estimation algorithm that iterates between an expensive global combinatorial search using mixed-integer linear programming problems, and a relatively inexpensive local optimization that repeatedly seeks a local optimum of the function represented by the NN. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate applications of our approach to computing flowpipes for neural network-based feedback controllers. We show that the use of local search in conjunction with mixed-integer linear programming solvers effectively reduces the combinatorial search over possible combinations of active neurons in the network by pruning away suboptimal nodes.
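The two-phase loop the abstract describes can be sketched in miniature. The network, its weights, and the input box below are illustrative assumptions; likewise, the sound global phase here uses naive interval arithmetic as a stand-in for the MILP solve the paper actually performs (the MILP phase would encode each ReLU with binary variables and return a tight bound rather than this loose one). The local phase is projected gradient ascent from random starts, a plausible instance of the "inexpensive local optimization" the abstract refers to.

```python
import numpy as np

# A tiny 2-layer ReLU network y = W2 @ relu(W1 @ x + b1) + b2.
# Weights chosen arbitrarily for illustration (not from the paper).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

def nn(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_search(lo, hi, n_starts=20, steps=200, lr=0.01, seed=0):
    """Phase 1 (cheap, local): projected gradient ascent from random
    starts inside the input box. Any value found is a valid lower
    bound on the true maximum output."""
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)
        for _ in range(steps):
            h = W1 @ x + b1
            mask = (h > 0).astype(float)   # current ReLU activation pattern
            grad = (W2 * mask) @ W1        # dy/dx under this fixed pattern
            x = np.clip(x + lr * grad.ravel(), lo, hi)  # project onto box
        best = max(best, float(nn(x)[0]))
    return best

def interval_bound(lo, hi):
    """Phase 2 stand-in (sound, global): interval arithmetic gives a
    safe but loose upper bound; the paper uses MILP here instead."""
    c, r = (lo + hi) / 2, (hi - lo) / 2
    l1 = W1 @ c - np.abs(W1) @ r + b1      # affine layer: center +/- |W| r
    u1 = W1 @ c + np.abs(W1) @ r + b1
    l1, u1 = np.maximum(l1, 0), np.maximum(u1, 0)  # ReLU is monotone
    c2, r2 = (l1 + u1) / 2, (u1 - l1) / 2
    return float((W2 @ c2 + np.abs(W2) @ r2 + b2)[0])

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lb = local_search(lo, hi)    # concrete network output: lower bound on the max
ub = interval_bound(lo, hi)  # sound over-approximation: upper bound on the max
assert lb <= ub              # the true maximum lies in [lb, ub]
```

The full algorithm iterates: the global phase either certifies that no input beats the local optimum (closing the gap) or produces a better point that reseeds the local search, which is how suboptimal activation patterns get pruned.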

References

  1. Baier, C., Katoen, J.-P.: Principles of Model Checking. MIT Press, Cambridge (2008)

  2. Barrett, C.W., Sebastiani, R., Seshia, S.A., Tinelli, C.: Satisfiability modulo theories. Handb. Satisfiability 185, 825–885 (2009)

  3. Bastani, O., Ioannou, Y., Lampropoulos, L., Vytiniotis, D., Nori, A., Criminisi, A.: Measuring neural net robustness with constraints. In: Advances in Neural Information Processing Systems, pp. 2613–2621 (2016)

  4. Bixby, R.E.: A brief history of linear and mixed-integer programming computation. Documenta Mathematica 107–121 (2012)

  5. Bunel, R., Turkaslan, I., Torr, P.H.S., Kohli, P., Kumar, M.P.: Piecewise linear neural network verification: a comparative study. CoRR, abs/1711.00455 (2017)

  6. Dutta, S.: Sherlock: an output range analysis tool for neural networks. https://github.com/souradeep-111/sherlock

  7. Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A.: Verified inference of feedback control systems using feedforward neural networks. Draft (2017). Available upon request

  8. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 269–286. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_19

  9. Gurobi Optimization, Inc.: Gurobi optimizer reference manual (2016)

  10. Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. CoRR, abs/1610.06940 (2016)

  11. IBM ILOG Inc.: CPLEX MILP Solver (1992)

  12. Julian, K., Kochenderfer, M.J.: Neural network guidance for UAVs. In: AIAA Guidance Navigation and Control Conference (GNC) (2017)

  13. Kahn, G., Zhang, T., Levine, S., Abbeel, P.: PLATO: policy learning using adaptive trajectory optimization. arXiv preprint arXiv:1603.00622 (2016)

  14. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5

  15. Katz, G., et al.: Reluplex: CAV 2017 prototype (2017). https://github.com/guykatzz/ReluplexCav2017

  16. Kurd, Z., Kelly, T.: Establishing safety criteria for artificial neural networks. In: Palade, V., Howlett, R.J., Jain, L. (eds.) KES 2003. LNCS (LNAI), vol. 2773, pp. 163–169. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45224-9_24

  17. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)

  18. Lomuscio, A., Maganti, L.: An approach to reachability analysis for feed-forward ReLU neural networks. CoRR, abs/1706.07351 (2017)

  19. Luenberger, D.G.: Optimization by Vector Space Methods. Wiley, Hoboken (1969)

  20. Mitchell, J.E.: Branch-and-cut algorithms for combinatorial optimization problems. In: Handbook of Applied Optimization, pp. 65–77 (2002)

  21. Papernot, N., McDaniel, P.D., Goodfellow, I.J., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against deep learning systems using adversarial examples. CoRR, abs/1602.02697 (2016)

  22. Pulina, L., Tacchella, A.: An abstraction-refinement approach to verification of artificial neural networks. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 243–257. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_24

  23. Pulina, L., Tacchella, A.: Challenging SMT solvers to verify neural networks. AI Commun. 25(2), 117–135 (2012)

  24. Sassi, M.A.B., Bartocci, E., Sankaranarayanan, S.: A linear programming-based iterative approach to stabilizing polynomial dynamics. In: Proceedings of IFAC 2017. Elsevier, Amsterdam (2017)

  25. Scheibler, K., Winterer, L., Wimmer, R., Becker, B.: Towards verification of artificial neural networks. In: MBMV Workshop, pp. 30–40 (2015)

  26. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. CoRR, abs/1312.6199 (2013)

  27. Tjeng, V., Tedrake, R.: Verifying neural networks with mixed integer programming. CoRR, abs/1711.07356 (2017)

  28. Vanderbei, R.J.: Linear Programming: Foundations and Extensions, 2nd edn. Springer, Heidelberg (2001). Cf. http://www.princeton.edu/~rvdb/LPbook/

  29. Williams, H.P.: Model Building in Mathematical Programming, 5th edn. Wiley, Hoboken (2013)

  30. Xiang, W., Tran, H.-D., Johnson, T.T.: Output reachable set estimation and verification for multi-layer neural networks. CoRR, abs/1708.03322 (2017)

  31. Xiang, W., Tran, H.-D., Rosenfeld, J.A., Johnson, T.T.: Reachable set estimation and verification for a class of piecewise linear systems with neural network controllers (2018). To Appear in the American Control Conference (ACC), Invited Session on Formal Methods in Controller Synthesis

Acknowledgments

We gratefully acknowledge Sergio Mover and Marco Gario for their helpful comments on an earlier version of this paper. This work was funded in part by the US National Science Foundation (NSF) under award numbers CNS-1646556, CNS-1750009, CNS-1740079 and US ARL Cooperative Agreement W911NF-17-2-0196. All opinions expressed are those of the authors and not necessarily of the US NSF or ARL.

Author information

Correspondence to Sriram Sankaranarayanan.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A. (2018). Output Range Analysis for Deep Feedforward Neural Networks. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds) NASA Formal Methods. NFM 2018. Lecture Notes in Computer Science, vol. 10811. Springer, Cham. https://doi.org/10.1007/978-3-319-77935-5_9

  • DOI: https://doi.org/10.1007/978-3-319-77935-5_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-77934-8

  • Online ISBN: 978-3-319-77935-5

  • eBook Packages: Computer Science, Computer Science (R0)
