Formal Specification for Deep Neural Networks

  • Sanjit A. Seshia
  • Ankush Desai
  • Tommaso Dreossi
  • Daniel J. Fremont
  • Shromona Ghosh
  • Edward Kim
  • Sumukh Shivakumar
  • Marcell Vazquez-Chanlatte
  • Xiangyu Yue
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11138)

Abstract

The increasing use of deep neural networks in a variety of applications, including some safety-critical ones, has brought renewed interest in the topic of verification of neural networks. However, verification is most meaningful when performed with high-quality formal specifications. In this paper, we survey the landscape of formal specification for deep neural networks, and discuss the opportunities and challenges for formal methods for this domain.
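To make concrete what a formal specification for a deep neural network can look like, the following display states local adversarial robustness, one of the most widely studied properties in this literature. This is an illustrative sketch rather than a formula taken from the paper; the classifier f, the reference input x, the perturbation bound ε, and the choice of the ℓ∞ norm are all assumptions made for the example.

    \forall x' \in \mathbb{R}^n.\;
      \|x' - x\|_\infty \le \varepsilon
      \;\Longrightarrow\;
      \operatorname{argmax}_i f_i(x') = \operatorname{argmax}_i f_i(x)

Read: every input x' within an ε-ball of x (measured here in the ℓ∞ norm) must receive the same label as x itself. Verification tools for neural networks typically attempt to prove or falsify a formula of this shape for a trained network f.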

Acknowledgments

The work of the authors on this paper was funded in part by the NSF VeHICaL project (#1545126), NSF projects #1646208 and #1739816, NSF Graduate Research Fellowships, DARPA under agreement number FA8750-16-C0043, the DARPA Assured Autonomy program, Berkeley Deep Drive, and by Toyota under the iCyPhy center. This paper was the outcome of discussions amongst the co-authors in early 2018. It has additionally benefited from conversations with Somesh Jha, Susmit Jha, Pushmeet Kohli, Aditya Nori, Jerry Zhu, and several participants in Dagstuhl Seminar 18121.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

All authors are affiliated with the University of California, Berkeley, USA. Sanjit A. Seshia is the corresponding author.
