
Safety and Robustness of Deep Neural Networks Object Recognition Under Generic Attacks

Conference paper, in: Neural Information Processing (ICONIP 2019)

Abstract

Embedding machine or deep learning software into safety-critical systems such as autonomous vehicles requires software verification and validation. Such software adds non-traceable hazards to traditional hardware and sensor failures, not to mention attacks that fool a DNN's predictions and hamper its robustness. Formal methods from computer science are now being applied to deep neural networks to assess the local and global robustness of a given DNN. Typically, static analysis based on abstract interpretation or SAT-solver approaches is applied to neural networks, leveraging the important progress of formal methods over the last decades. Such approaches estimate bounds on perturbations of the inputs and formally guarantee that the DNN prediction is unchanged within these bounds. However, formal methods for DNN-based image perception systems have so far only been applied to simple image attacks (2D rotation, brightness). In this work, we extend the definition of lower and upper bounds to assess the robustness of a DNN perception system against more generic attacks. We propose a general method to verify object recognition systems using abstract interpretation theory. Another major contribution is the adaptation of upper and lower bounds with abstract intervals to support more complex attacks. We consider the following three classes: convolutional attacks, occlusion attacks, and geometrical transformations. For the last class, we generalize geometrical transformations to displacements in three-dimensional space.
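To make the interval-based certification idea concrete, the following is a minimal sketch, not the authors' implementation: interval bound propagation through a small fully connected ReLU network, with an occlusion attack encoded as an input box. The helper names and the toy network (affine_interval, occlusion_box, W1, ...) are illustrative assumptions, not taken from the paper.

import numpy as np

def affine_interval(lo, hi, W, b):
    # Tightest interval image of x -> W @ x + b over the box [lo, hi]:
    # positive weights take the matching bound, negative weights the opposite one.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_interval(lo, hi):
    # ReLU is monotone, so it acts on the bounds elementwise.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def occlusion_box(image, mask):
    # Encode an occlusion attack as a box: occluded pixels range over [0, 1],
    # all other pixels stay fixed at their original intensity.
    lo, hi = image.copy(), image.copy()
    lo[mask], hi[mask] = 0.0, 1.0
    return lo, hi

# Toy 2-layer ReLU network on a flattened 4-pixel "image".
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

image = np.array([0.2, 0.8, 0.5, 0.1])
mask = np.array([False, True, False, False])  # one occluded pixel

lo, hi = occlusion_box(image, mask)
lo, hi = relu_interval(*affine_interval(lo, hi, W1, b1))
lo, hi = affine_interval(lo, hi, W2, b2)

# Robustness certificate: the clean prediction is provably unchanged for
# every image in the box if its logit's lower bound exceeds every other
# logit's upper bound.
logits = W2 @ np.maximum(W1 @ image + b1, 0.0) + b2
pred = int(np.argmax(logits))
certified = bool(lo[pred] > np.delete(hi, pred).max())
print("prediction:", pred, "certified robust:", certified)

If the certificate holds, the prediction is guaranteed to be stable for every image inside the box. Plain intervals over-approximate; tighter abstract domains such as zonotopes reduce this looseness at higher cost, which is the trade-off abstract-interpretation verifiers navigate.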



Author information


Correspondence to Mallek Mziou Sallami or Mohamed Ibn Khedher.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Mziou Sallami, M., Ibn Khedher, M., Trabelsi, A., Kerboua-Benlarbi, S., Bettebghor, D. (2019). Safety and Robustness of Deep Neural Networks Object Recognition Under Generic Attacks. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1142. Springer, Cham. https://doi.org/10.1007/978-3-030-36808-1_30


  • DOI: https://doi.org/10.1007/978-3-030-36808-1_30


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36807-4

  • Online ISBN: 978-3-030-36808-1

  • eBook Packages: Computer Science, Computer Science (R0)
