
Minimal Multi-Layer Modifications of Deep Neural Networks

  • Conference paper
Software Verification and Formal Methods for ML-Enabled Autonomous Systems (NSV 2022, FoMLAS 2022)

Abstract

Deep neural networks (DNNs) have become increasingly popular in recent years. However, despite their many successes, DNNs may also err and produce incorrect and potentially fatal outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. Much work has been put into detecting such erroneous behavior in DNNs, e.g., via testing or verification, but removing these errors after their detection has received less attention. We present here a new tool, called 3M-DNN, for repairing a given DNN that is known to err on some set of inputs. The novel repair procedure implemented in 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine. To the best of our knowledge, our method is the first one that allows repairing the network by simultaneously modifying multiple layers. This is achieved by splitting the network into sub-networks, and applying a single-layer repairing technique to each component. We evaluated the 3M-DNN tool on an extensive set of benchmarks, obtaining promising results.
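The multi-layer repair scheme summarized in the abstract — split the network at a "separation" layer, fix candidate values for that layer, then repair each resulting sub-network with a single-layer technique — can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the network is reduced to two linear layers, the verification-backed single-layer repair is replaced by a closed-form minimal-norm (rank-one) update, and all function names (`single_layer_repair`, `split_and_repair`) are hypothetical.

```python
import numpy as np

def single_layer_repair(W, x, target):
    """Minimum-Frobenius-norm change to W so the layer maps x to target.

    delta = outer(residual, x) / (x . x) is the smallest update with
    (W + delta) @ x == target; it stands in for the verification-driven
    single-layer repair that the real tool performs.
    """
    x = np.asarray(x, dtype=float)
    residual = np.asarray(target, dtype=float) - W @ x  # error to cancel
    delta = np.outer(residual, x) / (x @ x)             # rank-one minimal update
    return W + delta

def split_and_repair(W1, W2, x, y_target, z_candidate):
    """Split a two-layer linear network at its hidden (separation) layer:
    repair W1 so the first sub-network outputs z_candidate on x, and
    repair W2 so the second sub-network maps z_candidate to y_target —
    thereby modifying both layers simultaneously.
    """
    W1_new = single_layer_repair(W1, x, z_candidate)
    W2_new = single_layer_repair(W2, z_candidate, y_target)
    return W1_new, W2_new
```

In this toy setting the rank-one update is provably the smallest weight change (in Frobenius norm) forcing the layer to hit the chosen separation-layer values, mirroring the minimality objective that 3M-DNN pursues through its black-box verification backend.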



Notes

  1.

    https://zenodo.org/record/5735194#.Ysvf_nZByUk.

  2.

    It is possible for an invocation of SingleLayerModification() to fail, because no change to the layer's weights can achieve the desired results. Whenever this happens, 3M-DNN continues to the next iteration, exploring a different change to the separation layers’ values. This situation is theoretically possible, but did not occur in our experiments.
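The fallback behavior described in this note amounts to a simple search loop over candidate separation-layer assignments. The sketch below is hypothetical (the real tool queries a verification backend; `repair_with_fallback` and the callback are illustrative names): a failed single-layer modification is signaled by `None`, and the search simply moves on.

```python
def repair_with_fallback(candidates, try_single_layer_modification):
    """Try candidate separation-layer values in order; skip any candidate
    for which the single-layer modification fails (returns None)."""
    for values in candidates:
        repaired = try_single_layer_modification(values)
        if repaired is not None:  # modification succeeded
            return repaired
    return None                   # every candidate failed

# Example: the first candidate admits no repair, the second does.
result = repair_with_fallback(
    ["bad", "good"],
    lambda v: None if v == "bad" else f"repaired-with-{v}",
)
```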


Acknowledgement

This work was partially supported by the Israel Science Foundation (grant number 683/18) and the HUJI Federmann Cyber Security Center.


Corresponding author

Correspondence to Guy Katz.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Refaeli, I., Katz, G. (2022). Minimal Multi-Layer Modifications of Deep Neural Networks. In: Isac, O., Ivanov, R., Katz, G., Narodytska, N., Nenzi, L. (eds) Software Verification and Formal Methods for ML-Enabled Autonomous Systems. NSV 2022, FoMLAS 2022. Lecture Notes in Computer Science, vol 13466. Springer, Cham. https://doi.org/10.1007/978-3-031-21222-2_4


  • DOI: https://doi.org/10.1007/978-3-031-21222-2_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21221-5

  • Online ISBN: 978-3-031-21222-2

  • eBook Packages: Computer Science (R0)
