
Differentiable Logics for Neural Network Training and Verification

  • Conference paper
  • First Online:
Software Verification and Formal Methods for ML-Enabled Autonomous Systems (NSV 2022, FoMLAS 2022)

Abstract

Neural network (NN) verification is a problem that has drawn the attention of many researchers. The specific nature of neural networks does away with the conventional assumption that a static program is given for verification: in the case of NNs, multiple models can be used, and if one fails verification, a new one can be trained. This leads to an approach called continuous verification, referring to the loop between training and verification. One tactic for improving a network’s performance is through “constraint-based loss functions” - a method of using a differentiable logic (DL) to translate logical constraints into loss functions, which can then be used to train the network specifically to satisfy those constraints. In this paper we present a uniform way of defining a translation from logic syntax to a differentiable loss function, then examine and compare the existing DLs. We explore the mathematical properties desired in such translations and discuss the design space, identifying possible directions for future work.
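To make the idea of constraint-based loss functions concrete, the following is a minimal sketch (not the paper's exact definitions) of a DL2-style translation: each logical constraint is mapped to a non-negative, (almost-everywhere) differentiable loss term that is zero exactly when the constraint holds, with conjunction translated as a sum and disjunction as a product. The function names and the interval example are illustrative assumptions, not taken from the paper.

```python
# Sketch of a DL2-style translation from constraints to losses.
# Invariant: each loss is >= 0 and equals 0 iff the constraint is satisfied.

def loss_leq(a, b):
    # Atomic constraint a <= b: penalise the amount of violation.
    return max(a - b, 0.0)

def loss_and(l1, l2):
    # Conjunction: sum of losses (zero iff both are zero).
    return l1 + l2

def loss_or(l1, l2):
    # Disjunction: product of losses (zero iff either is zero).
    return l1 * l2

# Hypothetical example: constrain a network output y to lie in [0, 1].
def interval_loss(y):
    return loss_and(loss_leq(0.0, y), loss_leq(y, 1.0))

print(interval_loss(0.5))  # 0.0: constraint satisfied, no penalty
print(interval_loss(1.3))  # positive: penalises the violation y > 1
```

In training, such a term would be added to the usual data-driven loss so that gradient descent pushes the network toward satisfying the constraint; different DLs differ precisely in how they define the atomic and connective translations above.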




Acknowledgement

The authors acknowledge the support of EPSRC grant AISEC EP/T026952/1 and NCSC grant Neural Network Verification: in search of the missing spec.


Corresponding author

Correspondence to Natalia Ślusarz.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ślusarz, N., Komendantskaya, E., Daggitt, M.L., Stewart, R. (2022). Differentiable Logics for Neural Network Training and Verification. In: Isac, O., Ivanov, R., Katz, G., Narodytska, N., Nenzi, L. (eds) Software Verification and Formal Methods for ML-Enabled Autonomous Systems. NSV/FoMLAS 2022. Lecture Notes in Computer Science, vol 13466. Springer, Cham. https://doi.org/10.1007/978-3-031-21222-2_5


  • DOI: https://doi.org/10.1007/978-3-031-21222-2_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21221-5

  • Online ISBN: 978-3-031-21222-2

  • eBook Packages: Computer Science (R0)
