Your Proof Fails? Testing Helps to Find the Reason

  • Guillaume Petiot
  • Nikolai Kosmatov
  • Bernard Botella
  • Alain Giorgetti
  • Jacques Julliand
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9762)

Abstract

Applying deductive verification to formally prove that a program respects its formal specification is a complex and time-consuming task, in particular because of the lack of feedback in case of proof failures. Besides a non-compliance between the code and its specification (due to an error in at least one of them), possible reasons for a proof failure include a missing or too weak specification for a called function or a loop, as well as a timeout or the inability of the prover to finish a particular proof. This work proposes a complete methodology in which test generation helps to identify the reason for a proof failure and to exhibit a counterexample clearly illustrating the issue. We define the categories of proof failures, introduce two subcategories of contract weaknesses (single and global ones), and examine their properties. We describe how to transform a formally specified C program into C code suitable for testing, and illustrate the benefits of the method on comprehensive examples. The method has been implemented in StaDy, a plugin of the software analysis platform Frama-C. Initial experiments show that detecting non-compliances and contract weaknesses makes it possible to precisely diagnose most proof failures.
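To make the transformation concrete, the sketch below illustrates the kind of specification-to-code translation described above on a toy example (the function max_elem, its contract, and the wrapper are illustrative assumptions, not taken from the paper): an ACSL postcondition of a C function is turned into an executable assertion, so that a structural test generator such as PathCrawler can search for inputs that violate the contract.

    #include <assert.h>

    /*@ requires n > 0 && \valid(a + (0 .. n-1));
        assigns \nothing;
        ensures \forall integer k; 0 <= k < n ==> a[k] <= \result;
    */
    int max_elem(int *a, int n) {
      int max = a[0];
      for (int i = 1; i < n; i++)
        if (a[i] > max)
          max = a[i];
      return max;
    }

    /* Hypothetical test wrapper: the ensures clause becomes an
       executable loop-and-assert check. Running a test generator on
       this wrapper, restricted to inputs satisfying the precondition,
       searches for a counterexample: any input reaching a failing
       assert witnesses a non-compliance between code and contract. */
    int max_elem_wrapper(int *a, int n) {
      int result = max_elem(a, n);
      for (int k = 0; k < n; k++)
        assert(a[k] <= result); /* executable version of the ensures clause */
      return result;
    }

    int main(void) {
      int a[] = {3, 7, 2};
      return max_elem_wrapper(a, 3) == 7 ? 0 : 1;
    }

If test generation finds an input for which the assert fails, the proof failure is diagnosed as a non-compliance and the input serves as a counterexample; if no violating input is found, a contract weakness or a prover limitation becomes the more plausible explanation.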

Keywords

Failed Proof · Deductive Verification · Weak Contract · PathCrawler · Verification Engine

Acknowledgment

Part of the research leading to these results has received funding from the ARTEMIS Joint Undertaking under grant agreement No. 621353 (DEWI project, www.dewi-project.eu). The authors thank the Frama-C and PathCrawler teams for providing the tools and support. Special thanks to François Bobot, Loïc Correnson, Julien Signoles and Nicky Williams for many fruitful discussions, suggestions and advice.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Guillaume Petiot (1)
  • Nikolai Kosmatov (1)
  • Bernard Botella (1)
  • Alain Giorgetti (2)
  • Jacques Julliand (2)
  1. CEA, LIST, Software Reliability Laboratory, Gif-sur-Yvette, France
  2. FEMTO-ST/DISC, University of Franche-Comté, Besançon Cedex, France
