
Symbolic Execution and Deductive Verification Approaches to VerifyThis 2017 Challenges

  • Ziqing Luo
  • Stephen F. Siegel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11245)

Abstract

We present solutions to the VerifyThis 2017 program verification challenges using the symbolic execution tool CIVL. Comparing these to existing solutions using deductive verification tools such as Why3 and KeY, we analyze the advantages and disadvantages of the two approaches. The issues include scalability; the ability to handle challenging programming language constructs, such as expressions with side-effects, pointers, and concurrency; the ability to specify complex properties; and usability and automation. We conclude with a presentation of a new CIVL feature that attempts to bridge the gap between the two approaches by allowing a user to incorporate loop invariants incrementally into a symbolic execution framework.

Keywords

VerifyThis · Symbolic execution · Deductive verification · Loop invariants · CIVL


Acknowledgments

This research was supported by the U.S. National Science Foundation under Award CCF-1319571, and by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Award Number DE-SC0012566. This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.

References

  1. Ahrendt, W., Beckert, B., Bubel, R., Hähnle, R., Schmitt, P.H., Ulbrich, M. (eds.): Deductive Software Verification - The KeY Book - From Theory to Practice. LNCS, vol. 10001. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49812-6
  2. Barnett, M., Leino, K.R.M.: Weakest-precondition of unstructured programs. In: Ernst, M.D., Jensen, T.P. (eds.) Proceedings of the 2005 ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, PASTE '05, Lisbon, Portugal, 5–6 September 2005, pp. 82–87. ACM (2005). https://doi.org/10.1145/1108792.1108813
  3. Beyer, D.: Software verification with validation of results. In: Legay, A., Margaria, T. (eds.) TACAS 2017, Part II. LNCS, vol. 10206, pp. 331–349. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54580-5_20
  4. Blom, S., Huisman, M.: The VerCors tool for verification of concurrent programs. In: Jones, C., Pihlajasaari, P., Sun, J. (eds.) FM 2014. LNCS, vol. 8442, pp. 127–131. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06410-9_9
  5. Cuoq, P., Kirchner, F., Kosmatov, N., Prevosto, V., Signoles, J., Yakobowski, B.: Frama-C: a software analysis perspective. Form. Asp. Comput. 27, 573–609 (2012). https://doi.org/10.1007/s00165-014-0326-7
  6. Eidgenössische Technische Hochschule Zürich, Chair of Programming Methodology: VerifyThis 2017 archive (2017). http://www.pm.inf.ethz.ch/research/verifythis/Archive/2017.html
  7. Filliâtre, J.-C., Paskevich, A.: Why3 — where programs meet provers. In: Felleisen, M., Gardner, P. (eds.) ESOP 2013. LNCS, vol. 7792, pp. 125–128. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37036-6_8
  8. Hentschel, M., Bubel, R., Hähnle, R.: The Symbolic Execution Debugger (SED): a platform for interactive symbolic execution, debugging, verification and more. Int. J. Softw. Tools Technol. Transf. (2018). https://doi.org/10.1007/s10009-018-0490-9
  9. Huisman, M., Monahan, R., Müller, P., Mostowski, W., Ulbrich, M.: VerifyThis 2017: a program verification competition. Technical report, Karlsruhe Reports in Informatics 2017, 10, Karlsruhe Institute of Technology, Faculty of Informatics (2017). https://doi.org/10.5445/IR/1000077160
  10. Păsăreanu, C.S., Visser, W.: Verification of Java programs using symbolic execution and invariant generation. In: Graf, S., Mounier, L. (eds.) SPIN 2004. LNCS, vol. 2989, pp. 164–181. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24732-6_13
  11. Siegel, S.F.: CIVL solutions to VerifyThis 2016 challenges. ACM SIGLOG News 4(2), 55–75 (2017). https://doi.acm.org/10.1145/3090064.3090070
  12. Siegel, S.F., et al.: CIVL: the concurrency intermediate verification language. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2015, pp. 61:1–61:12. ACM, New York (2015). https://doi.org/10.1145/2807591.2807635
  13. Siegel, S.F., Zirkel, T.K.: Loop invariant symbolic execution for parallel programs. In: Kuncak, V., Rybalchenko, A. (eds.) VMCAI 2012. LNCS, vol. 7148, pp. 412–427. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-27940-9_27

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Verified Software Laboratory, Department of Computer and Information Sciences, University of Delaware, Newark, USA
