Evaluating Tools for Software Verification (Track Introduction)

  • Markus Schordan
  • Dirk Beyer
  • Stephen F. Siegel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11245)

Abstract

Over the last several years, tools for program analysis and verification have become much more mature. There are now a number of competitions that evaluate and compare the implemented analyses on a given set of benchmarks. The comparison either focuses on the analysis results themselves (verification of specified properties) or on their impact on a client analysis. This track is concerned with evaluation methods for comparing analysis and verification techniques, and with how verified program properties can be represented such that they remain reproducible and reusable as intermediate results in the overall verification process (i.e., for other verification tools or later verification steps).
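To make concrete what "verification of specified properties" means in such competitions, the following is a minimal sketch of a C verification task in the style used by SV-COMP: the specified property is that the error function is unreachable for every input. The names reach_error and __VERIFIER_nondet_int follow common SV-COMP conventions, but the exact interface varies across competition editions, so this is an illustrative sketch rather than the track's official format.

    #include <stdio.h>
    #include <stdlib.h>

    /* In SV-COMP-style benchmarks these functions are declared but left
     * undefined: the verifier treats __VERIFIER_nondet_int() as an
     * arbitrary input and checks that reach_error() is never called.
     * Minimal definitions are provided here so the sketch also compiles
     * and runs as ordinary C. */
    static void reach_error(void) {
        fprintf(stderr, "specified property violated\n");
        abort();
    }

    static int __VERIFIER_nondet_int(void) {
        return rand();  /* stands in for a nondeterministic input */
    }

    int main(void) {
        int x = __VERIFIER_nondet_int();
        int y = x;

        /* A verifier must prove this branch unreachable for every
         * possible value of x; if so, the expected verdict is "true". */
        if (x != y) {
            reach_error();
        }
        return 0;
    }

A sound verifier should report that the property holds for this task; changing the assignment to, say, int y = x + 1; would make reach_error() reachable and turn the expected verdict into a property violation. Representing such verdicts in an exchangeable, reproducible form is exactly the concern raised in the abstract above.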

Acknowledgments

This work was partially performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. IM release number LLNL-CONF-757485.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Markus Schordan, Lawrence Livermore National Laboratory, Livermore, USA
  • Dirk Beyer, LMU Munich, Munich, Germany
  • Stephen F. Siegel, University of Delaware, Newark, USA