Strategy Selection for Software Verification Based on Boolean Features

A Simple but Effective Approach
  • Dirk Beyer
  • Matthias Dangl
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11245)

Abstract

Software verification is the problem of determining, given an input program and a specification, whether the program satisfies the specification. There are different strategies for approaching this problem, but, according to comparative evaluations, none of the known strategies is superior to the others. Therefore, many tools for software verification leave the choice of strategy to the user, which is problematic because the user might not be an expert on strategy selection. In the past, several learning-based approaches have been proposed to perform strategy selection automatically. This automatic choice can be formalized by a strategy selector: a function that takes as input a model of the given program and assigns to it a verification strategy. The goal of this paper is to identify a small set of program features that (1) can be determined statically for each input program in an efficient way and (2) distinguish the input programs well enough that a strategy selector can be defined that outperforms every constant strategy selector. Our results can serve as a baseline for future comparisons, because our strategy selector is simple and easy to understand, while still being powerful enough to outperform the individual strategies. We evaluate our feature set and strategy selector on a large set of 5 687 verification tasks and provide a replication package for comparative evaluation.
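
To illustrate the formalization, the sketch below shows one way a strategy selector over Boolean program features could be implemented. The feature names (has_loop, has_float, has_array, has_composite_type), the strategy names, and the decision rules are illustrative assumptions, not the feature set or selector defined in the paper; a constant selector, the baseline against which every such selector is compared, is included for contrast.

```python
# Minimal sketch of a strategy selector over Boolean program features.
# All feature names, strategy names, and decision rules are hypothetical,
# chosen only to illustrate the formalization from the abstract.

from dataclasses import dataclass


@dataclass(frozen=True)
class Features:
    """Boolean features, assumed to be statically extractable from the program."""
    has_loop: bool
    has_float: bool
    has_array: bool
    has_composite_type: bool


def strategy_selector(f: Features) -> str:
    """Map a feature vector to a verification strategy (hypothetical rules)."""
    if not f.has_loop:
        return "bounded-model-checking"   # loop-free programs: unrolling suffices
    if f.has_float:
        return "value-analysis"           # assumed to handle floats better here
    if f.has_array or f.has_composite_type:
        return "predicate-abstraction"
    return "k-induction"                  # plain loops: invariant-based induction


def constant_selector(_: Features) -> str:
    """A constant selector ignores the program and always picks one strategy."""
    return "predicate-abstraction"


if __name__ == "__main__":
    f = Features(has_loop=True, has_float=False,
                 has_array=False, has_composite_type=False)
    print(strategy_selector(f))  # -> k-induction
```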

Keywords

Strategy selection · Software verification · Algorithm selection · Program analysis · Model checking

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. LMU Munich, Munich, Germany
