Conditional Testing

Off-the-Shelf Combination of Test-Case Generators
  • Dirk Beyer
  • Thomas Lemberger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11781)

Abstract

There are several powerful automatic testers available, each with different strengths and weaknesses. To benefit immediately from the strengths of different tools, we need ways to combine techniques quickly and easily. Until now, research has mostly investigated integrated combinations, which require extra implementation effort. We propose the concept of conditional testing and a set of combination techniques that do not require implementation effort: different testers can be taken ‘off the shelf’ and combined so that they cooperatively solve the problem of test-case generation for a given input program and coverage criterion. This way, the latest advances in test-case generation can be combined without delay. Conditional testing passes the test goals that a first tester has covered on to the next tester, so that the next tester does not need to repeat work (as in combinations without information passing) but can focus on the remaining test goals. Our combinations do not require changes to the implementation of a tester, because we leverage a testability transformation (i.e., we reduce the input program to those parts that are relevant to the remaining test goals). To evaluate conditional testing and our proposed combination techniques, we (1) implemented the generic conditional tester CondTest, including the required transformations, and (2) ran experiments on a large set of benchmark tasks; the obtained results are promising.
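
As a rough illustration of the sequential combination described in the abstract, the following Python sketch shows the control loop of a conditional tester. It is an illustration only, not CondTest’s actual interface: the callables run_tester, covered_goals, and prune are hypothetical placeholders standing for running an off-the-shelf tester, measuring which goals its test cases cover, and applying the testability transformation, respectively.

    from typing import Callable, Iterable, List, Set, Tuple

    def conditional_testing(
        program: str,                                   # source of the input program
        goals: Set[str],                                # test goals of the coverage criterion
        testers: Iterable[str],                         # off-the-shelf testers, in run order
        run_tester: Callable[[str, str], List[str]],    # (tester, program) -> test cases
        covered_goals: Callable[[str, List[str]], Set[str]],  # goals the tests cover
        prune: Callable[[str, Set[str]], str],          # testability transformation
    ) -> Tuple[List[str], Set[str]]:
        """Sequential conditional testing: each tester works only on the
        goals that all previous testers left uncovered."""
        remaining = set(goals)   # test goals not yet covered
        suite: List[str] = []    # accumulated test suite
        for tester in testers:
            if not remaining:    # full coverage reached early: stop
                break
            tests = run_tester(tester, program)          # unmodified off-the-shelf run
            suite.extend(tests)
            remaining -= covered_goals(program, tests)   # the 'condition' passed on
            program = prune(program, remaining)          # reduce to the remaining goals
        return suite, remaining

The key design point is that the information passed from one tester to the next is just the set of remaining test goals, encoded directly in the pruned program; this is why each tester can participate without any change to its implementation.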

Keywords

Software testing · Test-case generation · Conditional model checking · Cooperative verification · Software verification · Program analysis · Test coverage

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. LMU Munich, Munich, Germany
