Conditional Testing

Off-the-Shelf Combination of Test-Case Generators

  • Conference paper
  • In: Automated Technology for Verification and Analysis (ATVA 2019)

Abstract

There are several powerful automatic testers available, each with different strengths and weaknesses. To benefit immediately from the different strengths of different tools, we need to investigate ways to combine techniques quickly and easily. Until now, research has mostly investigated integrated combinations, which require extra implementation effort. We propose the concept of conditional testing and a set of combination techniques that do not require implementation effort: different testers can be taken ‘off the shelf’ and combined such that they cooperatively solve the problem of test-case generation for a given input program and coverage criterion. This way, the latest advances in test-case generation can be combined without delay. Conditional testing passes the test goals that a first tester has covered on to the next tester, so that the next tester does not need to repeat work (as in combinations without information passing) but can focus on the remaining test goals. Our combinations do not require changes to the implementation of a tester, because we leverage a testability transformation (i.e., we reduce the input program to those parts that are relevant to the remaining test goals). To evaluate conditional testing and our proposed combination techniques, we (1) implemented the generic conditional tester, including the required transformations, and (2) ran experiments on a large number of benchmark tasks; the obtained results are promising.

Supported in part by DFG grant BE 1761/7-1.
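
As a concrete illustration of the combination scheme described in the abstract, the following is a minimal Python sketch of sequential conditional testing. The helper callables (prune, run, covered) are hypothetical placeholders for a testability transformation, a tester invocation, and a coverage check; they are not the tool's actual interface.

    # Minimal sketch of sequential conditional testing (hypothetical
    # interface; the real tool's API is not shown on this page).
    def conditional_testing(program, goals, testers, prune, run, covered):
        """Run each off-the-shelf tester in turn on a program that is
        pruned to the still-uncovered test goals.

        prune(program, remaining) -> program reduced to remaining goals
        run(tester, program)      -> list of generated test cases
        covered(program, tests)   -> set of goals the test cases cover
        """
        test_suite = []
        remaining = set(goals)
        for tester in testers:
            if not remaining:
                break                             # all goals already covered
            reduced = prune(program, remaining)   # testability transformation
            tests = run(tester, reduced)          # tester used unchanged
            test_suite.extend(tests)
            remaining -= covered(reduced, tests)  # pass covered goals along
        return test_suite, remaining

Because all information passing happens through the pruned program and the generated test suites, each tester can be used unchanged, which is what makes the ‘off-the-shelf’ combination possible.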


Notes

  1. All coverage criteria that are based on code reachability can be reduced to reachability of CFA edges through testability transformations [23, 33]; see the sketch after this list.

  2. Since the transformed program is generated such that each operation is written on its own line in the output code, a line uniquely identifies a test-goal label; this is also illustrated in the sketch after this list.

  3. https://sv-comp.sosy-lab.org/2019/rules.php

  4. https://gitlab.com/sosy-lab/software/conditional-testing/tree/v2.0

  5. https://github.com/sosy-lab/benchexec/tree/2.0/benchexec/tools

  6. https://gitlab.com/sosy-lab/test-comp/bench-defs/tree/testcomp19/benchmark-defs

  7. https://github.com/sosy-lab/sv-comp/tree/svcomp19/benchmark-defs

  8. https://www.es.tu-darmstadt.de/testcomp19/

  9. https://www.sosy-lab.org/research/conditional-testing/

  10. https://doi.org/10.5281/zenodo.3352401
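
To make Notes 1 and 2 concrete, here is a small self-contained sketch in Python (a toy instrumentation of my own, assuming a simplified brace style; it is not the transformation shipped with the tool) that rewrites branch coverage for a C snippet into reachability of uniquely labeled edges, one label per line:

    # Toy illustration of a testability transformation for branch coverage:
    # every branch gets a unique label on its own output line, so branch
    # coverage becomes reachability of labeled CFA edges, and a line number
    # uniquely identifies one test-goal label (cf. Notes 1 and 2).
    import re

    def instrument_branches(c_source):
        out, goal = [], 0
        for line in c_source.splitlines():
            out.append(line)
            # naive matching: assumes each branch opens its brace on this line
            if re.search(r'\bif\s*\(.*\)\s*\{\s*$', line) or \
               re.search(r'\}\s*else\s*\{\s*$', line):
                indent = " " * (len(line) - len(line.lstrip()) + 2)
                out.append(f"{indent}GOAL_{goal}:;  /* reach this edge */")
                goal += 1
        return "\n".join(out)

    print(instrument_branches(
        "int f(int x) {\n"
        "  if (x > 0) {\n"
        "    x--;\n"
        "  } else {\n"
        "    x++;\n"
        "  }\n"
        "  return x;\n"
        "}"))

A test case covers a branch exactly when its execution reaches the corresponding GOAL label, so a tester that reports reached labels (or their line numbers) needs no knowledge of the original coverage criterion.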

References

  1. Beyer, D.: Automatic verification of C and Java programs: SV-COMP 2019. In: Proc. TACAS, Part 3, LNCS, vol. 11429, pp. 133–155. Springer (2019). https://doi.org/10.1007/978-3-030-17502-3_9

  2. Beyer, D.: Competition on software testing (Test-Comp). In: Proc. TACAS, Part 3, LNCS, vol. 11429, pp. 167–175. Springer (2019). https://doi.org/10.1007/978-3-030-17502-3_11

  3. Beyer, D., Chlipala, A.J., Henzinger, T.A., Jhala, R., Majumdar, R.: Generating tests from counterexamples. In: Proc. ICSE, pp. 326–335. IEEE (2004). https://doi.org/10.1109/ICSE.2004.1317455

  4. Beyer, D., Dangl, M., Dietsch, D., Heizmann, M., Stahlbauer, A.: Witness validation and stepwise testification across software verifiers. In: Proc. FSE, pp. 721–733. ACM (2015). https://doi.org/10.1145/2786805.2786867

  5. Beyer, D., Dangl, M., Lemberger, T., Tautschnig, M.: Tests from witnesses: Execution-based validation of verification results. In: Proc. TAP, LNCS, vol. 10889, pp. 3–23. Springer (2018). https://doi.org/10.1007/978-3-319-92994-1_1

  6. Beyer, D., Gulwani, S., Schmidt, D.: Combining model checking and data-flow analysis. In: Handbook on Model Checking, pp. 493–540. Springer (2018). https://doi.org/10.1007/978-3-319-10575-8_16

  7. Beyer, D., Henzinger, T.A., Keremoglu, M.E., Wendler, P.: Conditional model checking: A technique to pass information between verifiers. In: Proc. FSE. ACM (2012). https://doi.org/10.1145/2393596.2393664

  8. Beyer, D., Jakobs, M.C.: CoVeriTest: Cooperative verifier-based testing. In: Proc. FASE, LNCS, vol. 11424, pp. 389–408. Springer (2019). https://doi.org/10.1007/978-3-030-16722-6_23

  9. Beyer, D., Jakobs, M.C., Lemberger, T., Wehrheim, H.: Reducer-based construction of conditional verifiers. In: Proc. ICSE, pp. 1182–1193. ACM (2018). https://doi.org/10.1145/3180155.3180259

  10. Beyer, D., Lemberger, T.: Software verification: Testing vs. model checking. In: Proc. HVC, LNCS, vol. 10629, pp. 99–114. Springer (2017). https://doi.org/10.1007/978-3-319-70389-3_7

  11. Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: Requirements and solutions. Int. J. Softw. Tools Technol. Transfer 21(1), 1–29 (2019). https://doi.org/10.1007/s10009-017-0469-y

  12. Beyer, D., Wehrheim, H.: Verification artifacts in cooperative verification: Survey and unifying component framework. arXiv/CoRR abs/1905.08505, May 2019. https://arxiv.org/abs/1905.08505

  13. Böhme, M., Oliveira, B.C.d.S., Roychoudhury, A.: Partition-based regression verification. In: Proc. ICSE, pp. 302–311. IEEE (2013). https://doi.org/10.1109/ICSE.2013.6606576

  14. Cadar, C., Dunbar, D., Engler, D.R.: KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proc. OSDI, pp. 209–224. USENIX Association (2008)

  15. Chowdhury, A.B., Medicherla, R.K., Venkatesh, R.: VeriFuzz: Program-aware fuzzing (competition contribution). In: Proc. TACAS, Part 3, LNCS, vol. 11429, pp. 244–249. Springer (2019). https://doi.org/10.1007/978-3-030-17502-3_22

  16. Christakis, M., Müller, P., Wüstholz, V.: Collaborative verification and testing with explicit assumptions. In: Proc. FM, LNCS, vol. 7436, pp. 132–146. Springer (2012). https://doi.org/10.1007/978-3-642-32759-9_13

  17. Cruanes, S., Hamon, G., Owre, S., Shankar, N.: Tool integration with the Evidential Tool Bus. In: Proc. VMCAI, LNCS, vol. 7737, pp. 275–294. Springer (2013). https://doi.org/10.1007/978-3-642-35873-9_18

  18. Czech, M., Jakobs, M., Wehrheim, H.: Just test what you cannot verify! In: Proc. FASE, LNCS, vol. 9033, pp. 100–114. Springer (2015). https://doi.org/10.1007/978-3-662-46675-9_7

  19. Daca, P., Gupta, A., Henzinger, T.A.: Abstraction-driven concolic testing. In: Proc. VMCAI, LNCS, vol. 9583, pp. 328–347. Springer (2016). https://doi.org/10.1007/978-3-662-49122-5_16

  20. Ferles, K., Wüstholz, V., Christakis, M., Dillig, I.: Failure-directed program trimming. In: Proc. ESEC/FSE, pp. 174–185. ACM (2017). https://doi.org/10.1145/3106237.3106249

  21. Gadelha, M.Y.R., Monteiro, F.R., Cordeiro, L.C., Nicole, D.A.: ESBMC v6.0: Verifying C programs using k-induction and invariant inference (competition contribution). In: Proc. TACAS, Part 3, LNCS, vol. 11429, pp. 209–213. Springer (2019). https://doi.org/10.1007/978-3-030-17502-3_15

  22. Gulavani, B.S., Henzinger, T.A., Kannan, Y., Nori, A.V., Rajamani, S.K.: Synergy: A new algorithm for property checking. In: Proc. FSE, pp. 117–127. ACM (2006). https://doi.org/10.1145/1181775.1181790

  23. Harman, M., Hu, L., Hierons, R.M., Wegener, J., Sthamer, H., Baresel, A., Roper, M.: Testability transformation. IEEE Trans. Softw. Eng. 30(1), 3–16 (2004). https://doi.org/10.1109/TSE.2004.1265732

  24. Holzer, A., Schallhart, C., Tautschnig, M., Veith, H.: Query-driven program testing. In: Proc. VMCAI, LNCS, vol. 5403, pp. 151–166. Springer (2009). https://doi.org/10.1007/978-3-540-93900-9_15

  25. Holzer, A., Schallhart, C., Tautschnig, M., Veith, H.: How did you specify your test suite. In: Proc. ASE, pp. 407–416. ACM (2010). https://doi.org/10.1145/1858996.1859084

  26. Howar, F., Isberner, M., Merten, M., Steffen, B., Beyer, D., Păsăreanu, C.S.: Rigorous examination of reactive systems. The RERS challenges 2012 and 2013. Int. J. Softw. Tools Technol. Transfer 16(5), 457–464 (2014). https://doi.org/10.1007/s10009-014-0337-y

  27. Kim, Y., Xu, Z., Kim, M., Cohen, M.B., Rothermel, G.: Hybrid directed test suite augmentation: An interleaving framework. In: Proc. ICST, pp. 263–272. IEEE (2014). https://doi.org/10.1109/ICST.2014.39

  28. Majumdar, R., Sen, K.: Hybrid concolic testing. In: Proc. ICSE, pp. 416–426. IEEE (2007). https://doi.org/10.1109/ICSE.2007.41

  29. Margaria, T., Nagel, R., Steffen, B.: jETI: A tool for remote tool integration. In: Proc. TACAS, LNCS, vol. 3440, pp. 557–562. Springer (2005). https://doi.org/10.1007/978-3-540-31980-1_38

  30. Noller, Y., Kersten, R., Pasareanu, C.S.: Badger: Complexity analysis with fuzzing and symbolic execution. In: Proc. ISSTA, pp. 322–332. ACM (2018). https://doi.org/10.1145/3213846.3213868

  31. Qiu, R., Khurshid, S., Pasareanu, C.S., Wen, J., Yang, G.: Using test ranges to improve symbolic execution. In: Proc. NFM, LNCS, vol. 10811, pp. 416–434. Springer (2018). https://doi.org/10.1007/978-3-319-77935-5_28

  32. Rushby, J.M.: An Evidential Tool Bus. In: Proc. ICFEM, LNCS, vol. 3785, p. 36. Springer (2005). https://doi.org/10.1007/11576280_3

  33. Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3(1), 30–50 (2000). https://doi.org/10.1145/353323.353382

  34. Song, J., Alves-Foss, J.: The DARPA cyber grand challenge: A competitor’s perspective, part 2. IEEE Secur. Priv. 14(1), 76–81 (2016). https://doi.org/10.1109/MSP.2016.14

  35. Steffen, B., Margaria, T., Braun, V.: The Electronic Tool Integration platform: Concepts and design. STTT 1(1–2), 9–30 (1997). https://doi.org/10.1007/s100090050003

  36. Stephens, N., Grosen, J., Salls, C., Dutcher, A., Wang, R., Corbetta, J., Shoshitaishvili, Y., Kruegel, C., Vigna, G.: Driller: Augmenting fuzzing through selective symbolic execution. In: Proc. NDSS. Internet Society (2016). https://doi.org/10.14722/ndss.2016.23368

  37. Taneja, K., Xie, T., Tillmann, N., de Halleux, J.: eXpress: Guided path exploration for efficient regression test generation. In: Proc. ISSTA, pp. 1–11. ACM (2011). https://doi.org/10.1145/2001420.2001422

  38. Tip, F.: A survey of program slicing techniques. J. Program. Lang. 3, 121–189 (1995)

  39. Visser, W., Păsăreanu, C.S., Khurshid, S.: Test input generation with Java PathFinder. In: Proc. ISSTA, pp. 97–107. ACM (2004). https://doi.org/10.1145/1007512.1007526

  40. Zhu, Z., Jiao, L., Xu, X.: Combining search-based testing and dynamic symbolic execution by evolvability metric. In: Proc. ICSME, pp. 59–68. IEEE (2018). https://doi.org/10.1109/ICSME.2018.00015

Author information

Correspondence to Dirk Beyer.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Beyer, D., Lemberger, T. (2019). Conditional Testing. In: Chen, Y.-F., Cheng, C.-H., Esparza, J. (eds.) Automated Technology for Verification and Analysis. ATVA 2019. Lecture Notes in Computer Science, vol. 11781. Springer, Cham. https://doi.org/10.1007/978-3-030-31784-3_11

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-31784-3_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-31783-6

  • Online ISBN: 978-3-030-31784-3

  • eBook Packages: Computer Science, Computer Science (R0)
