Abstract
Testing is currently the main technique adopted by industry for improving the quality, reliability, and security of software. To lower the cost of manual testing, automatic testing techniques have been devised, such as random and symbolic testing, each with its own trade-offs. For example, random testing excels at fast global exploration of software, but it plateaus when faced with hard-to-hit, numerically-intensive execution paths. Symbolic testing, on the other hand, excels at exploring such paths, but it struggles when faced with complex heap class structures. In this paper, we describe an approach for automatic unit testing of object-oriented software that integrates the two techniques. We leverage feedback-directed unit testing to generate meaningful sequences of constructor and method invocations that create rich heap structures, and we in turn further explore these sequences using dynamic symbolic execution. We implement this approach in a tool called JDoop, which we augment with several parameters for fine-tuning its heuristics; such “knobs” allow for a detailed exploration of the various trade-offs that the proposed integration offers. Using JDoop, we perform an extensive empirical exploration of this space, and we describe lessons learned and guidelines for future research efforts in this area.
Supported in part by the National Science Foundation (NSF) award CCF 1421678.
Notes
1. Note that a very preliminary version of JDoop was presented earlier as a short workshop extended abstract [11].
2. JDoop is available under the GNU General Public License version 3 (or later) at https://github.com/psycopaths/jdoop.
3. The testing infrastructure is available under the GNU Affero GPLv3+ license at https://github.com/soarlab/jdoop-wrapper.
4. These are methods for which Nhandler was not configured to take over execution, leading to a crash of JDart. We configured Nhandler to handle all native methods of java.lang.String.
References
Apt testbed facility. https://www.aptlab.net
ASM: A Java bytecode engineering library. http://asm.ow2.org
Baluda, M., Denaro, G., Pezzè, M.: Bidirectional symbolic analysis for effective branch testing. IEEE Trans. Softw. Eng. 42(5), 403–426 (2016)
Beyer, D.: Reliable and reproducible competition results with BenchExec and witnesses (report on SV-COMP 2016). In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 887–904. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49674-9_55
Boshernitsan, M., Doong, R., Savoia, A.: From Daikon to Agitator: lessons and challenges in building a commercial tool for developer testing. In: ISSTA, pp. 169–180 (2006)
Cadar, C., Dunbar, D., Engler, D.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: OSDI, pp. 209–224 (2008)
Cho, C.Y., Babić, D., Poosankam, P., Chen, K.Z., Wu, E.X., Song, D.: MACE: model-inference-assisted concolic exploration for protocol and vulnerability discovery. In: Proceedings of the 20th USENIX Security Symposium (2011)
Csallner, C., Smaragdakis, Y., Xie, T.: DSD-Crasher: a hybrid analysis tool for bug finding. ACM Trans. Softw. Eng. Methodol. 17(2), 8:1–8:37 (2008)
De Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
Deters, M., Reynolds, A., King, T., Barrett, C.W., Tinelli, C.: A tour of CVC4: how it works, and how to use it. In: FMCAD, p. 7 (2014)
Dimjašević, M., Rakamarić, Z.: JPF-Doop: combining concolic and random testing for Java. In: Java Pathfinder Workshop (JPF) (2013). Extended abstract
Eler, M.M., Endo, A.T., Durelli, V.H.S.: Quantifying the characteristics of Java programs that may influence symbolic execution from a test data generation perspective. In: COMPSAC, pp. 181–190 (2014)
Fraser, G., Arcuri, A.: EvoSuite: automatic test suite generation for object-oriented software. In: ESEC/FSE, pp. 416–419 (2011)
Galeotti, J.P., Fraser, G., Arcuri, A.: Improving search-based test suite generation with dynamic symbolic execution. In: ISSRE, pp. 360–369 (2013)
Garg, P., Ivančić, F., Balakrishnan, G., Maeda, N., Gupta, A.: Feedback-directed unit test generation for C/C++ using concolic execution. In: ICSE, pp. 132–141 (2013)
Gligoric, M., Groce, A., Zhang, C., Sharma, R., Alipour, M.A., Marinov, D.: Comparing non-adequate test suites using coverage criteria. In: ISSTA, pp. 302–313 (2013)
Godefroid, P., Klarlund, N., Sen, K.: DART: directed automated random testing. In: PLDI, pp. 213–223 (2005)
Godefroid, P., Levin, M.Y., Molnar, D.: SAGE: whitebox fuzzing for security testing. Queue 10(1), 20:20–20:27 (2012)
Inkumsah, K., Xie, T.: Improving structural testing of object-oriented programs via integrating evolutionary testing and symbolic execution. In: ASE, pp. 297–306 (2008)
JaCoCo Java code coverage library. http://www.jacoco.org/jacoco
Jayaraman, K., Harvison, D., Ganesh, V.: jFuzz: a concolic whitebox fuzzer for Java. In: NFM, pp. 121–125 (2009)
Jaygarl, H., Kim, S., Xie, T., Chang, C.K.: OCAT: object capture-based automated testing. In: ISSTA, pp. 159–170 (2010)
Java PathFinder (JPF). http://babelfish.arc.nasa.gov/trac/jpf
Kähkönen, K., Launiainen, T., Saarikivi, O., Kauttio, J., Heljanko, K., Niemelä, I.: LCT: an open source concolic testing tool for Java programs. In: BYTECODE, pp. 75–80 (2011)
Khurshid, S., Păsăreanu, C.S., Visser, W.: Generalized symbolic execution for model checking and testing. In: Garavel, H., Hatcliff, J. (eds.) TACAS 2003. LNCS, vol. 2619, pp. 553–568. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-36577-X_40
Luckow, K., Dimjašević, M., Giannakopoulou, D., Howar, F., Isberner, M., Kahsai, T., Rakamarić, Z., Raman, V.: JDart: a dynamic symbolic analysis framework. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 442–459. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49674-9_26
Marcozzi, M., Bardin, S., Kosmatov, N., Papadakis, M., Prevosto, V., Correnson, L.: Time to clean your test objectives. In: ICSE, pp. 456–467 (2018)
McMinn, P.: Search-based software testing: past, present and future. In: 2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops, pp. 153–163 (2011)
Pacheco, C., Lahiri, S., Ernst, M., Ball, T.: Feedback-directed random test generation. In: ICSE, pp. 75–84 (2007)
Păsăreanu, C.S., Rungta, N., Visser, W.: Symbolic execution with mixed concrete-symbolic solving. In: ISSTA, pp. 34–44 (2011)
Prasetya, I.S.W.B.: Budget-aware random testing with T3: benchmarking at the SBST2016 testing tool contest. In: SBST, pp. 29–32 (2016)
Păsăreanu, C.S., Mehlitz, P.C., Bushnell, D.H., Gundy-Burlet, K., Lowry, M., Person, S., Pape, M.: Combining unit-level symbolic execution and system-level concrete execution for testing NASA software. In: ISSTA, pp. 15–26 (2008)
Rueda, U., Just, R., Galeotti, J.P., Vos, T.E.J.: Unit testing tool competition – round four. In: SBST, pp. 19–28 (2016)
Sakti, A., Pesant, G., Guéhéneuc, Y.G.: JTExpert at the fourth unit testing tool competition. In: SBST, pp. 37–40 (2016)
Sen, K., Agha, G.: CUTE and jCUTE: concolic unit testing and explicit path model-checking tools. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 419–423. Springer, Heidelberg (2006). https://doi.org/10.1007/11817963_38
Sen, K., Marinov, D., Agha, G.: CUTE: a concolic unit testing engine for C. In: ESEC/FSE, pp. 263–272 (2005)
The SF110 benchmark suite, July 2013. http://www.evosuite.org/experimental-data/sf110
Shafiei, N., van Breugel, F.: Automatic handling of native methods in Java PathFinder. In: SPIN, pp. 97–100 (2014)
Soot: A Java optimization framework. http://sable.github.io/soot
Stephens, N., Grosen, J., Salls, C., Dutcher, A., Wang, R., Corbetta, J., Shoshitaishvili, Y., Kruegel, C., Vigna, G.: Driller: augmenting fuzzing through selective symbolic execution. In: NDSS (2016)
Tanno, H., Zhang, X., Hoshino, T., Sen, K.: TesMa and CATG: automated test generation tools for models of enterprise applications. In: ICSE, pp. 717–720 (2015)
Thummalapenta, S., Xie, T., Tillmann, N., de Halleux, J., Su, Z.: Synthesizing method sequences for high-coverage testing. SIGPLAN Not. 46(10), 189–206 (2011)
Tillmann, N., de Halleux, J.: Pex–white box test generation for .NET. In: Beckert, B., Hähnle, R. (eds.) TAP 2008. LNCS, vol. 4966, pp. 134–153. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79124-9_10
Visser, W., Havelund, K., Brat, G., Park, S., Lerda, F.: Model checking programs. Autom. Softw. Eng. 10(2), 203–232 (2003)
White, B., Lepreau, J., Stoller, L., Ricci, R., Guruprasad, S., Newbold, M., Hibler, M., Barb, C., Joglekar, A.: An integrated experimental environment for distributed systems and networks. SIGOPS Oper. Syst. Rev. 36(SI), 255–270 (2002)
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Dimjašević, M., Howar, F., Luckow, K., Rakamarić, Z. (2018). Study of Integrating Random and Symbolic Testing for Object-Oriented Software. In: Furia, C., Winter, K. (eds.) Integrated Formal Methods. IFM 2018. Lecture Notes in Computer Science, vol. 11023. Springer, Cham. https://doi.org/10.1007/978-3-319-98938-9_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-98937-2
Online ISBN: 978-3-319-98938-9