Abstract
Recent advances in software testing introduced parameterized unit tests (PUT), which accept parameters, unlike conventional unit tests (CUT), which do not. PUTs offer greater fault-detection capability than CUTs, since PUTs describe the behaviors of methods under test for all test arguments. In practice, existing applications often include manually written CUTs. Given these CUTs, natural questions arise: can these CUTs be retrofitted as PUTs to leverage the benefits of PUTs, and what costs and benefits does such retrofitting involve? To address these questions, in this paper we conduct an empirical study investigating whether existing CUTs can be retrofitted as PUTs with feasible effort while achieving the benefits of PUTs in terms of additional fault-detection capability and code coverage. We also propose a methodology, called test generalization, that helps systematically retrofit existing CUTs as PUTs. Our results on three real-world open-source applications (≈ 4.6 KLOC) show that the retrofitted PUTs detect 19 new defects not detected by the existing CUTs, and also increase branch coverage by 4% on average (with a maximum increase of 52% for one class under test and 10% for one application under analysis) with feasible effort.
© 2011 Springer-Verlag Berlin Heidelberg
Thummalapenta, S., Marri, M.R., Xie, T., Tillmann, N., de Halleux, J. (2011). Retrofitting Unit Tests for Parameterized Unit Testing. In: Giannakopoulou, D., Orejas, F. (eds) Fundamental Approaches to Software Engineering. FASE 2011. Lecture Notes in Computer Science, vol 6603. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-19811-3_21
Print ISBN: 978-3-642-19810-6
Online ISBN: 978-3-642-19811-3