
Empirical Software Engineering, Volume 11, Issue 1, pp 33–70

Prioritizing JUnit Test Cases: An Empirical Assessment and Cost-Benefits Analysis

  • Hyunsook Do
  • Gregg Rothermel
  • Alex Kinneer
Special Issue Paper

Abstract

Test case prioritization provides a way to run test cases with the highest priority earliest. Numerous empirical studies have shown that prioritization can improve a test suite's rate of fault detection, but the extent to which these results generalize is an open question because the studies have all focused on a single procedural language, C, and a few specific types of test suites. In particular, Java and the JUnit testing framework are being used extensively to build software systems in practice, and the effectiveness of prioritization techniques on Java systems tested under JUnit has not been investigated. We have therefore designed and performed a controlled experiment examining whether test case prioritization can be effective on Java programs tested under JUnit, and comparing the results to those achieved in earlier studies. Our analyses show that test case prioritization can significantly improve the rate of fault detection of JUnit test suites, but also reveal differences with respect to previous studies that can be related to the language and testing paradigm. To investigate the practical implications of these results, we present a set of cost-benefits models for test case prioritization, and show how the effectiveness differences observed can result in savings in practice, but vary substantially with the cost factors associated with particular testing processes.
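The "rate of fault detection" the abstract refers to is conventionally quantified in this literature by the APFD metric (Average Percentage of Faults Detected): given a prioritized ordering of n test cases that together expose m faults, APFD rewards orderings in which faults are revealed by early tests. A minimal sketch of the standard formula, assuming we already know, for each fault, the 1-based position of the first test in the ordering that exposes it:

```python
def apfd(first_fail_positions, num_tests):
    """Average Percentage of Faults Detected (APFD).

    first_fail_positions: for each fault, the 1-based position within the
        prioritized suite of the first test case that exposes it.
    num_tests: total number of test cases in the suite (n).

    Standard formula: APFD = 1 - (sum of positions)/(n*m) + 1/(2n),
    where m is the number of faults. Higher is better (max approaches 1).
    """
    m = len(first_fail_positions)
    n = num_tests
    return 1 - sum(first_fail_positions) / (n * m) + 1 / (2 * n)


# Example: a suite of 5 tests exposing 2 faults. An ordering that reveals
# them at positions 1 and 2 scores much higher than one revealing them
# at positions 4 and 5.
good = apfd([1, 2], 5)  # faults found early
bad = apfd([4, 5], 5)   # faults found late
```

Comparing `good` and `bad` for the same suite shows how a prioritization technique's benefit is measured: the orderings run the same tests, but the early-detection ordering yields the higher APFD.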

Keywords

Software maintenance · Regression testing · Testing object-oriented software · Test case prioritization · Empirical studies · Cost-benefits analysis



Copyright information

© Springer Science + Business Media, Inc. 2006

Authors and Affiliations

  1. Computer Science and Engineering Department, University of Nebraska–Lincoln, Lincoln, USA
