The Effectiveness of T-Way Test Data Generation

  • Michael Ellims
  • Darrel Ince
  • Marian Petre
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5219)


This paper reports the results of a study comparing the effectiveness of automatically generated tests, constructed using random and t-way combinatorial techniques, on safety-related industrial code, with effectiveness measured by a mutation adequacy criterion. Hand-generated test vectors constructed during development provide a reference point and establish a minimum acceptance criterion. The study shows that 2-way testing is not adequate, as measured by mutant kill rate, when compared with hand-generated test sets of similar size, but that higher-strength t-way test sets can perform at least as well. To reduce the computational overhead of executing large numbers of vectors against large numbers of mutants, a staged, optimising approach to applying t-way tests is proposed and evaluated; it shows improvements in both execution time and final test set size.
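As a sketch of the combinatorial idea the abstract refers to, a minimal greedy pairwise (2-way) generator might look like the following. This is illustrative only: the study evaluated existing tools (e.g. AETG, IPOG, Jenny), not this code, and the `pairwise_suite` function and its parameter model are assumptions for the example.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy construction of a 2-way (pairwise) covering test suite.

    params: one list of candidate values per input parameter.
    Returns a list of test vectors (tuples) such that every pair of
    values, for every pair of parameters, appears in at least one test.
    """
    # All (param_i, value_i, param_j, value_j) combinations still to cover.
    uncovered = {
        (i, vi, j, vj)
        for i, j in combinations(range(len(params)), 2)
        for vi in params[i]
        for vj in params[j]
    }
    candidates = list(product(*params))  # full factorial; fine for small models
    suite = []
    while uncovered:
        # Count how many still-uncovered pairs a candidate test would cover.
        def gain(test):
            return sum(
                (i, test[i], j, test[j]) in uncovered
                for i, j in combinations(range(len(test)), 2)
            )
        best = max(candidates, key=gain)
        suite.append(best)
        uncovered -= {
            (i, best[i], j, best[j])
            for i, j in combinations(range(len(best)), 2)
        }
    return suite

# Three parameters with three values each: 27 exhaustive tests,
# but pairwise coverage needs far fewer.
params = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
suite = pairwise_suite(params)
print(len(suite), "pairwise tests vs", 3 ** 3, "exhaustive")
```

Higher-strength t-way generation works the same way with t-tuples instead of pairs; practical tools replace the full-factorial candidate pool with smarter construction, since it grows exponentially in the number of parameters.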


Keywords: software testing, random testing, automated test generation, unit test, combinatorial design, pairwise testing, t-way testing, mutation
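The mutation adequacy measure used in the study can be illustrated with a toy example. The function, the hand-written mutants, and the test vectors below are all hypothetical, standing in for a mutation tool (such as the authors' Csaw) applying operator mutations to C source:

```python
# Original function under test: a simple range check.
def in_range(x, lo, hi):
    return lo <= x <= hi

# Two "mutants", each swapping one relational operator (<= -> <),
# as a mutation tool would do mechanically to the source.
mutants = [
    ("lo <  x", lambda x, lo, hi: lo < x <= hi),
    ("x <  hi", lambda x, lo, hi: lo <= x < hi),
]

# Test vectors; the boundary values 0 and 10 are what kill the mutants.
tests = [(0, 0, 10), (10, 0, 10), (5, 0, 10), (-1, 0, 10)]

# A mutant is "killed" if any test distinguishes it from the original.
killed = sum(
    any(m(*t) != in_range(*t) for t in tests)
    for _, m in mutants
)
print(f"mutation score: {killed}/{len(mutants)}")  # prints "mutation score: 2/2"
```

A test set's mutation score (killed mutants divided by total non-equivalent mutants) is the adequacy measure the paper uses to compare hand-generated, random, and t-way test sets.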




References

  1. Duran, J., Ntafos, S.: An Evaluation of Random Testing. IEEE Trans. Softw. Eng. 10(4), 438–444 (1984)
  2. Gallagher, M.J., Narasimhan, V.L.: ADTEST: A Test Data Generation Suite for Ada Software Systems. IEEE Trans. Softw. Eng. 23(8), 473–484 (1997)
  3. Cohen, D.M., et al.: The AETG System: An Approach to Testing Based on Combinatorial Design. IEEE Trans. Softw. Eng. 23(7), 437–444 (1997)
  4. Diamond, W.J.: Practical Experiment Design for Engineers and Scientists. John Wiley & Sons, New York (2001)
  5. Mandl, R.: Orthogonal Latin Squares: An Application of Experiment Design to Compiler Testing. Commun. ACM 28(10), 1054–1058 (1985)
  6. Sherwood, G.: Effective Testing of Factor Combinations. In: Third Int'l Conf. Software Testing, Analysis and Review, Software Quality Eng., pp. 151–166 (1994)
  7. Brownlie, R., Prowse, J., Phadke, M.S.: Robust Testing of AT&T PMX/StarMAIL Using OATS. AT&T Technical Journal 71(3), 41–47 (1992)
  8. Cohen, D.M., et al.: The Automatic Efficient Test Generator (AETG) System. In: Proceedings 5th International Symposium on Software Reliability Engineering, pp. 303–309. IEEE Computer Society, Los Alamitos (1994)
  9. Dalal, S., et al.: Model-based Testing of a Highly Programmable System. In: Proc. of the Ninth International Symposium on Software Reliability Engineering. IEEE Computer Society, Los Alamitos (1998)
  10. Dalal, S.R., et al.: Model-based Testing in Practice. In: Proc. of the 21st Int'l Conf. on Software Engineering, pp. 285–294. IEEE Computer Society, Los Alamitos (1999)
  11. Smith, B., Feather, M.S., Muscettola, N.: Challenges and Methods in Testing the Remote Agent Planner. In: Proceedings of the Fifth International Conference on Artificial Intelligence Planning Systems, pp. 254–263. AAAI Press, Menlo Park (2000)
  12. Wallace, D.R., Kuhn, D.R.: Failure Modes in Medical Device Software: An Analysis of 15 Years of Recall Data. International Journal of Reliability, Quality and Safety Engineering 8(4), 351–371 (2001)
  13. Kuhn, D.R., Reilly, M.J.: An Investigation of the Applicability of Design of Experiments to Software Testing. In: Proceedings of the 27th Annual NASA Goddard Software Engineering Workshop (SEW-27 2002). IEEE Computer Society, Los Alamitos (2002)
  14. Kuhn, D.R., Wallace, D.R., Gallo, A.M.: Software Fault Interactions and Implications for Software Testing. IEEE Trans. Softw. Eng. 30(6), 418–421 (2004)
  15. Dunietz, I.S., et al.: Applying Design of Experiments to Software Testing: Experience Report. In: Proc. of the 19th Int'l Conf. on Software Eng., pp. 205–215. ACM Press, New York (1997)
  16. Nair, V.N., et al.: A Statistical Assessment of Some Software Testing Strategies and Application of Experimental Design Techniques. Statistica Sinica 8, 165–184 (1998)
  17. Kobayashi, N., Tsuchiya, T., Kikuno, T.: Non-Specification-Based Approaches to Logic Testing for Software. Information and Software Technology 44(2), 113–121 (2002)
  18. Schroeder, P.J., Bolaki, P., Gopu, V.: Comparing the Fault Detection Effectiveness of N-way and Random Test Suites. In: ISESE 2004: Proceedings of the 2004 International Symposium on Empirical Software Engineering, pp. 49–59. IEEE Computer Society, Los Alamitos (2004)
  19. Grindal, M., et al.: An Evaluation of Combination Strategies for Test Case Selection. Technical Report, Department of Computer Science, University of Skövde (2003)
  20. Malaiya, Y.K.: Antirandom Testing: Getting the Most Out of Black-Box Testing. In: Proceedings, Sixth International Symposium on Software Reliability Engineering, pp. 86–95 (1995)
  21. Hamlet, R.G.: Testing Programs with the Aid of a Compiler. IEEE Trans. Softw. Eng. 3(4), 279–290 (1977)
  22. DeMillo, R.A., Lipton, R.J., Sayward, F.G.: Hints on Test Data Selection: Help for the Practising Programmer. Computer, 34–41 (1978)
  23. Daran, M., Thévenod-Fosse, P.: Software Error Analysis: A Real Case Study Involving Real Faults and Mutations. SIGSOFT Softw. Eng. Notes 21(3), 158–171 (1996)
  24. Frankl, P.G., Weiss, S.N., Hu, C.: All-uses vs. Mutation Testing: An Experimental Comparison of Effectiveness. J. Syst. Softw. 38(3), 235–253 (1997)
  25. Zhan, Y., Clark, J.A.: Search-Based Mutation Testing for Simulink Models. In: Proc. of the 2005 Conference on Genetic and Evolutionary Computation, pp. 1061–1068. ACM Press, New York (2005)
  26. Offutt, A.J., Voas, J.M.: Subsumption of Condition Coverage Techniques by Mutation Testing. Technical Report, Dept. of Information and Software Systems Engineering, George Mason Univ., Fairfax, VA (1996)
  27. Andrews, J.H., Briand, L.C., Labiche, Y.: Is Mutation an Appropriate Tool for Test Experiments? In: Proc. of the 27th Int'l Conf. on Software Engineering, pp. 402–411. ACM Press, New York (2005)
  28. Anon.: Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems, Part 1: General Requirements. BS EN 61508-1:2002, British Standards (2002)
  29. Woodward, M.R., Hedley, D., Hennell, M.A.: Experience with Path Analysis and Testing of Programs. IEEE Trans. Softw. Eng. 6(6), 228–278 (1980)
  30. Ellims, M., Bridges, J., Ince, D.C.: The Economics of Unit Testing. Empirical Softw. Eng. 11(1), 5–31 (2006)
  31. Ellims, M., Ince, D., Petre, M.: The Csaw C Mutation Tool: Initial Results. In: Mutation 2007. IEEE Computer Society, Los Alamitos (2007)
  32. Lei, Y., et al.: IPOG: A General Strategy for T-Way Software Testing. In: 14th Annual IEEE Int'l Conf. and Workshops on the Engineering of Computer-Based Systems (ECBS 2007), pp. 549–556. IEEE Computer Society, Los Alamitos (2007)
  33. Jenny (accessed June 2007)
  34. Michael, C.C., McGraw, G., Schatz, M.A.: Generating Software Test Data by Evolution. IEEE Trans. Softw. Eng. 27(12), 1085–1110 (2001)
  35. Wichmann, B.A., Hill, I.D.: Generating Good Pseudo-Random Numbers. Computational Statistics & Data Analysis 51(3), 1614–1622 (2006)
  36. Ammann, P.E., Offutt, J.: Using Formal Methods to Derive Test Frames in Category-Partition Testing. In: Proc. of the 9th Annual Conf. on Computer Assurance (COMPASS 1994), pp. 824–830. IEEE Computer Society, Los Alamitos (1994)
  37. Offutt, A.J.: A Practical System for Mutation Testing: Help for the Common Programmer. In: Proc. of the IEEE Int'l Test Conference on TEST: The Next 25 Years, pp. 824–830. IEEE Computer Society, Los Alamitos (1994)
  38. Offutt, J.A., Pan, J., Voas, J.M.: Procedures for Reducing the Size of Coverage Based Test Sets. In: Twelfth Int'l Conf. on Testing Computer Software, pp. 111–123 (1995)
  39. Bottaci, L.: Instrumenting Programs with Flag Variables for Test Data Search by Genetic Algorithms. In: Proc. of the Genetic and Evolutionary Computation Conference. Morgan Kaufmann Publishers, San Francisco (2002)
  40. Gotlieb, A.: Exploiting Symmetries to Test Programs. In: Proceedings of the 14th International Symposium on Software Reliability Engineering, p. 365. IEEE Computer Society, Los Alamitos (2003)
  41. Offutt, A.J., et al.: An Experimental Determination of Sufficient Mutant Operators. ACM Trans. Softw. Eng. Methodol. 5(2), 99–118 (1996)
  42. Dillon, E., Meudec, C.: Automatic Test Data Generation from Embedded C Code. In: Heisel, M., Liggesmeyer, P., Wittmann, S. (eds.) SAFECOMP 2004. LNCS, vol. 3219, pp. 180–194. Springer, Heidelberg (2004)
  43. Copeland, L.: A Practitioner's Guide to Software Test Design. Artech House Publishers, Boston (2004)
  44. Ellims, M.: The Csaw Mutation Tool Users Guide. Technical Report, Department of Computer Science, Open University (2007)
  45. Ellims, M., Ince, D., Petre, M.: AETG vs. Man: An Assessment of the Effectiveness of Combinatorial Test Data Generation. Technical Report, Department of Computer Science, Open University (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Michael Ellims¹
  • Darrel Ince²
  • Marian Petre²
  1. Pi-Shurlok, Milton Hall, Cambridge, UK
  2. Dept. of Computing, Open University, Walton Hall, Milton Keynes, UK
