Extending Coverage Criteria by Evaluating Their Robustness to Code Structure Changes

  • Angelo Gargantini
  • Marco Guarnieri
  • Eros Magri
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7641)


Code coverage is commonly used as a measure of testing quality and as an adequacy criterion. Unfortunately, code coverage is very sensitive to modifications of the code structure: the same test suite can achieve different degrees of coverage on the same program written in two syntactically different but semantically equivalent ways. For this reason, code coverage can provide the tester with misleading information.
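To illustrate this sensitivity, consider branch coverage of two semantically equivalent versions of the same decision. The sketch below (our own toy example with hypothetical branch labels, not taken from the paper) shows a suite that fully covers the flat form but not a nested rewrite:

```python
# Illustrative only: the same two-test suite covers all branches of a
# flat decision `a and b`, but not of a semantically equivalent rewrite
# in which the decision is split into two nested ones.

def f_flat(a, b, hits):
    # one decision: a true branch and a false branch
    if a and b:
        hits.add("T")
        return 1
    hits.add("F")
    return 0

def f_nested(a, b, hits):
    # same input/output behaviour, but four branches
    if a:
        hits.add("aT")
        if b:
            hits.add("bT")
            return 1
        hits.add("bF")
        return 0
    hits.add("aF")
    return 0

suite = [(True, True), (False, True)]  # two test cases

flat_hits, nested_hits = set(), set()
for a, b in suite:
    # the two versions agree on every input
    assert f_flat(a, b, flat_hits) == f_nested(a, b, nested_hits)

flat_cov = len(flat_hits) / 2     # f_flat has 2 branches: T, F
nested_cov = len(nested_hits) / 4  # f_nested has 4: aT, aF, bT, bF
print(flat_cov, nested_cov)  # 1.0 0.75
```

The suite reports 100% branch coverage on one form and only 75% on the other, even though the programs compute the same function.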

In order to understand how a testing criterion is affected by code structure modifications, we introduce a way to measure the sensitivity of coverage to code changes. We formalize modifications of the code structure as semantics-preserving code-to-code transformations, and we propose a framework that evaluates coverage robustness to these transformations, extending existing coverage criteria.
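A robustness check of this kind can be sketched as follows. This is a toy model under our own assumptions: programs are represented as callables that report which hypothetical branch labels a test input exercises, paired with their total branch count; the paper's formalization is more general.

```python
# Sketch: a suite is robust w.r.t. a set of semantics-preserving
# transformations if its coverage is unchanged on every variant.

def is_robust(program, variants, suite, eps=1e-9):
    """program/variants: pairs (fn, n_branches), where fn maps a test
    input to the set of branch labels it exercises."""
    def coverage(p):
        fn, total = p
        hit = set()
        for t in suite:
            hit |= fn(t)
        return len(hit) / total

    base = coverage(program)
    return all(abs(coverage(v) - base) <= eps for v in variants)

# Flat decision `a and b` (2 branches) vs. its nested rewrite (4 branches).
flat = (lambda t: {"T"} if (t[0] and t[1]) else {"F"}, 2)
nested = (lambda t: ({"aT", "bT"} if t[1] else {"aT", "bF"})
          if t[0] else {"aF"}, 4)

suite = [(True, True), (False, True)]
print(is_robust(flat, [nested], suite))  # False: coverage drops on the rewrite
```

A richer suite, e.g. `[(True, True), (True, False), (False, False)]`, achieves full coverage on both versions and is therefore robust with respect to this transformation.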

This allows us to define which programs and which test suites can be considered robust with respect to a given set of transformations. We can identify when the obtained coverage is fragile, and we extend the concept of coverage criterion by introducing an index that measures the fragility of the coverage achieved by a given test suite. We show how to compute this fragility index and demonstrate that even well-written industrial code and realistic test suites can be fragile. Moreover, we suggest how to deal with this kind of testing fragility.
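As an illustration only, one plausible way to quantify such fragility is the worst relative coverage drop observed over the transformed variants; the paper's actual definition of the index may differ.

```python
# Hypothetical fragility measure (our assumption, not the paper's
# definition): the largest relative coverage drop when the same suite
# is measured on semantics-preserving variants of the program.

def fragility_index(cov_original, cov_variants):
    """All coverages are in [0, 1]; returns 0.0 for a fully robust suite."""
    if cov_original == 0 or not cov_variants:
        return 0.0
    return max(0.0, (cov_original - min(cov_variants)) / cov_original)

# A suite with full coverage on the original but 75% on one rewrite
# loses a quarter of its coverage in the worst case.
print(fragility_index(1.0, [1.0, 0.75]))  # 0.25
```

Under this reading, an index of 0 means the coverage is stable under every considered transformation, while larger values flag suites whose reported coverage depends on how the program happens to be written.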


Keywords: Test Suite, Coverage Criterion, Code Coverage, Testability Transformation, Code Transformation



Copyright information

© IFIP International Federation for Information Processing 2012

Authors and Affiliations

  • Angelo Gargantini (1)
  • Marco Guarnieri (1)
  • Eros Magri (1)

  1. Dip. di Ing. dell’Informazione e Metodi Matematici, Università di Bergamo, Italy