Increasing Functional Coverage by Inductive Testing: A Case Study

  • Neil Walkinshaw
  • Kirill Bogdanov
  • John Derrick
  • Javier Paris
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6435)


This paper addresses the challenge of generating test sets that achieve functional coverage in the absence of a complete specification. The inductive testing technique works by probing the system behaviour with tests, using the test results to construct an internal model of software behaviour, and then using that model to generate further tests. The idea itself is not new, but prior attempts to implement it have been hampered by expense, poor scalability, and inflexibility with respect to testing strategies. Past inductive testing techniques have also tended to focus on the inferred models rather than on the suitability of the test sets generated in the process. This paper presents a flexible implementation of the inductive testing technique and demonstrates its application in a case study on the Linux TCP stack implementation. The evaluation shows that the generated test sets achieve much better coverage of the system than comparable non-inductive techniques.
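The iterative loop described above — probe the system, infer a model from the observations, derive new tests from the model — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the system under test, the prefix-tree "model", and all function names are hypothetical stand-ins (a real tool would use a proper state-merging inference algorithm such as QSM and a genuine system interface).

```python
ALPHABET = ["connect", "send", "close"]

def sut(sequence):
    """Toy stand-in for the system under test: 'send' and 'close' only
    succeed while a connection is open. Returns True iff the trace is valid."""
    connected = False
    for event in sequence:
        if event == "connect":
            connected = True
        elif event == "close":
            if not connected:
                return False
            connected = False
        elif event == "send":
            if not connected:
                return False
    return True

def infer_model(observations):
    """Crudest possible inferred model: the set of all prefixes of
    accepted traces (a prefix-tree, with no state merging)."""
    accepted = {seq for seq, ok in observations.items() if ok}
    prefixes = set()
    for seq in accepted:
        for i in range(len(seq) + 1):
            prefixes.add(seq[:i])
    return prefixes

def generate_tests(model, max_len):
    """Derive new candidate tests by extending known-valid prefixes."""
    candidates = set()
    for prefix in model:
        for event in ALPHABET:
            seq = prefix + (event,)
            if len(seq) <= max_len:
                candidates.add(seq)
    return candidates

def inductive_testing(rounds=3, max_len=4):
    """Alternate between model inference and test execution."""
    observations = {(): True}  # the empty trace is trivially valid
    for _ in range(rounds):
        model = infer_model(observations)
        for test in generate_tests(model, max_len):
            if test not in observations:
                observations[test] = sut(test)
    return observations
```

Each round grows the inferred model only along behaviours the system actually accepted, so the generated tests systematically extend observed valid behaviour rather than sampling sequences blindly — the intuition behind the coverage gains reported in the paper.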


Keywords: State Machine · Transmission Control Protocol · Testing Technique · Inductive Inference · Labelled Transition System



Copyright information

© IFIP International Federation for Information Processing 2010

Authors and Affiliations

  • Neil Walkinshaw (1)
  • Kirill Bogdanov (2)
  • John Derrick (2)
  • Javier Paris (3)

  1. Department of Computer Science, The University of Leicester, Leicester, UK
  2. Department of Computer Science, The University of Sheffield, Sheffield, UK
  3. Department of Computer Science, University of A Coruña, A Coruña, Spain
