
Minimizing Characterizing Sets

  • Kadir Bulut
  • Guy Vincent Jourdan
  • Uraz Cengiz Türker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12018)

Abstract

A characterizing set (CS) for a given finite state machine (FSM) is a set of input sequences such that, for any pair of distinct states of the FSM, there exists an input sequence in the CS that separates these states. There are techniques that generate test sequences with guaranteed fault detection power using CSs. The number of inputs and of input sequences in a CS directly affects the cost of testing: the more elements a CS contains, the longer it takes to generate the tests. Despite the direct benefit of using CSs with fewer sequences, no previous work has focused on generating minimum-sized characterizing sets. In this paper, we show that constructing a minimum-sized CS is a PSPACE-hard problem and that the corresponding decision problem is PSPACE-complete. We then introduce a heuristic to construct CSs with fewer input sequences. We evaluate the proposed algorithm using randomly generated FSMs as well as benchmark FSMs. The results are promising: on average, the proposed method reduces the number of test sequences by 37.3% and the total length of the tests by 34.6%.
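
A minimal sketch in Python of the definition above (not the authors' implementation): a set W of input sequences is a characterizing set if every pair of distinct states produces different output sequences on at least one sequence in W. The three-state Mealy machine and the candidate sets below are illustrative assumptions, not examples from the paper.

from itertools import combinations

# FSM given as {state: {input: (next_state, output)}} -- a toy example, not from the paper.
FSM = {
    "s1": {"a": ("s2", 0), "b": ("s1", 1)},
    "s2": {"a": ("s3", 0), "b": ("s1", 0)},
    "s3": {"a": ("s1", 1), "b": ("s2", 0)},
}

def output_sequence(fsm, state, inputs):
    """Apply the input sequence from `state` and collect the produced outputs."""
    outputs = []
    for x in inputs:
        state, out = fsm[state][x]
        outputs.append(out)
    return tuple(outputs)

def is_characterizing_set(fsm, candidate):
    """True iff every pair of distinct states is separated by some sequence in `candidate`."""
    for s, t in combinations(fsm, 2):
        if all(output_sequence(fsm, s, w) == output_sequence(fsm, t, w) for w in candidate):
            return False  # no sequence in `candidate` tells s and t apart
    return True

print(is_characterizing_set(FSM, [("a", "a")]))  # True: "aa" separates all three states
print(is_characterizing_set(FSM, [("b",)]))      # False: s2 and s3 both output 0 on "b"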

Keywords

Model-based testing · Characterization set · Complexity

Notes

Acknowledgements

This work is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant 117E987.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Gebze Teknik Üniversitesi, Gebze, Turkey
  2. University of Ottawa, Ottawa, Canada
