
Gray-Box Conformance Testing for Symbolic Reactive State Machines

  • Masoumeh Taromirad
  • Mohammad Reza Mousavi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10522)

Abstract

Model-based testing (MBT) is typically a black-box testing technique. Generated test suites may therefore leave untested gaps in a given implementation under test (IUT). We propose an approach that uses structural and behavioural information extracted from the implementation domain to generate effective and efficient test suites. Our approach considers both specification models and implementation models, and combines them into an enriched test model from which test suites are automatically generated. We show that the proposed approach is sound and exhaustive, and that the generated test suites cover both the specification and the implementation. We examine the applicability and effectiveness of our approach by applying it to a well-known example from the railway domain.
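The sketch below is not the authors' algorithm; it is only a minimal illustration, under stated assumptions, of the gray-box idea summarised in the abstract: a specification model and a model extracted from the implementation are merged into an enriched test model, from which a transition-covering test suite is generated. All names (SimpleFSM, merge_models, generate_tests) and the toy models are hypothetical.

# Hypothetical sketch of the gray-box idea: merge a specification model with an
# implementation-derived model and generate tests covering the merged model.

from collections import deque

class SimpleFSM:
    """A finite state machine given as an initial state and a set of
    (source_state, input_label, target_state) transitions."""
    def __init__(self, initial, transitions):
        self.initial = initial
        self.transitions = set(transitions)

def merge_models(spec, impl):
    """Union of the transition relations: an 'enriched' test model containing
    behaviour described by the specification as well as behaviour extracted
    from the implementation."""
    return SimpleFSM(spec.initial, spec.transitions | impl.transitions)

def generate_tests(model):
    """Breadth-first exploration producing one input sequence per reachable
    transition, i.e. a transition-coverage test suite for the merged model."""
    tests = []
    queue = deque([(model.initial, [])])
    seen_states = {model.initial}
    covered = set()
    while queue:
        state, prefix = queue.popleft()
        for (src, label, dst) in sorted(model.transitions):
            if src != state:
                continue
            if (src, label, dst) not in covered:
                covered.add((src, label, dst))
                tests.append(prefix + [label])
            if dst not in seen_states:
                seen_states.add(dst)
                queue.append((dst, prefix + [label]))
    return tests

if __name__ == "__main__":
    # Toy specification, and a toy model extracted from the implementation
    # that exhibits an extra 'reset' transition not in the specification.
    spec = SimpleFSM("idle", {("idle", "start", "busy"), ("busy", "stop", "idle")})
    impl = SimpleFSM("idle", {("idle", "start", "busy"), ("busy", "reset", "idle")})
    enriched = merge_models(spec, impl)
    for t in generate_tests(enriched):
        print(t)

On these toy models the generated suite exercises the specified start/stop behaviour as well as the implementation-only reset transition, which a purely specification-based (black-box) suite would miss.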


Acknowledgements

This work was partially supported by ELLIIT, the strategic research area funded by the Swedish government. The work of M.R. Mousavi has also been supported by the Swedish Research Council (Vetenskapsrådet) under award number 621-2014-5057, and by the Swedish Knowledge Foundation in the context of the AUTO-CAAS project.


Copyright information

© IFIP International Federation for Information Processing 2017

Authors and Affiliations

  1. Centre for Research on Embedded Systems (CERES), Halmstad University, Halmstad, Sweden
