
Classifying Test Suite Effectiveness via Model Inference and ROBDDs

  • Hermann Felbinger
  • Ingo Pill
  • Franz Wotawa
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9762)

Abstract

Deciding whether a given test suite is effective enough is certainly a challenging task. Focusing on a software program’s functionality, we propose in this paper a new method that leverages Boolean functions as an abstract reasoning format. That is, we use machine learning to infer a special binary decision diagram from the considered test suite and, if possible, extract a total variable order from it. Intuitively, if an ROBDD derived from the Boolean functions representing the specification of the program under test coincides with the one derived from the test suite (using the same variable order), we conclude that the test suite is effective enough: any program that passes such a test suite should show the desired input-output behavior. In this paper, we present the corresponding algorithms of our approach together with their respective proofs. First experimental results illustrate the approach’s practicality and viability.
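For intuition only, the following minimal Python sketch (not the authors’ implementation) illustrates the comparison step described above: it builds ROBDDs for a hypothetical specification spec and a stand-in inferred model under a fixed, assumed variable order ORDER, applies the usual reduction rules, and checks whether the two diagrams coincide. The machine-learning step that would infer the model from a test suite is deliberately stubbed out.

    # Sketch: compare the ROBDD of a specification with the ROBDD of a model
    # assumed to have been inferred from a test suite, using one shared
    # variable order. Names spec, inferred, ORDER are illustrative assumptions.

    ORDER = ["a", "b", "c"]          # hypothetical total variable order

    def spec(a, b, c):               # hypothetical Boolean specification
        return (a and b) or c

    def inferred(a, b, c):           # stand-in for the model learned from tests
        return (a and b) or c

    def robdd(f, unique=None, i=0, env=None):
        """Shannon-expand f along ORDER; terminals are 0/1; hash-cons nodes."""
        unique = {} if unique is None else unique
        env = {} if env is None else env
        if i == len(ORDER):
            return int(f(**env))     # fully assigned: evaluate to a terminal
        var = ORDER[i]
        low  = robdd(f, unique, i + 1, {**env, var: False})
        high = robdd(f, unique, i + 1, {**env, var: True})
        if low == high:              # reduction rule: drop redundant tests
            return low
        key = (var, low, high)       # reduction rule: share isomorphic subgraphs
        return unique.setdefault(key, key)

    if __name__ == "__main__":
        same = robdd(spec) == robdd(inferred)
        print("test suite deemed effective" if same else "ROBDDs differ")

Because ROBDDs are canonical for a fixed variable order, structural equality of the two diagrams is equivalent to equality of the represented Boolean functions, which is what the effectiveness verdict in the sketch relies on.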

Keywords

Software testing · Machine learning · BDD · ROBDD

Acknowledgement

Parts of this work were accomplished at the VIRTUAL VEHICLE Research Center in Graz, Austria. The authors would like to acknowledge the financial support of the European Commission under FP7 grant agreement number 608770 (project “edas”), and of the COMET K2 – Competence Centers for Excellent Technologies Programme of the Austrian Federal Ministry for Transport, Innovation and Technology (bmvit), the Austrian Federal Ministry of Science, Research and Economy (bmwfw), the Austrian Research Promotion Agency (FFG), the Province of Styria, and the Styrian Business Promotion Agency (SFG). They would furthermore like to thank their supporting industrial and scientific project partners, namely AVL List and Graz University of Technology.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Virtual Vehicle Research Center, Graz, Austria
  2. Institute for Software Technology, TU Graz, Graz, Austria
