Effectiveness of Combinatorial Test Design with Executable Business Processes

  • Daniel Lübke
  • Joel Greenyer
  • David Vatlin
Chapter

Abstract

Executable business processes contain complex business rules, control flow, and data transformations, which makes designing good tests difficult and, in current practice, dependent on extensive expert knowledge. To reduce the time spent on, and errors made in, manual test design, we investigated using automatic combinatorial test design (CTD) instead. CTD is a test selection method that aims to cover all interactions among small subsets of input parameters. For this investigation, we integrated CTD algorithms into an existing framework that combines equivalence class partitioning with automatic BPELUnit test generation. Based on several industrial cases, we evaluated the effectiveness and efficiency of test suites selected by CTD algorithms against test suites designed by experts and randomly selected tests. The experiments show that CTD tests are not more efficient than expert-designed tests, but that they are a sufficiently effective automatic alternative.
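The greedy flavor of CTD summarized above can be illustrated with a minimal sketch of AETG-style pairwise (2-way) test selection: tests are picked one at a time so that each new test covers as many still-uncovered value pairs as possible. This is an assumption-laden illustration, not the chapter's implementation; the parameter names and values are invented for the example.

```python
# Hedged sketch of AETG-style greedy pairwise (2-way) test selection.
# Parameter names/values are illustrative, not from the industrial cases.
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily select tests until every value pair of every two
    parameters is covered at least once (2-way coverage)."""
    names = list(params)
    # All (param_a, value_a, param_b, value_b) tuples that must be covered.
    uncovered = {
        (a, va, b, vb)
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    # Candidate pool: the full Cartesian product of all parameter values.
    candidates = [dict(zip(names, vs)) for vs in product(*params.values())]
    suite = []
    while uncovered:
        # Number of still-uncovered pairs a candidate test would cover.
        def gain(t):
            return sum(
                (a, t[a], b, t[b]) in uncovered
                for a, b in combinations(names, 2)
            )
        best = max(candidates, key=gain)
        suite.append(best)
        uncovered -= {
            (a, best[a], b, best[b]) for a, b in combinations(names, 2)
        }
    return suite

# Three illustrative process-input parameters with two values each.
params = {
    "role": ["buyer", "seller"],
    "channel": ["web", "api"],
    "priority": ["low", "high"],
}
suite = pairwise_suite(params)
# Pairwise coverage needs fewer tests than the full 2*2*2 = 8 combinations.
print(len(suite))  # → 4
```

The point of the sketch is the selection criterion, not efficiency: exhaustive testing grows exponentially with the number of parameters, while pairwise suites grow only logarithmically, which is why CTD is attractive as an automatic alternative to expert test design.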

Keywords

Executable business processes · Testing · Combinatorial test design · Industrial case study · IPOG · AETG

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Leibniz Universität Hannover, Fachgebiet Software Engineering, Hannover, Germany