Software & Systems Modeling, Volume 16, Issue 2, pp 417–441

Automated product line test case selection: industrial case study and controlled experiment

  • Shuai Wang
  • Shaukat Ali
  • Arnaud Gotlieb
  • Marius Liaaen
Special Section Paper

Abstract

Automated test case selection for a new product in a product line is challenging for several reasons. First, the variability within the product line must be captured systematically; second, the reusable test cases relevant for testing a new product must be identified in the repository. The objective of such an automated process is to reduce the overall selection effort (e.g., selection time) while achieving an acceptable level of coverage of the tested functionalities. In this paper, we propose a systematic and automated methodology that uses a feature model for testing (FM_T) to capture the commonalities and variabilities of a product line and a component family model for testing (CFM_T) to capture the overall structure of test cases in the repository. With our methodology, a test engineer does not need to go through the repository manually to select a relevant set of test cases for a new product. Instead, the test engineer only selects a set of relevant features for the product using FM_T at a higher level of abstraction, and a set of relevant test cases is selected automatically. We evaluated our methodology in three different ways: (1) we applied it to a product line of video conferencing systems called Saturn developed by Cisco, and the results show that it reduces the selection effort significantly; (2) we conducted a questionnaire-based study to solicit the views of the test engineers who were involved in developing FM_T and CFM_T, and the results show that they are positive about adopting our methodology and models (FM_T and CFM_T) in their current practice; (3) we conducted a controlled experiment with 20 graduate students to assess the performance (i.e., cost, effectiveness, and efficiency) of our automated methodology compared to the manual approach. The results showed that our methodology is cost-effective compared to the manual approach and, at the same time, that its efficiency is not affected by the increased complexity of products.
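The core idea of the abstract — a test engineer picks features in a feature model, and test cases are selected automatically from the repository via a feature-to-test-case mapping — can be sketched as follows. This is a minimal illustration under assumed data structures; the feature names, the `requires` constraints, and the mapping are hypothetical and are not taken from the paper's actual FM_T or CFM_T models.

```python
# Hypothetical sketch of feature-based test case selection. FM_T is
# approximated by a set of features plus cross-tree "requires" constraints;
# CFM_T is approximated by a mapping from features to reusable test cases.
# All names below are illustrative assumptions.

# Cross-tree constraints: selecting a feature pulls in the features it requires.
FEATURE_REQUIRES = {
    "HD1080p": {"VideoCall"},     # HD video presupposes basic video calling
    "DualScreen": {"VideoCall"},
}

# Repository structure: each feature maps to the test cases that exercise it.
FEATURE_TO_TESTS = {
    "VideoCall": ["TC_call_setup", "TC_call_teardown"],
    "HD1080p": ["TC_hd_resolution"],
    "DualScreen": ["TC_dual_display"],
    "AudioOnly": ["TC_audio_call"],
}

def select_test_cases(selected_features):
    """Close the selected feature set under 'requires' constraints,
    then collect the test cases mapped to every feature in the closure."""
    closure = set(selected_features)
    frontier = list(selected_features)
    while frontier:
        feature = frontier.pop()
        for required in FEATURE_REQUIRES.get(feature, set()):
            if required not in closure:
                closure.add(required)
                frontier.append(required)
    tests = []
    for feature in sorted(closure):  # deterministic order for readability
        tests.extend(FEATURE_TO_TESTS.get(feature, []))
    return tests

# Selecting only HD1080p also yields the tests of the required VideoCall feature.
print(select_test_cases({"HD1080p"}))
```

The point of the sketch is the division of labor the paper evaluates: the engineer makes one high-level decision per feature, and the traversal from features to concrete test cases is fully mechanical.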

Keywords

Test case selection · Product line · Feature model · Component family model


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Shuai Wang (1, 2)
  • Shaukat Ali (1)
  • Arnaud Gotlieb (1)
  • Marius Liaaen (3)
  1. Simula Research Laboratory, Certus Software V&V Center, Lysaker, Norway
  2. Department of Informatics, University of Oslo, Lysaker, Norway
  3. Cisco Systems Inc., Oslo, Norway
