
Evaluating the Effects of Different Requirements Representations on Writing Test Cases

  • Francisco Gomes de Oliveira Neto
  • Jennifer Horkoff
  • Richard Svensson
  • David Mattos
  • Alessia Knauss
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12045)

Abstract

[Context and Motivation] One must test a system to ensure that its requirements are met, and tests are therefore often derived manually from requirements. However, requirements representations are diverse: from traditional IEEE-style text, to models, to agile user stories, the RE community of research and practice has explored various ways to capture requirements. [Question/problem] But do these different representations influence the quality or coverage of test suites? The state of the art provides no insight into whether the representation of requirements affects the coverage, quality, or size of the resulting test suite. [Results] In this paper, we report on a family of three experiment replications, conducted with 148 students, that examines the effect of different requirements representations on test creation. We find that, in general, the different requirements representations have no statistically significant impact on the number of derived tests, but specific affordances of a representation do affect test quality: for example, traditional textual requirements make it easier to derive less abstract tests, whereas goal models yield fewer inconsistent test purpose descriptions. [Contribution] Our findings give insights into the effects of requirements representation on test derivation for novice testers. Our work is limited by its use of students as subjects.

Keywords

Test design · Requirements representation · Experiment


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Francisco Gomes de Oliveira Neto
  • Jennifer Horkoff
  • Richard Svensson
  • David Mattos
  • Alessia Knauss

  1. Chalmers and the University of Gothenburg, Gothenburg, Sweden
