An Empirical Study on the Comprehensibility of Graphical Security Risk Models Based on Sequence Diagrams

  • Vetle Volden-Freberg
  • Gencer Erdogan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11391)

Abstract

We report on an empirical study in which we evaluate the comprehensibility of graphical versus textual risk annotations in threat models based on sequence diagrams. The experiment was carried out on two separate groups, each of which solved tasks related to either the graphical or the textual annotations. We also examined the efficiency of the two annotation styles in terms of the average time each group spent per task. Our study shows that threat models with textual risk annotations are as comprehensible as the corresponding threat models with graphical risk annotations. With respect to efficiency, however, we found that participants solving tasks related to the graphical annotations spent on average 23% less time per task.

Keywords

Security risk models · Empirical study · Comprehensibility

Acknowledgments

This work has been conducted within the AGRA project (236657) funded by the Research Council of Norway.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. SINTEF Digital, Oslo, Norway
