A Lightweight Approach for Estimating Probability in Risk-Based Software Testing

  • Rudolf Ramler
  • Michael Felderer
  • Matthias Leitner
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10224)

Abstract

Using risk information in testing is required by many testing strategies and recommended by international standards. The resulting widespread awareness creates an increasing demand for concrete implementation guidelines and methodological support for risk-based testing. In practice, however, many companies still perform risk-based testing informally, based only on expert opinion or intuition. In this paper we address the task of quantifying risks by proposing a lightweight approach for estimating risk probabilities. The approach follows the “yesterday’s weather” principle used for planning in Extreme Programming: probability estimates are based on the number of defects found in the previous version. This simple heuristic can easily be implemented as part of risk-based testing without specific prerequisites. It suits the needs of small and medium enterprises as well as agile environments that have neither the time nor the resources to establish elaborate approaches and procedures for data collection and analysis. To investigate the feasibility of the approach, we used historical defect data from a popular open-source application. Our estimates for three consecutive versions achieved an accuracy of 73% to 78%, with a low number of critical overestimates (<4%) and few underestimates (<1%). For practical risk-based testing, such estimates provide a reliable quantitative basis that can easily be augmented with the expert knowledge of human decision-makers. Furthermore, these results define a baseline for future research on improving probability estimation approaches.
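To illustrate the heuristic, the following sketch assigns each component a probability class derived from its defect count in the previous version. This is a minimal sketch of the idea described above, not the authors’ implementation: the component names, defect counts, and class thresholds are illustrative assumptions.

```python
# Minimal sketch of the "yesterday's weather" heuristic described above:
# the estimated defect probability of a component in the upcoming version
# is derived from its defect count in the previous version.
# Component names, defect counts, and class thresholds are illustrative
# assumptions, not data or parameters taken from the paper.
from typing import Dict, Optional

def estimate_probability_classes(
    previous_defects: Dict[str, int],
    thresholds: Optional[Dict[str, int]] = None,
) -> Dict[str, str]:
    """Map each component to a probability class ('high', 'medium', 'low')
    based on the number of defects observed in the previous version."""
    if thresholds is None:
        # Assumed class boundaries; in practice they would be calibrated
        # against the project's own defect history.
        thresholds = {"high": 5, "medium": 1}
    classes = {}
    for component, defect_count in previous_defects.items():
        if defect_count >= thresholds["high"]:
            classes[component] = "high"
        elif defect_count >= thresholds["medium"]:
            classes[component] = "medium"
        else:
            classes[component] = "low"
    return classes

if __name__ == "__main__":
    # Hypothetical defect counts from the previous release.
    history = {"parser": 7, "ui": 2, "exporter": 0}
    print(estimate_probability_classes(history))
    # {'parser': 'high', 'ui': 'medium', 'exporter': 'low'}
```

In a risk-based test plan, such probability classes would be combined with impact estimates and reviewed by domain experts to prioritize test effort, in line with the augmentation by human decision-makers mentioned above.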

Keywords

Risk-based testing · Risk assessment · Probability estimation · Defect prediction · Test management · Software testing

Acknowledgments

This work has been supported by the COMET Competence Center program of the Austrian Research Promotion Agency (FFG), and the project MOBSTECO (FWF P 26194-N15) funded by the Austrian Science Fund.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Rudolf Ramler (1)
  • Michael Felderer (2)
  • Matthias Leitner (2)

  1. Software Competence Center Hagenberg GmbH, Hagenberg, Austria
  2. Department of Computer Science, University of Innsbruck, Innsbruck, Austria
