
Reproducibility of Software Bugs

Basic Concepts and Automatic Classification

  • Chapter
Principles of Performance and Reliability Modeling and Evaluation

Part of the book series: Springer Series in Reliability Engineering (RELIABILITY)

Abstract

Understanding software bugs and their effects is important in several engineering activities, including testing, debugging, and the design of fault containment or tolerance methods. Dealing with hard-to-reproduce failures requires a deep comprehension of the mechanisms leading from bug activation to software failure. This chapter surveys taxonomies and recent studies about bugs from the perspective of their reproducibility, providing insights into the process of bug manifestation and the factors influencing it. These insights are based on the analysis of thousands of bug reports of a widely used open-source software system, namely MySQL Server. Bug reports are automatically classified according to their reproducibility characteristics, providing figures about the proportion of hard-to-reproduce bugs, their features, and their evolution over releases.
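The abstract refers to the automatic classification of bug reports according to reproducibility characteristics. As a minimal illustrative sketch only (the chapter's actual features, classes, and tooling are not detailed in this excerpt), the Python fragment below trains a naive Bayes text classifier, in the spirit of [31], to label invented bug-report descriptions as Bohrbug-like or Mandelbug-like; the sample reports, the labels, and the use of scikit-learn are assumptions.

    # Illustrative sketch only: naive Bayes classification of bug-report text
    # into hypothetical reproducibility classes. The sample reports and labels
    # are invented placeholders, not data from the chapter.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    reports = [
        ("Server crashes every time this query runs with ORDER BY", "bohrbug"),
        ("Deadlock appears intermittently under heavy concurrent load", "mandelbug"),
        ("Segfault only after days of uptime while memory keeps growing", "mandelbug"),
        ("Wrong result returned deterministically for this SELECT", "bohrbug"),
    ]
    texts, labels = zip(*reports)

    # TF-IDF features over the report text, then a multinomial naive Bayes model.
    classifier = make_pipeline(TfidfVectorizer(stop_words="english"),
                               MultinomialNB())
    classifier.fit(texts, labels)

    print(classifier.predict(["Race condition causes a sporadic crash under load"]))

In practice a much larger labeled set and richer report fields (for example, the how-to-repeat section of MySQL bug reports) would be needed; the sketch only conveys the classification step.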


Notes

  1.

    We follow the well-established notion of the fault-error-failure chain [5]: a software fault (bug) is a defect in the application code; when activated, faults cause errors; errors may lead to failures.

  2.

    We use the expression “bug reproducibility”—widely used in the literature—to indicate the reproducibility of the failure caused by the bug.

  3.

    Gray states: “Heisenbug may elude a bugcatcher for years of execution. The bugcatcher may perturb the situation just enough to make it disappear. This is analogous to Heisenberg’s Uncertainty Principle in physics.” Indeed, Heisenberg ascribed the uncertainty principle to the disturbance caused by the act of measuring (the observer effect). However, modern physicists regard this argument as misleading: the principle expresses a fundamental property of conjugate quantities; it is not a statement about the observational capabilities of measurement technology. Curiously, Mandelbugs are closer than Heisenbugs to the principle that the latter were originally meant to resemble.

  4.

    MySQL Bugs—https://bugs.mysql.com.

  5.

    Valid means an admissible environment state, compatible with the input requests; in the case of the user, it means that the same workload request(s) could be submitted with different timing/ordering while producing the same result. A minimal illustrative sketch follows these notes.
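The following Python fragment is a minimal sketch of the reproducibility notion in note 5, not a procedure taken from the chapter: it replays the same valid workload requests under different orderings and timings and records how often the failure manifests. The functions submit() and failure_observed() are hypothetical placeholders for the system under test.

    # Minimal sketch, not from the chapter: estimate how reliably a failure
    # reproduces when the same requests are replayed with varied order/timing.
    import random
    import time

    def submit(request):
        """Hypothetical placeholder: send one workload request to the system under test."""
        pass

    def failure_observed():
        """Hypothetical placeholder: check whether the failure of interest manifested."""
        return False

    def reproduction_rate(requests, trials=20, max_delay=0.05):
        failures = 0
        for _ in range(trials):
            ordering = random.sample(requests, len(requests))  # same requests, new order
            for request in ordering:
                submit(request)
                time.sleep(random.uniform(0.0, max_delay))     # perturb inter-request timing
            if failure_observed():
                failures += 1
        return failures / trials

    print(reproduction_rate(["INSERT ...", "SELECT ...", "UPDATE ..."]))

A rate close to one indicates a deterministic, Bohrbug-like failure; a low, timing-sensitive rate matches the elusive behavior attributed to Heisenbugs in note 3.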

References

  1. Carrozza G, Pietrantuono R, Russo S (2014) Defect analysis in mission-critical software systems: a detailed investigation. J Softw Evol Process 27(1):22–49

  2. Grady RB (1992) Practical software metrics for project management and process improvement. Prentice Hall, Englewood Cliffs

  3. IEEE Computer Society, IEEE Standard Classification for Software Anomalies, IEEE Std 1044–2009

  4. Chillarege R, Bhandari IS, Chaar JK, Halliday MJ, Moebus DS, Ray BK, Wong M-Y (1992) Orthogonal defect classification-a concept for in-process measurements. IEEE Trans Softw Eng 18(11):943–956

  5. Avizienis A, Laprie J-C, Randell B, Landwehr C (2004) Basic concepts and taxonomy of dependable and secure computing. IEEE Trans Dependable Secure Comput 1(1):11–33

  6. Gait J (1986) A probe effect in concurrent programs. Softw Pract Exp 16(3):225–233

  7. Gray J (1985) Why do computers stop and what can be done about it? Tandem Tech Report TR-85.7

  8. Grottke M, Trivedi KS (2005) A classification of software faults. In: Supplemental proceedings 16th IEEE international symposium on software reliability engineering (ISSRE), pp 4.19–4.20

  9. Grottke M, Trivedi KS (2007) Fighting bugs: remove, retry, replicate, and rejuvenate. Computer 40(2):107–109

  10. Grottke M, Nikora A, Trivedi KS (2010) An empirical investigation of fault types in space mission system software. In: Proceedings IEEE/IFIP international conference on dependable systems and networks (DSN), pp 447–456

  11. Chillarege R (2011) Understanding Bohr-Mandel bugs through ODC triggers and a case study with empirical estimations of their field proportion. In: Proceedings 3rd IEEE international workshop on software aging and rejuvenation (WoSAR), pp 7–13

  12. Cotroneo D, Grottke M, Natella R, Pietrantuono R, Trivedi KS (2013) Fault triggers in open-source software: an experience report. In: Proceedings 24th IEEE international symposium on software reliability engineering (ISSRE), pp 178–187

  13. Lu S, Park S, Seo E, Zhou Y (2008) Learning from mistakes: a comprehensive study on real world concurrency bug characteristics. SIGARCH Comput Archit News 36(1):329–339

  14. Tan L, Liu C, Li Z, Wang X, Zhou Y, Zhai C (2014) Bug characteristics in open source software. Empirical Softw Eng 19(6):1665–1705

  15. Carrozza G, Cotroneo D, Natella R, Pietrantuono R, Russo S (2013) Analysis and prediction of Mandelbugs in an industrial software system. In: Proceedings IEEE 6th international conference on software testing, verification and validation (ICST), pp 262–271

  16. Cotroneo D, Natella R, Pietrantuono R (2013) Predicting aging-related bugs using software complexity metrics. Perform Eval 70(3):163–178

  17. Bovenzi A, Cotroneo D, Pietrantuono R, Russo S (2011) Workload characterization for software aging analysis. In: Proceedings 22nd IEEE international symposium on software reliability engineering (ISSRE), pp 240–249

  18. Bovenzi A, Cotroneo D, Pietrantuono R, Russo S (2012) On the aging effects due to concurrency bugs: a case study on MySQL. In: Proceedings 23rd IEEE international symposium on software reliability engineering (ISSRE), pp 211–220

  19. Cotroneo D, Natella R, Pietrantuono R (2010) Is software aging related to software metrics? In: Proceedings IEEE 2nd international workshop on software aging and rejuvenation (WoSAR), pp 1–6

  20. Cotroneo D, Orlando S, Pietrantuono R, Russo S (2013) A measurement-based ageing analysis of the JVM. Softw Test Verif Reliab 23:199–239

  21. Chandra S, Chen PM (2000) Whither generic recovery from application faults? A fault study using open-source software. In: Proceedings international conference on dependable systems and networks (DSN), pp 97–106

  22. Cavezza DG, Pietrantuono R, Russo S, Alonso J, Trivedi KS (2014) Reproducibility of environment-dependent software failures: an experience report. In: Proceedings 25th IEEE international symposium on software reliability engineering (ISSRE), pp 267–276

  23. Pietrantuono R, Russo S, Trivedi K (2015) Emulating environment-dependent software faults. In: Proceedings 1st IEEE/ACM international workshop on complex faults and failures in large software systems (COUFLESS), pp 34–40

  24. Fonseca P, Cheng L, Singhal V, Rodrigues R (2010) A study of the internal and external effects of concurrency bugs. In: Proceedings international conference on dependable systems and networks (DSN), pp 221–230

  25. Nistor A, Jiang T, Tan L (2013) Discovering, reporting, and fixing performance bugs. In: Proceedings 10th conference on mining software repositories (MSR), pp 237–246

  26. Jin G, Song L, Shi X, Scherpelz J, Lu S (2012) Understanding and detecting real-world performance bugs. In: Proceedings 33rd ACM SIGPLAN conference on programming language design and implementation (PLDI), pp 77–88

  27. Zaman S, Adams B, Hassan AE (2011) Security versus performance bugs: a case study on Firefox. In: Proceedings 8th conference on mining software repositories (MSR), pp 93–102

  28. Li Z, Tan L, Wang X, Lu S, Zhou Y, Zhai C (2006) Have things changed now? An empirical study of bug characteristics in modern open source software. In: Proceedings 1st workshop on architectural and system support for improving software dependability (ASID), pp 25–33

  29. Lamkanfi A, Demeyer S, Soetens QD, Verdonck T (2011) Comparing mining algorithms for predicting the severity of a reported bug. In: Proceedings 15th European conference on software maintenance and reengineering (CSMR), pp 249–258

  30. Menzies T, Marcus A (2008) Automated severity assessment of software defect reports. In: Proceedings IEEE international conference on software maintenance (ICSM), pp 346–355

  31. Domingos P, Pazzani M (1997) On the optimality of the simple Bayesian classifier under zero-one loss. Mach Learn 29(2–3):103–130


Author information


Corresponding author

Correspondence to Flavio Frattini.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Frattini, F., Pietrantuono, R., Russo, S. (2016). Reproducibility of Software Bugs. In: Fiondella, L., Puliafito, A. (eds) Principles of Performance and Reliability Modeling and Evaluation. Springer Series in Reliability Engineering. Springer, Cham. https://doi.org/10.1007/978-3-319-30599-8_21


  • DOI: https://doi.org/10.1007/978-3-319-30599-8_21


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-30597-4

  • Online ISBN: 978-3-319-30599-8

  • eBook Packages: Engineering, Engineering (R0)
