
Empirical Software Engineering, Volume 21, Issue 5, pp 2107–2145

Towards building a universal defect prediction model with rank transformed predictors

  • Feng Zhang
  • Audris Mockus
  • Iman Keivanloo
  • Ying Zou

Abstract

Software defects can lead to undesired results. Correcting defects consumes 50% to 75% of total software development budgets. To predict defective files, a prediction model must be built with predictors (e.g., software metrics) obtained either from the project itself (within-project) or from other projects (cross-project). A universal defect prediction model built from a large set of diverse projects would relieve the need to build and tailor prediction models for each individual project. A formidable obstacle to building a universal model is the variation in the distribution of predictors across projects with diverse contexts (e.g., size and programming language). Hence, we propose to cluster projects based on the similarity of the distribution of predictors, and to derive rank transformations from the quantiles of predictors within each cluster. We fit the universal model on the transformed data of 1,385 open source projects hosted on SourceForge and GoogleCode. The universal model obtains prediction performance comparable to within-project models, yields similar results when applied to five external projects (one Apache and four Eclipse projects), and performs consistently across projects with different context factors. Finally, we investigate which predictors should be included in the universal model. We expect that this work can form a basis for future work on building a universal model and lead to software support tools that incorporate it into a regular development workflow.
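To make the transformation step concrete, the sketch below shows one way a quantile-based rank transformation could be implemented in Python. The function name, the choice of ten rank levels, and the synthetic data are illustrative assumptions rather than the paper's exact procedure; the reference sample stands in for a cluster of projects with similarly distributed predictors.

import numpy as np

def quantile_rank_transform(values, reference_values, n_levels=10):
    """Map raw metric values to ranks 1..n_levels using quantile thresholds
    estimated from a reference sample (here standing in for a cluster of
    projects whose predictors are similarly distributed)."""
    # Interior quantile probabilities, e.g. 0.1, 0.2, ..., 0.9 for ten levels.
    probs = np.linspace(0, 1, n_levels + 1)[1:-1]
    # Cut points derived from the reference (cluster) data.
    thresholds = np.quantile(reference_values, probs)
    # Replace each value by the index of the quantile bin it falls into,
    # yielding comparable 1..n_levels ranks across heterogeneous projects.
    return np.searchsorted(thresholds, values, side="right") + 1

# Hypothetical usage: rank-transform lines-of-code values of one project
# using thresholds derived from a synthetic cluster of similar projects.
cluster_loc = np.random.lognormal(mean=4.0, sigma=1.5, size=5000)
project_loc = np.array([12, 85, 400, 2300, 9000])
print(quantile_rank_transform(project_loc, cluster_loc))  # ranks in 1..10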

Keywords

Universal defect prediction model · Defect prediction · Context factors · Rank transformation · Large-scale · Software quality


Acknowledgments

The authors would like to thank Professor Ahmed E. Hassan from the Software Analysis and Intelligence Lab (SAIL) at Queen’s University for his strong support during this work. The authors would also like to thank Professor Daniel German from the University of Victoria for his insightful advice. The authors appreciate the great help of Mr. Shane McIntosh from the Software Analysis and Intelligence Lab (SAIL) at Queen’s University in improving this work. The authors are also grateful to the anonymous reviewers of MSR and EMSE for their valuable and insightful comments.


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. School of Computing, Queen’s University, Kingston, Canada
  2. Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, USA
  3. Department of Electrical and Computer Engineering, Queen’s University, Kingston, Canada
