An empirical study on the effect of community smells on bug prediction


Community-aware metrics derived from socio-technical developer networks or organizational structures have already been studied in the software bug prediction field. Community smells have also been proposed to identify communication and collaboration anti-patterns in developer communities, and prior work reports a statistical association between community smells and the code smells identified in software modules. We investigate the contribution of community smells to predicting bug-prone classes and compare it with the contribution of code smell-related information and state-of-the-art process metrics. We conduct our empirical analysis on ten open-source projects with varying sizes and varying ratios of buggy and smelly classes. We build seven bug prediction models to answer three research questions: a baseline model built on a state-of-the-art process metric set; three models that each add one smell-related metric set (community smells, code smells, or code smell intensity) to the baseline; and three models that add combinations of these smell-related metric sets to the baseline. The performance of these models is reported in terms of recall, false positive rate, F-measure, and AUC, and is statistically compared using Scott–Knott ESD tests. Community smells improve the prediction performance of the baseline model by up to 3% in terms of AUC, while code smell intensity improves the baseline by up to 40% in terms of F-measure and up to 17% in terms of AUC. The conclusions are significantly influenced by the validation strategies, the learning algorithms, and the data characteristics of the selected projects. While the code smell intensity metric captures the most information about technical flaws when predicting bug-prone classes, community smells also contribute to bug prediction models by revealing communication and collaboration flaws in software development teams.
Future research is needed to capture communication patterns across multiple channels and to understand whether socio-technical flaws can be used in a cross-project bug prediction setting.
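The comparison the study describes, training a baseline model and then the same model with smell-related metrics added, and scoring both on F-measure and AUC, can be illustrated with a minimal sketch. This is not the authors' pipeline: the data below is synthetic, the feature groups and learner (a random forest on a single hold-out split) are placeholders, and the paper's actual metric sets, algorithms, and validation strategies differ.

```python
# Minimal sketch (not the study's actual pipeline): compare a baseline
# bug prediction model against one augmented with smell-related metrics.
# All feature names and data here are synthetic and illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Stand-ins for process metrics (e.g., churn, change counts) -- hypothetical
baseline = rng.normal(size=(n, 2))
# Stand-ins for smell metrics (e.g., community smell count, code smell
# intensity) -- hypothetical
smells = rng.normal(size=(n, 2))
# Synthetic ground truth: bug-proneness driven by both metric groups
y = ((baseline[:, 0] + smells[:, 0]
      + rng.normal(scale=0.5, size=n)) > 0).astype(int)

def evaluate(X, y):
    """Train on a hold-out split; return (F-measure, AUC) on the test set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    return f1_score(y_te, prob > 0.5), roc_auc_score(y_te, prob)

f1_base, auc_base = evaluate(baseline, y)
f1_full, auc_full = evaluate(np.hstack([baseline, smells]), y)
print(f"baseline: F={f1_base:.2f} AUC={auc_base:.2f}")
print(f"+smells:  F={f1_full:.2f} AUC={auc_full:.2f}")
```

In the study itself, such per-model scores are then ranked across projects and validation runs with Scott–Knott ESD tests rather than compared on a single split.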








This study is supported in part by a Collaborative Research and Development Grant (CRDPJ 499518-16) from NSERC, Canada, and by a Mevlana Exchange Programme Grant (number 258) from the Council of Higher Education, Turkey.

Author information



Corresponding author

Correspondence to Beyza Eken.



Cite this article

Eken, B., Palma, F., Ayşe, B. et al. An empirical study on the effect of community smells on bug prediction. Software Qual J (2021).



  • Community smells
  • Bug prediction
  • Mining software repositories