
A Systematic Literature Review on Empirical Analysis of the Relationship Between Code Smells and Software Quality Attributes

  • Amandeep Kaur
Original Paper

Abstract

Code smells indicate problems in design or code that make software hard to change and maintain, and they have become a common symptom of systems whose quality is difficult to sustain. Detecting the harmful code smells that degrade software quality has therefore attracted growing interest, and a substantial body of research analysing the impact of code smells on software quality has been produced over the last few years. This study reports a systematic literature review of existing empirical studies that investigate the impact of code smells on software quality attributes. The results indicate that this impact is not uniform: different code smells can have different, even opposite, effects on different quality attributes. The findings of this review raise awareness among researchers and practitioners regarding the impact of code smells on software quality. Further studies would benefit from considering less explored code smells and quality attributes that have rarely or never been investigated, from involving industry researchers, and from using large commercial software systems.
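
To make the terminology concrete, the listing below sketches one classic code smell discussed in the reviewed studies, Feature Envy, where a method manipulates another class's data more than its own. The classes and method names are purely illustrative and are not drawn from any of the primary studies.

    // Illustrative Feature Envy smell (hypothetical classes).
    class Address {
        String street;
        String city;
        String zip;
    }

    class Customer {
        Address address;

        // Smelly: this method reads three fields of Address and none of
        // Customer, suggesting the formatting logic belongs on Address.
        String mailingLabel() {
            return address.street + "\n" + address.city + ", " + address.zip;
        }
    }

A typical remedy is the Move Method refactoring, which relocates the label-building behaviour onto Address itself.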

Notes

Acknowledgements

The author would like to acknowledge Kitchenham and Charters for providing the guidelines used to conduct this systematic literature review. In addition, the author would like to thank Ms. Satnam Kaur and Dr. Gaurav Dhiman for their valuable time and support in collecting and analysing the required data.

Compliance with Ethical Standards

Conflict of interest

The author declares that she has no conflict of interest.

References

  1. Booch G, Maksimchuk RA, Engle MW, Young BJ, Conallen J, Houston KA (2006) Object-oriented analysis and design with applications, 3rd edn. Addison-Wesley, Upper Saddle River
  2. Tufano M, Palomba F, Bavota G, Oliveto R, Di Penta M, De Lucia A, Poshyvanyk D (2015, May) When and why your code starts to smell bad. In: 37th IEEE international conference on software engineering (ICSE). IEEE, vol 1, pp 403–414
  3. Fontana FA, Braione P, Zanoni M (2012) Automatic detection of bad smells in code: an experimental assessment. J Object Technol 11(2):5–13
  4. Abdelmoez W, Kosba E, Iesa AF (2014, January) Risk-based code smells detection tool. In: The international conference on computing technology and information management (ICCTIM). Society of Digital Information and Wireless Communication, pp 148–159
  5. Opdyke WF (1992) Refactoring object-oriented frameworks. Ph.D. thesis, University of Illinois at Urbana-Champaign, Illinois
  6. Mathur N (2011) Java smell detector. Master's project, San Jose State University, 173:1–127
  7. Mantyla M, Vanhanen J, Lassenius C (2003, September) A taxonomy and an initial empirical study of bad smells in code. In: Proceedings of international conference on software maintenance (ICSM). IEEE, pp 381–384
  8. Fowler M, Beck K (1999) Refactoring: improving the design of existing code. Addison-Wesley, Upper Saddle River
  9. Mäntylä MV, Lassenius C (2006) Subjective evaluation of software evolvability using code smells: an empirical study. Empir Softw Eng 11(3):395–431
  10. Wake WC (2004) Refactoring workbook. Addison Wesley Longman Publishing Company, Boston
  11. Rasool G, Arshad Z (2015) A review of code smell mining techniques. J Softw Evol Process 27(11):867–895
  12. Zhang M, Hall T, Baddoo N (2011) Code bad smells: a review of current knowledge. J Softw Evol Process 23(3):179–202
  13. Khomh F, Vaucher S, Guéhéneuc Y, Sahraoui H (2011) BDTEX: a GQM-based Bayesian approach for the detection of antipatterns. J Syst Softw 84(4):559–572
  14. Kumar S, Chhabra JK (2014) Two level dynamic approach for Feature Envy detection. In: 2014 international conference on computer and communication technology (ICCCT). IEEE, pp 41–46
  15. Rapu D, Ducasse S, Gîrba T, Marinescu R (2004, March) Using history information to improve design flaws detection. In: Proceedings of eighth European conference on software maintenance and reengineering (CSMR). IEEE, pp 223–232
  16. Ligu E, Chatzigeorgiou A, Chaikalis T, Ygeionomakis N (2013, September) Identification of refused bequest code smells. In: 29th international conference on software maintenance (ICSM). IEEE, pp 392–395
  17. Wangberg R (2010) A literature review on code smells and refactoring. Master's thesis
  18. Gupta A, Suri B, Misra S (2017) A systematic literature review: code bad smells in Java source code. In: International conference on computational science and its applications. Springer, pp 665–682
  19. Sharma T, Spinellis D (2017) A survey on software smells. J Syst Softw 138:158–173
  20. Haque MS, Carver J, Atkison T (2018, March) Causes, impacts, and detection approaches of code smell: a survey. In: Proceedings of the ACMSE 2018 conference. ACM, p 25
  21. Kitchenham B, Charters S (2007) Guidelines for performing systematic literature reviews in software engineering. Technical report, Ver. 2.3, EBSE
  22. Cohen J (1960) A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20:37–46
  23. Landis J, Koch G (1977) Measurement of observer agreement for categorical data. Biometrics 33:159–174
  24.
  25. Morales R, McIntosh S, Khomh F (2015) Do code review practices impact design quality? A case study of the QT, VTK, and ITK projects. In: 22nd international conference on software analysis, evolution and reengineering (SANER). IEEE, pp 171–180
  26. Radjenović D, Heričko M, Torkar R, Živkovič A (2013) Software fault prediction metrics: a systematic literature review. Inf Softw Technol 55(8):1397–1418
  27. ISO/IEC-25010 (2010) Systems and software engineering–systems and software quality requirements and evaluation (SQuaRE)–system and software quality models. International Organization for Standardization
  28. Chidamber SR, Kemerer CF (1994) A metrics suite for object oriented design. IEEE Trans Softw Eng 20(6):476–493
  29. Krinke J (2007, October) A study of consistent and inconsistent changes to code clones. In: 14th working conference on reverse engineering (WCRE). IEEE, pp 170–178
  30. Krinke J (2008, September) Is cloned code more stable than non-cloned code? In: Eighth IEEE international working conference on source code analysis and manipulation. IEEE, pp 57–66
  31. Krinke J (2011, May) Is cloned code older than non-cloned code? In: Proceedings of the 5th international workshop on software clones. ACM, pp 28–33
  32. Lozano A, Wermelinger M (2010, May) Tracking clones' imprint. In: Proceedings of the 4th international workshop on software clones. ACM, pp 65–72
  33. Hotta K, Sano Y, Higo Y, Kusumoto S (2010, September) Is duplicate code more frequently modified than non-duplicate code in software evolution? An empirical study on open source software. In: Proceedings of the joint ERCIM workshop on software evolution (EVOL) and international workshop on principles of software evolution (IWPSE). ACM, pp 73–82
  34. Saha RK, Asaduzzaman M, Zibran MF, Roy CK, Schneider KA (2010, September) Evaluating code clone genealogies at release level: an empirical study. In: 10th IEEE working conference on source code analysis and manipulation (SCAM). IEEE, pp 87–96
  35. Abreu FB, Goulão M, Esteves R (1995, October) Toward the design quality evaluation of object-oriented software systems. In: Proceedings of the 5th international conference on software quality, Austin, Texas, USA, pp 44–57
  36. Abreu FB, Melo W (1996, March) Evaluating the impact of object-oriented design on software quality. In: Proceedings of the 3rd international software metrics symposium. IEEE, pp 90–99
  37. Lorenz M, Kidd J (1994) Object-oriented software metrics: a practical guide. Prentice-Hall, Upper Saddle River
  38. Buse RP, Weimer WR (2010) Learning a metric for code readability. IEEE Trans Softw Eng 36(4):546–558
  39. Jabangwe R, Börstler J, Šmite D, Wohlin C (2015) Empirical evidence on the link between object-oriented measures and external quality attributes: a systematic literature review. Empir Softw Eng 20(3):640–693
  40. Moha N, Gueheneuc YG, Duchien L, Le Meur AF (2010) DECOR: a method for the specification and detection of code and design smells. IEEE Trans Softw Eng 36(1):20–36
  41. Runeson P, Host M, Rainer A, Regnell B (2012) Case study research in software engineering: guidelines and examples. Wiley, New York

Primary Studies References

  1. [S1] Deligiannis, I., Stamelos, I., Angelis, L., Roumeliotis, M., & Shepperd, M. (2004). A controlled experiment investigation of an object-oriented design heuristic for maintainability. Journal of Systems and Software, 72(2), 129–143
  2. [S2] Bavota, G., & Russo, B. (2016, May). A large-scale empirical study on self-admitted technical debt. In Proceedings of the 13th International Conference on Mining Software Repositories, ACM, 315–326
  3. [S3] Wagey, B. C., Hendradjaya, B., & Mardiyanto, M. S. (2015, November). A proposal of software maintainability model using code smell measurement. In International Conference on Data and Software Engineering (ICoDSE), IEEE, 25–30
  4. [S4] Miyake, Y., Amasaki, S., Aman, H., & Yokogawa, T. (2017). A replicated study on relationship between code quality and method comments. In Applied Computing and Information Technology, Springer, 17–30
  5. [S5] Sabané, A., Di Penta, M., Antoniol, G., & Guéhéneuc, Y. G. (2013, March). A study on the relation between antipatterns and the cost of class unit testing. In 17th European Conference on Software Maintenance and Reengineering (CSMR), IEEE, 167–176
  6. [S6] Aman, H. (2012, October). An empirical analysis on fault-proneness of well-commented modules. In Fourth International Workshop on Empirical Software Engineering in Practice (IWESEP), IEEE, 3–9
  7. [S7] Deligiannis, I., Shepperd, M., Roumeliotis, M., & Stamelos, I. (2003). An empirical investigation of an object-oriented design heuristic for maintainability. Journal of Systems and Software, 65(2), 127–139
  8. [S8] Kim, M., Sazawal, V., Notkin, D., & Murphy, G. (2005, September). An empirical study of code clone genealogies. In ACM SIGSOFT Software Engineering Notes, ACM, 30(5), 187–196
  9. [S9] Li, W., & Shatnawi, R. (2007). An empirical study of the bad smells and class error probability in the post-release object-oriented system evolution. Journal of Systems and Software, 80(7), 1120–1128
  10. [S10] Abbes, M., Khomh, F., Gueheneuc, Y. G., & Antoniol, G. (2011, March). An empirical study of the impact of two antipatterns, blob and spaghetti code, on program comprehension. In 15th European Conference on Software Maintenance and Reengineering (CSMR), IEEE, 181–190
  11. [S11] Mondal, M., Rahman, M. S., Saha, R. K., Roy, C. K., Krinke, J., & Schneider, K. A. (2011, June). An empirical study of the impacts of clones in software maintenance. In 19th International Conference on Program Comprehension (ICPC), IEEE, 242–245
  12. [S12] Bettenburg, N., Shang, W., Ibrahim, W., Adams, B., Zou, Y., & Hassan, A. E. (2009, October). An empirical study on inconsistent changes to code clones at release level. In 16th Working Conference on Reverse Engineering (WCRE'09), IEEE, 85–94
  13. [S13] Thummalapenta, S., Cerulo, L., Aversano, L., & Di Penta, M. (2010). An empirical study on the maintenance of source code clones. Empirical Software Engineering, 15(1), 1–34
  14. [S14] Khomh, F., Di Penta, M., Guéhéneuc, Y. G., & Antoniol, G. (2012). An exploratory study of the impact of antipatterns on class change- and fault-proneness. Empirical Software Engineering, 17(3), 243–275
  15. [S15] Khomh, F., Di Penta, M., & Gueheneuc, Y. G. (2009, October). An exploratory study of the impact of code smells on software change-proneness. In 16th Working Conference on Reverse Engineering (WCRE'09), IEEE, 75–84
  16. [S16] Romano, D., Raila, P., Pinzger, M., & Khomh, F. (2012, October). Analyzing the impact of antipatterns on change-proneness using fine-grained source code changes. In 19th Working Conference on Reverse Engineering (WCRE), IEEE, 437–446
  17. [S17] Olbrich, S. M., Cruzes, D. S., & Sjøberg, D. I. (2010, September). Are all code smells harmful? A study of God Classes and Brain Classes in the evolution of three open source systems. In International Conference on Software Maintenance (ICSM), IEEE, 1–10
  18. [S18] Marinescu, R., & Marinescu, C. (2011, September). Are the clients of flawed classes (also) defect prone? In 11th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE, 65–74
  19. [S19] Marinescu, R. (2012). Assessing technical debt by identifying design flaws in software systems. IBM Journal of Research and Development, 56(5), 9–1
  20. [S20] Lague, B., Proulx, D., Mayrand, J., Merlo, E. M., & Hudepohl, J. (1997, October). Assessing the benefits of incorporating function clone detection in a development process. In Proceedings of International Conference on Software Maintenance (ICSM), IEEE, 314–321
  21. [S21] Yamashita, A. (2014). Assessing the capability of code smells to explain maintenance problems: an empirical study combining quantitative and qualitative data. Empirical Software Engineering, 19(4), 1111–1143
  22. [S22] Lozano, A., & Wermelinger, M. (2008, September). Assessing the effect of clones on changeability. In International Conference on Software Maintenance (ICSM), IEEE, 227–236
  23. [S23] Rahman, F., Bird, C., & Devanbu, P. (2012). Clones: What is that smell? Empirical Software Engineering, 17(4–5), 503–530
  24. [S24] Kapser, C. J., & Godfrey, M. W. (2008). "Cloning considered harmful" considered harmful: patterns of cloning in software. Empirical Software Engineering, 13(6), 645
  25. [S25] Danphitsanuphan, P., & Suwantada, T. (2012, May). Code smell detecting tool and code smell-structure bug relationship. In Spring Congress on Engineering and Technology (S-CET), IEEE, 1–5
  26. [S26] Yamashita, A., & Counsell, S. (2013). Code smells as system-level indicators of maintainability: An empirical study. Journal of Systems and Software, 86(10), 2639–2653
  27. [S27] Juergens, E., Deissenboeck, F., Hummel, B., & Wagner, S. (2009, May). Do code clones matter? In 31st International Conference on Software Engineering (ICSE), IEEE, 485–495
  28. [S28] Soh, Z., Yamashita, A., Khomh, F., & Guéhéneuc, Y. G. (2016, March). Do code smells impact the effort of different maintenance programming activities? In 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER), IEEE, 1, 393–402
  29. [S29] Yamashita, A., & Moonen, L. (2012, September). Do code smells reflect important maintainability aspects? In 28th IEEE International Conference on Software Maintenance (ICSM), IEEE, 306–315
  30. [S30] Palomba, F., Bavota, G., Di Penta, M., Oliveto, R., & De Lucia, A. (2014, September). Do they really smell bad? A study on developers' perception of bad code smells. In International Conference on Software Maintenance and Evolution (ICSME), IEEE, 101–110
  31. [S31] Linares-Vásquez, M., Klock, S., McMillan, C., Sabané, A., Poshyvanyk, D., & Guéhéneuc, Y. G. (2014, June). Domain matters: bringing further evidence of the relationships among anti-patterns, application domains, and quality-related metrics in Java mobile apps. In Proceedings of the 22nd International Conference on Program Comprehension, 232–243
  32. [S32] Chatterji, D., Carver, J. C., Kraft, N. A., & Harder, J. (2013, October). Effects of cloned code on software maintainability: A replicated developer study. In 20th Working Conference on Reverse Engineering (WCRE), IEEE, 112–121
  33. [S33] Aman, H., Amasaki, S., Sasaki, T., & Kawahara, M. (2015, October). Empirical analysis of change-proneness in methods having local variables with long names and comments. In International Symposium on Empirical Software Engineering and Measurement (ESEM), ACM/IEEE, 1–4
  34. [S34] Aman, H., Amasaki, S., Sasaki, T., & Kawahara, M. (2014, December). Empirical analysis of fault-proneness in methods by focusing on their comment lines. In 21st Asia-Pacific Software Engineering Conference (APSEC), IEEE, 2, 51–56
  35. [S35] Jaafar, F., Guéhéneuc, Y. G., Hamel, S., Khomh, F., & Zulkernine, M. (2016). Evaluating the impact of design pattern and anti-pattern dependencies on changes and faults. Empirical Software Engineering, 21(3), 896–931
  36. [S36] Wehaibi, S., Shihab, E., & Guerrouj, L. (2016, March). Examining the impact of self-admitted technical debt on software quality. In 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER), IEEE, 1, 179–188
  37. [S37] Yamashita, A., & Moonen, L. (2013, May). Exploring the impact of inter-smell relations on software maintainability: An empirical study. In 35th International Conference on Software Engineering (ICSE), IEEE, 682–691
  38. [S38] Yamashita, A. (2013, September). How good are code smells for evaluating software maintainability? Results from a comparative case study. In International Conference on Software Maintenance (ICSM), IEEE, 566–571
  39. [S39] Fontana, F. A., Ferme, V., & Spinelli, S. (2012, June). Investigating the impact of code smells debt on quality code evaluation. In Third International Workshop on Managing Technical Debt (MTD), IEEE, 15–22
  40. [S40] Fontana, F. A., Ferme, V., Marino, A., Walter, B., & Martenka, P. (2013, September). Investigating the impact of code smells on system's quality: An empirical study on systems of different application domains. In 29th International Conference on Software Maintenance (ICSM), IEEE, 260–269
  41. [S41] Zazworka, N., Shaw, M. A., Shull, F., & Seaman, C. (2011, May). Investigating the impact of design debt on software quality. In Proceedings of the 2nd Workshop on Managing Technical Debt, ACM, 17–23
  42. [S42] Guerrouj, L., Kermansaravi, Z., Arnaoudova, V., Fung, B. C., Khomh, F., Antoniol, G., & Guéhéneuc, Y. G. (2015). Investigating the relation between lexical smells and change- and fault-proneness: an empirical study. Software Quality Journal, 1–30
  43. [S43] Aman, H., Amasaki, S., Sasaki, T., & Kawahara, M. (2015). Lines of comments as a noteworthy metric for analyzing fault-proneness in methods. IEICE Transactions on Information and Systems, 98(12), 2218–2228
  44. [S44] Aman, H., Amasaki, S., Yokogawa, T., & Kawahara, M. Local variables with compound names and comments as signs of fault-prone Java methods. In 4th International Workshop on Quantitative Approaches to Software Quality, 5–11
  45. [S45] Jaafar, F., Guéhéneuc, Y. G., Hamel, S., & Khomh, F. (2013, October). Mining the relationship between anti-patterns dependencies and fault-proneness. In 20th Working Conference on Reverse Engineering (WCRE), IEEE, 351–360
  46. [S46] D'Ambros, M., Bacchelli, A., & Lanza, M. (2010, July). On the impact of design flaws on software defects. In 10th International Conference on Quality Software (QSIC), IEEE, 23–31
  47. [S47] Sjøberg, D. I., Yamashita, A., Anda, B. C., Mockus, A., & Dybå, T. (2013). Quantifying the effect of code smells on maintenance effort. IEEE Transactions on Software Engineering, 39(8), 1144–1156
  48. [S48] Bán, D., & Ferenc, R. (2014, June). Recognizing antipatterns and analyzing their effects on software maintainability. In International Conference on Computational Science and Its Applications, Springer International Publishing, 337–352
  49. [S49] Monden, A., Nakae, D., Kamiya, T., Sato, S. I., & Matsumoto, K. I. (2002). Software quality analysis by code clones in industrial legacy software. In Proceedings of Eighth IEEE Symposium on Software Metrics, IEEE, 87–94
  50. [S50] Hall, T., Zhang, M., Bowes, D., & Sun, Y. (2014). Some code smells have a significant but small effect on faults. ACM Transactions on Software Engineering and Methodology (TOSEM), 23(4), 33
  51. [S51] Olbrich, S., Cruzes, D. S., Basili, V., & Zazworka, N. (2009, October). The evolution and impact of code smells: A case study of two open source systems. In Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement, IEEE, 390–400
  52. [S52] Yamashita, A., & Moonen, L. (2013). To what extent can maintenance problems be predicted by code smell detection? An empirical study. Information and Software Technology, 55(12), 2223–2242
  53. [S53] Marinescu, C., Stoenescu, Ş., & Fortiş, T. F. (2014, July). Towards the impact of design flaws on the resources used by an application. In International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing, Springer, 180–192
  54. [S54] Jiang, L., Su, Z., & Chiu, E. (2007, September). Context-based detection of clone-related bugs. In Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, ACM, 55–64
  55. [S55] Jaafar, F., Lozano, A., Guéhéneuc, Y. G., & Mens, K. (2017). Analyzing software evolution and quality by extracting Asynchrony change patterns. Journal of Systems and Software, 131, 311–322
  56. [S56] Palomba, F., Bavota, G., Di Penta, M., Fasano, F., Oliveto, R., & De Lucia, A. (2018). On the diffuseness and the impact on maintainability of code smells: a large scale empirical investigation. Empirical Software Engineering, 23(3), 1188–1221
  57. [S57] Husien, H. K., Harun, M. F., & Lichter, H. (2017). Towards a severity and activity based assessment of code smells. Procedia Computer Science, 116, 460–467
  58. [S58] Mondal, M., Rahman, M. S., Roy, C. K., & Schneider, K. A. (2018). Is cloned code really stable? Empirical Software Engineering, 23(2), 693–770
  59. [S59] Elish, M. O. (2017, July). On the association between code cloning and fault-proneness: An empirical investigation. In Computing Conference 2017, IEEE, 928–935
  60. [S60] Zhang, X., Zhou, Y., & Zhu, C. (2017, November). An empirical study of the impact of bad designs on defect proneness. In 2017 International Conference on Software Analysis, Testing and Evolution (SATE), IEEE, 1–9
  61. [S61] Islam, J. F., Mondal, M., Roy, C. K., & Schneider, K. A. (2017). A comparative study of software bugs in clone and non-clone code. In Proc. SEKE, 436–443
  62. [S62] Palomba, F., Zanoni, M., Fontana, F. A., De Lucia, A., & Oliveto, R. (2017). Toward a smell-aware bug prediction model. IEEE Transactions on Software Engineering
  63. [S63] Jaafar, F., Lozano, A., Guéhéneuc, Y. G., & Mens, K. (2017, July). On the analysis of co-occurrence of anti-patterns and clones. In 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS), IEEE, 274–284
  64. [S64] Mondal, M., Roy, C. K., & Schneider, K. A. (2017, September). Bug propagation through code cloning: An empirical study. In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), IEEE, 227–237
  65. [S65] Rahman, M. S., & Roy, C. K. (2017, September). On the relationships between stability and bug-proneness of code clones: An empirical study. In 2017 IEEE 17th International Working Conference on Source Code Analysis and Manipulation (SCAM), IEEE, 131–140
  66. [S66] Aman, H., Amasaki, S., Yokogawa, T., & Kawahara, M. (2017, August). Empirical analysis of words in comments written for Java methods. In 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA), IEEE, 375–379
  67. [S67] Bán, D. (2017). The connection between antipatterns and maintainability in Firefox. Acta Cybernetica, 23(2), 471–490
  68. [S68] Chen, Z., Chen, L., Ma, W., Zhou, X., Zhou, Y., & Xu, B. (2018). Understanding metric-based detectable smells in Python software: A comparative study. Information and Software Technology, 94, 14–29
  69. [S69] Selim, G. M., Barbour, L., Shang, W., Adams, B., Hassan, A. E., & Zou, Y. (2010, October). Studying the impact of clones on software defects. In 2010 17th Working Conference on Reverse Engineering (WCRE), IEEE, 13–21
  70. [S70] Barbour, L., An, L., Khomh, F., Zou, Y., & Wang, S. (2017). An investigation of the fault-proneness of clone evolutionary patterns. Software Quality Journal, 1–36
  71. [S71] Geiger, R., Fluri, B., Gall, H. C., & Pinzger, M. (2006, March). Relation of code clones and change couplings. In International Conference on Fundamental Approaches to Software Engineering, Springer, Berlin, Heidelberg, 411–425
  72. [S72] Kamei, Y., Sato, H., Monden, A., Kawaguchi, S., Uwano, H., Nagura, M., … & Ubayashi, N. (2011, November). An empirical study of fault prediction with code clone metrics. In Joint Conference of the 21st International Workshop on Software Measurement and the 6th International Conference on Software Process and Product Measurement (IWSM-MENSURA), IEEE, 55–61
  73. [S73] Taba, S. E. S., Khomh, F., Zou, Y., Hassan, A. E., & Nagappan, M. (2013, September). Predicting bugs using antipatterns. In 2013 29th IEEE International Conference on Software Maintenance (ICSM), IEEE, 270–279
  74. [S74] Saboury, A., Musavi, P., Khomh, F., & Antoniol, G. (2017, February). An empirical study of code smells in JavaScript projects. In 2017 IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER), IEEE, 294–305

Copyright information

© CIMNE, Barcelona, Spain 2019

Authors and Affiliations

  1. Department of Computer Science, Sri Guru Granth Sahib World University, Fatehgarh Sahib, India
