Search-Based Refactoring: Metrics Are Not Enough

  • Chris Simons
  • Jeremy Singer
  • David R. White
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9275)

Abstract

Search-based Software Engineering (SBSE) techniques have been applied extensively to refactor software, often based on metrics that describe the object-oriented structure of an application. Recent work shows that in some cases applying popular SBSE tools to open-source software does not necessarily lead to an improved version of the software as assessed by some subjective criteria. Through a survey of professionals, we investigate the relationship between popular SBSE refactoring metrics and the subjective opinions of software engineers. We find little or no correlation between the two. Through qualitative analysis, we find that a simple static view of software is insufficient to assess software quality, and that software quality is dependent on factors that are not amenable to measurement via metrics. We recommend that future SBSE refactoring research should incorporate information about the dynamic behaviour of software, and conclude that a human-in-the-loop approach may be the only way to refactor software in a manner helpful to an engineer.
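The kind of analysis the abstract describes can be illustrated with a rank correlation between metric improvements and engineer ratings. The sketch below is not the paper's actual analysis; the data values are invented, and plain Spearman correlation is used as one reasonable choice of non-parametric measure for ordinal ratings.

```python
# Illustrative sketch only: Spearman rank correlation between hypothetical
# per-refactoring metric deltas and mean engineer ratings (data is invented).

def ranks(values):
    """Assign 1-based average ranks to values (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho, computed as the Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: metric improvement per refactoring vs. engineer rating (1-5).
metric_delta = [0.12, 0.05, 0.30, 0.01, 0.22, 0.08]
engineer_rating = [2.0, 3.5, 1.5, 4.0, 2.5, 3.0]
print(round(spearman(metric_delta, engineer_rating), 3))  # prints -0.943
```

A rho near zero on real survey data would correspond to the "little or no correlation" the paper reports; the invented values here happen to produce a strong negative rho purely for illustration.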

Keywords

Search-based software engineering · Metrics · Optimisation · Software quality

Acknowledgements

We would like to thank our colleagues at UWE for their help in designing the survey; Mel Ó Cinnéide and Iman Hemati Moghadam for generously sharing code; Per Runeson for his insightful advice; and the professional organisations ACCU and BCS for permitting mailshots to their members. We are especially grateful to all those who took the time to respond to the survey.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK
  2. School of Computing Science, University of Glasgow, Glasgow, UK
