Assessing the User-Perceived Quality of Source Code Components Using Static Analysis Metrics

  • Valasia Dimaridou
  • Alexandros-Charalampos Kyprianidis
  • Michail Papamichail
  • Themistoklis Diamantopoulos
  • Andreas Symeonidis
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 868)

Abstract

Nowadays, developers tend to adopt a component-based software engineering approach, reusing their own implementations and/or resorting to third-party source code. This practice is in principle cost-effective; however, it may also lead to low-quality software products if the components to be reused exhibit low quality. Thus, several approaches have been developed to measure the quality of software components. Most of them, however, rely on experts to define target quality scores and derive metric thresholds, leading to results that are context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories' stars and forks, which indicate their adoption/acceptance by developers. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis along five axes for source code components (classes or packages): complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are then applied to estimate the final quality score given metrics from these axes. Preliminary evaluation indicates that our approach effectively estimates software quality at both the class and the package level.
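The pipeline summarized above can be sketched in a few steps with scikit-learn. This is a minimal illustration, not the authors' implementation: the metric matrix, the log-scaled stars/forks target, and all parameter choices (`nu`, the number of axes `q`, the network size) are hypothetical placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical data: rows are source code components (classes or packages),
# columns are static analysis metrics (complexity, coupling, size, etc.).
X = rng.normal(size=(300, 12))
stars = rng.integers(0, 5000, size=300)
forks = rng.integers(0, 1000, size=300)
# Placeholder target: log-scaled popularity as a proxy for user-perceived quality.
y = np.log1p(stars) + 0.5 * np.log1p(forks)

# Step 1: remove outlier components with a one-class classifier.
inliers = OneClassSVM(nu=0.05).fit(X).predict(X) == 1
X, y = X[inliers], y[inliers]

# Step 2: Principal Feature Analysis - cluster the PCA loading vectors of the
# metrics and keep the metric closest to each cluster centre.
q = 5  # one representative metric per quality axis
loadings = PCA(n_components=q).fit(X).components_.T  # one row per metric
km = KMeans(n_clusters=q, n_init=10, random_state=0).fit(loadings)
selected = [int(np.argmin(np.linalg.norm(loadings - c, axis=1)))
            for c in km.cluster_centers_]

# Step 3: regress the quality score on the selected metrics with a neural network.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:, selected], y)
scores = model.predict(X[:5, selected])  # estimated quality of five components
```

On real data, each selected column would correspond to one named static metric, so the representative of each cluster doubles as an interpretable label for its quality axis.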

Keywords

Code quality · Static analysis metrics · User-perceived quality · Principal Feature Analysis


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Valasia Dimaridou (1)
  • Alexandros-Charalampos Kyprianidis (1)
  • Michail Papamichail (1)
  • Themistoklis Diamantopoulos (1)
  • Andreas Symeonidis (1)

  1. Electrical and Computer Engineering Department, Aristotle University of Thessaloniki, Thessaloniki, Greece