Empirical Analysis of Static Code Metrics for Predicting Risk Scores in Android Applications

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 753)

Abstract

Recently, with the aim of helping developers reduce the effort needed to build highly secure software, researchers have proposed a number of vulnerable source code prediction models built on different kinds of features. Identifying security vulnerabilities and differentiating vulnerable code from non-vulnerable code is not an easy task; commonly, security vulnerabilities remain dormant until they are exploited. Software metrics have been widely used to predict and indicate several quality characteristics of software, but the question at hand is whether they can distinguish vulnerable code from non-vulnerable code. In this work, we conduct a study on static code metrics, their interdependency, and their relationship with security vulnerabilities in Android applications. The aim of the study is to understand: (i) the correlation between static software metrics; (ii) the ability of these metrics to predict security vulnerabilities; and (iii) which metrics are the most informative and discriminative for identifying vulnerable units of code.
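The three study questions map onto a simple analysis pipeline: measure metric interdependency, test predictive ability, and rank metrics by discriminative power. The Python sketch below illustrates one way such an analysis could be set up; the metric names, synthetic data, and choice of model are illustrative assumptions, not the paper's actual setup or dataset.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500  # synthetic units of code (e.g., classes)

# Hypothetical static code metrics per unit of code.
df = pd.DataFrame({
    "loc": rng.integers(10, 2000, n),       # lines of code
    "cyclomatic": rng.integers(1, 50, n),   # cyclomatic complexity
    "coupling": rng.integers(0, 30, n),     # efferent coupling
    "cohesion": rng.random(n),              # LCOM-style cohesion
})

# Synthetic risk label loosely driven by complexity and coupling,
# standing in for a real vulnerable/non-vulnerable ground truth.
risk = 0.02 * df["cyclomatic"] + 0.05 * df["coupling"] + rng.normal(0, 0.5, n)
y = (risk > np.median(risk)).astype(int)

# (i) Interdependency: pairwise Spearman correlation between metrics.
print(df.corr(method="spearman").round(2))

# (ii) Predictive ability: cross-validated classification accuracy.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, df, y, cv=5).mean().round(3))

# (iii) Most informative metrics: mutual information with the label.
mi = mutual_info_classif(df, y, random_state=0)
print(pd.Series(mi, index=df.columns).sort_values(ascending=False))

On real data, the synthetic frame would be replaced by metrics extracted from Android source code and labels derived from reported vulnerabilities or static-analysis risk scores.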

Keywords

Static code metrics · Risk scores · Android · Security prediction models


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

1. College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
