An ANN Based Approach for Software Fault Prediction Using Object Oriented Metrics

  • Rajdeep Kaur
  • Sumit Sharma
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 955)


Recent years have seen an enormous increase in the demand for software products, and high-quality software is the major demand of users. Predicting faults at early stages improves software quality and, in turn, reduces development effort and cost. Fault prediction depends largely on the choice of technique and of the metrics used to predict faults; metric selection is therefore a critical part of software fault prediction. Current techniques have mostly been evaluated on traditional metric sets, so there is a need to identify different techniques and evaluate them on appropriate metrics. In this research, an artificial neural network (ANN), one of the most effective techniques for classification tasks, is used: an ANN-based software fault prediction (SFP) model is designed for classification. Prediction is performed on the basis of object-oriented metrics; five object-oriented metrics from the CK and Martin metric suites are selected as input parameters. Experiments are performed on 18 public datasets from the PROMISE repository, with the receiver operating characteristic (ROC) curve, accuracy, and mean squared error (MSE) taken as performance parameters. Results of the proposed system show that the ANN provides significant results in terms of accuracy and error rate.
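The approach described above can be sketched as a small feed-forward network that maps object-oriented metric values to a faulty/not-faulty label. The sketch below is illustrative only: the network shape, learning rate, training schedule, and the toy metric values are assumptions, not the authors' exact configuration; the five inputs are labelled with CK-suite metric names (WMC, DIT, NOC, CBO, RFC) purely for concreteness.

```python
import math
import random

random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyANN:
    """Hypothetical 5-input, one-hidden-layer network trained with plain SGD."""

    def __init__(self, n_in=5, n_hidden=4, lr=0.5):
        self.lr = lr
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        y = sigmoid(sum(w * hi for w, hi in zip(self.w2, h)) + self.b2)
        return h, y

    def train_step(self, x, target):
        h, y = self.forward(x)
        # Gradient of squared error through the sigmoid output unit.
        delta_out = (y - target) * y * (1 - y)
        for j, hj in enumerate(h):
            delta_h = delta_out * self.w2[j] * hj * (1 - hj)
            self.w2[j] -= self.lr * delta_out * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * delta_h * xi
            self.b1[j] -= self.lr * delta_h
        self.b2 -= self.lr * delta_out
        return (y - target) ** 2

# Toy rows of [WMC, DIT, NOC, CBO, RFC], scaled to [0, 1]; label 1 = faulty.
data = [
    ([0.9, 0.4, 0.1, 0.8, 0.9], 1),
    ([0.8, 0.6, 0.2, 0.9, 0.8], 1),
    ([0.7, 0.5, 0.0, 0.7, 0.9], 1),
    ([0.1, 0.2, 0.1, 0.1, 0.2], 0),
    ([0.2, 0.1, 0.0, 0.2, 0.1], 0),
    ([0.1, 0.3, 0.2, 0.1, 0.1], 0),
]

net = TinyANN()
for epoch in range(2000):
    mse = sum(net.train_step(x, t) for x, t in data) / len(data)

# Accuracy and MSE, two of the paper's performance parameters.
preds = [net.forward(x)[1] for x, _ in data]
acc = sum((p > 0.5) == bool(t) for p, (_, t) in zip(preds, data)) / len(data)
print(f"MSE={mse:.4f}  accuracy={acc:.2f}")
```

In practice the toy rows would be replaced by metric values extracted from the PROMISE datasets, and the ROC curve would be computed from the network's raw sigmoid outputs rather than the thresholded labels.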


Keywords: Fault · Software fault prediction · Machine learning · Artificial intelligence · Neural network



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Department of Computer Science Engineering, Chandigarh University, Gharuan, Mohali, India
