Effects of Mean Metric Value Over CK Metrics Distribution Towards Improved Software Fault Predictions

  • Pooja Kapoor
  • Deepak Arora
  • Ashwani Kumar
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 553)


Object-oriented software design metrics have a proven capability in assessing the overall quality of any object-oriented software system. At the design level it is highly desirable to estimate software reliability, one of the major indicators of software quality. Reliability can also be predicted by identifying useful patterns and applying that knowledge to construct the system in a more specified and reliable manner. Predicting software faults at the design level also helps reduce overall development and maintenance cost. The authors classified data on the basis of fault occurrence and observed classification accuracy of up to 97% for some algorithms. The classification is carried out using different classification techniques available in the Waikato Environment for Knowledge Analysis (WEKA). Classifiers were applied to defect datasets collected from the NASA PROMISE repository for different versions of four systems, namely jEdit, Tomcat, Xalan, and Lucene. Each defect dataset consists of six metrics of the CK metric suite as the input set and fault as the class variable. Outputs of the different classifiers are discussed using measures produced by WEKA. The authors found the Naive Bayes classifier to be among the best in terms of classification accuracy. Results show that if the overall distribution of CK metrics follows the proposed Mean Metric Value (MMV), the probability of overall fault occurrence can be predicted, provided the standard deviation of the given metric values is low.
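The experimental setup described above — six CK metrics as features, a binary fault label as the class, and a Naive Bayes classifier evaluated by accuracy — can be sketched as follows. This is a minimal illustration only: the paper uses WEKA's Naive Bayes on the NASA PROMISE datasets, while this sketch substitutes scikit-learn's `GaussianNB` and synthetic metric values; the column ordering (WMC, DIT, NOC, CBO, RFC, LCOM) and the fault-labeling rule are assumptions for demonstration, not the authors' data.

```python
# Sketch of the paper's pipeline with scikit-learn in place of WEKA.
# Synthetic CK-metric data stands in for the NASA PROMISE defect datasets.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Six CK metrics per class: WMC, DIT, NOC, CBO, RFC, LCOM (illustrative scales).
X = rng.normal(loc=[10, 2, 1, 8, 25, 40],
               scale=[4, 1, 1, 3, 10, 15],
               size=(n, 6))
# Assumed labeling rule for the sketch: classes with higher complexity
# and coupling (WMC + CBO + RFC) are marked fault-prone.
y = ((X[:, 0] + X[:, 3] + X[:, 4]) > 45).astype(int)

clf = GaussianNB()
# 10-fold cross-validation, matching WEKA's default evaluation setup.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.2f}")
```

In the same spirit, the MMV criterion could be checked by comparing each metric's column mean and standard deviation against the proposed thresholds before trusting the fault-probability estimate.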


Keywords: CK metrics · Classifier · Threshold · WEKA · Naive Bayes



Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  1. Department of Computer Science & Engineering, Amity University, Lucknow, India
  2. Area of IT & Systems, IIM, Lucknow, India
