Background: Software Quality and Reliability Prediction

Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 303)


The size and complexity of software-based products, and human dependence on them, have grown dramatically over the past decades. Software developers struggle to deliver reliable software of acceptable quality within a given budget and schedule. One measure of software quality and reliability is the number of residual faults. Researchers therefore focus on estimating the number of faults present in software, or on identifying the program modules most likely to contain faults. Many such models have been developed using various techniques; a common approach is to predict software reliability from failure data. Software reliability and quality prediction is highly desired by stakeholders, developers, managers, and end users, and detecting software faults early in development improves reliability and quality in a cost-effective way.
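The failure-data-driven approach mentioned above can be illustrated with the classical Goel-Okumoto NHPP reliability growth model, whose mean value function m(t) = a(1 - e^(-bt)) gives the expected cumulative faults detected by time t, so a - m(t) = a·e^(-bt) is the expected residual fault count. The sketch below uses illustrative, made-up parameter values (a, b); in practice these would be estimated from observed failure data.

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative faults detected by time t under the
    Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def residual_faults(t, a, b):
    """Expected faults still remaining at time t:
    a - m(t) = a * exp(-b*t)."""
    return a * math.exp(-b * t)

# Hypothetical parameters for illustration only:
# a = total expected faults, b = per-fault detection rate,
# t = testing time (e.g., weeks).
a, b = 120.0, 0.05
found = goel_okumoto_mean(10, a, b)      # faults expected found by t = 10
remaining = residual_faults(10, a, b)    # faults expected still latent
print(round(found, 1), round(remaining, 1))
```

Note that found + remaining always equals a: every fault is either already detected or still residual, which is what makes the residual count a usable quality measure.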


Keywords: Software Reliability · Fault Prediction · Software Metrics · Capability Maturity Model · Test Case Prioritization



Copyright information

© Springer India 2013

Authors and Affiliations

  1. Engineering and Manufacturing Services, Cognizant Technology Solutions, Hyderabad, India
  2. Reliability Engineering Centre, Indian Institute of Technology Kharagpur, Kharagpur, India
