Background: Software Quality and Reliability Prediction

  • Chapter in the book Early Software Reliability Prediction
  • Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 303)

Abstract

The size and complexity of software-based products, and our dependence on them, have grown dramatically over the past decades. Software developers struggle to deliver reliable software of acceptable quality within a given budget and schedule. One measure of software quality and reliability is the number of residual faults; researchers therefore focus on estimating the number of faults present in the software or on identifying the program modules most likely to contain faults. Many models have been developed using a variety of techniques, and a common approach is to predict software reliability from failure data. Software reliability and quality prediction is highly desired by stakeholders, including developers, managers, and end users, because detecting faults early in development improves reliability and quality in a cost-effective way.
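
The failure-data approach mentioned above can be made concrete with a small sketch. The following Python snippet is an illustrative assumption, not code or data from the book: it fits the Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-b t)) to hypothetical cumulative failure counts and uses the fitted parameters to estimate the residual faults.

    # Minimal sketch (assumed, not from the book): fit the Goel-Okumoto NHPP
    # reliability growth model m(t) = a * (1 - exp(-b * t)) to cumulative
    # failure data and estimate the number of residual faults.
    import numpy as np
    from scipy.optimize import curve_fit

    def mean_value(t, a, b):
        # Expected cumulative number of failures observed by time t.
        return a * (1.0 - np.exp(-b * t))

    # Hypothetical test data: weeks of testing vs. cumulative failures found.
    weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    failures = np.array([12, 21, 28, 33, 37, 40, 42, 43], dtype=float)

    # a = expected total number of faults, b = per-fault detection rate.
    (a_hat, b_hat), _ = curve_fit(mean_value, weeks, failures, p0=[50.0, 0.3])

    residual = a_hat - failures[-1]  # faults estimated to remain after week 8
    print(f"a={a_hat:.1f}, b={b_hat:.3f}, estimated residual faults={residual:.1f}")

Under this model, a is the expected total number of faults, so subtracting the faults already observed gives the residual fault count that the abstract names as a measure of quality.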

Author information

Correspondence to Ajeet Kumar Pandey.


Copyright information

© 2013 Springer India

About this chapter

Cite this chapter

Pandey, A.K., Goyal, N.K. (2013). Background: Software Quality and Reliability Prediction. In: Early Software Reliability Prediction. Studies in Fuzziness and Soft Computing, vol 303. Springer, India. https://doi.org/10.1007/978-81-322-1176-1_2

  • DOI: https://doi.org/10.1007/978-81-322-1176-1_2

  • Publisher Name: Springer, India

  • Print ISBN: 978-81-322-1175-4

  • Online ISBN: 978-81-322-1176-1

  • eBook Packages: Engineering (R0)