
Software testing for dependability assessment

Conference paper. In: Objective Software Quality (SQ 1995).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 926)

Abstract

Software testing can be aimed at two different goals: removing faults and evaluating dependability. The testing methods described in textbooks with the word “testing” in their title, or most commonly used in industry, are chiefly intended for the first goal: revealing failures, so that the faults that caused them can be located and removed. However, the final goal of a software validation process should be an objective measure of the confidence that can be placed in the software being developed. For this purpose, conventional reliability theory has been applied to software engineering, and several reliability growth models are now available that can accurately predict the future reliability of a program from the failures observed during testing. Paradoxically, the most difficult situation is that of a software product that does not fail during testing, as is normally required for safety-critical applications. In fact, quantification of ultrareliability is impossible at the current state of the art and is the subject of active research. It has recently been suggested that measures of software testability could be used to predict higher dependability than black-box testing alone can support.




Editor: Paolo Nesi


Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bertolino, A. (1995). Software testing for dependability assessment. In: Nesi, P. (eds) Objective Software Quality. SQ 1995. Lecture Notes in Computer Science, vol 926. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59449-3_36


  • DOI: https://doi.org/10.1007/3-540-59449-3_36


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59449-9

  • Online ISBN: 978-3-540-49268-9

