Confidently Assessing a Zero Probability of Software Failure
Randomly generated software tests are an established method of estimating software reliability [5, 7]. But as software applications require higher and higher reliabilities, practical difficulties with random testing become increasingly problematic. These practical problems are particularly acute in life-critical applications, where a system reliability requirement of 10−7 failures per hour translates into a probability of failure (pof) of perhaps 10−9 or less for each individual execution of the software. We refer to software with reliability requirements of this magnitude as ultra-reliable software.
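The step from a per-hour reliability requirement to a per-execution pof is a simple division by the execution rate. The rate used below is an illustrative assumption, not a figure from the paper; it is chosen only to show how a 10−7 per-hour requirement can imply a per-execution pof near 10−9:

```python
# Hedged arithmetic sketch: converting a system-level reliability
# requirement (failures per hour) into a per-execution probability
# of failure (pof). The execution rate is an assumed, illustrative
# value: it does not come from the paper.

failures_per_hour = 1e-7       # system reliability requirement
executions_per_hour = 100      # assumed rate of software executions

pof_per_execution = failures_per_hour / executions_per_hour
print(pof_per_execution)       # on the order of 1e-9
```

The faster the software executes per hour, the smaller the per-execution pof must be, which is why ultra-reliable requirements become so hard to demonstrate by testing alone.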
This paper presents a method for assessing the confidence that the software contains no faults, given that both software testing and software testability analysis have been performed. The method assumes that testing of the current version has produced no failures and that testing has not been exhaustive. In previous publications, we termed this method of combining testability and testing to assess confidence in correctness the “Squeeze Play” and “Reliability Amplification” [15, 13]; however, we have not formally developed the mathematical foundation for quantifying a confidence that the software is correct. We do so in this paper.
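The intuition behind combining testability and testing can be sketched as follows. This is a minimal, hedged illustration of the general idea, not the paper's formal development: it assumes testability analysis yields a lower bound θ on the probability that a fault, if present, causes a failure on a randomly selected input, so that t consecutive successful random tests would occur with probability at most (1 − θ)^t if a fault existed:

```python
def confidence_no_fault(testability, num_tests):
    """Informal sketch of the testing/testability "squeeze play".

    testability: assumed lower bound on the probability that a fault,
        if one exists, is revealed by a single random test.
    num_tests: number of random tests executed, all successful.

    Returns a rough confidence that the software contains no fault:
    1 minus the probability that a fault survived all tests.
    """
    if not (0.0 < testability <= 1.0):
        raise ValueError("testability must be in (0, 1]")
    # Probability that num_tests random tests all succeed even
    # though a fault is present:
    p_survive = (1.0 - testability) ** num_tests
    return 1.0 - p_survive
```

For example, with an assumed testability of 0.01, a thousand successful random tests already push this informal confidence above 0.9999, whereas the same test count says far less when testability is tiny; this is the "squeeze" between testability and testing.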
Keywords: Random Testing · Software Reliability · True Probability · Software Fault · Input Distribution
- R. Butler and G. Finelli. The infeasibility of experimental quantification of life-critical software reliability. Proceedings of SIGSOFT '91: Software for Critical Systems, New Orleans, LA, December 4–6, 1991, 66–76.
- D. R. Miller. Making statistical inferences about software reliability. NASA Contractor Report 4197, December 1988.
- T. A. Thayer, M. Lipow, and E. C. Nelson. Software Reliability (TRW Series of Software Technology, Vol. 2). New York: North-Holland, 1978.
- J. Voas, L. Morell, and K. Miller. Predicting where faults can hide from testing. IEEE Software, March 1991, 41–48.
- L. J. Morell. Theoretical insights into fault-based testing. Proceedings of the Second Workshop on Software Testing, Validation, and Analysis, July 1988, 45–62.
- J. Voas and K. Miller. The revealing power of a test case. Journal of Software Testing, Verification, and Reliability 2(1), 1992.
- J. Voas and K. Miller. PA: A dynamic method for debugging certain classes of software faults. To appear in Software Quality Journal, 1993.
- J. Voas and K. Miller. Improving the software development process using testability research. Proceedings of the 3rd International Symposium on Software Reliability Engineering, Research Triangle Park, NC, October 1992.
- R. Hamlet and J. Voas. Faults on its sleeve: Amplifying software reliability testing. Proceedings of the International Symposium on Software Testing and Analysis, June 28–30, 1993.
- W. Hoeffding. Probability inequalities for sums of bounded random variables. American Statistical Association Journal, March 1963, 13–30.
- J. Voas, K. Miller, and J. Payne. PISCES: A tool for predicting software testability. Proceedings of the 2nd Symposium on Assessment of Quality Software Development Tools, IEEE Computer Society, May 1992.