Software Validation by Means of Statistical Testing: Retrospect and Future Direction

  • Pascale Thévenod-Fosse
Part of the Dependable Computing and Fault-Tolerant Systems book series (DEPENDABLECOMP, volume 4)

Abstract

Statistical testing is a practical approach to software validation that involves both fault removal and fault forecasting. It consists of exercising a program with test samples randomly selected according to a defined probability distribution over the input data. The first part of the paper gives a brief overview of the current state of research on statistical testing. A comparison of the strengths and weaknesses of statistical testing with those of deterministic testing then brings out the complementary, rather than competing, features of these two methods of generating test data. Accordingly, a validation strategy organized in three steps is proposed, mixing statistical and deterministic test data: the first two steps aim at revealing faults, and the third provides an assessment of operational reliability. Future work to support the strategy is outlined.
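
As an illustration of the sampling mechanism the abstract describes, the following is a minimal Python sketch of a statistical test run: inputs are drawn at random according to a defined probability distribution, each outcome is checked by an oracle, and an all-pass run yields an upper confidence bound on the per-execution failure probability (the standard sampling bound used in probable-correctness arguments). The program under test, the oracle, and the uniform input profile are illustrative assumptions, not taken from the paper.

```python
import random

def statistical_test(program, oracle, sample_input, n_tests, alpha=0.01):
    """Execute n_tests randomly drawn test cases.

    Returns a revealed failure, or (if every test passes) the bound
        theta <= 1 - alpha**(1/n_tests)
    on the per-execution failure probability theta, which holds with
    confidence 1 - alpha.
    """
    for _ in range(n_tests):
        x = sample_input()               # random draw from the input distribution
        if not oracle(x, program(x)):    # fault revealed: actual != expected
            return "failure revealed", x
    return "no failure observed", 1.0 - alpha ** (1.0 / n_tests)

# Illustrative stand-ins (assumptions, not from the paper):
program = lambda x: x * x                           # program under test
oracle = lambda x, y: y == x * x                    # expected-behaviour check
sample_input = lambda: random.randint(-1000, 1000)  # uniform operational profile

verdict, info = statistical_test(program, oracle, sample_input, n_tests=10_000)
print(verdict, info)
```

With 10,000 passing tests and alpha = 0.01, the bound works out to roughly 4.6 × 10⁻⁴ per execution, which illustrates why the strategy reserves reliability assessment for a dedicated third step: useful bounds require large samples drawn from the operational input profile.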

Keywords

Failure Probability, Software Reliability, Input Domain, Partition Testing, Software Validation

Copyright information

© Springer-Verlag/Wien 1991

Authors and Affiliations

  • Pascale Thévenod-Fosse
  1. Laboratoire d’Automatique et d’Analyse des Systèmes du C.N.R.S., Toulouse Cedex, France
