
Software Validation by Means of Statistical Testing: Retrospect and Future Direction

Chapter in: Dependable Computing for Critical Applications

Part of the book series: Dependable Computing and Fault-Tolerant Systems (volume 4)

Abstract

Statistical testing is a practical approach to software validation that addresses both fault removal and fault forecasting. It consists of exercising a program with test inputs drawn at random according to a defined probability distribution over the input domain. The first part of the paper gives a brief overview of the current state of research in statistical testing. A comparison of the strengths and weaknesses of statistical testing with those of deterministic testing then brings out the complementary, rather than competing, character of these two methods of generating test data. On this basis, a three-step validation strategy is proposed that mixes statistical and deterministic test data: the first two steps aim at revealing faults, and the third provides an assessment of operational reliability. Future work to support the strategy is outlined.
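As an illustration only (the chapter itself gives no code), the following minimal Python sketch shows the idea behind statistical testing as described in the abstract: test inputs are sampled from an assumed operational profile, outputs are checked against an oracle, and a standard binomial argument relates the number of failure-free tests to a failure-probability target. The names `statistical_test` and `required_tests`, the toy program, and the uniform profile are all hypothetical, not the author's own procedure.

```python
import math
import random

def statistical_test(program, oracle, draw_input, n_tests, seed=0):
    """Exercise `program` with inputs drawn at random from an assumed
    operational profile and compare each outcome against `oracle`.
    Returns the list of inputs on which a failure was observed."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_tests):
        x = draw_input(rng)            # one input sampled from the profile
        if program(x) != oracle(x):    # any discrepancy counts as a failure
            failures.append(x)
    return failures

def required_tests(theta, confidence):
    """Number of failure-free random tests needed to claim that the
    per-execution failure probability is below `theta` with the given
    confidence (standard binomial bound, not the paper's own formula)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - theta))

# Hypothetical toy example: implementation under test, trusted oracle,
# and a uniform operational profile over [-1e6, 1e6].
program = lambda x: abs(x)
oracle = lambda x: x if x >= 0 else -x
profile = lambda rng: rng.uniform(-1e6, 1e6)

n = required_tests(theta=1e-3, confidence=0.99)   # roughly 4600 tests
print(n, statistical_test(program, oracle, profile, n))
```

In this reading, the fault-revealing steps of the strategy correspond to running such randomly generated tests (possibly mixed with deterministic cases) and diagnosing any failures, while the final step reuses failure-free statistical test runs, drawn from the operational profile, to support a reliability claim of the kind computed by `required_tests`.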




Copyright information

© 1991 Springer-Verlag/Wien

About this chapter

Cite this chapter

Thévenod-Fosse, P. (1991). Software Validation by Means of Statistical Testing: Retrospect and Future Direction. In: Avižienis, A., Laprie, JC. (eds) Dependable Computing for Critical Applications. Dependable Computing and Fault-Tolerant Systems, vol 4. Springer, Vienna. https://doi.org/10.1007/978-3-7091-9123-1_2


  • DOI: https://doi.org/10.1007/978-3-7091-9123-1_2

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-7091-9125-5

  • Online ISBN: 978-3-7091-9123-1
