
Testing Programs to Detect Malicious Faults

  • Richard Hamlet
Part of the Dependable Computing and Fault-Tolerant Systems book series (DEPENDABLECOMP, volume 6)

Abstract

Program testing has traditionally been of two kinds: fault finding (debugging) and establishing operational reliability (confidence). We investigate whether traditional methods can determine the dependability of a program under two assumptions: (1) the only sources of failure are inadvertent mistakes in design, coding, etc., and the program developers cooperate in trying to eliminate such faults; (2) the source of failure is sabotage, that is, malicious code inserted in the program and cleverly concealed. Paradoxically, it appears to be easier to detect sabotage than subtle unintentional mistakes, at least in the off-line situation where the sabotage occurs during development and must be detected before the program is released. Furthermore, the very situations that can make traditional testing a nightmare, for example real-time constraints, may actually help a tester trying to detect sabotage.
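To make the abstract's claim concrete, here is a minimal sketch, not taken from the chapter itself, of how ordinary statement coverage can point at concealed code: a Trojan horse must occupy real statements, and statements that no functional test ever executes are natural candidates for inspection. The function compute_pay, its hidden trigger, and the tiny test suite below are all hypothetical; the sketch relies only on Python's standard sys.settrace hook and the co_lines() method of code objects (Python 3.10 or later).

# Illustrative sketch only: flag statements that a test suite never executes.
# Concealed malicious code has to live somewhere in the program text, so
# statements no functional test reaches are candidates for manual inspection.
import sys

def compute_pay(hours, rate):
    if hours == 1234 and rate == 99:   # hidden trigger (hypothetical sabotage)
        return 1_000_000               # illicit action, unreachable by normal tests
    overtime = max(0, hours - 40)
    return rate * (hours - overtime) + 1.5 * rate * overtime

executed_lines = set()

def tracer(frame, event, arg):
    # Record every source line executed inside compute_pay.
    if event == "line" and frame.f_code.co_name == "compute_pay":
        executed_lines.add(frame.f_lineno)
    return tracer

# A small "functional" test suite that never supplies the trigger values.
tests = [(40, 10), (45, 10), (0, 20)]
sys.settrace(tracer)
for hours, rate in tests:
    compute_pay(hours, rate)
sys.settrace(None)

# Any line of compute_pay that the suite never executed is reported as suspect.
first = compute_pay.__code__.co_firstlineno
all_lines = {ln for (_, _, ln) in compute_pay.__code__.co_lines() if ln is not None}
suspect = sorted(all_lines - executed_lines - {first})
print("lines never executed by the test suite:", suspect)

Running the sketch reports the line holding the illicit return, since the benign test cases exercise every other statement; in practice such a report only directs a human reviewer's attention, and a saboteur aware of the coverage criterion could try to hide the trigger within already-executed statements.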

Keywords

Fault Tree · Trojan Horse · Malicious Code · Path Testing · Illicit Action

Copyright information

© Springer-Verlag/Wien 1992

Authors and Affiliations

  • Richard Hamlet
    1. Computer Science Department, Portland State University, Portland, USA
