The formal idea of reliability of a set of test data for a program is explored. Although this idea captures something of what testing should accomplish in practice, it has two defects: in general it is impossible to tell whether a given test is reliable; and, if reliability is attained, the test points are linked to errors no longer present, not to the corrected program. Should the program be changed, these tests are intuitively worthless. Two variations of the idea are suggested to overcome these defects:
1. Augment program specifications so that the equivalence problem for programs meeting a given specification is solvable by testing;
2. Restrict the class of errors a test must expose so that within this class testing can distinguish the correct programs.
In both variations a new idea arises naturally. Test data "determines" the programs for which it is reliable (in the variations defined): given the data, there is an algorithm for deciding whether programs satisfying it have unique behavior. Any variation of the reliability idea which can be effectively recognized can be used to determine programs in this way.
A testing methodology is proposed based on any effective reliability notion. A human being, using noneffective methods, attempts to satisfy a mechanical judgement of reliability. If the person succeeds, the resulting test can be attached to the program, where it is useful when the program is changed. Confidence in the program/test combination is based on the knowledge that no program can satisfy the test yet differ from the given one. That is, the test itself is an unambiguous specification of the program.
It is proposed that testing theory seek out modified reliability ideas with this effective, determining property, and that noneffective ideas of program correctness may find their practical place in aiding people to discover the necessary tests.
Hamlet, R. Reliability theory of program testing. Acta Informatica 16, 31–43 (1981). https://doi.org/10.1007/BF00289588
- Test Data
- Computational Mathematics
- System Organization
- Reliability Idea
- Program Specification