Using Examples, Case Analysis, and Dependency Graphs in Theorem Proving
The use of examples seems to be fundamental to human methods of proving and understanding theorems. Whether the examples are drawn on paper or simply visualized, they seem to be more common in human theorem proving and understanding than in textbook proofs using the syntactic transformations of formal logic. What is the significance of this use of examples, and how can it be exploited to build better theorem provers and better interaction between theorem provers and human users? We present a theorem proving strategy which seems to mimic the human tendency to use examples, and which has other features in common with human theorem proving methods. This strategy may be useful in itself, as well as giving insight into human thought processes. The strategy proceeds by finding relevant facts, connecting them by causal relations, and abstracting the causal dependencies to obtain a proof. The strategy can benefit from examining several examples to observe common features in their causal dependencies before abstracting to obtain a general proof. Also, the strategy often needs to perform a case analysis to obtain a proof, with a different example used for each case and a systematic method of linking the proofs of the cases into a general proof. The method distinguishes between positive and negative literals in a nontrivial way, similar to the different perceptions people have of the logically equivalent statements A ⊃ B and (¬ B) ⊃ (¬ A). This work builds on earlier work of the author on abstraction strategies and problem reduction methods, and also on recent artificial intelligence work on annotating facts with explanatory information [6,7,9]. This method differs from the abstraction strategy in that it is possible to choose a different abstraction for each case in a case analysis proof; there are other differences as well. For other recent work concerning the use of examples in theorem proving, see  and .
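The core loop described above (derive facts in a concrete example while recording which premises cause each derived fact, then read off the causal dependencies that support the goal) can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual algorithm: the function names `derive` and `causal_chain`, the rule encoding, and the use of propositional ground facts are all assumptions made for illustration.

```python
# Hypothetical sketch: forward-chain over ground facts from one example,
# record each derived fact's premises (its "causes"), then abstract the
# dependency graph to the causal chain supporting the goal.

def derive(facts, rules, goal):
    """Derive new facts, recording for each one the premises it came from.
    `rules` is a list of (premises, conclusion) pairs over ground facts."""
    causes = {f: [] for f in facts}          # given facts have no causes
    changed = True
    while changed and goal not in causes:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in causes and all(p in causes for p in premises):
                causes[conclusion] = list(premises)
                changed = True
    return causes

def causal_chain(causes, goal):
    """Walk the dependency graph backward from the goal, collecting
    only the facts the goal actually depends on."""
    if goal not in causes:
        return None                          # goal was never derived
    seen, stack = [], [goal]
    while stack:
        f = stack.pop()
        if f not in seen:
            seen.append(f)
            stack.extend(causes[f])
    return seen

# Toy example: p and q are given; rules encode p ∧ q → r and r → s.
rules = [(("p", "q"), "r"), (("r",), "s")]
causes = derive({"p", "q"}, rules, "s")
print(causal_chain(causes, "s"))  # → ['s', 'r', 'q', 'p']
```

The point of the sketch is that irrelevant facts never enter the chain: only facts on a causal path to the goal survive the backward walk, which is what makes the dependency graph a useful skeleton to abstract into a general proof.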
Keywords: Goal Node, Causal Relation, Theorem Proving, Dependency Graph, Causal Chain
- 1. Ballantyne, A., and Bledsoe, W., On generating and using examples in proof discovery, Machine Intelligence 10 (Harwood, Chichester, 1982) 3–39.
- 2. Bledsoe, W., Using examples to generate instantiations for set variables, Proc. IJCAI (1983) 892–901.
- 4. Chang, C., The decomposition principle for theorem proving systems, Proc. Tenth Annual Allerton Conference on Circuit and System Theory, University of Illinois (1972) 20–28.
- 6. Charniak, E., Riesbeck, C., and McDermott, D., Data dependencies, in Artificial Intelligence Programming (Lawrence Erlbaum Associates, Hillsdale, N.J., 1980) 193–226.
- 8. Fay, M., First-order unification in an equational theory, Proceedings 4th Workshop on Automated Deduction, Austin, Texas (1979) 161–167.
- 9. Fikes, R., Deductive retrieval mechanisms for state description models, Proceedings of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, Georgia, USSR (1975) 99–106.
- 10. Gelernter, H., Realization of a geometry theorem-proving machine, Proc. IFIP Congr. (1959) 273–282.
- 13. Huet, G., and Oppen, D., Equations and rewrite rules: a survey, in Formal Languages: Perspectives and Open Problems (R. Book, ed.), Academic Press, New York, 1980.
- 14. Lankford, D., Canonical algebraic simplification in computational logic, Memo ATP-25, Automatic Theorem Proving Project, University of Texas, Austin, TX, 1975.
- 16. Plaisted, D., An efficient relevance criterion for mechanical theorem proving, Proceedings of the First Annual National Conference on Artificial Intelligence, Stanford University, August, 1980.
- 19. Reiter, R., A semantically guided deductive system for automatic theorem proving, Proc. 3rd IJCAI (1973) 41–46.