Counter Examples and Explanations

  • Christel Vrain
Conference paper


In the field of learning, counter-examples are very important to reduce the number of possible generalizations or to simplify them.

An algorithm of generalization, AGAPE, based on the principle of structural matching, has been implemented. Examples are progressively transformed until they have the same form, i.e. until they are instantiations of a common formula, with different links between variables. That formula, together with the links between variables common to all examples, gives the generalization of the examples. From this algorithm, a method has been developed to take a counter-example into account. The algorithm of generalization provides a generalization of the examples, and we then compare the description of the generalization with the description of the counter-example. We need knowledge about the predicates present in the descriptions, knowledge given in the form of taxonomies, tangled or untangled. If a predicate in the generalization does not appear in the description of the counter-example, two cases are possible:
  • either it can be deduced from the atoms present in the counter-example, in which case the predicate is not an explanation of the counter-example,

  • or it cannot be deduced, and so it explains the counter-example.
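The two-case test above can be sketched as follows. This is a minimal illustration, not the original AGAPE implementation: the taxonomy encoding (a dictionary mapping each predicate to its more general parents), the function names, and all predicate names are assumptions made for the example.

```python
# Sketch of the counter-example explanation test, assuming a simple
# propositional view: descriptions are sets of predicate names and the
# taxonomy maps each predicate to its more general parents.

def deducible_from(pred, taxonomy):
    """All predicates that can be deduced from `pred` by climbing the
    taxonomy (including `pred` itself)."""
    seen, stack = set(), [pred]
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(taxonomy.get(p, []))
    return seen

def explanations(generalization, counter_example, taxonomy):
    """Predicates of the generalization that neither appear in the
    counter-example nor can be deduced from its atoms: by the two
    cases above, these explain the counter-example."""
    deducible = set()
    for atom in counter_example:
        deducible |= deducible_from(atom, taxonomy)
    return {p for p in generalization if p not in deducible}

# Hypothetical untangled taxonomy: a poodle is a dog, a dog is an animal.
taxonomy = {"poodle": ["dog"], "dog": ["animal"], "cat": ["animal"]}

# 'dog' cannot be deduced from the counter-example's atom 'cat',
# so it explains the counter-example; 'small' can, so it does not.
print(explanations({"dog", "small"}, {"cat", "small"}, taxonomy))
# → {'dog'}
```

A tangled taxonomy (a predicate with several parents) is handled by the same traversal, since each predicate simply lists all of its parents.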





Copyright information

© Springer-Verlag Berlin Heidelberg 1986

Authors and Affiliations

  • Christel Vrain, Laboratoire de Recherche en Informatique, UA 410 CNRS, Université de Paris Sud, Orsay Cedex, France
