Recurrent neural networks to approximate the semantics of acceptable logic programs

  • Scientific Track
  • Conference paper
Advanced Topics in Artificial Intelligence (AI 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1502)

Abstract

In [9] we have shown how to construct a 3-layer recurrent neural network (RNN) that computes the iteration of the meaning function T_P of a given propositional logic program, which corresponds to the computation of the semantics of the program.
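
To make the role of T_P concrete, the sketch below implements the immediate consequence operator of a propositional program and iterates it to a fixed point. The program encoding, the toy clauses, and the function names are illustrative assumptions, not the network construction of [9]:

```python
# Minimal sketch (illustrative, not the construction of [9]): the immediate
# consequence operator T_P of a propositional program, iterated to a fixed
# point. A clause is a pair (head, body); each body literal is a pair
# (atom, sign) with sign True for a positive and False for a negated atom.
# An interpretation is the set of atoms considered true.

def t_p(program, interp):
    """One application of T_P: collect the heads of all clauses whose
    body is true in the given interpretation."""
    return {head
            for head, body in program
            if all((atom in interp) == sign for atom, sign in body)}

def iterate_t_p(program, interp=frozenset(), max_steps=100):
    """Iterate T_P from `interp` until a fixed point (or step limit)."""
    for _ in range(max_steps):
        nxt = frozenset(t_p(program, interp))
        if nxt == interp:      # fixed point reached: a supported model
            return interp
        interp = nxt
    return interp

# Hypothetical toy program:  p <- not q.   q <- r.   r <- r.
program = [("p", [("q", False)]),
           ("q", [("r", True)]),
           ("r", [("r", True)])]
print(iterate_t_p(program))    # frozenset({'p'})
```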

In this paper we define a notion of approximation for interpretations and prove that there exists a feedforward neural network (FNN) that approximates the calculation of T_P for a given (first-order) acceptable logic program with an injective level mapping arbitrarily well. By extending the FNN with recurrent connections we obtain an RNN whose iteration approximates the fixed point of T_P.
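
The recurrent extension amounts to feeding the network's output back into its input. The following hypothetical sketch (not the paper's architecture) iterates an arbitrary real-valued map this way; when the map is a contraction, the iteration converges to the unique fixed point by Banach's fixed point theorem:

```python
# Hypothetical sketch of the recurrent extension: feed the output of a
# feedforward map back into its input. For a contraction on IR the
# iteration converges to the unique fixed point.

def recur(f, x0, steps=50):
    """Iterate x_{k+1} = f(x_k), mimicking recurrent connections."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

# Toy stand-in for a trained feedforward approximation of T_P: a
# contraction with factor 0.5 and unique fixed point 0.6.
f = lambda x: 0.5 * x + 0.3
print(recur(f, 0.0))   # ~0.6
```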

The proof exploits the fact that for acceptable logic programs, T_P is a contraction mapping on the complete metric space of interpretations for the program. Mapping this metric space into the metric space IR, the real-valued function f_P corresponding to T_P turns out to be continuous as well as a contraction, and can therefore be approximated by the indicated class of FNNs.
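
As a hedged illustration of such a mapping (the base, the atoms, and the level mapping below are assumptions for the example, not values from the paper): with an injective level mapping l from ground atoms to the natural numbers, an interpretation I can be encoded as the real number sum over A in I of B^(-l(A)), so that interpretations agreeing on all atoms of low level receive nearby encodings:

```python
# Hedged illustration (base, atoms, and level mapping are assumptions for
# this example, not taken from the paper): encode an interpretation I as
#     enc(I) = sum over A in I of B ** (-l(A))
# where l is an injective level mapping. A base B >= 3 avoids the carry
# ambiguity of binary expansions when interpretations are infinite.

B = 4  # assumed base

def encode(interp, level):
    """Map a finite interpretation (set of ground atoms) into IR."""
    return sum(B ** -level[atom] for atom in interp)

# Hypothetical injective level mapping on three ground atoms.
level = {"p": 1, "q": 2, "r": 3}

i1 = {"p", "q"}
i2 = {"p", "q", "r"}   # differs from i1 only on the level-3 atom r
i3 = {"q"}             # differs from i1 already on the level-1 atom p

print(abs(encode(i1, level) - encode(i2, level)))  # 4**-3: close
print(abs(encode(i1, level) - encode(i3, level)))  # 4**-1: far apart
```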

The authors acknowledge support from the German Academic Exchange Service (DAAD) under grant no. D/97/29570.

References

  1. K.R. Apt and M.H. van Emden. Contributions to the Theory of Logic Programming. Journal of the ACM, 29, pp. 841–862, 1982.

  2. S.-E. Bornscheuer. Generating Rational Models. In M. Maher, editor, Proceedings of the Joint International Conference and Symposium on Logic Programming (JICSLP), p. 547. MIT Press, 1996.

  3. K. L. Clark. Negation as failure. In Gallaire and Nicolas, editors, Workshop on Logic and Databases, CERT, Toulouse, France, 1977.

  4. P. Devienne, P. Lebégue, A. Parrain, J. C. Routier, and J. Würz. Smallest Horn Clause Programs. Journal of Logic Programming, 19–20, pp. 635–679, 1994.

  5. A.S. d’Avila Garcez, G. Zaverucha, and L.A.V. de Carvalho. Logic programming and inductive learning in artificial neural networks. In Ch. Herrmann, F. Reine, and A. Strohmaier, editors, Knowledge Representation in Neural Networks, pp. 33–46. Logos Verlag, Berlin, 1997.

  6. M. Fitting. Metric methods—three examples and a theorem. Journal of Logic Programming, 21(3), pp. 113–127, 1994.

  7. M. Fujita, R. Hasegawa, M. Koshimura, and H. Fujita. Model Generation Theorem Provers on a Parallel Inference Machine. In Proceedings of the International Conference on Fifth Generation Computer Systems, 1992.

  8. K.-I. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural Networks, 2, pp. 183–192, 1989.

  9. S. Hölldobler and Y. Kalinke. Towards a massively parallel computational model for logic programming. In Proceedings of the ECAI'94 Workshop on Combining Symbolic and Connectionist Processing, pp. 68–77, ECCAI, 1994.

  10. K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2, pp. 359–366, 1989.

  11. P.N. Johnson-Laird and R.M.J. Byrne. Deduction. LEA, Hove and London, 1991.

  12. J. W. Lloyd. Foundations of Logic Programming. Springer, 1987.

  13. R. Manthey and F. Bry. SATCHMO: A Theorem Prover Implemented in Prolog. In E. Lusk and R. Overbeek, editors, Proceedings of the 9th International Conference on Automated Deduction (CADE-9), LNCS 310, pp. 415–434. Springer, 1988.

  14. T. A. Plate. Distributed Representations and Nested Compositional Structure. PhD thesis, Department of Computer Science, University of Toronto, 1994.

  15. J. Slaney. SCOTT: A model-guided theorem prover. In Proceedings of the International Joint Conference on Artificial Intelligence, pp. 109–114, 1993.

  16. P. Smolensky. On the Proper Treatment of Connectionism. Behavioral and Brain Sciences, 11, pp. 1–74, 1988.

  17. A. Sperduti. Labeling RAAM. Technical Report TR-93-029, International Computer Science Institute, Berkeley, CA, 1992.

  18. S. Willard. General Topology. Addison-Wesley, 1970.

Editor information

Grigoris Antoniou, John Slaney

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hölldobler, S., Kalinke, Y., Störr, H.P. (1998). Recurrent neural networks to approximate the semantics of acceptable logic programs. In: Antoniou, G., Slaney, J. (eds) Advanced Topics in Artificial Intelligence. AI 1998. Lecture Notes in Computer Science, vol 1502. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095050

  • DOI: https://doi.org/10.1007/BFb0095050

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65138-3

  • Online ISBN: 978-3-540-49561-1
