Refining first order theories with neural networks

  • Conference paper, Communications Session 1A: Logic for AI
  • Published in: Foundations of Intelligent Systems (ISMIS 1997)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1325)

Abstract

This paper presents the experimental evaluation of a neural network architecture that can manage structured data and refine knowledge bases expressed in a first order logic language.

This framework is well suited to classification problems in which concept descriptions depend upon numerical features of the data and the data have variable size. The main goal of the neural architecture is to refine the numerical part of the knowledge base without changing its structure.
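The abstract does not give the architecture's details, but the core idea — keep the logical structure of a rule fixed and tune only its numerical constants by gradient descent — can be sketched in a few lines. The sketch below is an illustration under assumptions of our own (a sigmoid surrogate `soft_gt` for a crisp threshold test, a product t-norm for conjunction, squared-error loss); it is not the paper's exact network.

```python
import math

def soft_gt(x, theta, beta=4.0):
    """Differentiable surrogate for the crisp condition x > theta."""
    return 1.0 / (1.0 + math.exp(-beta * (x - theta)))

def rule(x1, x2, theta1, theta2):
    # A fixed rule structure: conjunction of two numerical tests,
    # combined with a product t-norm. Only theta1, theta2 are tunable.
    return soft_gt(x1, theta1) * soft_gt(x2, theta2)

def refine(data, theta1, theta2, lr=0.1, epochs=200, beta=4.0):
    """Gradient descent on squared error, updating only the thresholds.

    data: list of ((x1, x2), label) pairs with labels in {0.0, 1.0}.
    """
    for _ in range(epochs):
        g1 = g2 = 0.0
        for (x1, x2), y in data:
            a1 = soft_gt(x1, theta1, beta)
            a2 = soft_gt(x2, theta2, beta)
            err = a1 * a2 - y
            # d soft_gt(x, theta) / d theta = -beta * a * (1 - a)
            g1 += 2 * err * a2 * (-beta * a1 * (1 - a1))
            g2 += 2 * err * a1 * (-beta * a2 * (1 - a2))
        theta1 -= lr * g1 / len(data)
        theta2 -= lr * g2 / len(data)
    return theta1, theta2
```

After refinement the rule still reads as the same first-order conjunction; only its thresholds have moved to fit the training data.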

Several experiments are described in the paper in order to evaluate the potential benefits with respect to more classical architectures based on the propositional framework. In the first case, a classification theory was handcrafted manually and then refined automatically; in the second, it was acquired automatically by a symbolic relational learning system able to deal with numerical features. An extensive comparison has also been carried out with the most popular propositional learners, showing that the new network architecture converges quickly and generalizes better than all of them.



Editor information

Zbigniew W. Raś, Andrzej Skowron

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Botta, M., Giordana, A., Piola, R. (1997). Refining first order theories with neural networks. In: Raś, Z.W., Skowron, A. (eds) Foundations of Intelligent Systems. ISMIS 1997. Lecture Notes in Computer Science, vol 1325. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63614-5_8

  • DOI: https://doi.org/10.1007/3-540-63614-5_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63614-4

  • Online ISBN: 978-3-540-69612-4
