Towards a Hybrid Model of First-Order Theory Refinement

  • Conference paper
Hybrid Neural Systems (Hybrid Neural Systems 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1778)

Abstract

Representing and learning a first-order theory with neural networks remains an open problem. We define a propositional theory refinement system that uses min and max as its activation functions, and we extend it to the first-order case. In this extension, the basic computational element of the network is a node capable of performing complex symbolic processing. We also discuss issues related to learning in this hybrid model.
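The abstract's core idea, using min and max as activation functions so that network nodes compute rule conjunctions and disjunctions, can be illustrated with a minimal propositional sketch. The example theory, rule names, and truth values below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a propositional rule network in which AND-nodes use
# min and OR-nodes use max as activation functions. The theory encoded
# here (flies :- bird, not penguin ; bird :- sparrow ; bird :- eagle)
# is a made-up example, not the paper's.

def and_node(inputs):
    # Conjunction: the activation is the minimum of the inputs.
    return min(inputs)

def or_node(inputs):
    # Disjunction: the activation is the maximum of the inputs.
    return max(inputs)

def forward(truth):
    # truth maps proposition names to values in [0, 1];
    # negation is modeled as 1 - value.
    bird = or_node([truth["sparrow"], truth["eagle"]])
    flies = and_node([bird, 1.0 - truth["penguin"]])
    return flies

print(forward({"sparrow": 0.9, "eagle": 0.1, "penguin": 0.0}))  # 0.9
```

Because min and max are piecewise-linear, such a network remains differentiable almost everywhere, which is what makes gradient-based theory refinement over the rule structure conceivable.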




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hallack, N.A., Zaverucha, G., Barbosa, V.C. (2000). Towards a Hybrid Model of First-Order Theory Refinement. In: Wermter, S., Sun, R. (eds) Hybrid Neural Systems. Hybrid Neural Systems 1998. Lecture Notes in Computer Science(), vol 1778. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10719871_7

  • DOI: https://doi.org/10.1007/10719871_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67305-7

  • Online ISBN: 978-3-540-46417-4
