Cognitive Algorithms and Systems: Reasoning and Knowledge Representation

Abstract

This chapter reviews recent advances in computational cognitive reasoning and their underlying algorithmic foundations. It summarises the neural-symbolic approach to cognition and computation. Neural-symbolic systems integrate two fundamental phenomena of intelligent behaviour: reasoning and the ability to learn from experience. The chapter illustrates how several expressive forms of symbolic knowledge can be represented, learned and computed by neural networks. The goal is to provide computational models with integrated reasoning capabilities, in which neural networks supply the machinery for cognitive reasoning and learning, while symbolic logic supplies explanations of the neural models, facilitating the necessary interaction with the world and with other systems.
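
To make the translation from symbolic knowledge to networks concrete, here is a minimal Python sketch in the spirit of the translation algorithms referenced in the notes and bibliography (e.g. d’Avila Garcez and Zaverucha 1999): a single propositional rule is encoded as a small threshold network whose hidden neuron fires exactly when the rule body is satisfied. The weight and threshold values, the function names and the use of step activations are illustrative simplifications, not the chapter's algorithm; the systems discussed in the chapter use semi-linear activations so that the resulting networks remain trainable by backpropagation.

```python
# A minimal sketch (not the chapter's translation algorithm) of encoding the
# propositional rule  c <- a, not b  ("c if a and not b") as a tiny network.
# Positive antecedents get positive weights, negated ones negative weights,
# and the hidden neuron's threshold is set so that it fires exactly when the
# body of the rule is satisfied.

W = 1.0  # illustrative base weight


def step(x, threshold):
    """Binary threshold activation: 1 if the weighted input reaches the threshold."""
    return 1 if x >= threshold else 0


def rule_network(a, b):
    """One hidden neuron per rule, one output neuron per rule head."""
    # Hidden neuron fires only when a = 1 and b = 0: its input W*a - W*b
    # reaches the threshold W in exactly that case.
    hidden = step(W * a - W * b, threshold=W)
    # The output neuron for c fires whenever some rule with head c fires.
    return step(W * hidden, threshold=W)


if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b} -> c={rule_network(a, b)}")
    # Only a=1, b=0 yields c=1, matching the rule's semantics.
```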


Notes

  1. It is worth noting that Hinton et al. (2006) is concerned mainly with unsupervised learning, while SVMs are supervised learning systems that require a certain amount of data preprocessing.

  2. Each network in the ensemble can be responsible for a specific task or logic, with the overall model being potentially very expressive. The methodology that we use to combine networks is that of fibring (Gabbay 1999), as discussed in some detail later; a minimal illustration is sketched after these notes.

  3. McCarthy (1988) identifies four knowledge representation problems for neural networks: the problem of elaboration tolerance (the ability of a representation to be elaborated to take additional phenomena into account), the propositional fixation of neural networks (based on the assumption that neural networks cannot represent relational knowledge), the problem of how to make use of any available background knowledge as part of learning, and the problem of how to obtain domain descriptions from trained networks as opposed to mere discriminations. Neural-symbolic integration can address each of the above challenges. In a nutshell, the problem of elaboration tolerance can be resolved by having fibred networks that form a modular hierarchy, similar to the idea of using self-organising maps (Gärdenfors 2000; Haykin 1999) for language processing, where the lower levels of abstraction are used for the formation of concepts that are then used at the higher levels of the hierarchy. CML (d’Avila Garcez et al. 2007b) deals with the so-called propositional fixation of neural networks by allowing them to encode relational knowledge in the form of accessibility relations; a number of other formalisms have also tackled this issue, as early as 1990 (Bader et al. 2005, 2007; Hölldobler 1993; Shastri and Ajjanagadde 1990), the key question being how to obtain simple representations that promote effective learning. Learning with background knowledge can be achieved by the usual translation of symbolic rules into neural networks. Domain descriptions can be obtained by rule extraction; a number of such translation and extraction algorithms have been proposed (e.g. Bologna 2004; d’Avila Garcez et al. 2001; d’Avila Garcez and Zaverucha 1999; Lozowski and Zurada 2000; Hitzler et al. 2004; Jacobsson 2005; Nunez et al. 2006; Setiono 1997; Sun 1995).

  4. We depart from distributed representations for two main reasons: localist representations can be associated with highly effective learning algorithms such as backpropagation, and in our view localist networks are at an appropriate level of abstraction for symbolic knowledge representation. As advocated in Page (2000), we believe one should be able to achieve the goals of distributed representations by properly changing the levels of abstraction of localist networks, while some of the desirable properties of localist models cannot be exhibited by fully distributed ones.

  5. We follow the muddy children problem description presented in Fagin et al. (1995). We must also assume that all the agents involved in the situation are truthful and intelligent.

  6. The representation of common knowledge in neural networks raises some interesting questions. In CML, common knowledge is represented implicitly by connecting neurons appropriately as reasoning progresses (e.g. as it becomes known at round two that at least two children must be muddy); a worked simulation of these rounds is sketched after these notes. Representing common knowledge explicitly at the object level would require the use of neurons that are activated when “everybody knows” something (implementing, in a finite domain, the common knowledge axioms of Fagin et al. 1995), but this would complicate the formalisation of the puzzle given in this chapter. This explicit form of representation and its ramifications are nevertheless worth investigating and can be treated in their own right in future work.
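
As a concrete companion to notes 5 and 6, the following Python sketch simulates the rounds of the muddy children puzzle over an explicit set of possible worlds. It is a plain symbolic simulation meant only to make the round-by-round reasoning tangible; it is not the CML network encoding discussed in the chapter, and all function names are ours.

```python
from itertools import product


def knows(worlds, w, i):
    """In model `worlds` and world `w`, child i knows its own state iff every world
    it considers possible (identical to w except possibly at position i) agrees at i."""
    vals = {v[i] for v in worlds
            if all(v[j] == w[j] for j in range(len(w)) if j != i)}
    return len(vals) == 1


def muddy_children(actual):
    """Return (round, children who announce 'I know') for the actual situation,
    given as a tuple of booleans with True marking a muddy child."""
    n = len(actual)
    # The father's announcement: at least one child is muddy.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    for rnd in range(1, n + 1):
        announcers = [i for i in range(n) if knows(worlds, actual, i)]
        if announcers:
            return rnd, announcers
        # Public silence ("nobody knows yet") eliminates every world in which some
        # child would already have known; after round t it is common knowledge
        # that more than t children are muddy.
        worlds = [w for w in worlds
                  if not any(knows(worlds, w, i) for i in range(n))]
    return None, []


if __name__ == "__main__":
    # Three children, two of them muddy: the muddy pair announces at round 2.
    print(muddy_children((True, True, False)))   # expected: (2, [0, 1])
```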

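The fibring methodology mentioned in note 2 can likewise be illustrated informally. In the sketch below, the output of one network is passed through a fibring function that modulates the weights of a second network before that network computes its own output. The network shapes, the multiplicative fibring function and the sigmoid activation are illustrative assumptions rather than the formal definitions of Gabbay (1999) or d’Avila Garcez and Gabbay (2004).

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def simple_net(x, w, b):
    """A one-layer network: sigmoid(w . x + b)."""
    return sigmoid(np.dot(w, x) + b)


def fibred_output(x_a, x_b, w_a, b_a, w_b, b_b):
    """Network A's output modulates network B's weights before B computes its output."""
    out_a = simple_net(x_a, w_a, b_a)      # output of network A, in (0, 1)
    w_b_modulated = w_b * out_a            # fibring function: scale B's weights by A's output
    return simple_net(x_b, w_b_modulated, b_b)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_a, w_b = rng.normal(size=3), rng.normal(size=2)
    x_a, x_b = np.array([1.0, 0.0, 1.0]), np.array([0.5, -0.5])
    print(fibred_output(x_a, x_b, w_a, 0.0, w_b, 0.0))
```
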
References

  • F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, Cambridge, 2003.

  • S. Bader, A. d’Avila Garcez, and P. Hitzler. Computing first-order logic programs by fibring artificial neural networks. In Proceedings of the AAAI International FLAIRS Conference, pages 314–319, 2005.

  • S. Bader, P. Hitzler, S. Hölldobler, and A. Witzel. A fully connectionist model generator for covered first-order logic programs. In Proceedings of the International Joint Conference on Artificial Intelligence IJCAI-07, pages 666–671, Hyderabad, India, 2007. AAAI.

  • B. Bennett. Spatial reasoning with propositional logics. In Proceedings of the Fourth International Conference on Principles of Knowledge Representation and Reasoning KR-94, pages 51–62, 1994.

  • P. Blackburn, J. van Benthem, and F. Wolter, editors. Handbook of Modal Logic. Studies in Logic and Practical Reasoning. Elsevier, Amsterdam, 2006.

  • G. Bologna. Is it worth generating rules from neural network ensembles? Journal of Applied Logic, 2(3):325–348, 2004.

  • R. V. Borges, A. d’Avila Garcez, and L. Lamb. A neural-symbolic perspective on analogy. Behavioral and Brain Sciences, 31(4):379–380, 2008.

  • K. Broda, D. Gabbay, L. Lamb, and A. Russo. Labelled natural deduction for conditional logics of normality. Logic Journal of the IGPL, 10(2):123–163, 2002.

  • K. Broda, D. Gabbay, L. Lamb, and A. Russo. Compiled Labelled Deductive Systems: A Uniform Presentation of Non-classical Logics. Studies in Logic and Computation. Research Studies Press/Institute of Physics Publishing, Baldock, UK, Philadelphia, PA, 2004.

  • A. Browne and R. Sun. Connectionist inference models. Neural Networks, 14:1331–1355, 2001.

  • I. Cloete and J. Zurada, editors. Knowledge-Based Neurocomputing. MIT, Cambridge, MA, 2000.

  • A. d’Avila Garcez. Fewer epistemological challenges for connectionism. In S. B. Cooper, B. Lowe, and L. Torenvliet, editors, Proceedings of Computability in Europe, CiE 2005, volume LNCS 3526, pages 139–149, Amsterdam, The Netherlands, June 2005. Springer, Berlin.

  • A. d’Avila Garcez and D. Gabbay. Fibring neural networks. In Proceedings of 19th National Conference on Artificial Intelligence (AAAI-04), pages 342–347, San Jose, CA, 2004.

  • A. d’Avila Garcez and P. Hitzler, editors. Proceedings of IJCAI International Workshop on Neural-Symbolic Learning and Reasoning NeSy09, Pasadena, California, USA, 2009.

  • A. d’Avila Garcez and L. Lamb. Reasoning about time and knowledge in neural-symbolic learning systems. In S. Thrun, L. Saul, and B. Schoelkopf, editors, Advances in Neural Information Processing Systems 16, Proceedings of  NIPS 2003, pages 921–928. MIT, Cambridge, MA, 2004.

  • A. d’Avila Garcez and L. Lamb. Neural-symbolic systems and the case for non-classical reasoning. In S. Artëmov, H. Barringer, A. d’Avila Garcez, L. Lamb, and J. Woods, editors, We Will Show Them! Essays in Honour of Dov Gabbay, pages 469–488. College Publications, International Federation for Computational Logic, UK, 2005.

  • A. d’Avila Garcez and L. Lamb. A connectionist computational model for epistemic and temporal reasoning. Neural Computation, 18(7):1711–1738, 2006.

  • A. d’Avila Garcez and G. Zaverucha. The connectionist inductive learning and logic programming system. Applied Intelligence Journal, Special Issue on Neural Networks and Structured Knowledge, 11(1):59–77, 1999.

  • A. d’Avila Garcez, K. Broda, and D. Gabbay. Symbolic knowledge extraction from trained neural networks: A sound approach. Artificial Intelligence, 125:155–207, 2001.

  • A. d’Avila Garcez, K. Broda, and D. Gabbay. Neural-Symbolic Learning Systems: Foundations and Applications. Perspectives in Neural Computing. Springer, Berlin, 2002a.

  • A. d’Avila Garcez, L. Lamb, and D. Gabbay. A connectionist inductive learning system for modal logic programming. In Proceedings of the 9th International Conference on Neural Information Processing ICONIP’02, pages 1992–1997, Singapore, 2002b. IEEE.

  • A. d’Avila Garcez, L. Lamb, K. Broda, and D. Gabbay. Distributed knowledge representation in neural-symbolic learning systems: A case study. In Proceedings of AAAI International FLAIRS Conference, pages 271–275, St. Augustine, FL, 2003a. AAAI.

  • A. d’Avila Garcez, L. Lamb, and D. Gabbay. Neural-symbolic intuitionistic reasoning. Frontiers in Artificial Intelligence and Applications Vol. 104, pages 399–408. IOS, 2003b.

  • A. d’Avila Garcez, D. Gabbay, and L. Lamb. Argumentation neural networks. In Proceedings of the 11th International Conference on Neural Information Processing, ICONIP’04, volume 3316 of Lecture Notes in Computer Science, pages 606–612. Springer, New York, 2004a.

  • A. d’Avila Garcez, D. Gabbay, and L. Lamb. Towards a connectionist argumentation framework. In Proceedings of the 16th European Conference on Artificial Intelligence, ECAI 2004, including Prestigious Applications of Intelligent Systems, PAIS 2004, Valencia, Spain, August 22–27, 2004, pages 987–988, 2004b.

  • A. d’Avila Garcez, L. Lamb, K. Broda, and D. Gabbay. Applying connectionist modal logics to distributed knowledge representation problems. International Journal on Artificial Intelligence Tools, 13(1):115–139, 2004c.

  • A. d’Avila Garcez, D. Gabbay, and L. Lamb. Value-based argumentation frameworks as neural-symbolic learning systems. Journal of Logic and Computation, 15(6):1041–1058, 2005.

  • A. d’Avila Garcez, L. Lamb, and D. Gabbay. Connectionist computations of intuitionistic reasoning. Theoretical Computer Science, 358(1):34–55, 2006a.

  • A. d’Avila Garcez, L. Lamb, and D. Gabbay. A connectionist model for constructive modal reasoning. In Advances in Neural Information Processing Systems 18, Proceedings of  NIPS 2005, pages 403–410. MIT, 2006b.

  • A. d’Avila Garcez, D. M. Gabbay, O. Ray, and J. Woods. Abductive reasoning in neural-symbolic systems. TOPOI: An International Review of Philosophy, 26:37–49, 2007a.

  • A. d’Avila Garcez, L. Lamb, and D. Gabbay. Connectionist modal logic: Representing modalities in neural networks. Theoretical Computer Science, 371(1–2):34–53, 2007b.

  • A. d’Avila Garcez, L. Lamb, and D. Gabbay. Neural-Symbolic Cognitive Reasoning. Cognitive Technologies. Springer, Berlin, 2009.

  • J. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990.

  • R. Fagin, J. Halpern, Y. Moses, and M. Vardi. Reasoning About Knowledge. MIT, Cambridge, MA, 1995.

  • D. Gabbay. Elementary Logics: a Procedural Perspective. Prentice Hall, London, 1998.

  • D. Gabbay. Fibring Logics. Oxford University Press, Oxford, 1999. Oxford Logic Guides, Vol. 38.

  • D. M. Gabbay and A. Hunter. Making inconsistency respectable: Part 2 – meta-level handling of inconsistency. In Symbolic and Quantitative Approaches to Reasoning and Uncertainty ECSQARU’93, volume LNCS 747, pages 129–136. Springer, Berlin, 1993.

  • D. Gabbay and J. Woods. A Practical Logic of Cognitive Systems, Volume 2: The Reach of Abduction: Insight and Trial. Elsevier, New York, 2005.

  • D. Gabbay, I. Hodkinson, and M. Reynolds. Temporal logic: mathematical foundations and computational aspects, volume 1. Oxford University Press, Oxford, 1994. Oxford Logic Guides, Vol. 28.

  • D. Gabbay, A. Kurucz, F. Wolter, and M. Zakharyaschev. Many-dimensional Modal Logics: Theory and Applications, volume 148 of Studies in Logic and the Foundations of Mathematics. Elsevier Science, Amsterdam, The Netherlands, 2003.

  • P. Gärdenfors. Conceptual Spaces: The Geometry of Thought. MIT, Cambridge, MA, 2000.

  • J. Halpern. Reasoning About Uncertainty. MIT, Cambridge, MA, 2003.

  • S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, New Jersey, 1999.

  • G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.

  • P. Hitzler, S. Hölldobler, and A. K. Seda. Logic programs and connectionist networks. Journal of Applied Logic, 2(3):245–272, 2004. Special Issue on Neural-Symbolic Systems.

  • S. Hölldobler. Automated inferencing and connectionist models. Postdoctoral Thesis, Intellektik, Informatik, TH Darmstadt, 1993.

  • S. Hölldobler and Y. Kalinke. Toward a new massively parallel computational model for logic programming. In Proceedings of the Workshop on Combining Symbolic and Connectionist Processing, ECAI 1994, pages 68–77, 1994.

  • M. Huth and M. Ryan. Logic in Computer Science: Modelling and Reasoning About Systems. Cambridge University Press, Cambridge, 2000.

  • H. Jacobsson. Rule extraction from recurrent neural networks: A taxonomy and review. Neural Computation, 17(6):1223–1263, 2005.

  • L. Lamb, R. Borges, and A. d’Avila Garcez. A connectionist cognitive model for temporal synchronisation and learning. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence AAAI 2007, pages 827–832. AAAI, 2007.

  • H. Leitgeb. Neural network models of conditionals: an introduction. In X. Arrazola and J. M. Larrazabal et al., editors, Proceedings of ILCLI International Workshop on Logic and Philosophy of Knowledge, Communication and Action, pages 191–223, Bilbao, 2007.

  • A. Lozowski and J. Zurada. Extraction of linguistic rules from data via neural networks and fuzzy approximation. In I. Cloete and J. Zurada, editors, Knowledge-Based Neurocomputing, pages 403–417. MIT, Cambridge, 2000.

  • J. McCarthy. Epistemological challenges for connectionism. Behavioral and Brain Sciences, 11(1):44, 1988.

  • M. Mendler. Characterising combinatorial timing analysis in intuitionistic modal logic. Logic Journal of the IGPL, 8(6):821–852, 2000.

  • R. Mooney and D. Ourston. A multistrategy approach to theory refinement. In R. Michalski and G. Teccuci, editors, Machine Learning: A Multistrategy Approach, volume 4, pages 141–164. Morgan Kaufmann, San Mateo, CA, 1994.

  • H. Nunez, C. Angulo, and A. Catala. Rule based learning systems for support vector machines. Neural Processing Letters, 24(1):1–18, 2006.

  • M. Orgun and W. Ma. An overview of temporal and modal logic programming. In Proceedings of the International Conference on Temporal Logic ICTL’94, volume 827 of Lecture Notes in Artificial Intelligence, pages 445–479. Springer, Berlin, 1994.

  • M. Page. Connectionist modelling in psychology: A localist manifesto. Behavioral and Brain Sciences, 23:443–467, 2000.

  • S. Pinker. The Stuff of Thought: Language as a Window into Human Nature. Viking, New York, 2007.

  • S. Pinker, M. A. Nowak, and J. J. Lee. The logic of indirect speech. Proceedings of the National Academy of Sciences USA, 105(3):833–838, 2008.

  • A. Pnueli. The temporal logic of programs. In Proceedings of 18th IEEE Annual Symposium on Foundations of Computer Science, pages 46–57, 1977.

  • A. Rao and M. Georgeff. Decision procedures for BDI logics. Journal of Logic and Computation, 8(3):293–343, 1998.

  • D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error propagation. In D. Rumelhart and J. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, pages 318–362. MIT, Cambridge, 1986.

  • R. Setiono. Extracting rules from neural networks by pruning and hidden-unit splitting. Neural Computation, 9:205–225, 1997.

  • L. Shastri. Advances in SHRUTI: a neurally motivated model of relational knowledge representation and rapid inference using temporal synchrony. Applied Intelligence Journal, Special Issue on Neural Networks and Structured Knowledge, 11:79–108, 1999.

  • L. Shastri. Shruti: A neurally motivated architecture for rapid, scalable inference. In B. Hammer and P. Hitzler, editors, Perspectives of Neural-Symbolic Integration, pages 183–203. Springer, Heidelberg, 2007.

  • L. Shastri and V. Ajjanagadde. From simple associations to semantic reasoning: A connectionist representation of rules, variables and dynamic binding. Technical report, University of Pennsylvania, 1990.

  • J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, 2004.

  • P. Smolensky. On the proper treatment of connectionism. Behavioral and Brain Sciences, 44:1–74, 1988.

  • P. Smolensky and G. Legendre. The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar. MIT, Cambridge, MA, 2006.

  • K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. MIT, Cambridge, MA, 2008.

  • R. Sun. Robust reasoning: integrating rule-based and similarity-based reasoning. Artificial Intelligence, 75(2):241–296, 1995.

  • R. Sun. Theoretical status of computational cognitive modeling. Cognitive Systems Research, 10(2):124–140, 2009.

  • J. Taylor. Cognitive computation. Cognitive Computation, 1(1):4–16, 2009.

  • S. Thrun. Extracting provably correct rules from artificial neural networks. Technical report, Institut für Informatik, Universität Bonn, 1994.

  • G. Towell and J. Shavlik. Knowledge-based artificial neural networks. Artificial Intelligence, 70(1):119–165, 1994.

  • L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.

  • L. Valiant. A neuroidal architecture for cognitive computation. Journal of the ACM, 47(5):854–882, 2000.

  • L. Valiant. Three problems in computer science. Journal of the ACM, 50(1):96–99, 2003.

  • L. Valiant. Knowledge infusion: In pursuit of robustness in artificial intelligence. In Proceedings of the 28th Conference on Foundations of Software Technology and Theoretical Computer Science, pages 415–422, Bangalore, India, 2008.

  • J. van Benthem. Correspondence theory. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, chapter II.4, pages 167–247. D. Reidel Publishing Company, Dordrecht, 1984.

  • D. van Dalen. Intuitionistic logic. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, volume 5. Kluwer, Dordrecht, 2nd edition, 2002.

  • M. Vardi. Why is modal logic so robustly decidable? In N. Immerman and P. Kolaitis, editors, Descriptive Complexity and Finite Models, volume 31 of Discrete Mathematics and Theoretical Computer Science, pages 149–184. DIMACS, 1997.

  • M. Wooldridge. Introduction to Multi-agent Systems. Wiley, New York, 2001.

Author information

Correspondence to Artur S. d’Avila Garcez.

Copyright information

© 2011 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Garcez, A.S.d., Lamb, L.C. (2011). Cognitive Algorithms and Systems: Reasoning and Knowledge Representation. In: Cutsuridis, V., Hussain, A., Taylor, J. (eds) Perception-Action Cycle. Springer Series in Cognitive and Neural Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-1452-1_18
