Large Patterns Make Great Symbols: An Example of Learning from Example

  • Conference paper
Hybrid Neural Systems (Hybrid Neural Systems 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1778)

Abstract

We look at a distributed representation of structure with variable binding that is natural for neural nets and that allows traditional symbolic representation and processing. The representation supports learning from example. This is demonstrated by taking several instances of the mother-of relation implying the parent-of relation, encoding them into a mapping vector, and showing that the mapping vector maps new instances of mother-of into parent-of. Possible implications for AI are considered.
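
The demonstration described in the abstract can be illustrated concretely. Below is a minimal sketch, assuming binary spatter codes in the style of Kanerva's related work: high-dimensional random binary vectors, bitwise XOR for binding, and elementwise majority voting for superposition, with a simple role-filler encoding of relation instances. The vector names, roles, dimensionality, and NumPy realization are illustrative assumptions, not the paper's exact construction.

    # A minimal sketch of learning a mapping vector from examples, assuming
    # binary spatter codes: high-dimensional random binary vectors, XOR as the
    # binding operation, and elementwise majority as the superposition
    # (bundling) operation. Names, roles, and dimensions are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000  # dimensionality of the binary vectors

    def rand_vec():
        """Random dense binary vector: the atomic pattern for a role or filler."""
        return rng.integers(0, 2, N, dtype=np.uint8)

    def bind(a, b):
        """Bind two vectors with bitwise XOR (XOR is its own inverse)."""
        return a ^ b

    def bundle(vectors):
        """Superpose vectors by elementwise majority vote (ties broken randomly)."""
        votes = np.sum(vectors, axis=0)
        out = (2 * votes > len(vectors)).astype(np.uint8)
        ties = 2 * votes == len(vectors)
        out[ties] = rng.integers(0, 2, int(ties.sum()), dtype=np.uint8)
        return out

    def distance(a, b):
        """Normalized Hamming distance: about 0.5 for unrelated vectors."""
        return float(np.mean(a != b))

    # Role and relation patterns, plus a few individuals.
    REL, AGENT, OBJECT = rand_vec(), rand_vec(), rand_vec()
    MOTHER, PARENT = rand_vec(), rand_vec()
    people = {n: rand_vec() for n in
              ["Anna", "Bea", "Carl", "Dora", "Eve", "Finn", "Gus", "Hana"]}

    def encode(relation, x, y):
        """Encode relation(x, y) as one vector by role-filler binding and bundling."""
        return bundle([bind(REL, relation), bind(AGENT, x), bind(OBJECT, y)])

    # Training instances: mother-of(x, y) implies parent-of(x, y).
    train = [("Anna", "Bea"), ("Carl", "Dora"), ("Eve", "Finn")]

    # The mapping vector bundles, over the examples, the binding of each
    # mother-of encoding with the corresponding parent-of encoding.
    M = bundle([bind(encode(MOTHER, people[x], people[y]),
                     encode(PARENT, people[x], people[y]))
                for x, y in train])

    # Applying M (by binding) to a new, unseen mother-of instance yields a
    # vector much closer to the corresponding parent-of encoding than chance.
    new_instance = encode(MOTHER, people["Gus"], people["Hana"])
    mapped = bind(M, new_instance)
    target = encode(PARENT, people["Gus"], people["Hana"])

    print("distance to parent-of(Gus, Hana):", distance(mapped, target))      # well below 0.5
    print("distance to an unrelated vector: ", distance(mapped, rand_vec()))  # about 0.5

Since the mapped vector only approximates the target encoding, a complete system would typically clean it up by retrieving the nearest stored pattern from an item memory; the point of the sketch is simply that a mapping vector built from a few examples carries a new mother-of instance into the neighborhood of the corresponding parent-of encoding.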

Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kanerva, P. (2000). Large Patterns Make Great Symbols: An Example of Learning from Example. In: Wermter, S., Sun, R. (eds) Hybrid Neural Systems. Hybrid Neural Systems 1998. Lecture Notes in Computer Science (LNAI), vol. 1778. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10719871_13

  • DOI: https://doi.org/10.1007/10719871_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67305-7

  • Online ISBN: 978-3-540-46417-4

  • eBook Packages: Springer Book Archive
