
Paying Attention to Relevant Dimensions: A Localist Approach

  • Conference paper

Part of the book series: Perspectives in Neural Computing (PERSPECT.NEURAL)

Abstract

Localist models of, for example, the classification of multidimensional stimuli can run into problems if generalization is attempted when many of the stimulus dimensions are irrelevant to the classification task in hand. A procedure is suggested by which a localist model can learn prototype representations that focus on the relevant dimensions only. These permit good generalization that would be lacking in a simple exemplar-based model.
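The paper itself is not reproduced here, but the abstract's core idea can be illustrated with a short sketch: a localist prototype unit that keeps a per-dimension attention vector and shrinks attention on dimensions that vary freely within a class, i.e. the irrelevant ones. Everything below (the class PrototypeUnit, the helper make_stimuli, and the variance-driven attention rule) is an illustrative assumption based only on the abstract, not the paper's actual procedure; the attention-weighted similarity follows the general style of attention-learning category models such as ALCOVE.

```python
import numpy as np

# Minimal sketch, NOT the paper's implementation: a localist prototype
# unit with learned per-dimension attention. All names and the learning
# rule are illustrative assumptions based only on the abstract.

rng = np.random.default_rng(0)

n_relevant, n_irrelevant = 2, 8
n_dims = n_relevant + n_irrelevant

def make_stimuli(n, label):
    """Class identity depends only on the first n_relevant dimensions;
    the remaining dimensions are broad, class-irrelevant noise."""
    x = rng.normal(0.0, 2.0, size=(n, n_dims))          # irrelevant noise
    x[:, :n_relevant] = rng.normal(2.0 * label, 0.3,    # class signal
                                   size=(n, n_relevant))
    return x

class PrototypeUnit:
    """A localist unit: one prototype vector plus attention weights."""
    def __init__(self, n_dims):
        self.proto = np.zeros(n_dims)
        self.attn = np.ones(n_dims) / n_dims  # start with uniform attention
        self.count = 0

    def activation(self, x):
        # Attention-weighted similarity: Gaussian of the weighted
        # squared distance to the prototype.
        d2 = np.sum(self.attn * (x - self.proto) ** 2)
        return np.exp(-d2)

    def update(self, x, lr=0.1):
        self.count += 1
        # Incrementally average class members into the prototype.
        self.proto += (x - self.proto) / self.count
        # Shrink attention on dimensions with high within-class variance
        # (the irrelevant ones), then renormalize.
        var = (x - self.proto) ** 2
        self.attn = np.maximum(self.attn - lr * var, 1e-6)
        self.attn /= self.attn.sum()

units = {0: PrototypeUnit(n_dims), 1: PrototypeUnit(n_dims)}
for label in (0, 1):
    for x in make_stimuli(200, label):
        units[label].update(x)

# Generalization test on novel stimuli: classify by the most active unit.
test_x = np.vstack([make_stimuli(100, 0), make_stimuli(100, 1)])
test_y = np.repeat([0, 1], 100)
pred = np.array([max(units, key=lambda k: units[k].activation(x))
                 for x in test_x])
print("accuracy:", np.mean(pred == test_y))
print("attention on relevant dims:", units[0].attn[:n_relevant].sum())
```

On these toy stimuli the attention weights collapse onto the two relevant dimensions, so the prototype units generalize well to novel exemplars, whereas a plain unweighted exemplar match would be swamped by the eight irrelevant noise dimensions.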




Copyright information

© 2001 Springer-Verlag London

About this paper

Cite this paper

Page, M. (2001). Paying Attention to Relevant Dimensions: A Localist Approach. In: French, R.M., Sougné, J.P. (eds) Connectionist Models of Learning, Development and Evolution. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0281-6_11


  • DOI: https://doi.org/10.1007/978-1-4471-0281-6_11

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-354-6

  • Online ISBN: 978-1-4471-0281-6

  • eBook Packages: Springer Book Archive
