Unsupervised Detection of Illusory Contours

  • Conference paper
Connectionist Models in Cognitive Neuroscience

Part of the book series: Perspectives in Neural Computing ((PERSPECT.NEURAL))

Abstract

We present an unsupervised artificial neural network that is capable of detecting illusory objects in artificial image data. The network has previously been shown [4, 3] to self-organise to identify multiple causes in data distributions in which each individual cause is independent and may be hidden from direct observation. Learning in the network is completely local and the coding that is achieved is sparse. The network is based on a non-linear Principal Components Analysis (PCA) network that uses a soft positive threshold function as its non-linearity instead of the typical sigmoid (logistic) function. After the application of the non-linearity we add zero-mean Gaussian noise. The non-linear PCA aspect of the network yields a coding that is more accessible to human interpretation, and the additive noise ensures a sparse representation in which only as many output neurons are used as are required. Illusory contours are detected when the probability of a feature appearing at the input to the network is so high that the most sparse response is to learn an illusory feature between the actual features. By adding lateral connections between the output neurons to provide temporal context, we can achieve a temporal ordering of outputs in response to a particular feature moving across the input space.
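
The abstract describes the network only in words. Purely as an illustration of the ingredients it names (a rectifying soft positive threshold non-linearity, zero-mean Gaussian noise added after the non-linearity, and a completely local learning rule), the NumPy sketch below assumes a negative-feedback PCA architecture with a Hebbian update on the reconstruction residual; the function names, update rule and parameter values are illustrative assumptions rather than the authors' implementation, and the lateral connections for temporal context are omitted.

```python
import numpy as np

def soft_positive_threshold(y):
    # Rectifying "soft positive threshold": negative activations are suppressed.
    # The exact functional form used in the paper is not reproduced here; plain
    # rectification stands in for it in this sketch.
    return np.maximum(y, 0.0)

def train_network(X, n_outputs, eta=0.01, noise_std=0.1, n_epochs=50, seed=0):
    """Illustrative unsupervised non-linear PCA network (not the authors' code).

    Assumed details: a negative-feedback PCA architecture trained with a
    Hebbian update on the reconstruction residual, learning rate `eta`, and
    zero-mean Gaussian noise of standard deviation `noise_std` added after
    the non-linearity. All parameter values are placeholders.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_outputs, X.shape[1]))
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            y = soft_positive_threshold(W @ x)                 # rectified output activations
            y = y + rng.normal(0.0, noise_std, size=y.shape)   # additive zero-mean Gaussian noise
            e = x - W.T @ y                                     # residual fed back to the inputs
            W += eta * np.outer(y, e)                           # purely local Hebbian update
    return W

# Example: sparse random binary patterns as stand-in training data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = (rng.random((500, 64)) < 0.1).astype(float)
    W = train_network(X, n_outputs=16)
    print(W.shape)  # (16, 64)
```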

References

  1. H. B. Barlow. Unsupervised learning. Neural Computation, 1:295–311, 1989.

  2. S. Becker. Learning temporally persistent hierarchical representations. Advances in Neural Information Processing Systems, 9, 1997.

  3. D. Charles and C. Fyfe. Constrained PCA techniques for the identification of common factors in data. Neurocomputing, to appear, 1998.

  4. D. Charles and C. Fyfe. Modelling multiple cause structure using rectification constraints. Network: Computation in Neural Systems, 9:167–182, May 1998.

  5. P. S. Churchland and T. J. Sejnowski. The Computational Brain. MIT Press, Cambridge, MA, 1992.

  6. P. Földiák. Models of Sensory Coding. PhD thesis, University of Cambridge, 1992.

  7. P. Földiák and M. Young. Sparse coding in the primate cortex. In The Handbook of Brain Theory and Neural Networks, pages 895–898. MIT Press, Cambridge, MA, 1995.

  8. G. E. Hinton and Z. Ghahramani. Hierarchical non-linear factor analysis and topographic maps. To appear in Advances in Neural Information Processing Systems, 10, 1998.

  9. I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, 1986.

  10. J. Karhunen and J. Joutsensalo. Representation and separation of signals using nonlinear PCA type learning. Neural Networks, 7(1):113–127, 1994.

  11. B. Olshausen and D. Field. Natural image statistics and efficient coding. Network: Computation in Neural Systems, 7(2):333–339, 1996.

  12. E. Saund. A multiple cause mixture model for unsupervised learning. Neural Computation, 7:51–71, 1995.

  13. M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.

  14. C. J. S. Weber. Generalisation and discrimination emerge from a self-organising componential network: a speech example. Network: Computation in Neural Systems, 8:425–440, 1997.

Copyright information

© 1999 Springer-Verlag London Limited

About this paper

Cite this paper

Charles, D., Fyfe, C. (1999). Unsupervised Detection of Illusory Contours. In: Heinke, D., Humphreys, G.W., Olson, A. (eds) Connectionist Models in Cognitive Neuroscience. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0813-9_17

  • DOI: https://doi.org/10.1007/978-1-4471-0813-9_17

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-052-1

  • Online ISBN: 978-1-4471-0813-9

  • eBook Packages: Springer Book Archive
