Abstract
We present an unsupervised artificial neural network capable of detecting illusory objects in artificial image data. The network has previously been shown [4, 3] to self-organise to identify multiple causes in data distributions in which the individual causes are independent and may be hidden from direct observation. Learning in the network is completely local and the resulting coding is sparse. The network is based on a non-linear Principal Components Analysis (PCA) network that uses a soft positive threshold function as the non-linearity in place of the usual sigmoid (logistic) function. After the non-linearity is applied, zero-mean Gaussian noise is added to the outputs. The non-linear PCA aspect of the network yields a coding that is more accessible to human interpretation, and the additive noise ensures a sparse representation in which only as many output neurons are used as are required. Illusory contours are detected when the probability of a feature appearing at the input is so high that the most sparse response is to learn an illusory feature between the actual features. By adding lateral connections between the output neurons to provide temporal context, we can achieve a temporal ordering of outputs in response to a feature moving across the input space.
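The abstract's learning scheme can be illustrated with a minimal sketch. The sketch below assumes a Karhunen–Joutsensalo-style non-linear PCA rule with residual feedback; the toy input distribution, threshold value, learning rate, and noise level are all illustrative choices, not parameters reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 8, 4
# small random initial weights
W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))

def soft_threshold(a, t=0.0):
    # soft positive threshold: pass activation above t, zero otherwise
    return np.maximum(a - t, 0.0)

eta = 0.02       # learning rate (illustrative)
noise_sd = 0.01  # std dev of the additive zero-mean Gaussian noise

for _ in range(2000):
    # toy input: independent sparse binary causes mixed into one vector
    x = (rng.random(n_inputs) < 0.2).astype(float)
    # feed-forward activation, rectified by the soft positive threshold
    y = soft_threshold(W @ x)
    # zero-mean Gaussian noise added after the non-linearity
    y_noisy = y + rng.normal(scale=noise_sd, size=n_outputs)
    # non-linear PCA update: Hebbian learning on the residual error
    e = x - W.T @ y_noisy
    W += eta * np.outer(y_noisy, e)
```

The rectification keeps the learned code non-negative and interpretable, while the post-threshold noise penalises weakly active outputs, pushing the network toward using only as many output neurons as the data require.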
References
H. B. Barlow. Unsupervised learning. Neural Computation, 1:295–311, 1989.
S. Becker. Learning temporally persistent hierarchical representations. Advances in Neural Information Processing Systems, 9, 1997.
D. Charles and C. Fyfe. Constrained PCA techniques for the identification of common factors in data. Neurocomputing, to appear, 1998.
D. Charles and C. Fyfe. Modelling multiple cause structure using rectification constraints. Network: Computation in Neural Systems, 9:167–182, May 1998.
P. S. Churchland and T. J. Sejnowski. The Computational Brain. M.I.T. Press, Cambridge, Mass., 1992.
P. Földiák. Models of Sensory Coding. PhD thesis, University of Cambridge, 1992.
P. Földiák and M. Young. The Handbook of Brain Theory and Neural Networks, chapter Sparse Coding in the Primate Cortex, pages 895–898. MIT Press, Cambridge, MA, 1995.
G. E. Hinton and Z. Ghahramani. Hierarchical non-linear factor analysis and topographic maps. To appear in Advances in Neural Information Processing Systems, 10, 1998.
I.T. Jolliffe. Principal Component Analysis. Springer-Verlag, 1986.
Juha Karhunen and Jyrki Joutsensalo. Representation and separation of signals using nonlinear PCA type learning. Neural Networks, 7(1):113–127, 1994.
B. Olshausen and D. Field. Natural image statistics and efficient coding. Network:Computation in Neural Systems, 7(2):333–339, 1996.
E. Saund. A multiple cause mixture model for unsupervised learning. Neural Computation, 7:51–71, 1995.
Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
C. J. S. Weber. Generalisation and discrimination emerge from a self-organising componential network: a speech example. Network: Computation in Neural Systems, 8:425–440, 1997.
© 1999 Springer-Verlag London Limited
Charles, D., Fyfe, C. (1999). Unsupervised Detection of Illusory Contours. In: Heinke, D., Humphreys, G.W., Olson, A. (eds) Connectionist Models in Cognitive Neuroscience. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0813-9_17
Print ISBN: 978-1-85233-052-1
Online ISBN: 978-1-4471-0813-9