A Scalable Neural Architecture Combining Unsupervised and Suggestive Learning
Multi-layer perceptrons are, in theory, capable of solving a wide range of problems. However, as the scale of many problems increases, or requirements change, multi-layer perceptrons fail to learn or become impractical to implement. Self-organizing networks are not so limited by scale, but require a priori information, typically in the form of preset weights or suitable control parameters, to achieve a good categorization of a data set.
Based on research into the behaviour of biological neurons during learning, a new self-organizing neural network has been devised. Moving away from the traditional McCulloch and Pitts model, each neuron stores several independent patterns, each capable of initiating a neuron output. By structuring such neurons into a network, a rapid and equal distribution of data across competitive nodes is possible.
This paper introduces the new network, known as a master-slave architecture, and its learning paradigm. By combining competitive and suggestive learning, inputs are distributed across all available classification units without the need for a priori knowledge. Two experiments are described, highlighting the potential of the master-slave architecture as a building block for larger networks.
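To make the departure from the McCulloch and Pitts model concrete, the following is a minimal sketch, not the paper's implementation, of a neuron that stores several independent weight vectors ("patterns"), any of which can trigger the neuron's output, together with a winner-take-all competitive update. The class and function names (`MultiPatternNeuron`, `competitive_step`) and the Euclidean matching rule are illustrative assumptions; the paper's suggestive-learning mechanism for equalizing the distribution of inputs is not reproduced here.

```python
import numpy as np

class MultiPatternNeuron:
    """Illustrative neuron holding several independent stored patterns
    (one per row), each capable of initiating the neuron's output."""

    def __init__(self, n_patterns, dim, rng):
        self.patterns = rng.random((n_patterns, dim))

    def best_match(self, x):
        # Distance from the input to each stored pattern;
        # return the index and distance of the closest one.
        d = np.linalg.norm(self.patterns - x, axis=1)
        i = int(np.argmin(d))
        return i, d[i]

    def update(self, i, x, lr=0.1):
        # Move the matching pattern toward the input.
        self.patterns[i] += lr * (x - self.patterns[i])

def competitive_step(neurons, x, lr=0.1):
    """Winner-take-all across every pattern of every neuron:
    only the globally best-matching pattern is updated."""
    matches = [n.best_match(x) for n in neurons]
    winner = min(range(len(neurons)), key=lambda k: matches[k][1])
    neurons[winner].update(matches[winner][0], x, lr)
    return winner

# Toy usage: four competing neurons, three patterns each, 2-D inputs.
rng = np.random.default_rng(0)
neurons = [MultiPatternNeuron(3, 2, rng) for _ in range(4)]
for x in rng.random((50, 2)):
    competitive_step(neurons, x)
```

Because several patterns live inside one neuron, an input can be captured by a different pattern of the same node rather than forcing a whole new node, which is one plausible reason such a network scales more gracefully than a single-weight-vector competitive layer.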
Keywords: Weight Vector, Network Input, Categorization Node, Matching Node, Competitive Network