A Scalable Neural Architecture Combining Unsupervised and Suggestive Learning

  • R. B. Lambert
  • W. P. Cockshott
  • R. J. Fryer
Conference paper


Multi-layered perceptrons are, in theory, capable of solving a wide range of problems. However, as the scale of many problems is increased, or requirements change, multi-layered perceptrons fail to learn or become impractical to implement. Self-organizing networks are not so limited by scale, but require a priori information, typically in the form of preset weights or suitable control parameters, to achieve a good categorization of a data set.

Based on research into the behaviour of biological neurons during learning, a new self-organizing neural network has been devised. Moving away from the traditional McCulloch and Pitts model, each neuron stores several independent patterns, each capable of initiating a neuron output. By structuring such neurons into a network, a rapid and equal distribution of data across competitive nodes is possible.

This paper introduces the new network, known as the Master-Slave architecture, together with its learning paradigm. By using competitive and suggestive learning, inputs are distributed across all available classification units without the need for a priori knowledge. Two experiments are described, highlighting the potential of the master-slave architecture as a building block for larger networks.
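The abstract describes nodes that each store several independent patterns, any of which can trigger the node's output, with a competitive step distributing inputs across units. The paper's actual update rules are not given here, so the following is only an illustrative sketch of that general idea (multiple weight vectors per node, winner-take-all learning); the class name, distance measure, and learning rate are assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiPatternNode:
    """Hypothetical node holding several independent weight vectors
    ("patterns"); any stored pattern may initiate the node's output.
    A hedged sketch of the idea, not the published algorithm."""

    def __init__(self, n_patterns, dim):
        # Random initial patterns stand in for a priori-free start-up.
        self.patterns = rng.normal(size=(n_patterns, dim))

    def best_match(self, x):
        # Competitive step: the stored pattern closest to the input wins.
        dists = np.linalg.norm(self.patterns - x, axis=1)
        return int(np.argmin(dists)), float(dists.min())

    def learn(self, x, rate=0.5):
        # Winner-take-all update: move only the winning pattern
        # toward the input, leaving the other stored patterns intact.
        i, _ = self.best_match(x)
        self.patterns[i] += rate * (x - self.patterns[i])
        return i

node = MultiPatternNode(n_patterns=3, dim=2)
x = np.array([1.0, 1.0])
before = node.best_match(x)[1]
node.learn(x)
after = node.best_match(x)[1]
assert after < before  # the winning pattern moved toward the input
```

Because each node keeps several patterns, repeated inputs can be absorbed by different slots of the same unit, which is one plausible reading of how data could be spread rapidly and evenly across competitive nodes.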


Keywords: Weight Vector · Network Input · Categorization Node · Matching Node · Competitive Network

These keywords were machine-generated, not supplied by the authors; they may be updated as the extraction algorithm improves.




References

  1. Rosenblatt F, 'The Perceptron: a probabilistic model for information storage and organization in the brain', Psychological Review, 65, 1958.
  2. Cybenko G, 'Approximation by Superposition of a Sigmoidal Function', Mathematics of Control, Signals, and Systems, 2: 303–314, 1989.
  3. Rumelhart D E, Hinton G E and Williams R J, 'Learning representations by back-propagating errors', Nature, 323: 533–536, 1986.
  4. Rumelhart D E, Hinton G E and Williams R J, 'Learning internal representations by error propagation', Parallel Distributed Processing, Vol. 1, Cambridge, MA: MIT Press, 1986.
  5. Baum E B and Haussler D, 'What size net gives valid generalization?', Neural Computation, 1: 151–160, 1989.
  6. Hecht-Nielsen R, 'Counter-propagation Networks', Applied Optics, 26: 4979–4984, 1987.
  7. Grossberg S, 'Adaptive Pattern Classification and Universal Recoding: I. Parallel Development and Coding of Neural Feature Detectors', Biological Cybernetics, 23: 121–134, 1976.
  8. Kohonen T, 'Self-Organized Formation of Topologically Correct Feature Maps', Biological Cybernetics, 43: 59–69, 1982.
  9. McCulloch W S and Pitts W, 'A Logical Calculus of Ideas Immanent in Nervous Activity', Bulletin of Mathematical Biophysics, 5: 115–133, 1943.
  10. Alkon D L, 'Memory Storage and Neural Systems', Scientific American, July: 26–34, 1989.
  11. Olds J L, Anderson M L, McPhie D L, Staten L D and Alkon D L, 'Imaging memory-specific changes in the distribution of protein kinase C within the hippocampus', Science, 245: 866–869, 1989.
  12. Alkon D L, Blackwell K T, Barbour G S, Rigler A K and Vogl T P, 'Pattern-Recognition by an Artificial Network Derived from Biologic Neuronal Systems', Biological Cybernetics, 62: 363–376, 1990.
  13. Hebb D O, 'The Organization of Behaviour', Wiley, 1949.
  14. Grossberg S, 'Some nonlinear networks capable of learning a spatial pattern of arbitrary complexity', Proceedings of the National Academy of Sciences USA, 59: 368–372, 1968.

Copyright information

© Springer-Verlag/Wien 1993

Authors and Affiliations

  • R. B. Lambert (1)
  • W. P. Cockshott (1)
  • R. J. Fryer (1)

  1. Dept. of Computer Science, University of Strathclyde, Glasgow, Scotland
