Structural Information Control for Flexible Competitive Learning

  • Ryotaro Kamimura
  • Taeko Kamimura
  • Thomas R. Shultz
Conference paper


In this paper, we propose a new information-theoretic method, called structural information, to overcome fundamental problems inherent in conventional competitive learning, such as dead neurons and the difficulty of choosing an appropriate number of neurons in the competitive layer. Our method is based on defining and controlling several kinds of information, thereby generating particular neuron firing patterns. In one firing pattern, some neurons are completely inactive, that is, dead neurons are generated. In another firing pattern, all neurons are active and there are no dead neurons. Thus, by controlling information content, we can control the number of dead neurons and choose an appropriate number of neurons. We applied this method to a simple pattern classification problem to show that information can be controlled and that different neuron firing patterns can be generated.
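To make the idea of controlling information over neuron firing patterns concrete, the following is a minimal sketch of soft competitive learning with an entropy-based information measure. It is an illustration only: the softmax-style firing rule, the `beta` parameter, and the definition of `information` (maximum entropy minus the entropy of the average firing probabilities) are assumptions chosen for this sketch, not the authors' actual structural-information formulation.

```python
import numpy as np

def firing_probabilities(x, W, beta=1.0):
    """Soft competitive firing: neurons whose weight vectors are closer to x
    fire more strongly; beta sharpens the competition toward a single winner."""
    d = np.sum((W - x) ** 2, axis=1)            # squared distance to each neuron
    p = np.exp(-beta * d)
    return p / p.sum()

def information(P):
    """Information content of the average firing probabilities:
    0 when all neurons are equally active, large when a few neurons
    dominate and the rest are effectively dead."""
    q = P.mean(axis=0)                           # average firing per neuron
    h_max = np.log(len(q))
    h = -np.sum(q * np.log(q + 1e-12))
    return h_max - h

def train(X, n_neurons=5, beta=1.0, lr=0.1, epochs=50, seed=None):
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_neurons, replace=False)].copy()
    for _ in range(epochs):
        for x in X:
            p = firing_probabilities(x, W, beta)
            W += lr * p[:, None] * (x - W)       # soft competitive update
    P = np.array([firing_probabilities(x, W, beta) for x in X])
    return W, information(P)

# Larger beta -> sharper competition -> higher information, more dead neurons;
# smaller beta -> flatter firing -> lower information, all neurons stay active.
X = np.vstack([np.random.randn(30, 2) + c for c in ([0, 0], [4, 4], [0, 4])])
for beta in (0.1, 1.0, 10.0):
    _, info = train(X, beta=beta, seed=0)
    print(f"beta={beta}: information={info:.3f}")
```

Under these assumptions, sweeping the sharpness parameter moves the network between the two extreme firing patterns described in the abstract, which is the sense in which an information measure can be used to control the number of dead neurons.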


Keywords: Structural Information, Input Pattern, Firing Pattern, Order Information, Input Unit





Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • Ryotaro Kamimura (1)
  • Taeko Kamimura (2)
  • Thomas R. Shultz (3)

  1. Information Science Laboratory, Tokai University, Hiratsuka, Kanagawa, Japan
  2. Department of English, Senshu University, Tama-ku, Kawasaki, Kanagawa, Japan
  3. Department of Psychology, McGill University, Montreal, Quebec, Canada
