Optimizing the self-organizing process of topology maps
This contribution proposes how the self-organizing process of feature maps can be improved.
The self-organizing process converges to a map that preserves the neighbourhood relations of the input data if the learning parameters, the learning coefficient and the width of the neighbourhood function, are chosen correctly. In general, these parameters are chosen empirically, depending on the distribution of the training data and on the network architecture. Consequently, some experience with the algorithm and the training data is needed to choose proper courses of the learning parameters. To avoid time-consuming parameter studies, a system model of the self-organizing process is developed and a linear Kalman filter is used to estimate the learning coefficient. To estimate the width of the neighbourhood function, the process of neighbourhood preservation during training is successfully modelled for the first time. This process model is then tracked by an extended Kalman filter algorithm, which estimates the width of the neighbourhood function.
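To make the roles of the two estimated parameters concrete, the following sketch shows the classic Kohonen update rule for a feature map, in which both the learning coefficient `alpha` and the neighbourhood width `sigma` appear. This is the standard algorithm, not the authors' estimation scheme; the 1-D map, Gaussian neighbourhood, and linear decay schedules are illustrative assumptions.

```python
import numpy as np

def som_update(weights, x, alpha, sigma):
    """One Kohonen update step: move each unit's weight vector toward the
    input x, weighted by a Gaussian neighbourhood around the best-matching
    unit (BMU). alpha is the learning coefficient, sigma the neighbourhood
    width -- the two parameters the paper estimates via Kalman filtering."""
    positions = np.arange(len(weights))          # grid positions (1-D map)
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
    return weights + alpha * h[:, None] * (x - weights)

# Toy run with the kind of empirical decay schedules the paper aims to
# replace by Kalman-filter estimates (schedules chosen arbitrarily here).
rng = np.random.default_rng(0)
W = rng.random((10, 2))
for t in range(1000):
    x = rng.random(2)
    alpha = 0.5 * (1 - t / 1000)
    sigma = 3.0 * (1 - t / 1000) + 0.1
    W = som_update(W, x, alpha, sigma)
```

Choosing good courses for `alpha` and `sigma` by hand is exactly the empirical parameter study the proposed method avoids.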
In the case of fast self-organizing algorithms, as published in , the proposed parameter estimation method is essential for training on data with an unknown density distribution.
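The abstract does not reproduce the authors' state-space model of the self-organizing process, so as an illustration of the estimation machinery only, here is a generic scalar linear Kalman filter step; the model parameters `A`, `Q`, `H`, `R` are placeholder assumptions, not values from the paper.

```python
def kalman_step(x_est, P, z, A=1.0, Q=1e-4, H=1.0, R=1e-2):
    """One predict/update cycle of a scalar linear Kalman filter.
    x_est: current state estimate, P: its variance, z: new measurement.
    A: state transition, Q: process-noise variance,
    H: observation model, R: measurement-noise variance (all assumed)."""
    # Predict step: propagate the estimate and its uncertainty
    x_pred = A * x_est
    P_pred = A * P * A + Q
    # Update step: blend prediction and measurement via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Fed with noisy observations of a learning parameter derived from the system model, such a filter tracks the parameter's course without manual tuning; the extended Kalman filter generalizes the same cycle to a nonlinear process model, as needed for the neighbourhood width.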
Keywords: Kalman Filter, Input Space, Extended Kalman Filter, Learning Parameter, Neighbourhood Relation
1. K. Haese, H.-D. vom Stein: Fast Self-Organizing of n-dimensional Topology Maps. In: VIII European Signal Processing Conference, Trieste, Italy, 1996, pages 835–838.
2. H.-U. Bauer, K. R. Pawelzik: Quantifying the Neighborhood Preservation of Self-Organizing Feature Maps. IEEE Transactions on Neural Networks, 3(4):570–579, 1992.
3. C. Bouton, G. Pagès: Self-organization and a.s. convergence of the one-dimensional Kohonen algorithm with non-uniformly distributed stimuli. Stochastic Processes and their Applications, 47:249–274, 1993.
4. C. Bouton, G. Pagès: Convergence in distribution of the one-dimensional Kohonen algorithm when the stimuli are non uniform. Advances in Applied Probability, 26:80–103, 1994.
5. P. S. Chandran: Comments on the "Comparative Analysis of Backpropagation and the Extended Kalman Filter for Training Multilayer Perceptrons". IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(8):862–863, 1994.
6. Y. P. Jun, H. Yoon, J. W. Cho: L*-Learning: A Fast Self-Organizing Feature Map Learning Algorithm Based on Incremental Ordering. IEICE Transactions on Information & Systems, E76-D(6):698–706, June 1993.
7. T. Kohonen: Self-organized Formation of Topologically Correct Feature Maps. Biological Cybernetics, 43:59–69, 1982.
8. T. Kohonen: Self-Organization and Associative Memory. Springer Series in Information Sciences 8, Heidelberg, 1984.
9. D. W. Ruck, S. K. Rogers, P. S. Kabrisky, M. E. Oxley: Comparative Analysis of Backpropagation and the Extended Kalman Filter for Training Multilayer Perceptrons. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(6):686–691, 1992.
10. H. Ritter, T. Martinetz, K. Schulten: Neuronale Netze. Addison-Wesley (Deutschland) GmbH, Bonn, 1991, 2nd expanded edition.
11. E. Erwin, K. Obermayer, K. Schulten: Self-organizing maps: ordering, convergence properties and energy functions. Biological Cybernetics, 67:47–55, 1992.