Abstract
Nowadays, the Self-Organizing Map (SOM) is one of the most widely used algorithms for visualising and exploring high-dimensional data: the visual map emerges from a learning process in which neighbouring neurons on the output lattice come to respond to neighbouring regions of the input space. This methodological note introduces the SOM algorithm, discussing, in the first part, its background, properties, applications and extensions and, in the second part, its evolution: the formation of new types of topographic maps for structured data, such as time series and tree-structured data. In particular, this new type of map could be useful for further micro-data analysis applications, such as document analysis or web-navigation analysis, going beyond the limitations of kernel-based topographic maps or creating new types of kernels, as detailed in the Support Vector Machine literature.
Notes
- 1.
In the neural network approach, there are two types of architecture: feed-forward, in which neurons are connected in successive layers (e.g. the Multi-Layer Perceptron), and recurrent, in which neurons receive feedback from themselves and from other neurons (e.g. the Hopfield network). There are also two principal learning paradigms: supervised and unsupervised learning. The first is based on input-output (input-target) relationships, while the second rests on self-organization and association processes: there is no direct teacher indicating how large the output error is; outputs are instead evaluated as 'positive' or 'negative' in terms of their proximity to, or distance from, the goal.
- 2.
The lattice is an undirected graph in which every non-border vertex has the same fixed number of incident edges. Its most common representation is a two-dimensional array with a rectangular or hexagonal topology.
- 3.
These are the most common rules for unsupervised, self-organizing learning algorithms. Mathematically, the Hebbian learning rule can be written as:
$$ \frac{\partial w_{ij}(t)}{\partial t} = \alpha\, x_{j}(t)\, y_{i}(t) $$where \( \alpha \) is the learning rate (\( 0 < \alpha < 1 \)) and \( x \) and \( y \) are the input and output of the neural system. In particular, to prevent the weights from growing or decaying without bound, it is necessary to insert a 'forgetting term', as is done in the SOM process.
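The rule above can be sketched as a discrete-time (Euler) update. This is a minimal illustration, not code from the paper; the function name and the specific Oja-style form of the forgetting term, \( -\beta\, y_{i} w_{ij} \), are assumptions chosen to show how the decay keeps weights bounded.

```python
def hebbian_step(w, x, y, alpha=0.1, beta=0.1):
    """One discrete-time Hebbian update with a 'forgetting term'.

    Illustrative sketch: w[i][j] connects input j to output i;
    the increment is alpha * y_i * x_j (pure Hebb) minus an
    assumed Oja-style decay beta * y_i * w_ij that bounds the weights.
    """
    return [[w_ij + alpha * y[i] * x[j] - beta * y[i] * w_ij
             for j, w_ij in enumerate(row)]
            for i, row in enumerate(w)]

# Toy example: two inputs, two outputs.
w = [[0.5, 0.2],
     [0.1, 0.4]]
x = [1.0, 0.0]
y = [1.0, 0.5]
w = hebbian_step(w, x, y)
```

Without the decay term, repeated presentation of a correlated input/output pair would make the corresponding weight grow without limit; the forgetting term pulls each weight back in proportion to its current size.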
- 4.
In this methodological note, the matching criterion considered is the minimum Euclidean distance between the input and the neurons' weight vectors.
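This matching criterion can be sketched in a few lines: the best-matching unit is simply the neuron whose weight vector has the smallest Euclidean distance to the input. The function name and the toy prototypes below are illustrative, not from the paper.

```python
import math

def best_matching_unit(weights, x):
    """Index of the weight vector with minimum Euclidean distance to x."""
    def dist(w):
        return math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))
    return min(range(len(weights)), key=lambda i: dist(weights[i]))

# Toy example: three neurons with 2-D weight vectors.
weights = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.0]]
bmu = best_matching_unit(weights, [0.6, 0.1])
```

In the SOM, the winner and its lattice neighbours are then moved towards the input; only the winner selection is shown here.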
- 5.
From a visualisation point of view, the lattice coordinate system can be regarded as a global coordinate system for different types of data.
- 6.
Clustering is defined as the partitioning of a dataset into subsets of 'similar' data without using prior knowledge about the subsets.
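In prototype-based methods such as the SOM, this partition is the one induced by the prototypes: each point is assigned to its nearest prototype (its Voronoi cell). The sketch below is an illustrative assumption of that idea, not code from the paper.

```python
def partition(data, prototypes):
    """Partition data by nearest prototype (squared Euclidean distance)."""
    clusters = {i: [] for i in range(len(prototypes))}
    for x in data:
        # Index of the closest prototype for this point.
        i = min(range(len(prototypes)),
                key=lambda k: sum((p - xj) ** 2
                                  for p, xj in zip(prototypes[k], x)))
        clusters[i].append(x)
    return clusters

# Toy example: two prototypes, three points.
data = [(0.1, 0.0), (0.9, 1.0), (0.2, 0.1)]
clusters = partition(data, [(0.0, 0.0), (1.0, 1.0)])
```

No labels or prior knowledge about the subsets are used; the grouping follows entirely from the distances.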
- 7.
He maximised the average mutual information between the output and the input signal component.
- 8.
It is necessary to stress that, so far, the SOM algorithm has only been extended to strings and trees (e.g. Steil [9]).
References
Chappell, G., Taylor, J.: The temporal Kohonen map. Neural Netw. 6, 441–445 (1993)
Jin, B., Zhang, Y.-Q., Wang, B.: Evolutionary granular kernel trees and applications in drug activity comparisons. In: Proceedings of the 2005 IEEE Symposium on Computational Intelligence (2005)
Kaski, S., Honkela, T., Lagus, K., Kohonen, T.: WEBSOM - self-organizing maps of document collections. Neurocomputing 21, 101–117 (1998)
Kohonen, T.: Self-Organization and Associative Memory. Springer, Heidelberg (1984)
Linsker, R.: Self-organization in a perceptual network. Computer 21, 105–117 (1988)
Luttrell, S.P.: Derivation of a class of training algorithms. IEEE Trans. Neural Netw. 1, 229–232 (1990)
Mulier, F., Cherkassy, V.: Self-organization as an interactive kernel smoothing process. Neural Comput. 7, 1165–1177 (1995)
Shaw, J., Cristianini, N.: Kernel Methods in Computational Biology. MIT Press, Cambridge (2004)
Steil, J.J., Sperduti, A.: Indices to evaluate self-organizing maps for structures. In: WSOM07, Bielefeld, Germany (2007)
Strickert, M., Hammer, B.: Merge SOM for temporal data. Neurocomputing 64, 39–72 (2005)
Ultsch, A., Siemon, H.P.: Kohonen's self-organizing feature maps for exploratory data analysis. In: Proceedings of the International Neural Network Conference, pp. 305–308. Kluwer Academic Press, Paris (1990)
Van Hulle, M.M.: Faithful Representation and Topographic Maps: From Distortion to Information-Based Self-organization. Wiley, New York (2000)
Von der Malsburg, C., Willshaw, D.J.: Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 4, 85–100 (1973)
Cialfi, D. (2020). The Self-Organizing Map: An Methodological Note. In: Bucciarelli, E., Chen, SH., Corchado, J. (eds) Decision Economics: Complexity of Decisions and Decisions for Complexity. DECON 2019. Advances in Intelligent Systems and Computing, vol 1009. Springer, Cham. https://doi.org/10.1007/978-3-030-38227-8_28
Print ISBN: 978-3-030-38226-1
Online ISBN: 978-3-030-38227-8
eBook Packages: Intelligent Technologies and Robotics (R0)