Abstract
Artificial neural networks have been studied for more than five decades, since the pioneering work of McCulloch and Pitts [193], in which they proposed a model of an artificial neuron, and Hebb’s slightly later psychological study [133], which pointed out the importance of the connections between artificial neurons to the process of learning. Artificial neural networks are a new generation of biologically inspired, massively parallel, distributed information processing systems. They consist of processing elements (also called nodes, units, or artificial neurons) and connections between them, with coefficients (weights) bound to these connections; together these constitute the neuronal structure, to which learning algorithms are attached. Because of the central role the connections play, such systems are also called connectionist systems; the connection weights serve as the “memory” of the system.
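The two ideas the abstract credits to McCulloch–Pitts and Hebb can be made concrete in a few lines. The following is a minimal sketch in Python (the function names, weights, and learning rate are illustrative choices made here, not taken from the chapter): a McCulloch–Pitts-style neuron that thresholds a weighted sum of its inputs, and a simple Hebbian update in which each weight grows with the correlation between its input and the neuron’s output, so that the weights act as the system’s memory.

```python
# A McCulloch-Pitts-style artificial neuron: output 1 when the weighted
# sum of the inputs reaches the threshold, and 0 otherwise.
def neuron_output(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A simple Hebbian update: strengthen each connection in proportion to
# the product of its input and the neuron's output. The weights are
# what the network "remembers" after learning.
def hebbian_update(inputs, weights, output, learning_rate=0.1):
    return [w + learning_rate * x * output for x, w in zip(inputs, weights)]

# Example: with unit weights and threshold 2, the neuron acts as a
# logical AND gate over two binary inputs.
weights = [1.0, 1.0]
print(neuron_output([1, 1], weights, threshold=2.0))  # 1
print(neuron_output([1, 0], weights, threshold=2.0))  # 0

# A coincident input/output pair strengthens both active connections.
weights = hebbian_update([1, 1], weights, output=1)
print(weights)  # [1.1, 1.1]
```

Note that learning here changes only the weights, not the structure: the same fixed network computes a different function after the update, which is precisely the sense in which the connections carry the memory.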
© 2002 Springer-Verlag Berlin Heidelberg
Cite this chapter
Gorzałczany, M.B. (2002). Essentials of artificial neural networks. In: Computational Intelligence Systems and Applications. Studies in Fuzziness and Soft Computing, vol 86. Physica, Heidelberg. https://doi.org/10.1007/978-3-7908-1801-7_3
DOI: https://doi.org/10.1007/978-3-7908-1801-7_3
Publisher Name: Physica, Heidelberg
Print ISBN: 978-3-662-00334-3
Online ISBN: 978-3-7908-1801-7