Abstract
Neural networks are networks of nerve cells in the brains of humans and animals. The human brain has about 100 billion nerve cells. We humans owe our intelligence, and our ability to learn motor skills and intellectual capabilities, to the brain's complex interconnections and adaptivity. The nerve cells and their connections are responsible for awareness, associations, thoughts, consciousness and the ability to learn. Mathematical models of neural networks and their implementation on computers are nowadays used in many applications such as pattern recognition or robot learning. The power of this fascinating branch of bionics-inspired AI is demonstrated with several popular network models applied to various tasks.
Notes
- 1. Bionics is concerned with unlocking the "discoveries of living nature" and their innovative conversion into technology [Wik10].
- 2. Even the author was taken up by this wave, which carried him from physics into AI in 1987.
- 3. For a clear differentiation between training data and other values of a neuron, in the following discussion we will refer to the query vector as q and the desired response as t (target).
- 4. Support vector machines are not neural networks. Due to their historical development and mathematical relationship to linear networks, however, they are discussed here.
- 5. A data point is contradictory if it belongs to both classes.
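The definition in note 5 can be checked mechanically: a training set contains a contradictory point exactly when the same input vector appears with more than one class label. The following sketch illustrates this; the helper name and the sample data are our own, not from the chapter.

```python
def contradictory_points(data):
    """Return input vectors that occur with more than one class label.

    data: list of (input_tuple, label) pairs.
    """
    labels = {}
    for x, label in data:
        labels.setdefault(x, set()).add(label)
    # A point is contradictory if it was seen with two or more labels.
    return [x for x, seen in labels.items() if len(seen) > 1]

# Example: the input (1, 0) appears in both classes, so no classifier
# can fit this data exactly.
training_data = [
    ((0, 0), 0),
    ((1, 0), 1),
    ((1, 0), 0),  # same input, different label -> contradictory
    ((1, 1), 1),
]
print(contradictory_points(training_data))  # -> [(1, 0)]
```

Such contradictions matter in practice because a contradictory training set cannot be learned without error by any deterministic classifier, linear or otherwise.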
References
E. Alpaydin. Introduction to Machine Learning. MIT Press, Cambridge, 2004.
J. Anderson, A. Pellionisz, and E. Rosenfeld. Neurocomputing (vol. 2): Directions for Research. MIT Press, Cambridge, 1990.
J. Anderson and E. Rosenfeld. Neurocomputing: Foundations of Research. MIT Press, Cambridge, 1988. Collection of fundamental original papers.
C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, London, 2005.
C. J. Burges. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov., 2(2):121–167, 1998.
W. Ertel, J. Schumann, and Ch. Suttner. Learning heuristics for a theorem prover using back propagation. In J. Retti and K. Leidlmair, editors, 5. Österreichische Artificial-Intelligence-Tagung. Informatik-Fachberichte, volume 208, pages 87–95. Springer, Berlin, 1989.
J. Hertz, A. Krogh, and R. Palmer. Introduction to the Theory of Neural Computation. Addison–Wesley, Reading, 1991.
J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79:2554–2558, 1982. Reprint in [AR88], pages 460–464.
J. J. Hopfield and D. W. Tank. "Neural" computation of decisions in optimization problems. Biol. Cybern., 52(3):141–152, 1985.
T. Kohonen. Correlation matrix memories. IEEE Trans. Comput., C-21(4):353–359, 1972. Reprint in [AR88], pages 171–174.
G. Palm. On associative memory. Biol. Cybern., 36:19–31, 1980.
G. Palm. Memory capacities of local rules for synaptic modification. Concepts Neurosci., 2(1):97–128, 1991. MPI Tübingen.
M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: the rprop algorithm. In Proceedings of the IEEE International Conference on Neural Networks, pages 586–591, 1993.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In [RM86].
D. Rumelhart and J. McClelland. Parallel Distributed Processing, volume 1. MIT Press, Cambridge, 1986.
H. Ritter, T. Martinetz, and K. Schulten. Neural Computation and Self-organizing Maps. Addison–Wesley, Reading, 1992.
R. Rojas. Neural Networks: A Systematic Introduction. Springer, Berlin, 1996.
Ch. Suttner and W. Ertel. Automatic acquisition of search guiding heuristics. In 10th Int. Conf. on Automated Deduction. LNAI, volume 449, pages 470–484. Springer, Berlin, 1990.
T. J. Sejnowski and C. R. Rosenberg. NETtalk: a parallel network that learns to read aloud. Technical Report JHU/EECS-86/01, The Johns Hopkins University, Electrical Engineering and Computer Science, 1986. Reprint in [AR88], pages 661–672.
B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, 2002.
Wikipedia, the free encyclopedia, 2010. http://en.wikipedia.org.
A. Zell. Simulation Neuronaler Netze. Addison–Wesley, Reading, 1994. Description of SNNS and JNNS: www-ra.informatik.uni-tuebingen.de/SNNS.
Copyright information
© 2011 Springer-Verlag London Limited
Cite this chapter
Ertel, W. (2011). Neural Networks. In: Introduction to Artificial Intelligence. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-0-85729-299-5_9
Publisher Name: Springer, London
Print ISBN: 978-0-85729-298-8
Online ISBN: 978-0-85729-299-5