Part of the book series: Undergraduate Topics in Computer Science ((UTICS))

Abstract

Neural networks are networks of nerve cells in the brains of humans and animals. The human brain contains about 100 billion nerve cells. We humans owe our intelligence, our ability to learn motor skills, and our intellectual capabilities to the brain's complex interconnections and adaptivity. The nerve cells and their connections are responsible for awareness, associations, thoughts, consciousness, and the ability to learn. Mathematical models of neural networks, and their implementation on computers, are nowadays used in many applications, such as pattern recognition and robot learning. The power of this fascinating branch of bionics within AI is demonstrated on several popular network models applied to a variety of tasks.

Notes

  1. Bionics is concerned with unlocking the “discoveries of living nature” and their innovative conversion into technology [Wik10].

  2. Even the author was taken up by this wave, which carried him from physics into AI in 1987.

  3. For a clear differentiation between training data and other values of a neuron, in the following discussion we will refer to the query vector as q and the desired response as t (target); see the sketch after these notes.

  4. Support vector machines are not neural networks. Due to their historical development and mathematical relationship to linear networks, however, they are discussed here.

  5. A data point is contradictory if it belongs to both classes, i.e., if the same input vector occurs in the training data with both class labels.
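To make the notation of note 3 concrete, the following is a minimal sketch (not from the chapter) of a single linear neuron trained with the delta rule, where q is a query/input vector and t the desired target response; the training pairs and the learning rate are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical training pairs (query vector q, target t), made up for illustration.
training_data = [
    (np.array([0.0, 1.0]), 1.0),
    (np.array([1.0, 0.0]), 0.0),
    (np.array([1.0, 1.0]), 1.0),
]

w = np.zeros(2)   # weights of a single linear neuron
eta = 0.1         # learning rate (assumed value)

for epoch in range(100):
    for q, t in training_data:
        y = w @ q                # neuron output for query q
        w += eta * (t - y) * q   # delta rule: nudge output toward target t

print(w)  # approaches the least-squares weights for these pairs
```

The update (t - y)q is the negative gradient of the squared error (t - y)^2/2 with respect to w, so repeated passes drive the outputs toward the targets.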

References

  1. E. Alpaydin. Introduction to Machine Learning. MIT Press, Cambridge, 2004.

  2. J. Anderson, A. Pellionisz, and E. Rosenfeld. Neurocomputing (vol. 2): Directions for Research. MIT Press, Cambridge, 1990.

  3. J. Anderson and E. Rosenfeld. Neurocomputing: Foundations of Research. MIT Press, Cambridge, 1988. Collection of fundamental original papers.

  4. C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, London, 2005.

  5. C. J. Burges. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov., 2(2):121–167, 1998.

  6. W. Ertel, J. Schumann, and Ch. Suttner. Learning heuristics for a theorem prover using back propagation. In J. Retti and K. Leidlmair, editors, 5. Österreichische Artificial-Intelligence-Tagung. Informatik-Fachberichte, volume 208, pages 87–95. Springer, Berlin, 1989.

  7. J. Hertz, A. Krogh, and R. Palmer. Introduction to the Theory of Neural Computation. Addison–Wesley, Reading, 1991.

  8. J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79:2554–2558, 1982. Reprint in [AR88], pages 460–464.

  9. J. J. Hopfield and D. W. Tank. “Neural” computation of decisions in optimization problems. Biol. Cybern., 52(3):141–152, 1985. Springer.

  10. T. Kohonen. Correlation matrix memories. IEEE Trans. Comput., C-21(4):353–359, 1972. Reprint in [AR88], pages 171–174.

  11. G. Palm. On associative memory. Biol. Cybern., 36:19–31, 1980.

  12. G. Palm. Memory capacities of local rules for synaptic modification. Concepts Neurosci., 2(1):97–128, 1991. MPI Tübingen.

  13. M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, pages 586–591, 1993.

  14. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In [RM86].

  15. D. Rumelhart and J. McClelland. Parallel Distributed Processing, volume 1. MIT Press, Cambridge, 1986.

  16. H. Ritter, T. Martinetz, and K. Schulten. Neural Computation and Self-organizing Maps. Addison–Wesley, Reading, 1992.

  17. R. Rojas. Neural Networks: A Systematic Introduction. Springer, Berlin, 1996.

  18. Ch. Suttner and W. Ertel. Automatic acquisition of search guiding heuristics. In 10th Int. Conf. on Automated Deduction. LNAI, volume 449, pages 470–484. Springer, Berlin, 1990.

  19. T. J. Sejnowski and C. R. Rosenberg. NETtalk: a parallel network that learns to read aloud. Technical Report JHU/EECS-86/01. The Johns Hopkins University Electrical Engineering and Computer Science Technical Report, 1986. Reprint in [AR88], pages 661–672.

  20. B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, 2002.

  21. Wikipedia, the free encyclopedia, 2010. http://en.wikipedia.org.

  22. A. Zell. Simulation Neuronaler Netze. Addison–Wesley, Reading, 1994. Description of SNNS and JNNS: www-ra.informatik.uni-tuebingen.de/SNNS.


Author information

Correspondence to Wolfgang Ertel.

Copyright information

© 2011 Springer-Verlag London Limited

About this chapter

Cite this chapter

Ertel, W. (2011). Neural Networks. In: Introduction to Artificial Intelligence. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-0-85729-299-5_9

  • DOI: https://doi.org/10.1007/978-0-85729-299-5_9

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-298-8

  • Online ISBN: 978-0-85729-299-5

  • eBook Packages: Computer Science, Computer Science (R0)
