The Computational Capabilities of Neural Networks

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

Artificial neural networks represent a widely applied computational paradigm that is an alternative to conventional computers in many areas of artificial intelligence. By analogy with classical models of computation such as Turing machines, which are useful for understanding the computational potential and limits of conventional computers, the capability of neural networks to realize general computations has been studied for more than a decade, and many relevant results have been achieved [1, 2, 3, 5]. Neural networks are classified into a computational taxonomy according to the restrictions imposed on their parameters. Thus, various models are obtained which have different computational capabilities and enrich the traditional repertoire of computational means. In particular, the computational power of neural networks has been investigated by comparing their variants with each other and with more traditional computational tools, including finite automata, Turing machines, Boolean circuits, etc. The aim of this approach is to find out what is, in principle or efficiently, computable by particular neural networks, and how to optimally implement required functions.
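
The comparison with finite automata mentioned in the abstract can be made concrete with a small illustration. The sketch below is not taken from the paper; the parity automaton, the one-hot state encoding, and the integer weights and thresholds are illustrative assumptions chosen for brevity. It shows the classical idea that a discrete-time network of binary threshold units can simulate a deterministic finite automaton, here one accepting binary strings that contain an odd number of 1s.

def threshold(z):
    # Heaviside activation of a single threshold unit: fires iff the
    # weighted sum (with the bias already subtracted) is non-negative.
    return 1 if z >= 0 else 0

def step(state, x):
    # One synchronous update of the network.
    #   state : one-hot pair (s_even, s_odd) encoding the automaton state
    #   x     : current input bit (0 or 1)
    # Hidden layer: AND-like threshold units, one per (state, input-literal) pair.
    # Output layer: OR-like threshold units producing the next one-hot state.
    s_even, s_odd = state
    h = [
        threshold(s_even + (1 - x) - 2),  # in 'even', read 0 -> stay in 'even'
        threshold(s_odd  + x       - 2),  # in 'odd',  read 1 -> move to 'even'
        threshold(s_even + x       - 2),  # in 'even', read 1 -> move to 'odd'
        threshold(s_odd  + (1 - x) - 2),  # in 'odd',  read 0 -> stay in 'odd'
    ]
    next_even = threshold(h[0] + h[1] - 1)
    next_odd  = threshold(h[2] + h[3] - 1)
    return (next_even, next_odd)

def accepts(bits):
    # Drive the network over the input string; accept iff it ends in 'odd'.
    state = (1, 0)                        # start state: zero 1s seen, i.e. even
    for x in bits:
        state = step(state, x)
    return state[1] == 1

print(accepts([1, 0, 1, 1]))   # True: three 1s, odd parity
print(accepts([1, 1, 0]))      # False: two 1s, even parity

Each state transition of the automaton is realized by one hidden threshold unit, so the construction uses a number of units proportional to the automaton's transition table, in the spirit of the finite-automata comparisons surveyed in the abstract.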

Research partially supported by project LN00A056 of the Ministry of Education of the Czech Republic.


References

  1. I. Parberry: Circuit Complexity and Neural Networks. The MIT Press, Cambridge, MA, 1994.

  2. V.P. Roychowdhury, K.-Y. Siu, A. Orlitsky (eds.): Theoretical Advances in Neural Computation and Learning. Kluwer Academic Publishers, 1994.

  3. H.T. Siegelmann: Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, Boston, 1999.

  4. J. Šíma: The Computational Theory of Neural Networks. Technical Report V-823, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, 2000.

  5. K.-Y. Siu, V.P. Roychowdhury, T. Kailath: Discrete Neural Computation: A Theoretical Foundation. Prentice Hall, Englewood Cliffs, NJ, 1995.


Copyright information

© 2001 Springer-Verlag Wien

About this paper

Cite this paper

Šíma, J. (2001). The Computational Capabilities of Neural Networks. In: Kůrková, V., Neruda, R., Kárný, M., Steele, N.C. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6230-9_4

  • DOI: https://doi.org/10.1007/978-3-7091-6230-9_4

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83651-4

  • Online ISBN: 978-3-7091-6230-9

  • eBook Packages: Springer Book Archive
