
The Computational Capabilities of Neural Networks

  • Jiří Šíma
Conference paper

Abstract

Artificial neural networks represent a widely applied computational paradigm that serves as an alternative to conventional computers in many areas of artificial intelligence. By analogy with classical models of computation such as Turing machines, which are useful for understanding the computational potential and limits of conventional computers, the capability of neural networks to realize general computations has been studied for more than a decade, and many relevant results have been achieved [1, 2, 3, 5]. Neural networks are classified into a computational taxonomy according to the restrictions imposed on their parameters. Thus, various models are obtained which have different computational capabilities and enrich the traditional repertoire of computational means. In particular, the computational power of neural networks has been investigated by comparing their variants with each other and with more traditional computational tools, including finite automata, Turing machines, Boolean circuits, etc. The aim of this approach is to find out what is, in principle, or efficiently, computable by particular neural networks, and how to optimally implement required functions.
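One classical separation of the kind the abstract alludes to can be illustrated with threshold gates, the discrete neurons studied in [1, 5]. The sketch below (illustrative only, not taken from the paper) shows that a single threshold gate computes linearly separable Boolean functions such as AND and OR, while XOR requires a two-layer threshold circuit:

```python
def threshold_gate(weights, bias, inputs):
    """Discrete neuron: output 1 iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= bias else 0

def and_gate(x):
    # AND is linearly separable: weights (1, 1), threshold 2
    return threshold_gate((1, 1), 2, x)

def or_gate(x):
    # OR is linearly separable: weights (1, 1), threshold 1
    return threshold_gate((1, 1), 1, x)

def xor_network(x):
    # XOR is not linearly separable, so no single gate suffices;
    # a two-layer circuit computes XOR = OR(x) AND NOT AND(x).
    return threshold_gate((1, -1), 1, (or_gate(x), and_gate(x)))

if __name__ == "__main__":
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, "AND:", and_gate(x), "OR:", or_gate(x), "XOR:", xor_network(x))
```

Comparing which functions are computable by one layer versus two is a miniature instance of the taxonomy described above: restricting the architecture (here, the depth) changes the class of computable functions.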

Keywords

Boolean function · Turing machine · Finite automaton · Computational capability · Input length


References

  1. I. Parberry: Circuit Complexity and Neural Networks. The MIT Press, Cambridge, MA, 1994.
  2. V.P. Roychowdhury, K.-Y. Siu, A. Orlitsky (eds.): Theoretical Advances in Neural Computation and Learning. Kluwer Academic Publishers, 1994.
  3. H.T. Siegelmann: Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, Boston, 1999.
  4. J. Šíma: The computational theory of neural networks. TR V-823, ICS, AS CR, Prague, 2000.
  5. K.-Y. Siu, V.P. Roychowdhury, T. Kailath: Discrete Neural Computation: A Theoretical Foundation. Prentice Hall, Englewood Cliffs, NJ, 1995.

Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • Jiří Šíma
  1. Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague 8, Czech Republic
  2. Institute for Theoretical Computer Science (ITI), Charles University, Czech Republic
