The Computational Capabilities of Neural Networks
Artificial neural networks represent a widely applied computational paradigm that offers an alternative to conventional computers in many areas of artificial intelligence. By analogy with classical models of computation such as Turing machines, which are useful for understanding the computational potential and limits of conventional computers, the capability of neural networks to realize general computations has been studied for more than a decade, and many relevant results have been achieved [1, 2, 3, 5]. Neural networks are classified into a computational taxonomy according to the restrictions imposed on their parameters. Various models are thus obtained which have different computational capabilities and enrich the traditional repertoire of computational means. In particular, the computational power of neural networks has been investigated by comparing their variants with each other and with more traditional computational tools, including finite automata, Turing machines, and Boolean circuits. The aim of this approach is to find out what is, in principle or efficiently, computable by particular neural networks, and how to implement required functions optimally.
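The comparison with Boolean circuits can be made concrete with a small sketch. A single threshold unit (a McCulloch-Pitts neuron) computes only linearly separable Boolean functions such as AND and OR, whereas a two-layer network of such units suffices for XOR. The helper names below are illustrative assumptions, not terminology from the entry.

```python
# Sketch of threshold-unit computation of Boolean functions.
# Function names are illustrative, not from the original text.

def threshold_gate(inputs, weights, threshold):
    """Output 1 iff the weighted sum of the inputs reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def xor_net(x1, x2):
    """Two-layer threshold circuit for XOR, which no single gate computes."""
    h_or = threshold_gate((x1, x2), (1, 1), 1)        # x1 OR x2
    h_nand = threshold_gate((x1, x2), (-1, -1), -1)   # NOT (x1 AND x2)
    return threshold_gate((h_or, h_nand), (1, 1), 2)  # AND of the two
```

Restricting parameters in such a model, for example to integer versus real weights, is exactly the kind of constraint that places a network variant at a particular level of the computational taxonomy mentioned above.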
Keywords: Boolean function, Turing machine, finite automaton, computational capability, input length
- V.P. Roychowdhury, K.-Y. Siu, A. Orlitsky (eds.): Theoretical Advances in Neural Computation and Learning. Kluwer Academic Publishers, 1994.
- J. Šíma: The computational theory of neural networks. TR V-823, ICS, AS CR, Prague, 2000.
- K.-Y. Siu, V.P. Roychowdhury, T. Kailath: Discrete Neural Computation: A Theoretical Foundation. Prentice Hall, Englewood Cliffs, NJ, 1995.