Abstract
It is known that every algorithmic problem can be solved by a Turing machine. In the context of function computability, the Church-Turing thesis states that every intuitively computable function is Turing computable. The languages accepted by Turing machines form the recursively enumerable language family L0 and, according to the Church-Turing thesis, L0 is also the class of algorithmically computable sets. In spite of its generality, the Turing model cannot solve every problem. Recall, for example, that the halting problem is Turing unsolvable: it is algorithmically undecidable whether an arbitrary Turing machine will eventually halt when given some specified, but arbitrary, input.
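The unsolvability of the halting problem mentioned above follows from a classical diagonalization argument. A minimal sketch in Python, assuming (for contradiction) a hypothetical total decider `halts` — both function names here are illustrative, not part of any real library:

```python
def halts(program, argument):
    """Hypothetical decider: True iff program(argument) eventually halts.

    The argument below shows no such total, correct function can exist,
    so this placeholder deliberately implements nothing.
    """
    raise NotImplementedError("no total halting decider exists")


def diagonal(program):
    """Do the opposite of whatever `halts` predicts for program(program)."""
    if halts(program, program):
        while True:   # prediction was "halts" -> loop forever
            pass
    return            # prediction was "loops" -> halt immediately


# The contradiction: consider diagonal(diagonal).
#   If halts(diagonal, diagonal) is True,  diagonal(diagonal) loops forever.
#   If halts(diagonal, diagonal) is False, diagonal(diagonal) halts.
# Either way `halts` answers incorrectly on this input, so it cannot exist.
```

Since `halts` cannot be implemented, calling it simply raises; the force of the argument is in the comments, not in executing `diagonal`.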
© 1998 Springer-Verlag London Limited
Cite this chapter
Kárný, M., Warwick, K., Kůrková, V. (1998). The Psychological Limits of Neural Computation. In: Kárný, M., Warwick, K., Kůrková, V. (eds) Dealing with Complexity. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1523-6_17
Publisher Name: Springer, London
Print ISBN: 978-3-540-76160-0
Online ISBN: 978-1-4471-1523-6
eBook Packages: Springer Book Archive