The Psychological Limits of Neural Computation

Chapter in: Dealing with Complexity

Abstract

It is known that every algorithm can be carried out by a Turing machine. In the context of function computability, the Church-Turing thesis states that every intuitively computable function is Turing computable. The languages accepted by Turing machines form the recursively enumerable language family L₀ and, according to the Church-Turing thesis, L₀ is also the class of algorithmically computable sets. In spite of its generality, the Turing model cannot solve every problem. Recall, for example, that the halting problem is Turing unsolvable: it is algorithmically undecidable whether an arbitrary Turing machine will eventually halt when given some specified, but arbitrary, input.
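
The undecidability of the halting problem follows from a short diagonalization argument. As a minimal sketch (added here for illustration; `halts` and `paradox` are hypothetical names, and the point is precisely that no real `halts` can exist), suppose a halting decider existed and feed it a program built from its own source:

```python
# Sketch of the classic diagonalization proof that halting is undecidable.
# Assume, for contradiction, a total, always-correct decision procedure:
# halts(p, x) returns True iff program p, run on input x, eventually halts.
def halts(program_source: str, program_input: str) -> bool:
    raise NotImplementedError("assumed to exist only for the contradiction")

def paradox(program_source: str) -> None:
    # Do the opposite of whatever halts() predicts about running the
    # program on its own source code.
    if halts(program_source, program_source):
        while True:  # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# Let src be the source text of paradox itself. Then paradox(src) halts
# if and only if halts(src, src) is False, i.e. iff paradox(src) does not
# halt -- a contradiction. Hence no program (and, by the Church-Turing
# thesis, no Turing machine) computes halts().
```

Because the argument uses nothing beyond self-reference, it applies to any Turing-complete model of computation, not just to this Python rendering.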



Copyright information

© 1998 Springer-Verlag London Limited

About this chapter

Cite this chapter

Kárný, M., Warwick, K., Kůrková, V. (1998). The Psychological Limits of Neural Computation. In: Kárný, M., Warwick, K., Kůrková, V. (eds) Dealing with Complexity. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1523-6_17

  • DOI: https://doi.org/10.1007/978-1-4471-1523-6_17

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76160-0

  • Online ISBN: 978-1-4471-1523-6
