Universality and Complexity of Approximation of Multivariable Functions by Feedforward Networks

Chapter

Abstract

A theoretical framework for investigating the approximation capabilities of feedforward networks is presented in the context of nonlinear approximation theory. Some recent results on the universal approximation property, and on estimates of network complexity measured by the number of hidden units, are described.
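As background for the abstract's two themes, the following is a standard sketch of the setting; the notation (activation σ, dictionary bound B, the L² norm) is assumed here rather than quoted from the chapter.

```latex
% A one-hidden-layer feedforward network with n hidden units and
% activation function \sigma computes functions of the form
\[
  f_n(x) \;=\; \sum_{i=1}^{n} c_i \,\sigma(a_i \cdot x + b_i),
  \qquad a_i \in \mathbb{R}^d,\; b_i, c_i \in \mathbb{R}.
\]
% Universality: for suitable \sigma (e.g., any continuous sigmoid),
% such functions are dense in C(K) for every compact K \subset \mathbb{R}^d.
% Complexity: Maurey--Jones--Barron-type estimates bound the error of the
% best n-unit approximant to any f in the closed convex hull of the
% dictionary \{\pm B\,\sigma(a \cdot x + b)\} in a Hilbert space by
\[
  \inf_{f_n} \, \| f - f_n \|_{L^2} \;\le\; \frac{B}{\sqrt{n}},
\]
% a rate whose dependence on n does not involve the input dimension d.
```

The point of the second estimate is that the number of hidden units needed for a prescribed accuracy grows independently of the input dimension, provided f lies in the relevant convex closure; this is the sense in which network complexity is measured by the number of hidden units.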

Copyright information

© 2002 Springer-Verlag London

About this chapter

Cite this chapter

Kůrková, V. (2002). Universality and Complexity of Approximation of Multivariable Functions by Feedforward Networks. In: Roy, R., Köppen, M., Ovaska, S., Furuhashi, T., Hoffmann, F. (eds) Soft Computing and Industry. Springer, London. https://doi.org/10.1007/978-1-4471-0123-9_2

  • DOI: https://doi.org/10.1007/978-1-4471-0123-9_2

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-1101-6

  • Online ISBN: 978-1-4471-0123-9

  • eBook Packages: Springer Book Archive
