Rates of Approximation of Multivariable Functions by One-hidden-layer Neural Networks

  • Conference paper

Part of the book series: Perspectives in Neural Computing ((PERSPECT.NEURAL))

Abstract

We investigate rates of approximation of multivariable functions by one-hidden-layer neural networks with a general hidden unit function. Under mild assumptions on the hidden unit function, we derive upper bounds on rates of approximation (measured both by the number of hidden units and by the size of parameters) in terms of various norms of the function to be approximated and its higher-order moduli of continuity.
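The setting described above can be illustrated numerically. The sketch below (an illustration, not the paper's method) fits a one-hidden-layer network of the form sum_k c_k sigma(a_k . x + b_k) to a smooth function of two variables, drawing the inner parameters a_k, b_k at random and solving for the output weights c_k by least squares, then reports how the L2 error shrinks as the number of hidden units grows. The target function, the tanh hidden unit, and the parameter ranges are all arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = np.tanh  # hidden unit function (illustrative choice)

# Sample points and target f(x, y) = sin(pi x) * cos(pi y) on [0, 1]^2.
X = rng.uniform(0.0, 1.0, size=(2000, 2))
f = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

def l2_error(n_hidden):
    """Empirical L2 error of a random-feature fit with n_hidden units."""
    A = rng.normal(0.0, 3.0, size=(2, n_hidden))   # inner weights a_k
    b = rng.uniform(-3.0, 3.0, size=n_hidden)      # biases b_k
    H = sigma(X @ A + b)                           # hidden-unit outputs
    c, *_ = np.linalg.lstsq(H, f, rcond=None)      # output weights c_k
    return np.sqrt(np.mean((H @ c - f) ** 2))

for n in (4, 16, 64, 256):
    print(f"{n:4d} hidden units: L2 error {l2_error(n):.4f}")
```

Because the inner parameters are random rather than optimized, the observed decay only loosely mirrors the kind of upper bounds studied in the paper, but the qualitative trend (error decreasing with the number of hidden units) is visible.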

This work was partially supported by GA AV grant A2030602 and KBN grant 8T11A02311.




Copyright information

© 1998 Springer-Verlag London Limited

About this paper

Cite this paper

Kůrková, V. (1998). Rates of Approximation of Multivariable Functions by One-hidden-layer Neural Networks. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN VIETRI-97. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1520-5_9

  • DOI: https://doi.org/10.1007/978-1-4471-1520-5_9

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-1522-9

  • Online ISBN: 978-1-4471-1520-5

  • eBook Packages: Springer Book Archive
