Model Complexity of Neural Networks and Integral Transforms

  • Conference paper
Artificial Neural Networks – ICANN 2009 (ICANN 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5768)

Abstract

Model complexity of neural networks is investigated using tools from nonlinear approximation and integration theory. Estimates of network complexity are obtained by inspecting upper bounds on the decrease of approximation error as multivariable functions are approximated by networks with increasing numbers of units. The upper bounds are derived using integral transforms with kernels corresponding to various types of computational units. The results are applied to perceptron networks.
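
As a rough illustration of the type of estimate the abstract refers to (a sketch based on the standard Maurey–Jones–Barron argument, not the paper's exact statements), let G = { φ(·, y) : y ∈ Y } denote the set of functions computable by a single unit with kernel φ. If f admits an integral representation with an output-weight function w, then the variation of f with respect to G is bounded by the L¹-norm of w, and networks with n units approximate f at rate n^{-1/2}:

    % Sketch of a Maurey--Jones--Barron-type bound (illustrative assumption,
    % not quoted from the paper): integral representation controls G-variation,
    % and G-variation controls the n-unit approximation error.
    \[
      f(x) = \int_Y w(y)\,\varphi(x,y)\,d\mu(y)
      \;\Longrightarrow\;
      \|f\|_G \le \|w\|_{\mathcal{L}^1(Y,\mu)},
    \]
    \[
      \inf_{g \in \operatorname{span}_n G} \|f - g\|_{\mathcal{L}^2}
      \le \frac{s_G\,\|f\|_G}{\sqrt{n}},
      \qquad s_G = \sup_{g \in G} \|g\|_{\mathcal{L}^2}.
    \]

Under these assumptions, the number of units needed to guarantee accuracy ε grows at most like (s_G ‖f‖_G / ε)², so bounds on ‖w‖_{L¹} for a given kernel translate directly into model-complexity estimates for networks built from the corresponding units, such as perceptrons.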

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kůrková, V. (2009). Model Complexity of Neural Networks and Integral Transforms. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_73

  • DOI: https://doi.org/10.1007/978-3-642-04274-4_73

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04273-7

  • Online ISBN: 978-3-642-04274-4

  • eBook Packages: Computer Science (R0)
