Rates of Approximation in a Feedforward Network Depend on the Type of Computational Unit

Chapter in Dealing with Complexity

Abstract

The approximation capabilities of feedforward neural networks with a single hidden layer and with various activation functions have been widely studied ([19], [8], [1], [2], [13]). Mhaskar and Micchelli have shown in [22] that a network using any non-polynomial, locally Riemann integrable activation function can approximate any continuous function of any number of variables on a compact set to any desired degree of accuracy (i.e., it has the universal approximation property). This important result has advanced the investigation of the complexity problem: if one needs to approximate a function from a known class of functions within a prescribed accuracy, how many neurons are necessary to realize this approximation for all functions in the class? DeVore et al. [3] proved the following result: if one approximates, continuously in the parameters, a class of functions of d variables with bounded partial derivatives on a compact set, then in order to achieve an order of approximation O(1/n), it is necessary to use at least O(n^d) neurons, regardless of the activation function. In other words, when the class of functions being approximated is defined in terms of bounds on the partial derivatives, a dimension-independent bound on the degree of approximation is not possible. Kůrková studied the relationship between the approximation rates of one-hidden-layer networks with different types of hidden units. She showed in [14] that no sufficiently large class of functions can be approximated by one-hidden-layer networks with a type of unit other than the Heaviside perceptron at a rate of approximation related to the rate of approximation by perceptron networks.
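
To make the rate statements above concrete, the following is a minimal numerical sketch (not from the chapter): a one-hidden-layer network whose n steep sigmoidal units have biases fixed on a grid over [0, 1] and whose linear output weights are fitted by least squares. The target function sin(2πx), the steepness constant, and the grid placement are all illustrative assumptions; the printed sup-norm errors simply show the accuracy improving as the number of hidden units n grows, in the spirit of the trade-off between accuracy and network size discussed above.

    import numpy as np

    def target(x):
        # assumed target: a smooth function on the compact set [0, 1]
        return np.sin(2.0 * np.pi * x)

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    x = np.linspace(0.0, 1.0, 2000)   # dense sample of the compact set
    y = target(x)

    for n in (2, 4, 8, 16, 32):
        # n sigmoidal ridge units; the steep slope makes them near-Heaviside
        biases = np.linspace(0.0, 1.0, n)
        hidden = sigmoid(50.0 * (x[:, None] - biases[None, :]))
        # fit the linear output layer by least squares
        coef, *_ = np.linalg.lstsq(hidden, y, rcond=None)
        err = np.max(np.abs(hidden @ coef - y))   # uniform (sup-norm) error
        print(f"n = {n:3d} hidden units: sup-norm error = {err:.4f}")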

References

  1. Barron A.R.: Universal Approximation Bounds for Superpositions of a Sigmoidal Function. IEEE Transactions on Information Theory 1993; 39(3): 930–945

  2. Cybenko G.: Approximation by Superpositions of a Sigmoidal Function. Mathematics of Control, Signals and Systems 1989; 2(4): 303–314

  3. DeVore R.A., Howard R., Micchelli C.A.: Optimal Nonlinear Approximation. Manuscripta Mathematica 1989; 63: 469–478

  4. Hecht-Nielsen R.: Theory of the Backpropagation Neural Network. In Proceedings of the IEEE International Conference on Neural Networks 1, pp 593–605, 1988

  5. Hlaváčková K.: An Upper Estimate of the Error of Approximation of Continuous Multivariable Functions by KBF Networks. In Proceedings of ESANN'95, Brussels, pp 333–340, 1995

  6. Hlaváčková K., Kůrková V., Savický P.: Upper Bounds on the Approximation Rates of Real-valued Boolean Functions by Neural Networks. In Proceedings of ICANNGA'97, Norwich, England, in print, 1997

  7. Hlaváčková K., Kůrková V.: Rates of Approximation of Real-valued Boolean Functions by Neural Networks. In Proceedings of ESANN'96, Bruges, Belgium, pp 167–172, 1996

  8. Hornik K., Stinchcombe M., White H.: Multilayer Feedforward Networks Are Universal Approximators. Neural Networks 1989; 2: 359–366

  9. Hornik K., Stinchcombe M., White H., Auer P.: Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives. Neural Computation 1994; 6: 1265–1275

  10. Ito Y.: Finite Mapping by Neural Networks and Truth Functions. Math. Scientist 1992; 17: 69–77

  11. Jones L.K.: A Simple Lemma on Greedy Approximation in Hilbert Space and Convergence Rates for Projection Pursuit Regression and Neural Network Training. Annals of Statistics 1992; 20: 608–613

  12. Kůrková V., Kainen P.C., Kreinovich V.: Dimension-independent Rates of Approximation by Neural Networks and Variation with Respect to Half-spaces. In Proceedings of WCNN'95, INNS Press, vol 1, pp 54–57, 1995

  13. Kůrková V., Hlaváčková K.: Uniform Approximation by KBF Networks. In Proceedings of NEURONET'93, Prague, pp 1–7, 1993

  14. Kůrková V.: Approximation of Functions by Perceptron Networks with Bounded Number of Hidden Units. Neural Networks 1995; 8(5): 745–750

  15. Kůrková V., Kainen P.C., Kreinovich V.: Estimates of the Number of Hidden Units and Variation with Respect to Half-spaces. Neural Networks 1997 (in press)

  16. Kůrková V.: Dimension-independent Rates of Approximation by Neural Networks. In: Computer-Intensive Methods in Control and Signal Processing: Curse of Dimensionality (Eds. K. Warwick, M. Kárný). Birkhäuser, pp 261–270, 1997

  17. Kushilevitz E., Mansour Y.: Learning decision trees using the Fourier spectrum. In Proceedings of 23rd STOC, pp 455–464, 1991

  18. Lorentz G.G.: Approximation of Functions. Holt, Rinehart and Winston, New York, 1966

  19. Mhaskar H.N.: Noniterative Training Algorithms for Mapping Networks. Research Report, California State University, USA, 1993

  20. Mhaskar H.N.: Approximation Properties of a Multilayered Feedforward Artificial Neural Network. Advances in Computational Mathematics 1993; 1: 61–80

  21. Mhaskar H.N., Micchelli C.A.: Approximation by Superposition of Sigmoidal and Radial Basis Functions. Advances in Applied Mathematics 1992; 13: 350–373

  22. Mhaskar H.N., Micchelli C.A.: How to Choose an Activation Function. Manuscript, 1994

  23. Mhaskar H.N., Micchelli C.A.: Degree of Approximation by Superposition of a Fixed Function. In preparation.

  24. Mhaskar H.N., Micchelli C.A.: Dimension-independent Bounds on the Degree of Approximation by Neural Networks. IBM Journal of Research and Development 1994; 38(3): 277–283

  25. Rothaus O.S.: On "Bent" Functions. Journal of Combinatorial Theory, Series A 1976; 20: 300–305

  26. Weaver H.J.: Applications of Discrete and Continuous Fourier Analysis. John Wiley, New York, 1983

  27. Williamson R.C., Bartlett P.L.: Splines, Rational Functions and Neural Networks. Touretzky (ed.), CA, 1992

Copyright information

© 1998 Springer-Verlag London Limited

Cite this chapter

Kárný, M., Warwick, K., Kůrková, V. (1998). Rates of Approximation in a Feedforward Network Depend on the Type of Computational Unit. In: Kárný, M., Warwick, K., Kůrková, V. (eds) Dealing with Complexity. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1523-6_14

  • DOI: https://doi.org/10.1007/978-1-4471-1523-6_14

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76160-0

  • Online ISBN: 978-1-4471-1523-6
