Abstract
The approximation capabilities of feedforward neural networks with a single hidden layer and with various activation functions have been widely studied ([19], [8], [1], [2], [13]). Mhaskar and Micchelli have shown in [22] that a network using any non-polynomial, locally Riemann integrable activation function can approximate any continuous function of any number of variables on a compact set to any desired degree of accuracy (i.e. it has the universal approximation property). This important result has advanced the investigation of the complexity problem: if one needs to approximate a function from a known class of functions within a prescribed accuracy, how many neurons are necessary to realize this approximation for all functions in the class? DeVore et al. ([3]) proved the following result: if one approximates continuously a class of functions of d variables with bounded partial derivatives on a compact set, then in order to achieve the order of approximation O(1/n) it is necessary to use at least O(n^d) neurons, regardless of the activation function. In other words, when the class of functions being approximated is defined in terms of bounds on the partial derivatives, a dimension-independent bound on the degree of approximation is not possible. Kůrková studied the relationship between the approximation rates of one-hidden-layer neural networks with different types of hidden units. She showed in [14] that no sufficiently large class of functions can be approximated by one-hidden-layer networks with a type of unit other than Heaviside perceptrons at a rate of approximation related to the rate of approximation by perceptron networks.
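The universal approximation property discussed above can be illustrated numerically. The following is a minimal sketch, not taken from the chapter: a one-hidden-layer network with logistic-sigmoid units approximates a smooth target on a compact interval. The hidden weights and biases are drawn at random and only the outer coefficients are fitted by least squares, so this illustrates the expressive power of sigmoidal superpositions rather than any particular training algorithm; all parameter values (weight scales, unit counts) are illustrative assumptions.

```python
import numpy as np

# Sketch: y(x) = sum_i c_i * sigma(w_i * x + b_i), with sigma the logistic
# sigmoid (a non-polynomial activation). Hidden parameters are random;
# only the outer coefficients c_i are solved for by least squares.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_network(x, y, n_hidden):
    """Random hidden layer + least-squares output layer (illustrative)."""
    w = rng.normal(scale=10.0, size=n_hidden)    # hidden weights
    b = rng.uniform(-10.0, 10.0, size=n_hidden)  # hidden biases
    H = sigmoid(np.outer(x, w) + b)              # hidden-unit activations
    c, *_ = np.linalg.lstsq(H, y, rcond=None)    # outer coefficients
    return lambda t: sigmoid(np.outer(t, w) + b) @ c

x = np.linspace(0.0, 1.0, 200)           # compact set [0, 1]
target = np.sin(2 * np.pi * x)           # smooth target function

for n in (5, 20, 80):
    net = fit_network(x, target, n)
    err = np.max(np.abs(net(x) - target))
    print(f"n = {n:3d} hidden units, sup-norm error = {err:.2e}")
```

As the number of hidden units grows, the sup-norm error on the grid shrinks, in line with the universal approximation property; the lower bounds quoted above concern how fast such errors can decrease uniformly over a whole class of multivariable functions.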
References
Barron A.R.: Universal Approximation Bounds for Superpositions of a Sigmoidal Function. IEEE Transactions on Information Theory 1993; vol 39; 3: 930–945
Cybenko G.: Approximation by Superpositions of a Sigmoidal Function. Mathematics of Control, Signals and Systems 1989; 2; 4: 303–314
DeVore R.A., Howard R., Micchelli C.A.: Optimal Nonlinear Approximation. Manuscripta Mathematica 1989; 63: 469–478
Hecht-Nielsen R.: Theory of the Backpropagation Neural Network. In Proceedings of the IEEE International Conference on Neural Networks, vol 1, pp 593–605, 1988
Hlaváčková K.: An Upper Estimate of the Error of Approximation of Continuous Multivariable Functions by KBF Networks. In Proceedings of ESANN'95, Brussels, pp 333–340, 1995
Hlaváčková K., Kůrková V., Savický P.: Upper Bounds on the Approximation Rates of Real-valued Boolean Functions by Neural Networks. In Proceedings of ICANNGA'97, Norwich, England, in print, 1997
Hlaváčková K., Kůrková V.: Rates of Approximation of Real-valued Boolean Functions by Neural Networks. In Proceedings of ESANN'96, Bruges, Belgium, pp 167–172, 1996
Hornik K., Stinchcombe M., White H.: Multilayer Feedforward Networks Are Universal Approximators. Neural Networks 1989; 2: 359–366
Hornik K., Stinchcombe M., White H., Auer P.: Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives. Neural Computation 1994; 6: 1265–1275
Ito, Y.: Finite mapping by neural networks and truth functions. Math. Scientist 1992; vol 17: 69–77
Jones, L.K.: A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Annals of Statistics 1992; 20, 601–613
Kůrková V., Kainen P.C., Kreinovich V.: Dimension-independent Rates of Approximation by Neural Networks and Variation with Respect to Half-spaces. In Proceedings of WCNN'95, INNS Press, vol 1, pp 54–57, 1995
Kůrková V., Hlaváčková K.: Uniform Approximation by KBF Networks. In Proceedings of NEURONET'93, Prague, pp 1–7, 1993
Kůrková V.: Approximation of Functions by Perceptron Networks with Bounded Number of Hidden Units. Neural Networks 1995; vol 8; 5: 745–750
Kůrková V., Kainen P.C., Kreinovich V.: Estimates of the Number of Hidden Units and Variation with Respect to Half-spaces. Neural Networks 1997 (in press)
Kůrková V.: Dimension-independent Rates of Approximation by Neural Networks. In Computer-Intensive Methods in Control and Signal Processing: Curse of Dimensionality (Eds. K. Warwick, M. Kárný). Birkhäuser, pp 261–270, 1997
Kushilevitz E., Mansour Y.: Learning decision trees using the Fourier spectrum. In Proceedings of 23rd STOC, pp 455–464, 1991
Lorentz G.G.: Approximation of Functions. Holt, Rinehart and Winston, New York, 1966
Mhaskar H.N.: Noniterative Training Algorithms for Mapping Networks. Research Report, California State University, USA, 1993
Mhaskar H.N.: Approximation Properties of a Multilayered Feedforward Artificial Neural Network. Advances in Computational Mathematics 1993; 1: 61–80
Mhaskar H.N., Micchelli C.A.: Approximation by Superposition of a Sigmoidal Function and Radial Basis Functions. Advances in Applied Mathematics 1992; 13: 350–373
Mhaskar H.N., Micchelli C.A.: How to Choose an Activation Function. Manuscript, 1994
Mhaskar H.N., Micchelli C.A.: Degree of Approximation by Superposition of a Fixed Function. In preparation.
Mhaskar H.N., Micchelli C.A.: Dimension-independent bounds on the degree of approximation by neural networks. IBM J. Res Develop. 1994; vol 38, 3: 277–283
Rothaus O.S.: On "Bent" Functions. Journal of Combinatorial Theory 1976; Ser. A; 20: 300–305
Weaver, H.J.: Applications of discrete and continuous Fourier analysis. John Wiley, New York, 1983
Williamson R.C., Bartlett P.L.: Splines, Rational Functions and Neural Networks. Touretzky Edition, CA, 1992
© 1998 Springer-Verlag London Limited
Cite this chapter
Kárný, M., Warwick, K., Kůrková, V. (1998). Rates of Approximation in a Feedforward Network Depend on the Type of Computational Unit. In: Kárný, M., Warwick, K., Kůrková, V. (eds) Dealing with Complexity. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1523-6_14
Publisher Name: Springer, London
Print ISBN: 978-3-540-76160-0
Online ISBN: 978-1-4471-1523-6