Artificial Neural Nets and Genetic Algorithms, pp. 172–177

# Some Comparisons Between Linear Approximation and Approximation by Neural Networks

## Abstract

We present some comparisons between the approximation rates achievable by linear approximators and those achievable by neural networks, i.e., nonlinear approximators represented by sets of parametrized functions corresponding to a given type of computational unit. Our analysis uses the concept of variation of a function with respect to a set. The comparison is made in terms of the Kolmogorov *n*-width for linear subspaces and of a suitable nonlinear *n*-width for the nonlinear setting represented by neural networks. The results of this paper contribute to the theoretical understanding of the superiority of neural networks over linear approximators in complex tasks, as confirmed by a wide variety of applications (recognition of handwritten characters and spoken numerals, approximate solution of functional optimization problems from control theory, etc.).
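For the reader's convenience, the two standard notions underlying the comparison can be recalled as follows (the notation below is ours, not necessarily the paper's): the Kolmogorov *n*-width of a subset $K$ of a normed linear space $(X, \|\cdot\|)$, and the variation of a function $f$ with respect to a set $G$ (e.g., the set of functions computable by a single hidden unit), defined as the Minkowski functional of the closed convex symmetric hull of $G$:

```latex
% Kolmogorov n-width: worst-case error of the best
% n-dimensional linear subspace X_n for approximating K.
d_n(K, X) = \inf_{\dim X_n \le n} \, \sup_{f \in K} \, \inf_{g \in X_n} \| f - g \|

% Variation of f with respect to a set G: the Minkowski
% functional of the closed convex symmetric hull of G.
\| f \|_G = \inf \bigl\{ c > 0 \; : \; f/c \in \operatorname{cl\,conv} (G \cup -G) \bigr\}
```

Roughly, the first quantity measures how well the *best* $n$-dimensional linear subspace can approximate every element of $K$, while the second bounds how well $f$ can be approximated by convex combinations of $n$ elements of $G$, which is the nonlinear counterpart exploited for neural networks.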

## Keywords

Neural network · Hilbert space · Unit ball · Dimensional subspace · Hidden unit

## References

- [1] Barron, A.R.: Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory 39, pp. 930–945, 1993.
- [2] Barron, A.R.: Neural net approximation. Proc. 7th Yale Workshop on Adaptive and Learning Systems, K. Narendra (Ed.), Yale University Press, 1992.
- [3] Burr, D.J.: Experiments on neural net recognition of spoken and written text. IEEE Trans. Acoust., Speech and Signal Processing 36, pp. 1162–1168, 1988.
- [4] Cybenko, G.: Approximation by superposition of a sigmoidal function. Math. Control Signals Systems 2, pp. 303–314, 1989.
- [5] Girosi, F., Jones, M., Poggio, T.: Regularization theory and neural networks architectures. Neural Computation 7, pp. 219–269, 1995.
- [6] Hlaváčková, K., Sanguineti, M.: On the rates of linear and nonlinear approximations. Proc. 3rd IEEE European Workshop on Computer-Intensive Methods in Control and Signal Processing (CMP), pp. 211–216, 1998.
- [7] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2, pp. 359–366, 1989.
- [8] Kainen, P.C., Kůrková, V., Vogt, A.: Approximation by neural networks is not continuous. Submitted to Neurocomputing.
- [9] Kůrková, V.: Dimension-independent rates of approximation by neural networks. In: Computer-Intensive Methods in Control and Signal Processing: Curse of Dimensionality (K. Warwick, M. Kárný, Eds.), Birkhäuser, Boston, pp. 261–270, 1997.
- [10] Kůrková, V., Savický, P., Hlaváčková, K.: Representations and rates of approximation of real-valued Boolean functions by neural networks. Neural Networks 11, pp. 651–659, 1998.
- [11] Mhaskar, H.N., Micchelli, C.A.: Dimension-independent bounds on the degree of approximation by neural networks. IBM Journal of Research and Development 38, pp. 277–284, 1994.
- [12] Parisini, T., Sanguineti, M., Zoppoli, R.: Nonlinear stabilization by receding-horizon neural regulators. International Journal of Control 70, no. 3, pp. 341–362, 1998.
- [13] Park, J., Sandberg, I.W.: Approximation and radial-basis-function networks. Neural Computation 5, pp. 305–316, 1993.
- [14] Pinkus, A.: *n*-Widths in Approximation Theory. Springer-Verlag, New York, 1986.
- [15] Sejnowski, T.J., Rosenberg, C.R.: Parallel networks that learn to pronounce English text. Complex Systems 1, pp. 145–168, 1987.