Some Comparisons Between Linear Approximation and Approximation by Neural Networks
We present comparisons between the approximation rates achievable by linear approximators and those achievable by neural networks, i.e., nonlinear approximators represented by sets of parametrized functions corresponding to a given type of computational unit. Our analysis uses the concept of variation of a function with respect to a set. The comparison is made in terms of the Kolmogorov n-width for linear spaces and a suitable nonlinear n-width for the nonlinear setting represented by neural networks. The results of this paper contribute to the theoretical understanding of the superiority of neural networks over linear approximators in complex tasks, as confirmed by a wide variety of applications (recognition of handwritten characters and spoken numerals, approximate solution of functional optimization problems from control theory, etc.).
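For reference, the two quantities compared in the abstract can be written out explicitly; the following definitions are standard in approximation theory (the notation below is ours, not quoted from the paper). The Kolmogorov n-width measures how well a set can be approximated by the best n-dimensional linear subspace, while the variation with respect to a set G underlies the nonlinear rates for networks built from units in G:

```latex
% Kolmogorov n-width of a subset K of a normed linear space X:
% the outer infimum ranges over all n-dimensional linear subspaces X_n of X.
d_n(K, X) = \inf_{X_n} \; \sup_{f \in K} \; \inf_{g \in X_n} \| f - g \|_X .

% Variation of f with respect to a set G of functions
% (Minkowski functional of the closed convex symmetric hull of G):
\| f \|_G = \inf \bigl\{ c > 0 : f/c \in \overline{\operatorname{conv}} \, ( G \cup -G ) \bigr\} .
```

Finiteness of \( \| f \|_G \) is the typical hypothesis under which dimension-independent rates for approximation by networks with units from \( G \) are obtained, which is what makes the comparison with \( d_n \) meaningful.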
Keywords: Neural Network, Hilbert Space, Unit Ball, Dimensional Subspace, Hidden Unit