Incremental Function Approximation Based on Gram-Schmidt Orthonormalisation Process
In this paper we present an incremental function approximation in Hilbert space based on the Gram-Schmidt orthonormalisation process. Two bases of the approximation space are determined and maintained during the approximation process. The first is used for the neural network implementation; the second, an orthonormal one, is treated as an intermediate step of the calculations. The output weights are calculated only once, after all iterations have terminated.
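The general idea of the procedure (incrementally extending an orthonormal basis by Gram-Schmidt and computing the output weights by projection only once, at the end) can be sketched as follows. This is a minimal illustration, not the paper's actual construction: the sampled inner product, the polynomial candidate functions, and the dependence tolerance are all assumptions made for the example.

```python
import numpy as np

def incremental_gs_approximation(f, candidates, x):
    """Incrementally build an orthonormal basis via Gram-Schmidt from
    candidate functions, then approximate f by projection onto it.

    Inner products are approximated by discrete dot products over the
    sample points x (an assumption of this sketch)."""
    fx = f(x)
    Q = []  # the intermediate orthonormal basis, grown one function per iteration
    for g in candidates:
        v = g(x).astype(float)
        # Orthogonalize the new candidate against the current basis
        for q in Q:
            v -= (v @ q) * q
        norm = np.linalg.norm(v)
        if norm > 1e-10:  # skip (nearly) linearly dependent candidates
            Q.append(v / norm)
    # Output weights are computed only once, after all iterations
    w = np.array([fx @ q for q in Q])
    approx = sum(wk * q for wk, q in zip(w, Q))
    return approx, w

# Usage: approximate sin on [0, 1] with monomial candidates 1, t, t^2, t^3
x = np.linspace(0.0, 1.0, 200)
candidates = [lambda t, k=k: t**k for k in range(4)]
approx, w = incremental_gs_approximation(np.sin, candidates, x)
err = np.max(np.abs(approx - np.sin(x)))
```

Deferring the output-weight computation to a single final projection is cheap here because, with an orthonormal basis, each weight is just one inner product with the target function.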
Keywords: Hilbert Space, Hidden Unit, Output Weight, Approximation Space, Incremental Function Approximation