
Incremental Function Approximation Based on Gram-Schmidt Orthonormalisation Process

  • Bartlomiej Beliczynski
Conference paper

Abstract

In this paper we present an incremental function approximation in Hilbert space based on the Gram-Schmidt orthonormalisation process. Two bases of the approximation space are determined and maintained during the approximation process. The first is used for the neural network implementation; the second, an orthonormal one, serves as an intermediate step of the calculations. Only after all iterations have terminated are the output weights calculated (once).
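
The abstract describes the procedure only at a high level. Below is a minimal numerical sketch of the underlying idea, not the paper's algorithm itself: the function name, the sigmoid candidate pool, and the dependence tolerance `tol` are illustrative assumptions. The sketch maintains the two bases mentioned above (the raw hidden-unit outputs, used for the network, and their Gram-Schmidt orthonormalisation, used only as an intermediate representation) and computes the output weights once, after all candidates have been processed.

```python
import numpy as np

def incremental_approx(G, f, tol=1e-10):
    """Greedy Gram-Schmidt approximation of the sample vector f in the
    span of the candidate columns of G (hypothetical sketch).

    Maintains two bases: the accepted raw columns of G (the network's
    hidden-unit outputs) and their orthonormalisation E (intermediate
    only). Output weights are computed once, after the loop."""
    n, m = G.shape
    E = np.zeros((n, m))   # orthonormal basis vectors e_1, ..., e_k
    R = np.zeros((m, m))   # upper triangular: G[:, kept] = E[:, :k] @ R[:k, :k]
    c = np.zeros(m)        # coefficients of f in the orthonormal basis
    kept, k = [], 0
    for i in range(m):
        v = G[:, i].astype(float)
        for j in range(k):                 # remove components along e_1..e_k
            R[j, k] = E[:, j] @ G[:, i]
            v -= R[j, k] * E[:, j]
        norm = np.linalg.norm(v)
        if norm < tol:                     # numerically dependent candidate: skip
            continue
        R[k, k] = norm
        E[:, k] = v / norm
        c[k] = E[:, k] @ f                 # optimal coefficient along the new direction
        kept.append(i)
        k += 1
    # translate back to the implementation basis: solve the triangular system R w = c
    w = np.linalg.solve(R[:k, :k], c[:k])
    return w, kept

# hypothetical usage: fit sin(2*pi*x) with a small pool of sigmoid units
x = np.linspace(0.0, 1.0, 200)
sig = lambda a, b: 1.0 / (1.0 + np.exp(-a * (x - b)))
G = np.column_stack([sig(a, b) for a in (4.0, 8.0, 16.0)
                               for b in (0.2, 0.4, 0.6, 0.8)])
f = np.sin(2 * np.pi * x)
w, kept = incremental_approx(G, f)
rel_err = np.linalg.norm(G[:, kept] @ w - f) / np.linalg.norm(f)
```

The triangular solve at the end is what allows the weights to be computed only once: during the iterations only the orthonormal coefficients `c` are updated, while `R` records how the orthonormal basis relates to the implementation basis.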

Keywords

Hilbert Space, Hidden Unit, Output Weight, Approximation Space, Incremental Function

Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • Bartlomiej Beliczynski
    1. Institute of Control and Industrial Electronics, Warsaw University of Technology, Warszawa, Poland
