
Some Comparisons of Networks with Radial and Kernel Units

  • Věra Kůrková
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7553)

Abstract

Two types of computational models are investigated in the framework of scaled kernels: radial-basis function networks, whose units have varying widths, and kernel networks, in which all units share a fixed width. The impact of kernel widths on the approximation of multivariable functions, on generalization modelled by regularization with kernel stabilizers, and on minimization of error functionals is analyzed.
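The distinction between the two model classes can be made concrete with a minimal sketch. The functions below are illustrative only (the names and the Gaussian choice of radial function are our assumptions, not the paper's notation): a radial-basis function network assigns each unit its own width, while a kernel network evaluates a single scaled kernel of fixed width at every center.

```python
import numpy as np

def gaussian_rbf_net(x, centers, widths, weights):
    """RBF network output at x: each unit has its own width.

    centers: (n, d) array of unit centers
    widths:  (n,) array of per-unit widths
    weights: (n,) array of output weights
    """
    # squared Euclidean distance from x to every center
    d2 = np.sum((centers - x) ** 2, axis=1)
    return weights @ np.exp(-d2 / (2.0 * widths ** 2))

def gaussian_kernel_net(x, centers, width, weights):
    """Kernel network output at x: all units share one fixed width."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return weights @ np.exp(-d2 / (2.0 * width ** 2))
```

When every entry of `widths` equals the fixed `width`, the two outputs coincide, so kernel networks are the constant-width special case of RBF networks; the comparisons studied in the paper concern what is gained or lost by allowing the widths to vary.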

Keywords

Radial and kernel networks · universal approximation property · fixed and varying widths · minimization of error functionals · stabilizers induced by kernels


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Věra Kůrková
  1. Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, Czech Republic