
Evolving Sum and Composite Kernel Functions for Regularization Networks

  • Petra Vidnerová
  • Roman Neruda
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6593)

Abstract

In this paper we propose a novel evolutionary algorithm for regularization networks. The main drawback of regularization networks in practical applications is the presence of meta-parameters, including the type and parameters of the kernel function. Our learning algorithm addresses this problem by searching a space of different kernel functions, including sum and composite kernels. Thus, an optimal combination of kernel functions and their parameters is evolved for a given task specified by the training data. Several experiments compare composite kernels, single kernels, and traditional Gaussians.
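The building blocks described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function names are assumptions, and the regularization-network training step is the standard solution of (K + γN·I)c = y over the training data, with sum and product composition of Gaussian kernels as candidate kernel combinations.

```python
import numpy as np

def gaussian_kernel(width):
    """Return a Gaussian (RBF) kernel with the given width parameter."""
    def k(x, y):
        d2 = np.sum((np.asarray(x, float) - np.asarray(y, float)) ** 2)
        return np.exp(-d2 / (2.0 * width ** 2))
    return k

def sum_kernel(k1, k2):
    """The sum of two kernels is again a valid kernel."""
    return lambda x, y: k1(x, y) + k2(x, y)

def product_kernel(k1, k2):
    """The product of two kernels is again a valid kernel (composite kernel)."""
    return lambda x, y: k1(x, y) * k2(x, y)

def rn_fit(X, y, kernel, gamma=1e-3):
    """Train a regularization network: solve (K + gamma*N*I) c = y,
    then predict with f(x) = sum_i c_i * kernel(x, x_i)."""
    N = len(X)
    K = np.array([[kernel(a, b) for b in X] for a in X])
    c = np.linalg.solve(K + gamma * N * np.eye(N), np.asarray(y, float))
    def predict(x):
        return float(sum(ci * kernel(x, xi) for ci, xi in zip(c, X)))
    return predict

# Example: a sum kernel of two Gaussians with different widths.
k = sum_kernel(gaussian_kernel(0.5), gaussian_kernel(2.0))
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 1.0, 4.0, 9.0]
f = rn_fit(X, y, k, gamma=1e-4)
```

An evolutionary search over such kernels would encode each candidate as a tree of sum/product nodes with Gaussian leaves, mutate widths and structure, and use cross-validation error of the trained network as the fitness; the sketch above provides only the evaluation step of that loop.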

Keywords

regularization networks · kernel functions · genetic algorithms



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Petra Vidnerová¹
  • Roman Neruda¹

  1. Institute of Computer Science, Academy of Sciences of the Czech Republic, Praha 8, Czech Republic
