
Extended Analyses for an Optimal Kernel in a Class of Kernels with an Invariant Metric

  • Akira Tanaka
  • Ichigaku Takigawa
  • Hideyuki Imai
  • Mineichi Kudo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7626)

Abstract

Learning based on kernel machines is widely known as a powerful tool for various fields of information science, such as pattern recognition and regression estimation. Appropriate model selection is required to obtain desirable learning results. In our previous work [9], we discussed a class of kernels forming a nested class of reproducing kernel Hilbert spaces with an invariant metric and proved that the kernel corresponding to the smallest reproducing kernel Hilbert space containing the unknown true function gives the best model. In this paper, we relax the invariant-metric condition and show that a similar result holds when a subspace with an invariant metric exists.
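The paper's optimality criterion is theoretical: among a nested family of reproducing kernel Hilbert spaces, pick the kernel whose RKHS is the smallest one containing the true function. As a loose illustrative companion (not the authors' method), the following minimal Python sketch sweeps a parametrized family of Gaussian kernels, which is widely reported to yield nested RKHSs (a larger width gives a smaller space), and uses held-out squared error as a practical stand-in for the unobservable criterion. All names (gaussian_kernel, kernel_regressor), the width grid, and the ridge parameter are illustrative assumptions, not taken from the paper.

    import numpy as np

    def gaussian_kernel(X, Y, sigma):
        # Pairwise Gaussian kernel matrix: exp(-||x - y||^2 / (2 sigma^2)).
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kernel_regressor(X_train, y_train, sigma, reg=1e-6):
        # Kernel ridge regressor; the small ridge term keeps the solve stable.
        K = gaussian_kernel(X_train, X_train, sigma)
        alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
        def predict(X):
            return gaussian_kernel(X, X_train, sigma) @ alpha
        return predict

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(60, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
    X_tr, y_tr, X_va, y_va = X[:40], y[:40], X[40:], y[40:]

    # Sweeping sigma from large to small walks through a (nested) family of
    # hypothesis spaces, from the smallest RKHS to the largest. Validation
    # error serves here as a crude proxy for the paper's selection criterion.
    for sigma in [4.0, 2.0, 1.0, 0.5, 0.25]:
        f = kernel_regressor(X_tr, y_tr, sigma)
        err = np.mean((f(X_va) - y_va) ** 2)
        print(f"sigma={sigma:5.2f}  validation MSE={err:.4f}")

In this toy setting the best width is the one whose (assumed) RKHS is just rich enough to contain the sine target, mirroring the paper's "smallest RKHS containing the true function" intuition.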

Keywords

Kernel regressor · Reproducing kernel Hilbert space · Orthogonal projection · Invariant metric

References

  1. Müller, K., Mika, S., Rätsch, G., Tsuda, K., Schölkopf, B.: An Introduction to Kernel-Based Learning Algorithms. IEEE Transactions on Neural Networks 12, 181–201 (2001)
  2. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (1999)
  3. Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge (2004)
  4. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge (2000)
  5. Sugiyama, M., Ogawa, H.: Subspace Information Criterion for Model Selection. Neural Computation 13, 1863–1889 (2001)
  6. Sugiyama, M., Kawanabe, M., Müller, K.: Trading Variance Reduction with Unbiasedness: The Regularized Subspace Information Criterion for Robust Model Selection in Kernel Regression. Neural Computation 16, 1077–1104 (2004)
  7. Aronszajn, N.: Theory of Reproducing Kernels. Transactions of the American Mathematical Society 68, 337–404 (1950)
  8. Mercer, J.: Functions of Positive and Negative Type and Their Connection with the Theory of Integral Equations. Philosophical Transactions of the Royal Society of London, Series A 209, 415–446 (1909)
  9. Tanaka, A., Imai, H., Kudo, M., Miyakoshi, M.: Optimal Kernel in a Class of Kernels with an Invariant Metric. In: da Vitoria Lobo, N., Kasparis, T., Roli, F., Kwok, J.T., Georgiopoulos, M., Anagnostopoulos, G.C., Loog, M. (eds.) S+SSPR 2008. LNCS, vol. 5342, pp. 530–539. Springer, Heidelberg (2008)
  10. Schatten, R.: Norm Ideals of Completely Continuous Operators. Springer, Berlin (1960)
  11. Ogawa, H.: Neural Networks and Generalization Ability. IEICE Technical Report NC95-8, 57–64 (1995)
  12. Rao, C.R., Mitra, S.K.: Generalized Inverse of Matrices and its Applications. John Wiley & Sons (1971)
  13. Tanaka, A., Imai, H., Kudo, M., Miyakoshi, M.: Theoretical Analyses on a Class of Nested RKHS's. In: 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2011), pp. 2072–2075 (2011)
  14. Tanaka, A., Miyakoshi, M.: Theoretical Analyses for a Class of Kernels with an Invariant Metric. In: 2010 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 2074–2077 (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Akira Tanaka (1)
  • Ichigaku Takigawa (2)
  • Hideyuki Imai (1)
  • Mineichi Kudo (1)

  1. Division of Computer Science, Hokkaido University, Sapporo, Japan
  2. Creative Research Institution, Hokkaido University, Sapporo, Japan
