An Identity for Kernel Ridge Regression

Conference paper: Algorithmic Learning Theory (ALT 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6331)

Abstract

This paper provides a probabilistic derivation of an identity connecting the square loss of ridge regression run in on-line mode with the loss of the retrospectively best regressor. Corollaries of the identity that yield upper bounds on the cumulative loss of on-line ridge regression are also discussed.
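
The two quantities the identity relates can be made concrete under the standard kernel ridge regression setup (the notation below is chosen here for illustration and is not necessarily the paper's). With ridge coefficient $a > 0$ and $\mathcal{F}$ the reproducing kernel Hilbert space of the kernel, the regularized loss of the retrospectively best regressor on examples $(x_1, y_1), \ldots, (x_T, y_T)$ is

\[
L_a^*(T) = \min_{f \in \mathcal{F}} \left( \sum_{t=1}^{T} \bigl(y_t - f(x_t)\bigr)^2 + a \, \|f\|_{\mathcal{F}}^2 \right),
\]

while on-line ridge regression incurs the cumulative square loss $\sum_{t=1}^{T} (y_t - \gamma_t)^2$, where the prediction $\gamma_t$ is computed from the first $t-1$ examples only. Below is a minimal Python sketch of both quantities; the RBF kernel, the zero prediction before any data arrives, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix; any positive-definite kernel would do.
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def online_krr_loss(X, y, a=1.0):
    # Cumulative square loss of ridge regression in on-line mode: at step t
    # the learner predicts y[t] from the first t examples, then sees the label.
    loss = 0.0
    for t in range(len(y)):
        if t == 0:
            pred = 0.0  # no data seen yet: predict zero by convention
        else:
            K = rbf_kernel(X[:t], X[:t])
            k = rbf_kernel(X[:t], X[t:t + 1])[:, 0]
            pred = k @ np.linalg.solve(K + a * np.eye(t), y[:t])
        loss += (y[t] - pred) ** 2
    return loss

def batch_regularized_loss(X, y, a=1.0):
    # min over f in the RKHS of sum_t (y_t - f(x_t))^2 + a * ||f||^2,
    # computed via the representer theorem: f = sum_i c_i k(x_i, .).
    T = len(y)
    K = rbf_kernel(X, X)
    c = np.linalg.solve(K + a * np.eye(T), y)  # optimal dual coefficients
    resid = y - K @ c
    return resid @ resid + a * (c @ K @ c)

# Toy data: the two printed numbers are the on-line and batch losses whose
# relationship the paper's identity and its corollaries make precise.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)
print(online_krr_loss(X, y), batch_regularized_loss(X, y))
```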

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhdanov, F., Kalnishkan, Y. (2010). An Identity for Kernel Ridge Regression. In: Hutter, M., Stephan, F., Vovk, V., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2010. Lecture Notes in Computer Science, vol. 6331. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16108-7_32

  • DOI: https://doi.org/10.1007/978-3-642-16108-7_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16107-0

  • Online ISBN: 978-3-642-16108-7
