On relative loss bounds in generalized linear regression

  • Jürgen Forster
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1684)

Abstract

When relative loss bounds are considered, the performance of an on-line learning algorithm is compared to that of a class of off-line algorithms, called experts. In this paper we reconsider a result by Vovk, namely an upper bound on the on-line relative loss for linear regression with square loss; here the experts are linear functions. We give a shorter and simpler proof of Vovk's result and a new motivation for the choice of the predictions of Vovk's learning algorithm. This is done by calculating the best prediction, in a certain sense, for the last trial of a sequence of trials when it is known that the outcome variable is bounded. We try to generalize these ideas to generalized linear regression, where the experts are neurons, and give a formula for the "best" prediction for the last trial in this case, too. This prediction turns out to be essentially an integral over the "best" expert applied to the last instance. Predictions that are "optimal" in this sense might be good predictions for long sequences of trials as well.
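
A minimal sketch of the on-line forecaster behind Vovk's bound [6] (often called the Vovk-Azoury-Warmuth forecaster, cf. [1]) may help fix ideas: before the outcome y_t is revealed, the current instance x_t is already folded into the regularized Gram matrix, which is the choice of prediction the paper motivates. The class name, the ridge parameter a, and the toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class VAWForecaster:
    """Sketch of a Vovk / Azoury-Warmuth style on-line linear regression
    forecaster for square loss (names and parameters are illustrative)."""

    def __init__(self, dim, a=1.0):
        self.A = a * np.eye(dim)   # a*I + sum_{s<t} x_s x_s^T
        self.b = np.zeros(dim)     # sum_{s<t} y_s x_s

    def predict(self, x):
        # The current instance x_t enters the Gram matrix *before*
        # the outcome is seen: y_hat = x^T (A + x x^T)^{-1} b.
        A_t = self.A + np.outer(x, x)
        return x @ np.linalg.solve(A_t, self.b)

    def update(self, x, y):
        # Fold the revealed trial (x_t, y_t) into the statistics.
        self.A += np.outer(x, x)
        self.b += y * x

# Toy run: outcomes are clipped so the boundedness assumption holds.
rng = np.random.default_rng(0)
w = np.array([0.5, -1.0, 2.0])        # an arbitrary "best expert"
f = VAWForecaster(dim=3)
loss = 0.0
for _ in range(100):
    x = rng.standard_normal(3)
    y = float(np.clip(w @ x, -1.0, 1.0))
    loss += (f.predict(x) - y) ** 2
    f.update(x, y)
```

For the generalized linear case, where the experts are neurons, the paper's "best" last-trial prediction is an integral over the best expert applied to the last instance; no comparably simple closed form is attempted here.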

References

  1. Azoury, K., Warmuth, M.: Relative Loss Bounds for On-line Density Estimation with the Exponential Family of Distributions. To appear at the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI'99.
  2. Beckenbach, E. F., Bellman, R.: Inequalities. Berlin: Springer, 1965.
  3. Foster, D. P.: Prediction in the worst case. Annals of Statistics 19(2), 1084–1090, 1991.
  4. Kivinen, J., Warmuth, M.: Relative Loss Bounds for Multidimensional Regression Problems. In Jordan, M., Kearns, M., Solla, S., editors, Advances in Neural Information Processing Systems 10 (NIPS 97), 287–293, MIT Press, Cambridge, MA, 1998.
  5. Kivinen, J., Warmuth, M.: Additive versus exponentiated gradient updates for linear prediction. Information and Computation 132:1–64, 1997.
  6. Vovk, V.: Competitive On-Line Linear Regression. Technical Report CSD-TR-97-13, Department of Computer Science, Royal Holloway, University of London, 1997.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Jürgen Forster
    Universität Bochum, Germany
