Smooth ε-Insensitive Regression by Loss Symmetrization

  • Conference paper
Learning Theory and Kernel Machines

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2777)

Abstract

We describe a framework for solving regression problems by reduction to classification. Our reduction is based on symmetrization of margin-based loss functions commonly used in boosting algorithms, namely the logistic loss and the exponential loss. Our construction yields a smooth version of the ε-insensitive hinge loss used in support vector regression. A byproduct of this construction is a new, simple form of regularization for boosting-based classification and regression algorithms. We present two parametric families of batch learning algorithms for minimizing these losses: the first employs a log-additive update and builds on recent boosting algorithms, while the second uses a new form of additive update. We also describe and analyze online gradient descent (GD) and exponentiated gradient (EG) algorithms for the ε-insensitive logistic loss. Finally, our regression framework has implications for classification, namely a new additive batch algorithm for the log-loss and exp-loss used in boosting.
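
The symmetrized losses at the heart of the framework are easy to state. Writing δ = y − ŷ for the discrepancy between target and prediction, symmetrizing the logistic loss gives log(1 + e^(δ−ε)) + log(1 + e^(−δ−ε)), and symmetrizing the exponential loss gives e^(δ−ε) + e^(−δ−ε); the logistic variant is nearly flat inside the tube [−ε, ε] and grows roughly linearly outside it, which is what makes it a smooth stand-in for the ε-insensitive hinge loss. The sketch below illustrates these two losses and a plain online gradient-descent step for a linear predictor. It is a minimal illustration consistent with the construction the abstract describes, not the paper's exact algorithms: the function names, constant learning rate, and update schedule are our own assumptions.

```python
import numpy as np

def eps_insensitive_log_loss(delta, eps):
    """Symmetrized logistic loss: log(1 + e^(d-eps)) + log(1 + e^(-d-eps)).

    Nearly zero inside the tube [-eps, eps] and asymptotically linear
    outside it -- a smooth surrogate for the eps-insensitive hinge loss.
    """
    d = np.asarray(delta, dtype=float)
    # np.logaddexp(0, x) computes log(1 + e^x) stably for large |x|.
    return np.logaddexp(0.0, d - eps) + np.logaddexp(0.0, -d - eps)

def eps_insensitive_exp_loss(delta, eps):
    """Symmetrized exponential loss: e^(d-eps) + e^(-d-eps)."""
    d = np.asarray(delta, dtype=float)
    return np.exp(d - eps) + np.exp(-d - eps)

def online_gd(X, y, eps=0.1, lr=0.01):
    """Online gradient descent on the eps-insensitive logistic loss for a
    linear predictor yhat = w . x. A hypothetical sketch: this step size
    and update schedule are not taken from the paper.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    w = np.zeros(X.shape[1])
    for x_t, y_t in zip(X, y):
        d = y_t - w @ x_t  # signed discrepancy delta
        # dL/d(delta) = sigmoid(delta - eps) - sigmoid(-delta - eps),
        # and d(delta)/dw = -x_t, so descending on L adds this term back.
        w += lr * (sigmoid(d - eps) - sigmoid(-d - eps)) * x_t
    return w
```

As a sanity check on the shape of the loss: a prediction exactly on target (δ = 0) with ε = 0.1 still incurs the small residual loss 2 log(1 + e^(−0.1)) ≈ 1.29, so the symmetrized logistic loss is smooth and strictly positive everywhere, unlike the hinge loss it approximates, which is exactly zero inside the tube.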

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dekel, O., Shalev-Shwartz, S., Singer, Y. (2003). Smooth ε-Insensitive Regression by Loss Symmetrization. In: Schölkopf, B., Warmuth, M.K. (eds.) Learning Theory and Kernel Machines. Lecture Notes in Computer Science, vol. 2777. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45167-9_32

  • DOI: https://doi.org/10.1007/978-3-540-45167-9_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40720-1

  • Online ISBN: 978-3-540-45167-9

  • eBook Packages: Springer Book Archive
