
Entropy Regularized LPBoost

Conference paper
Algorithmic Learning Theory (ALT 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5254)

Abstract

In this paper we discuss boosting algorithms that maximize the soft margin of the produced linear combination of base hypotheses. LPBoost is the most straightforward boosting algorithm for doing this. It maximizes the soft margin by solving a linear programming problem. While it performs well on natural data, there are cases where the number of iterations is linear in the number of examples instead of logarithmic.
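
For concreteness, the linear program in question can be sketched as follows, in the column-generation style of Demiriz, Bennett, and Shawe-Taylor's LPBoost; the penalty weight 1/ν on the slacks and the normalization of the hypothesis weights are assumptions of this sketch, not notation taken from this paper. Given N labeled examples (x_m, y_m) with y_m ∈ {−1, +1} and base hypotheses h_1, …, h_J, the soft margin ρ is maximized via

\[
\max_{w,\,\rho,\,\xi}\;\; \rho \;-\; \frac{1}{\nu}\sum_{m=1}^{N}\xi_m
\quad\text{s.t.}\quad
y_m\sum_{j=1}^{J} w_j\,h_j(x_m) \,\ge\, \rho-\xi_m,\quad
\xi_m\ge 0,\quad
\sum_{j=1}^{J} w_j=1,\quad w_j\ge 0,
\]

where the slacks ξ_m let individual examples fall short of the margin ρ, which is what makes the margin soft; boosting adds one column (base hypothesis) to this program per iteration.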

By simply adding a relative entropy regularization to the linear objective of LPBoost, we arrive at the Entropy Regularized LPBoost algorithm for which we prove a logarithmic iteration bound. A previous algorithm, called SoftBoost, has the same iteration bound, but the generalization error of this algorithm often decreases slowly in early iterations. Entropy Regularized LPBoost does not suffer from this problem and has a simpler, more natural motivation.
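
In dual form, over distributions d on the examples, the modification the abstract describes can be sketched as follows; the capped-simplex constraint d_m ≤ 1/ν, the regularization weight 1/η, and the uniform prior d⁰ are reconstructions from the abstract rather than the paper's exact notation. Writing u^q_m = y_m h_q(x_m) for the edge of hypothesis h_q on example m, LPBoost minimizes the maximum edge max_q d · u^q over capped distributions d; Entropy Regularized LPBoost adds a relative entropy to this linear objective:

\[
\min_{d\in\mathcal{S}_N,\;d_m\le 1/\nu}\;
\Bigl[\max_{1\le q\le t}\,\sum_{m=1}^{N} d_m\,u^q_m\Bigr]
\;+\;\frac{1}{\eta}\,\Delta(d,d^0),
\qquad
\Delta(d,d^0)=\sum_{m=1}^{N} d_m\ln\frac{d_m}{d^0_m},
\]

with d⁰ uniform. The entropy term keeps successive distributions close to uniform and to each other, and it is for this regularized objective that the paper proves its logarithmic iteration bound.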




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Warmuth, M.K., Glocer, K.A., Vishwanathan, S.V.N. (2008). Entropy Regularized LPBoost. In: Freund, Y., Györfi, L., Turán, G., Zeugmann, T. (eds.) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science, vol. 5254. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87987-9_23

  • DOI: https://doi.org/10.1007/978-3-540-87987-9_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87986-2

  • Online ISBN: 978-3-540-87987-9

  • eBook Packages: Computer Science, Computer Science (R0)
