Learning r-of-k Functions by Boosting

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 3244))

Abstract

We investigate further improvement of boosting in the case that the target concept belongs to the class of r-of-k threshold Boolean functions, which answer “+1” if at least r of k relevant variables are positive, and answer “–1” otherwise. Given m examples of an r-of-k function and literals as base hypotheses, popular boosting algorithms (e.g., AdaBoost) construct a consistent final hypothesis using O(k² log m) base hypotheses. While this convergence speed is tight in general, we show that a modification of AdaBoost (confidence-rated AdaBoost [SS99] or InfoBoost [Asl00]) can exploit a property of r-of-k functions, namely that base hypotheses make fewer errors on one side, to find a consistent final hypothesis using only O(kr log m) base hypotheses. Our result extends the previous investigation by Hatano and Warmuth [HW04] and gives more general examples where confidence-rated AdaBoost or InfoBoost has an advantage over AdaBoost.
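To make the setting concrete, the following Python sketch (our own illustration, not code from the paper) defines an r-of-k threshold function and runs plain AdaBoost with the literals x_i and −x_i as base hypotheses; the function and variable names (`r_of_k`, `adaboost_literals`) are hypothetical.

```python
import math

def r_of_k(x, relevant, r):
    """r-of-k threshold function: +1 if at least r of the k relevant
    variables of x (each variable in {+1, -1}) are +1, else -1."""
    return 1 if sum(1 for i in relevant if x[i] == 1) >= r else -1

def adaboost_literals(X, y, T):
    """Plain AdaBoost whose base hypotheses are the literals x_i and -x_i.
    Returns a list of (alpha, index, sign) triples."""
    m, n = len(X), len(X[0])
    D = [1.0 / m] * m                          # example weights
    ensemble = []
    for _ in range(T):
        # pick the literal with the smallest weighted error
        err, i, s = min(
            (sum(D[j] for j in range(m) if sgn * X[j][idx] != y[j]), idx, sgn)
            for idx in range(n) for sgn in (1, -1))
        err = min(max(err, 1e-12), 1 - 1e-12)  # clamp to avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, i, s))
        # exponential reweighting, then renormalize
        D = [d * math.exp(-alpha * y[j] * s * X[j][i]) for j, d in enumerate(D)]
        Z = sum(D)
        D = [d / Z for d in D]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * s * x[i] for a, i, s in ensemble) >= 0 else -1
```

On an r-of-k target, each relevant literal has a weak edge over random guessing, so this loop drives the training error down; the abstract's point is that confidence-rated AdaBoost or InfoBoost, which additionally weight the two prediction sides separately, reach a consistent hypothesis in O(kr log m) rounds rather than O(k² log m).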

This work is supported in part by a Grant-in-Aid for Scientific Research on Priority Areas “Statistical-Mechanical Approach to Probabilistic Information Processing”.



References

  1. Aslam, J.A.: Improving algorithms for boosting. In: Proc. 13th Annu. Conference on Comput. Learning Theory, pp. 200–207. ACM Press, New York (2000)

  2. Bshouty, N.H., Gavinsky, D.: On boosting with optimal poly-bounded distribution. In: Helmbold, D.P., Williamson, B. (eds.) COLT 2001 and EuroCOLT 2001. LNCS (LNAI), vol. 2111, pp. 490–506. Springer, Heidelberg (2001)

  3. Dasgupta, S., Long, P.M.: Boosting with diverse base classifiers. In: Schölkopf, B., Warmuth, M.K. (eds.) COLT/Kernel 2003. LNCS (LNAI), vol. 2777, pp. 273–287. Springer, Heidelberg (2003)

  4. Domingo, C., Watanabe, O.: MadaBoost: a modification of AdaBoost. In: Proc. 13th Annu. Conference on Computational Learning Theory, pp. 180–189. ACM Press, New York (2000)

  5. Feige, U.: A threshold of ln n for approximating set cover. Journal of the ACM 45(4), 634–652 (1998)

  6. Freund, Y.: Boosting a weak learning algorithm by majority. Inform. Comput. 121(2), 256–285 (1995)

  7. Gavinsky, D.: Optimally-smooth adaptive boosting and application to agnostic learning. Journal of Machine Learning Research 4, 101–117 (2003)

  8. Hatano, K., Warmuth, M.K.: Boosting versus covering. In: Thrun, S., Saul, L., Schölkopf, B. (eds.) Advances in Neural Information Processing Systems, vol. 16. MIT Press, Cambridge (2004)

  9. Kearns, M.J., Vazirani, U.V.: An Introduction to Computational Learning Theory. MIT Press, Cambridge (1994)

  10. Littlestone, N.: Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning 2(4), 285–318 (1988)

  11. Long, P.M.: Using the pseudo-dimension to analyze approximation algorithms for integer programming. In: Proc. of the Seventh International Workshop on Algorithms and Data Structures, pp. 26–37 (2001)

  12. Natarajan, B.K.: Machine Learning: A Theoretical Approach. Morgan Kaufmann, San Francisco (1991)

  13. Schapire, R.E.: The strength of weak learnability. Machine Learning 5(2), 197–227 (1990)

  14. Servedio, R.A.: Smooth boosting and learning with malicious noise. In: Helmbold, D.P., Williamson, B. (eds.) COLT 2001 and EuroCOLT 2001. LNCS (LNAI), vol. 2111, pp. 473–489. Springer, Heidelberg (2001)

  15. Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics 26(5), 1651–1686 (1998)

  16. Srinivasan, A.: Improved approximation guarantees for packing and covering integer programs. SIAM Journal on Computing 29, 648–670 (1999)

  17. Srinivasan, A.: New approaches to covering and packing problems. In: Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 567–576 (2001)

  18. Schapire, R.E., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. Machine Learning 37(3), 297–336 (1999)


Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hatano, K., Watanabe, O. (2004). Learning r-of-k Functions by Boosting. In: Ben-David, S., Case, J., Maruoka, A. (eds) Algorithmic Learning Theory. ALT 2004. Lecture Notes in Computer Science(), vol 3244. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30215-5_10

  • DOI: https://doi.org/10.1007/978-3-540-30215-5_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23356-5

  • Online ISBN: 978-3-540-30215-5

  • eBook Packages: Springer Book Archive
