Abstract
We investigate further improvement of boosting in the case where the target concept belongs to the class of r-of-k threshold Boolean functions, which answer "+1" if at least r of k relevant variables are positive, and "–1" otherwise. Given m examples of an r-of-k function and literals as base hypotheses, popular boosting algorithms (e.g., AdaBoost) construct a consistent final hypothesis using O(k² log m) base hypotheses. While this convergence speed is tight in general, we show that a modification of AdaBoost (confidence-rated AdaBoost [SS99] or InfoBoost [Asl00]) can exploit the property of r-of-k functions that errors occur mostly on one side, and thereby find a consistent final hypothesis using only O(kr log m) base hypotheses. Our result extends the previous investigation by Hatano and Warmuth [HW04] and gives more general examples where confidence-rated AdaBoost or InfoBoost has an advantage over AdaBoost.
This work is supported in part by a Grant-in-Aid for Scientific Research on Priority Areas “Statistical-Mechanical Approach to Probabilistic Information Processing”.
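To make the setting concrete, the following Python sketch (ours, not from the paper) implements an r-of-k threshold function and one plausible instantiation of confidence-rated AdaBoost in the style of [SS99], with the 2n literals (each variable and its negation) as base hypotheses. The function names, the toy 2-of-3 target, and the smoothing constant eps are illustrative assumptions, not details taken from the paper.

    import math
    import random

    def r_of_k(x, relevant, r):
        """r-of-k threshold function: +1 iff at least r of the
        relevant variables of x equal +1, and -1 otherwise."""
        return 1 if sum(1 for i in relevant if x[i] == 1) >= r else -1

    def boost(examples, labels, n, T, eps=1e-8):
        """Confidence-rated AdaBoost in the style of [SS99]: each
        round selects the literal minimizing the normalizer Z and
        assigns it block-wise confidences c_b = 0.5 * ln(W_b+ / W_b-),
        where W_b+/- is the weight of positive/negative examples on
        which the literal takes value b in {+1, -1}."""
        m = len(examples)
        D = [1.0 / m] * m                  # start from the uniform distribution
        ensemble = []
        for _ in range(T):
            best = None
            for i in range(n):
                for sign in (1, -1):       # the literal x_i or its negation
                    # W[(b, y)]: total weight of examples whose literal
                    # value is b and whose label is y (eps avoids log(0))
                    W = {(b, y): eps for b in (1, -1) for y in (1, -1)}
                    for x, y, d in zip(examples, labels, D):
                        W[(sign * x[i], y)] += d
                    Z = 2 * sum(math.sqrt(W[(b, 1)] * W[(b, -1)])
                                for b in (1, -1))
                    if best is None or Z < best[0]:
                        c = {b: 0.5 * math.log(W[(b, 1)] / W[(b, -1)])
                             for b in (1, -1)}
                        best = (Z, i, sign, c)
            _, i, sign, c = best
            ensemble.append((i, sign, c))
            # reweight: confidently and correctly classified examples
            # lose weight; then renormalize back to a distribution
            D = [d * math.exp(-y * c[sign * x[i]])
                 for x, y, d in zip(examples, labels, D)]
            total = sum(D)
            D = [d / total for d in D]
        return ensemble

    def predict(ensemble, x):
        """Final hypothesis: the sign of the summed confidences."""
        return 1 if sum(c[sign * x[i]] for i, sign, c in ensemble) >= 0 else -1

    # Toy run: a 2-of-3 function over 10 Boolean (+1/-1) variables.
    random.seed(0)
    n, relevant, r = 10, [0, 3, 7], 2
    examples = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(200)]
    labels = [r_of_k(x, relevant, r) for x in examples]
    ensemble = boost(examples, labels, n, T=25)
    mistakes = sum(predict(ensemble, x) != y for x, y in zip(examples, labels))
    print(f"training mistakes after 25 rounds: {mistakes} / {len(examples)}")

The real-valued, block-wise confidences are what distinguish this variant from plain AdaBoost, which assigns each base hypothesis a single symmetric weight; an asymmetric confidence of this kind is what allows one-sided error to be exploited, in line with the abstract's O(kr log m) versus O(k² log m) comparison.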
References
Aslam, J.A.: Improving algorithms for boosting. In: Proc. 13th Annu. Conference on Computational Learning Theory, pp. 200–207. ACM Press, New York (2000)
Bshouty, N.H., Gavinsky, D.: On boosting with optimal poly-bounded distribution. In: Helmbold, D.P., Williamson, B. (eds.) COLT 2001 and EuroCOLT 2001. LNCS (LNAI), vol. 2111, pp. 490–506. Springer, Heidelberg (2001)
Dasgupta, S., Long, P.M.: Boosting with diverse base classifiers. In: Schölkopf, B., Warmuth, M.K. (eds.) COLT/Kernel 2003. LNCS (LNAI), vol. 2777, pp. 273–287. Springer, Heidelberg (2003)
Domingo, C., Watanabe, O.: MadaBoost: a modification of AdaBoost. In: Proc. 13th Annu. Conference on Computational Learning Theory, pp. 180–189. ACM Press, New York (2000)
Feige, U.: A threshold of ln n for approximating set cover. Journal of the ACM (JACM) 45(4), 634–652 (1998)
Freund, Y.: Boosting a weak learning algorithm by majority. Inform. Comput. 121(2), 256–285 (1995)
Gavinsky, D.: Optimally-smooth adaptive boosting and application to agnostic learning. Journal of Machine Learning Research 4, 101–117 (2003)
Hatano, K., Warmuth, M.K.: Boosting versus covering. In: Thrun, S., Saul, L., Schölkopf, B. (eds.) Advances in Neural Information Processing Systems, vol. 16, MIT Press, Cambridge (2004)
Kearns, M.J., Vazirani, U.V.: An Introduction to Computational Learning Theory. MIT Press, Cambridge (1994)
Littlestone, N.: Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm. Machine Learning 2(4), 285–318 (1988)
Long, P.M.: Using the Pseudo-Dimension to Analyze Approximation Algorithms for Integer Programming. In: Proc. of the Seventh International Workshop on Algorithms and Data Structures, pp. 26–37 (2001)
Natarajan, B.K.: Machine Learning: A Theoretical Approach. Morgan Kaufmann, San Francisco (1991)
Schapire, R.E.: The strength of weak learnability. Machine Learning 5(2), 197–227 (1990)
Servedio, R.A.: Smooth Boosting and Learning with Malicious Noise. In: Helmbold, D.P., Williamson, B. (eds.) COLT 2001 and EuroCOLT 2001. LNCS (LNAI), vol. 2111, pp. 473–489. Springer, Heidelberg (2001)
Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics 26(5), 1651–1686 (1998)
Srinivasan, A.: Improved approximation guarantees for packing and covering integer programs. SIAM Journal on Computing 29, 648–670 (1999)
Srinivasan, A.: New approaches to covering and packing problems. In: Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 567–576 (2001)
Schapire, R.E., Singer, Y.: Improved boosting algorithms using confidence-rated predictions. Machine Learning 37(3), 297–336 (1999)
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Hatano, K., Watanabe, O. (2004). Learning r-of-k Functions by Boosting. In: Ben-David, S., Case, J., Maruoka, A. (eds) Algorithmic Learning Theory. ALT 2004. Lecture Notes in Computer Science, vol. 3244. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30215-5_10
DOI: https://doi.org/10.1007/978-3-540-30215-5_10
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-23356-5
Online ISBN: 978-3-540-30215-5