Boosting is a kind of ensemble method that produces a strong learner capable of making very accurate predictions by combining rough and moderately inaccurate learners (called base learners or weak learners). In particular, boosting sequentially trains a series of base learners with a base learning algorithm, where the training examples wrongly predicted by a base learner receive more attention from the successive base learner. It then generates the final strong learner through a weighted combination of these base learners.
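A minimal sketch of this reweight-and-combine loop, in the style of AdaBoost with decision stumps as the weak learners (the function names and the exhaustive stump search are illustrative choices, not taken from the source):

```python
import numpy as np

def adaboost(X, y, n_rounds=20):
    """Sketch of AdaBoost-style boosting with decision stumps.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # start with uniform example weights
    ensemble = []
    for _ in range(n_rounds):
        # Train a base learner: pick the stump with minimum weighted error.
        best, best_err = None, np.inf
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        err = max(best_err, 1e-10)  # avoid division by zero below
        if err >= 0.5:              # no better than random guessing: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)  # base learner's vote weight
        j, thr, pol = best
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        # Wrongly predicted examples get larger weights for the next round.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Final strong learner: sign of the weighted combination of stumps."""
    score = np.zeros(len(X))
    for j, thr, pol, alpha in ensemble:
        score += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(score)
```

The weight update `w *= exp(-alpha * y * pred)` is exactly the "more attention" mechanism: examples the current stump misclassifies (where `y * pred = -1`) have their weights multiplied by `exp(alpha) > 1` before normalization.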
In 1989, Kearns and Valiant posed an interesting theoretical question: whether two complexity classes, the weakly learnable and the strongly learnable problems, are equal. In other words, can a weak learning algorithm that performs just slightly better than random guessing be boosted into an arbitrarily accurate strong learning algorithm? In 1990, Schapire proved that the answer to the...
- 8. Reyzin L, Schapire RE. How boosting the margin can also boost classifier complexity. In: Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh; 2006. p. 753–60.
- 9. Schapire RE. The strength of weak learnability. Mach Learn. 1990;5(2):197–227.
- 11. Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2001. p. 511–8.
- 12. Wang L, Sugiyama M, Yang C, Zhou Z-H, Feng J. On the margin explanation of boosting algorithm. In: Proceedings of the 21st Annual Conference on Learning Theory; 2008. p. 479–90.
- 14. Zhou Z-H. Large margin distribution learning. In: Proceedings of Artificial Neural Networks in Pattern Recognition; 2014.