Abstract
Boosting algorithms, or greedy methods, are computationally fast and often powerful for high-dimensional data problems. They have been developed mainly for classification and regression. Regularization arises in the form of algorithmic constraints rather than explicit penalty terms. Interestingly, these two regularization concepts are sometimes close to being equivalent, as shown for linear models by Efron et al. (2004). We present boosting and related algorithms, including a brief discussion of forward variable selection and orthogonal matching pursuit. The exposition in this chapter focuses mainly on methodology and simple computational ideas. Mathematical theory is developed for special cases such as one-dimensional function estimation and high-dimensional linear models.
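To illustrate the flavor of the greedy methods the abstract refers to, here is a minimal sketch of L2-boosting with componentwise linear least squares: at each step, regress the current residuals on every single predictor, pick the predictor that most reduces squared error, and take a small step of size nu toward its least-squares fit. The function name, the step size `nu`, and the fixed number of steps are illustrative assumptions, not the chapter's notation.

```python
import numpy as np

def l2_boosting(X, y, n_steps=50, nu=0.1):
    """L2-boosting with componentwise linear least squares (illustrative sketch).

    At each iteration, compute the least-squares coefficient of every
    column of X on the current residuals, select the column giving the
    largest reduction in squared error, and update its coefficient by a
    shrunken step nu times that least-squares coefficient.
    """
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    col_ss = (X ** 2).sum(axis=0)          # per-column sum of squares
    for _ in range(n_steps):
        coefs = X.T @ resid / col_ss       # componentwise LS coefficients
        gains = coefs ** 2 * col_ss        # squared-error reduction per column
        j = int(np.argmax(gains))          # greedy selection
        beta[j] += nu * coefs[j]           # shrunken coefficient update
        resid -= nu * coefs[j] * X[:, j]   # update residuals
    return beta
```

Early stopping (the number of boosting steps) acts as the algorithmic regularizer here, playing a role analogous to an explicit penalty term; with nu small and many steps, the fit approaches ordinary least squares on the selected variables.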
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this chapter
Bühlmann, P., van de Geer, S. (2011). Boosting and greedy algorithms. In: Statistics for High-Dimensional Data. Springer Series in Statistics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20192-9_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-20191-2
Online ISBN: 978-3-642-20192-9
eBook Packages: Mathematics and Statistics (R0)