Abstract
The Lasso, proposed by Tibshirani (1996), is an acronym for Least Absolute Shrinkage and Selection Operator. Among the main reasons why it has become very popular for high-dimensional estimation problems are its statistical accuracy for prediction and variable selection, coupled with its computational feasibility. Furthermore, since the Lasso is a penalized likelihood approach, the method is rather general and can be used in a broad variety of models. In the simple case of a linear model with orthonormal design, the Lasso equals the soft thresholding estimator introduced and analyzed by Donoho and Johnstone (1994). The Lasso for linear models serves as the core example for developing the methodology of ℓ1-penalization in high-dimensional settings. In this chapter we discuss some fundamental methodological and computational aspects of the Lasso. We also present the adaptive Lasso, an important two-stage procedure which addresses some bias problems of the Lasso. The methodological steps are supported by describing various theoretical results which will be fully developed in Chapters 6 and 7.
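As a minimal sketch of the orthonormal-design connection mentioned in the abstract: when the design matrix X satisfies X'X = I, the Lasso estimate is obtained by soft thresholding the ordinary least squares estimate X'y. The snippet below is illustrative only; the data, variable names, and penalty level are assumptions, not taken from the chapter.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Hypothetical example: build an orthonormal design via QR decomposition,
# so that X'X = I and the Lasso reduces to soft thresholding of X'y.
rng = np.random.default_rng(0)
n, p = 8, 8
X, _ = np.linalg.qr(rng.standard_normal((n, p)))
beta = np.array([3.0, -2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + 0.1 * rng.standard_normal(n)

ols = X.T @ y                          # OLS estimate under orthonormality
lasso = soft_threshold(ols, lam=0.5)   # Lasso estimate for penalty lam
```

Small coefficients are set exactly to zero, which is the variable-selection property of the Lasso; larger coefficients are shrunk toward zero by the amount lam, the bias that motivates two-stage procedures such as the adaptive Lasso.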
© 2011 Springer-Verlag Berlin Heidelberg
Cite this chapter
Bühlmann, P., van de Geer, S. (2011). Lasso for linear models. In: Statistics for High-Dimensional Data. Springer Series in Statistics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20192-9_2
Print ISBN: 978-3-642-20191-2
Online ISBN: 978-3-642-20192-9
eBook Packages: Mathematics and Statistics (R0)