Abstract
Much of the theory and many of the computational algorithms for ℓ1-penalized methods in the high-dimensional context have been developed for convex loss functions, e.g., the squared error loss for linear models (Chapters 2 and 6) or the negative log-likelihood in a generalized linear model (Chapters 3 and 6). However, there are many models whose negative log-likelihood is a non-convex function. Important examples include mixture models and linear mixed effects models, which we describe in more detail. Both address, in different ways, the modeling of a grouping structure among the observations, a common feature in complex situations. In this chapter we discuss how to deal with non-convex but smooth ℓ1-penalized likelihood problems. Regarding computation, we can typically find only a local optimum of the corresponding non-convex optimization problem, whereas the theory is given for the estimator defined by a global optimum. Particularly in high-dimensional problems, it is difficult to compute a global optimum, and it would be desirable to have theoretical properties for estimators arising from "reasonable" local optima; this remains largely an open problem.
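The computational point made in the abstract — that for a non-convex but smooth penalized likelihood one can typically only reach a local optimum — can be illustrated with a minimal sketch. This is not the book's algorithm; it assumes a generic proximal gradient (ISTA-style) scheme applied to an illustrative non-convex smooth loss (a Cauchy-type regression loss) plus an ℓ1 penalty. The function names `soft_threshold`, `penalized_objective`, and `prox_gradient_l1` are my own, chosen for this example.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def penalized_objective(X, y, beta, lam):
    # Non-convex smooth loss sum_i log(1 + r_i^2) plus ℓ1 penalty.
    r = y - X @ beta
    return np.sum(np.log1p(r ** 2)) + lam * np.sum(np.abs(beta))

def prox_gradient_l1(X, y, lam, step=1e-3, n_iter=2000):
    """Proximal gradient descent on a non-convex but smooth
    ℓ1-penalized loss. With a small enough step size this converges
    to a stationary point, i.e. a local (not necessarily global)
    optimum -- exactly the computational caveat in the text."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # Gradient of the smooth Cauchy-type loss part.
        grad = -X.T @ (2.0 * r / (1.0 + r ** 2))
        # Gradient step on the smooth part, prox step on the ℓ1 part.
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```

Because the loss is non-convex, the returned `beta` depends on the starting value; restarting from several initial points and keeping the best objective value is a common (heuristic) way to search for a "reasonable" local optimum.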
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this chapter
Bühlmann, P., van de Geer, S. (2011). Non-convex loss functions and ℓ1-regularization. In: Statistics for High-Dimensional Data. Springer Series in Statistics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-20192-9_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-20191-2
Online ISBN: 978-3-642-20192-9
eBook Packages: Mathematics and Statistics (R0)