AStA Advances in Statistical Analysis, Volume 102, Issue 2, pp 179–210

Non-concave penalization in linear mixed-effect models and regularized selection of fixed effects

  • Abhik Ghosh
  • Magne Thoresen
Original Paper

Abstract

Mixed-effect models are very popular for analyzing data with a hierarchical structure. In medical applications, typical examples include repeated observations within subjects in a longitudinal design, or patients nested within centers in a multicenter design. Recently, however, due to medical advances, the number of fixed-effect covariates collected from each patient can be quite large, e.g., data on gene expressions, and not all of these variables are necessarily important for the outcome. It is therefore important to select the relevant covariates correctly in order to obtain optimal inference for the overall study. The relevant random effects, on the other hand, will often be low-dimensional and pre-specified. In this paper, we consider regularized selection of important fixed-effect variables in linear mixed-effect models, along with maximum penalized likelihood estimation of both fixed- and random-effect parameters, based on general non-concave penalties. Asymptotic consistency and variable selection consistency with oracle properties are proved for low-dimensional cases as well as for high dimensionality of non-polynomial order of the sample size (the number of parameters is much larger than the sample size). We also provide a computationally efficient algorithm for implementation. Additionally, all the theoretical results are proved for a general non-convex optimization problem that applies to several important situations well beyond the mixed-model setup (such as finite mixtures of regressions), illustrating the wide applicability of our proposal.
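
To make the setup concrete, the following is a minimal sketch of the kind of objective the abstract describes, written in generic notation that is not taken from the paper. For responses \(y \in \mathbb{R}^n\), fixed-effect design matrix \(X\), fixed-effect coefficients \(\beta \in \mathbb{R}^p\), random-effect design matrix \(Z\), and variance parameters \(\theta\) giving the marginal covariance \(V(\theta) = Z G(\theta) Z^\top + \sigma^2 I_n\), a penalized marginal log-likelihood of the general form

$$
Q_\lambda(\beta, \theta) = -\tfrac{1}{2}\log\bigl|V(\theta)\bigr| - \tfrac{1}{2}(y - X\beta)^\top V(\theta)^{-1}(y - X\beta) - n \sum_{j=1}^{p} p_\lambda\bigl(|\beta_j|\bigr)
$$

is maximized jointly over \((\beta, \theta)\), where \(p_\lambda\) is a non-concave penalty. A standard example is the SCAD penalty of Fan and Li (2001), defined through its derivative

$$
p_\lambda'(t) = \lambda \left\{ I(t \le \lambda) + \frac{(a\lambda - t)_+}{(a - 1)\lambda}\, I(t > \lambda) \right\}, \qquad t > 0, \; a > 2.
$$

Such a penalty shrinks small fixed-effect coefficients exactly to zero, which performs the variable selection, while the pre-specified variance components in \(\theta\) are estimated without penalization. The exact objective, penalty, and scaling used in the paper may differ; this display is only illustrative.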

Notes

Acknowledgements

This work is funded by the Norwegian Cancer Society, Grant No. 5818504. We also thank Prof. Stine Ulven from the Department of Nutrition, University of Oslo, for providing the real dataset used in the paper and for her help and guidance in the biological interpretation of the results.

Supplementary material

Supplementary material 1: 10182_2017_298_MOESM1_ESM.rar (rar, 6 kb)

Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  1. Oslo Centre for Biostatistics and Epidemiology, Department of Biostatistics, University of Oslo, Oslo, Norway
