Variable selection in regression - estimation, prediction, sparsity, inference

  • Jaroslaw Harezlak
  • Eric Tchetgen
  • Xiaochun Li
Chapter
Part of the Applied Bioinformatics and Biostatistics in Cancer Research book series (ABB)

Keywords

Ridge regression · Model selection procedure · Adaptive lasso · Smoothly clipped absolute deviation (SCAD) · Oracle property
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Division of Biostatistics, Indiana University School of Medicine, Indianapolis, USA
