
Linear Regression via Elastic Net: Non-enumerative Leave-One-Out Verification of Feature Selection

  • Elena Chernousova
  • Nikolay Razin
  • Olga Krasotkina
  • Vadim Mottl
  • David Windridge
Chapter
Part of the Springer Optimization and Its Applications book series (SOIA, volume 92)

Abstract

The feature-selective, non-quadratic Elastic Net criterion for regression estimation is completely determined by two numerical regularization parameters, which penalize, respectively, the squared and absolute values of the regression coefficients under estimation. It is an inherent property of the Elastic Net minimum that the values of the regularization parameters completely determine a partition of the variable set into three subsets, holding the negative, positive, and strictly zero coefficient values; the first two subsets correspond to "informative" features and the third to "redundant" ones. We propose in this paper to treat this partition as a secondary structural parameter to be verified by leave-one-out cross-validation. Once the partition is fixed, we show that there exists a non-enumerative method for computing the leave-one-out error, enabling an assessment of model generalization and a tuning of the structural parameters without the need for repeated training.
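To make the abstract's central idea concrete, the following is a minimal sketch in Python/NumPy; the function name, the toy data, and the particular penalty convention are illustrative assumptions, not taken from the chapter. Once the partition is fixed, each absolute value in the Elastic Net criterion reduces to a linear term with a known sign on the active features, so the minimizer solves a linear system; the Sherman-Morrison identity then yields every leave-one-out residual from a single full-data fit.

    import numpy as np

    def loo_residuals_fixed_partition(X, y, active, signs, lam1, lam2):
        # Restrict the design matrix to the active (non-zero) features.
        XA = X[:, active]
        p = XA.shape[1]
        # With fixed signs s, the Elastic Net criterion on the active set,
        #   ||y - XA b||^2 + lam2 * ||b||^2 + lam1 * s^T b,
        # is quadratic, so the minimizer solves a linear system.
        A = XA.T @ XA + lam2 * np.eye(p)
        rhs = XA.T @ y - 0.5 * lam1 * signs
        A_inv = np.linalg.inv(A)
        beta = A_inv @ rhs                       # full-data solution
        y_hat = XA @ beta
        # Leverages h_i = x_i^T A^{-1} x_i; Sherman-Morrison gives the exact
        # leave-one-out residual e_i / (1 - h_i) without any refitting,
        # assuming the partition is unchanged when observation i is removed
        # (the chapter's working assumption).
        h = np.einsum('ij,jk,ik->i', XA, A_inv, XA)
        return (y - y_hat) / (1.0 - h)

    # Toy usage with a hypothetical, previously computed partition.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 8))
    y = X @ np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0]) + 0.1 * rng.normal(size=50)
    active = np.array([0, 1, 4])             # indices of non-zero coefficients
    signs = np.array([1.0, -1.0, 1.0])       # their signs at the full-data optimum
    res = loo_residuals_fixed_partition(X, y, active, signs, lam1=0.5, lam2=1.0)
    print("non-enumerative LOO mean squared error:", np.mean(res ** 2))

The point of the sketch is that the expensive step, inverting the regularized Gram matrix, is performed once on the full sample; the n leave-one-out residuals then cost only a leverage computation, which is the sense in which the verification is non-enumerative.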

Keywords

Elastic Net regression · Partitioning of the feature set · Secondary structural parameter · Feature selection · Non-enumerative leave-one-out


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Elena Chernousova (1)
  • Nikolay Razin (1)
  • Olga Krasotkina (2)
  • Vadim Mottl (3)
  • David Windridge (4)
  1. Moscow Institute of Physics and Technology, Moscow, Russia
  2. Tula State University, Tula, Russia
  3. Computing Centre of the Russian Academy of Sciences, Moscow, Russia
  4. University of Surrey, Guildford, UK
