
Boosting Correlation Based Penalization in Generalized Linear Models

  • Jan Ulbricht
  • Gerhard Tutz

Linear models have a long tradition in statistics, as nicely summarized in Rao, Toutenburg, Shalabh, Heumann (2008). When the number of covariates is large, the estimation of the unknown parameters frequently raises problems, and interest then usually focuses on data-driven subset selection of the relevant regressors. The sophisticated monitoring equipment that is now routinely used in many data collection processes makes it possible to collect data with a huge number of regressors, even with considerably more explanatory variables than observations. One example is the analysis of microarray data of gene expressions, where the typical tasks are to select variables and to classify samples into two or more alternative categories. Binary responses of this type may be handled within the framework of generalized linear models (Nelder and Wedderburn (1972)) and are also considered in Rao, Toutenburg, Shalabh, Heumann (2008).
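To make the binary-response GLM setting concrete, here is a minimal sketch (ours, not from the paper; the data are simulated and all names are illustrative) that fits a logistic regression by iteratively reweighted least squares, the standard Fisher scoring algorithm for GLMs:

    import numpy as np

    # Minimal sketch: logistic regression (a binary-response GLM) fitted by
    # iteratively reweighted least squares (IRLS) / Fisher scoring.
    # Simulated data, for illustration only.
    rng = np.random.default_rng(0)
    n, p = 200, 5
    X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
    beta_true = np.array([0.5, 1.0, -1.0, 0.0, 0.0, 0.5])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

    beta = np.zeros(p + 1)
    for _ in range(25):
        eta = X @ beta                              # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))             # mean under the logit link
        w = np.clip(mu * (1.0 - mu), 1e-10, None)   # IRLS weights
        z = eta + (y - mu) / w                      # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    print(np.round(beta, 3))

With many more columns than rows, the matrix X'WX becomes singular and this unpenalized fit breaks down, which is exactly the situation that motivates penalization.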

In this paper we propose a new regularization method, and a boosted version of it, which explicitly focus on the selection of groups of correlated variables. To this end we consider a correlation-based penalty which uses the correlations between variables as data-driven weights for penalization; see Tutz and Ulbricht (2006) for a similar approach in the linear model. The new method and some of its main properties are described in Section 2. A boosted version, presented in Section 3, additionally allows for variable selection. In Section 4 we use simulated and real data sets to compare the new methods with existing ones.
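For orientation, the correlation-based penalty in the discussion paper Tutz and Ulbricht (2006) takes, for pairwise correlations rho_ij, the quadratic form

    P(beta) = sum_{i<j} [ (beta_i - beta_j)^2 / (1 - rho_ij)
                        + (beta_i + beta_j)^2 / (1 + rho_ij) ],

so strongly positively correlated variables are drawn toward similar coefficients and strongly negatively correlated ones toward coefficients of opposite sign. The sketch below is ours and should be read as an assumption about the exact weighting rather than this paper's definitive formulation; it evaluates the penalty and builds the matrix M with P(beta) = beta' M beta, which could enter a penalized Fisher scoring step:

    import numpy as np

    def correlation_penalty(beta, R):
        # Assumed form of the correlation-based penalty (after Tutz and
        # Ulbricht 2006): highly positively correlated pairs are pushed toward
        # similar coefficients, negatively correlated pairs toward
        # opposite-sign coefficients.  Requires |rho_ij| < 1 for i != j.
        p = len(beta)
        pen = 0.0
        for i in range(p):
            for j in range(i + 1, p):
                rho = R[i, j]
                pen += (beta[i] - beta[j]) ** 2 / (1.0 - rho)
                pen += (beta[i] + beta[j]) ** 2 / (1.0 + rho)
        return pen

    def penalty_matrix(R):
        # Matrix M with beta' M beta == correlation_penalty(beta, R),
        # convenient for a penalized IRLS step: solve(X'WX + lam*M, X'Wz).
        p = R.shape[0]
        M = np.zeros((p, p))
        for i in range(p):
            for j in range(p):
                if j != i:
                    M[i, j] = -2.0 * R[i, j] / (1.0 - R[i, j] ** 2)
            M[i, i] = sum(2.0 / (1.0 - R[i, j] ** 2) for j in range(p) if j != i)
        return M

    # Sanity check on random data that the two representations agree.
    rng = np.random.default_rng(1)
    R = np.corrcoef(rng.standard_normal((50, 4)), rowvar=False)
    b = rng.standard_normal(4)
    assert np.isclose(correlation_penalty(b, R), b @ penalty_matrix(R) @ b)

As rho_ij approaches 1 the penalty forces beta_i toward beta_j, so groups of highly correlated variables receive nearly identical coefficients; the boosted version of Section 3 then adds variable selection on top of this grouping behavior.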

Keywords

Generalized Linear Model · Maximum Likelihood Estimator · Subset Selection · Unknown Parameter Vector · Fisher Matrix

References

  1. Anderson JA, Blair V (1982) Penalized maximum likelihood estimation in logistic regression and discrimination. Biometrika 69:123-136MATHCrossRefMathSciNetGoogle Scholar
  2. Breiman L (1998) Arcing classifiers. Annals of Statistics 26:801-849MATHCrossRefMathSciNetGoogle Scholar
  3. Bühlmann P, Yu B (2003) Boosting with the L2 loss: Regression and classification. Journal of the American Statistical Association 98:324-339MATHCrossRefMathSciNetGoogle Scholar
  4. Duffy DE, Santner TJ (1989) On the small sample properties of restricted maximum likelihood estimators for logistic regression models. Communication in Statistics, Theory & Methods 18:959-989MATHCrossRefMathSciNetGoogle Scholar
  5. Fahrmeir L, Kaufmann H (1985) Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models. The Annals of Statistics 13:342-368MATHCrossRefMathSciNetGoogle Scholar
  6. Friedman JH (2001) Greedy function approximation: a gradient boosting machine. Annals of Statistics 29:1189-1232MATHCrossRefMathSciNetGoogle Scholar
  7. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, Bloomfield CD, Lander ES (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286:531-537CrossRefGoogle Scholar
  8. Hoerl AE, Kennard RW (1970) Ridge regression: Bias estimation for nonorthogonal problems. Technometrics 12:55-67MATHCrossRefGoogle Scholar
  9. Meir R, Rätsch G (2003) An introduction to boosting and leveraging. In: Mendelson S, Smola A (eds) Advanced Lectures on Machine Learning, Springer, New York, pp 119-184Google Scholar
  10. Nelder JA, Wedderburn RWM (1972) Generalized linear models. Journal of the Royal Statistical Society A 135:370-384CrossRefGoogle Scholar
  11. Nyquist H (1991) Restricted estimation of generalized linear models. Applied Statistics 40:133-141MATHCrossRefGoogle Scholar
  12. Park MY, Hastie T (2007) An l1 regularization-path algorithm for generalized linear models. JRSSGoogle Scholar
  13. Schaefer RL, Roi LD, Wolfe RA (1984) A ridge logistic estimate. Com-munication in Statistics, Theory & Methods 13:99-113CrossRefGoogle Scholar
  14. Segerstedt B (1992) On ordinary ridge regression in generalized linear models. Communication in Statistics, Theory & Methods 21:2227-2246MATHCrossRefMathSciNetGoogle Scholar
  15. Shevade SK, Keerthi SS (2003) A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics 19:2246-2253CrossRefGoogle Scholar
  16. Tibshirani R (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B 58:267-288MATHMathSciNetGoogle Scholar
  17. Rao CR, Toutenburg H, Shalabh, Heumann C (2008) Linear Models - Least Squares and Generalizations (3rd edition). Springer, Berlin Heidelberg New YorkGoogle Scholar
  18. Trenkler G, Toutenburg H (1990) Mean squared error matrix comparisons between biased estimators - an overview of recent results. Statistical Papers 31:165-179MATHCrossRefMathSciNetGoogle Scholar
  19. Tutz G, Binder H (2007) Boosting ridge regression. Computational Statistics & Data Analysis (Appearing)Google Scholar
  20. Tutz G, Leitenstorfer F (2007) Generalized smooth monotonic regression in additive modeling. Journal of Computational and Graphical Statistics 16:165-188CrossRefMathSciNetGoogle Scholar
  21. Tutz G, Ulbricht J (2006) Penalized regression with correlation based penalty. Discussion Paper 486, SFB 386, Universität MünchenGoogle Scholar
  22. Zou H, Hastie T (2005) Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society B 67:301-320MATHCrossRefMathSciNetGoogle Scholar

Copyright information

© Physica-Verlag Heidelberg 2008

Authors and Affiliations

  • Jan Ulbricht (1)
  • Gerhard Tutz (1)

  1. Department of Statistics, University of Munich, Munich, Germany
