
Shrinkage Ridge Regression Estimators in High-Dimensional Linear Models

  • Conference paper

In: Proceedings of the Ninth International Conference on Management Science and Engineering Management

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 362)

Abstract

In this paper, we suggest shrinkage ridge regression estimators for the multiple linear regression model and compare their performance with several penalty estimators, namely the lasso, the adaptive lasso, and SCAD. Monte Carlo studies are conducted to compare the estimators, and a real data example is presented to illustrate the usefulness of the suggested methods.
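The comparison between full-model and sub-model ridge estimators described in the paper can be sketched with a small Monte Carlo experiment. The snippet below is a minimal illustration, not the paper's actual simulation design: the sample size, dimensions, ridge penalty, and coefficient values are all assumptions chosen for demonstration, and the data-generating model takes the sub-model restriction \(\beta_2 = 0\) to be true.

```python
import numpy as np

# Monte Carlo sketch: full-model ridge estimate of beta_1 (using X1 and X2)
# versus the sub-model ridge estimate (using X1 only), when beta_2 = 0.
# All settings below are illustrative assumptions.
rng = np.random.default_rng(42)
n, p1, p2, lam, reps = 60, 3, 5, 1.0, 500
beta1 = np.array([1.0, -1.0, 0.5])          # active coefficients (assumed)

mse_rfm = mse_rsm = 0.0
for _ in range(reps):
    X1 = rng.normal(size=(n, p1))
    X2 = rng.normal(size=(n, p2))
    y = X1 @ beta1 + rng.normal(size=n)     # beta_2 = 0: sub-model holds
    X = np.hstack([X1, X2])
    # Ridge fits in closed form: (X'X + lam I)^{-1} X'y.
    b_full = np.linalg.solve(X.T @ X + lam * np.eye(p1 + p2), X.T @ y)
    b_sub = np.linalg.solve(X1.T @ X1 + lam * np.eye(p1), X1.T @ y)
    mse_rfm += np.sum((b_full[:p1] - beta1) ** 2) / reps
    mse_rsm += np.sum((b_sub - beta1) ** 2) / reps
```

Averaging squared estimation error over replications gives a simple finite-sample analogue of the risk comparisons studied in the paper.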



Acknowledgments

The research of Professor Ahmed was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) (Grant Number: 98832-2011). We thank Professor Jiuping Xu and Dr. Zongmin Li for the invitation and for their help and support in completing this research output.

Author information

Correspondence to S. Ejaz Ahmed.

Appendix

The distributions of \(\varvec{\widehat{\beta }}_{1}^{RFM}\) and \( \varvec{\widehat{\beta }}_{1}^{RSM}\) are given by:

$$\begin{aligned} \vartheta _{1}= & {} \sqrt{n}\left( \varvec{\widehat{\beta }}_{1}^{RFM}-\varvec{\beta }_{1}\right) \overset{d}{\rightarrow }N_{p_{1}}\left( -\varvec{\mu }_{11.2},\sigma ^{2}\varvec{Q}_{11.2}^{-1}\right) , \\ \text {and}~~\vartheta _{2}= & {} \sqrt{n}\left( \varvec{\widehat{\beta }}_{1}^{RSM}-\varvec{\beta }_{1}\right) \overset{d}{\rightarrow }N_{p_{1}}\left( -\varvec{\gamma },\sigma ^{2}\varvec{Q}_{11}^{-1}\right) , \end{aligned}$$

where “\(\overset{d}{\rightarrow }\)” denotes convergence in distribution, \(\varvec{\gamma }=\varvec{\mu }_{11.2}+\varvec{\delta }\) and \(\varvec{\delta }=\varvec{Q}_{11}^{-1}\varvec{Q}_{12}\varvec{\omega }\).

To obtain the relationship between the sub-model and full-model estimators of \(\varvec{\beta }_{1}\), we use the following equation, where \(\varvec{\widetilde{y}}=\varvec{y}-\varvec{X}_{2}\varvec{\widehat{\beta }}_{2}^{RFM}\):

$$\begin{aligned} \varvec{\widehat{\beta }}_{1}^{RFM}= & {} \underset{\varvec{\beta }_{1}}{\arg \min }\left\{ \left\| \varvec{\widetilde{y}}-\varvec{X}_{1}\varvec{\beta }_{1}\right\| ^{2}+\lambda ^{R}\left\| \varvec{\beta }_{1}\right\| ^{2}\right\} \nonumber \\= & {} \left( \varvec{X}_{1}^{\top }\varvec{X}_{1}+\lambda ^{R}\varvec{I}_{p_{1}}\right) ^{-1}\varvec{X}_{1}^{\top }\varvec{\widetilde{y}} \nonumber \\= & {} \left( \varvec{X}_{1}^{\top }\varvec{X}_{1}+\lambda ^{R}\varvec{I}_{p_{1}}\right) ^{-1}\varvec{X}_{1}^{\top }\varvec{y}-\left( \varvec{X}_{1}^{\top }\varvec{X}_{1}+\lambda ^{R}\varvec{I}_{p_{1}}\right) ^{-1}\varvec{X}_{1}^{\top }\varvec{X}_{2}\varvec{\widehat{\beta }}_{2}^{RFM} \nonumber \\= & {} \varvec{\widehat{\beta }}_{1}^{RSM}-\left( \varvec{X}_{1}^{\top }\varvec{X}_{1}+\lambda ^{R}\varvec{I}_{p_{1}}\right) ^{-1}\varvec{X}_{1}^{\top }\varvec{X}_{2}\varvec{\widehat{\beta }}_{2}^{RFM}. \end{aligned}$$
(6)
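Equation (6) is a purely algebraic identity, so it can be checked numerically. The sketch below (a NumPy illustration; the simulated data, dimensions, and value of \(\lambda^R\) are assumptions, with \(\varvec{\widehat{\beta }}_{2}^{RFM}\) taken here from an ordinary full-model ridge fit) verifies that the two sides agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2, lam = 50, 3, 4, 2.0       # illustrative sizes and ridge penalty
X1 = rng.normal(size=(n, p1))
X2 = rng.normal(size=(n, p2))
y = rng.normal(size=n)

# beta_2 component of a full-model ridge fit (assumption for illustration).
X = np.hstack([X1, X2])
beta_full = np.linalg.solve(X.T @ X + lam * np.eye(p1 + p2), X.T @ y)
beta2_rfm = beta_full[p1:]

# Left-hand side of Eq. (6): ridge of beta_1 on the adjusted response.
y_tilde = y - X2 @ beta2_rfm
A = np.linalg.inv(X1.T @ X1 + lam * np.eye(p1))
lhs = A @ X1.T @ y_tilde

# Right-hand side: sub-model estimator minus the correction term.
beta1_rsm = A @ X1.T @ y
rhs = beta1_rsm - A @ X1.T @ X2 @ beta2_rfm

print(np.allclose(lhs, rhs))
```

Because the identity holds for any value of \(\varvec{\widehat{\beta }}_{2}^{RFM}\), the two sides agree to machine precision regardless of the simulated data.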

Proof (Theorem 2)

From the definition of ADB,

$$\begin{aligned} ADB\left( \varvec{\widehat{\beta }}_{1}^{RFM}\right)= & {} E\left\{ \underset{n\rightarrow \infty }{\lim }\sqrt{n}\left( \varvec{\widehat{ \beta }}_{1}^{RFM}-\varvec{\beta }_{1}\right) \right\} \\= & {} -\varvec{\mu }_{11.2} \text {.} \end{aligned}$$

To verify the asymptotic bias of \(\varvec{\widehat{\beta }}_{1}^{RSM}\), we use Eq. (6); hence it can be written as follows:

$$\begin{aligned} ADB\left( \varvec{\widehat{\beta }}_{1}^{RSM} \right)= & {} E\left\{ \underset{n\rightarrow \infty }{\lim }\sqrt{n}\left( \varvec{ \widehat{\beta }}_{1}^{RSM} -\varvec{\beta }_{1}\right) \right\} \\= & {} E\left\{ \underset{n\rightarrow \infty }{\lim }\sqrt{n}\left( \varvec{ \widehat{\beta }}_{1}^{RFM} -\varvec{Q}_{11}^{-1}\varvec{Q}_{12} \varvec{\widehat{\beta }}_{2}^{RFM} -\varvec{\beta }_{1}\right) \right\} \\= & {} E\left\{ \underset{n\rightarrow \infty }{\lim }\sqrt{n}\left( \varvec{ \widehat{\beta }}_{1}^{RFM} -\varvec{\beta }_{1}\right) \right\} -E\left\{ \underset{n\rightarrow \infty }{\lim }\sqrt{n}\left( \varvec{ Q}_{11}^{-1}\varvec{Q}_{12}\varvec{\widehat{\beta }}_{2}^{RFM} \right) \right\} \\= & {} -\varvec{\mu }_{11.2}-\varvec{Q}_{11}^{-1}\varvec{Q} _{12}\varvec{\omega } \\= & {} -\left( \varvec{\mu }_{11.2}+\varvec{\delta }\right) \\= & {} -\varvec{\gamma } . \end{aligned}$$

Proof (Theorem 3)

Firstly, the asymptotic covariance of \(\varvec{\widehat{\beta }}_{1}^{RFM}\) is given by:

$$\begin{aligned} \varGamma \left( \varvec{\widehat{\beta }}_{1}^{RFM} \right)= & {} E\left\{ \underset{n\rightarrow \infty }{\lim }\sqrt{n}\left( \varvec{\widehat{\beta }}_{1}^{RFM} -\varvec{\beta }_{1}\right) \sqrt{n}\left( \varvec{\widehat{\beta }}_{1}^{RFM} -\varvec{\beta }_{1}\right) ^{\top }\right\} \\= & {} E\left( \vartheta _{1}\vartheta _{1}^{\top }\right) \\= & {} Cov\left( \vartheta _{1}\right) +E\left( \vartheta _{1}\right) E\left( \vartheta _{1}^{\top }\right) \\= & {} \sigma ^{2}\varvec{Q}_{11.2}^{-1}+\varvec{\mu }_{11.2}\varvec{\mu }_{11.2}^{\top }. \end{aligned}$$

The asymptotic covariance of \(\varvec{\widehat{\beta }}_{1}^{RSM}\) is given by:

$$\begin{aligned} \varGamma \left( \varvec{\widehat{\beta }}_{1}^{RSM} \right)= & {} E\left\{ \underset{n\rightarrow \infty }{\lim }\sqrt{n}\left( \varvec{\widehat{\beta }}_{1}^{RSM} -\varvec{\beta }_{1}\right) \sqrt{n}\left( \varvec{\widehat{\beta }}_{1}^{RSM} -\varvec{\beta }_{1}\right) ^{\top }\right\} \\= & {} E\left( \vartheta _{2}\vartheta _{2}^{\top }\right) \\= & {} Cov\left( \vartheta _{2}\right) +E\left( \vartheta _{2}\right) E\left( \vartheta _{2}^{\top }\right) \\= & {} \sigma ^{2}\varvec{Q}_{11}^{-1}+\varvec{\gamma \gamma }^{\top }. \end{aligned}$$

By using Eq. (3),

$$\begin{aligned} R\left( \varvec{\widehat{\beta }}_{1}^{RFM}\right)= & {} \sigma ^{2}{\text {tr}}\left( \varvec{WQ}_{11.2}^{-1}\right) +\varvec{\mu }_{11.2}^{\top }\varvec{W\mu }_{11.2}, \\ R\left( \varvec{\widehat{\beta }}_{1}^{RSM}\right)= & {} \sigma ^{2}{\text {tr}}\left( \varvec{WQ}_{11}^{-1}\right) +\varvec{\gamma }^{\top }\varvec{W\gamma }. \end{aligned}$$
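The two risk expressions above are straightforward to evaluate once \(\sigma^2\), \(\varvec{W}\), the partitioned limit matrices, and the drift terms are fixed. The snippet below computes both with small illustrative matrices; every numerical value is an assumption chosen for demonstration, not taken from the paper.

```python
import numpy as np

# Illustrative inputs (assumptions): partitioned limit matrix blocks Q_ij,
# weight matrix W, noise variance sigma^2, and drift vectors.
sigma2 = 1.0
Q11 = np.array([[2.0, 0.5], [0.5, 1.5]])
Q12 = np.array([[0.3, 0.1], [0.2, 0.4]])
Q22 = np.array([[1.8, 0.2], [0.2, 1.2]])
Q11_2 = Q11 - Q12 @ np.linalg.inv(Q22) @ Q12.T    # Q_{11.2}
W = np.eye(2)
mu112 = np.array([0.1, -0.2])                     # mu_{11.2} (assumed)
omega = np.array([0.05, 0.05])                    # omega (assumed)
delta = np.linalg.inv(Q11) @ Q12 @ omega          # delta = Q11^{-1} Q12 omega
gamma = mu112 + delta                             # gamma = mu_{11.2} + delta

# Quadratic risks from Theorem 3.
risk_rfm = sigma2 * np.trace(W @ np.linalg.inv(Q11_2)) + mu112 @ W @ mu112
risk_rsm = sigma2 * np.trace(W @ np.linalg.inv(Q11)) + gamma @ W @ gamma
```

Since \(\varvec{Q}_{11.2} \preceq \varvec{Q}_{11}\), the trace (variance) term of the full-model risk is never smaller than that of the sub-model risk; which estimator wins overall depends on the bias terms \(\varvec{\mu}_{11.2}\) and \(\varvec{\gamma}\).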


Copyright information

© 2015 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yüzbaşı, B., Ahmed, S.E. (2015). Shrinkage Ridge Regression Estimators in High-Dimensional Linear Models. In: Xu, J., Nickel, S., Machado, V., Hajiyev, A. (eds) Proceedings of the Ninth International Conference on Management Science and Engineering Management. Advances in Intelligent Systems and Computing, vol 362. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-47241-5_67


  • DOI: https://doi.org/10.1007/978-3-662-47241-5_67

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-47240-8

  • Online ISBN: 978-3-662-47241-5

  • eBook Packages: Engineering (R0)
