On Penalized Least-Squares: Its Mean Squared Error and a Quasi-Optimal Weight Ratio

Chapter in: Recent Advances in Linear Models and Related Areas

It is well known that, in a Random Effects Model, the Best inhomogeneously Linear Prediction (inhomBLIP) of the random effects vector is equivalently generated by the standard Least-Squares (LS) approach. This LS solution is based on an objective function that consists of two parts, the first related to the observations and the second to the prior information on the random effects; for more details, we refer to the book by Rao, Toutenburg, Shalabh and Heumann (2008). We emphasize that, in this context, the second part cannot be interpreted as a “penalization term”.
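For orientation, such a two-part objective can be sketched in generic notation (the symbols below are illustrative assumptions, not notation taken from the chapter): with observation vector \mathbf{y}, design matrix A, random effects vector \mathbf{x} with prior expectation \boldsymbol{\mu}_x, and dispersion matrices \Sigma_e and \Sigma_x for the noise and the random effects,

$$ \Phi(\mathbf{x}) \;=\; (\mathbf{y}-A\mathbf{x})^{\mathsf T}\,\Sigma_e^{-1}\,(\mathbf{y}-A\mathbf{x}) \;+\; (\mathbf{x}-\boldsymbol{\mu}_x)^{\mathsf T}\,\Sigma_x^{-1}\,(\mathbf{x}-\boldsymbol{\mu}_x), $$

where the first quadratic form weights the observational residuals and the second the deviation of the random effects from their prior expectation; minimizing \Phi over \mathbf{x} is the kind of LS problem described above, and the second term reflects genuine prior information rather than a penalty. See Rao et al. (2008) for the precise setup.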

A very similar objective function, however, could be applied in the Gauss-Markov model, where no prior information is available for the unknown parameters. In this case, the additional term would indeed serve as a “penalization”, since it forces the Penalized Least-Squares (PLS) solution into a chosen neighborhood that is not specified by the model itself. This idea goes back at least to Tykhonov (1963) and Phillips (1962) and has since become known as (a special case of) “Tykhonov regularization”, in which the weight ratio between the first and the second term of the objective function determines the degree of smoothing to which the estimated parameters are subjected. This weight ratio is widely known as the “Tykhonov regularization parameter”; for more details, we refer to Grafarend and Schaffrin (1993) or Engl et al. (1996), for instance.
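As a minimal numerical sketch of this smoothing effect, consider a ridge-type PLS problem, minimize ||y - A x||^2 + lam ||x||^2, where lam plays the role of the weight ratio between the data term and the penalty term. The names A, y, lam and the data below are illustrative assumptions, not taken from the chapter:

import numpy as np

# Penalized Least-Squares (Tykhonov/ridge type): minimize
#   ||y - A x||^2 + lam * ||x||^2,
# where lam is the weight ratio between the data term and the penalty term.
def penalized_least_squares(A, y, lam):
    """Solve the PLS normal equations (A^T A + lam I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = A @ x_true + 0.1 * rng.standard_normal(50)

for lam in (0.0, 1.0, 100.0):
    # A larger weight ratio lam pulls the estimate further toward zero
    # (stronger smoothing/regularization).
    print(lam, np.round(penalized_least_squares(A, y, lam), 3))

With lam = 0 the ordinary LS estimate is recovered; as lam grows, the estimate is increasingly forced toward the chosen neighborhood of zero.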


References

  • Baksalary JK, Kala R (1983) Partial ordering between matrices one of which is of rank one. Bulletin of the Polish Academy of Sciences: Mathematics 31: 5-7
  • Engl H, Hanke M, Neubauer A (1996) Regularization of Inverse Problems. Kluwer, Dordrecht/NL
  • Goldberger AS (1962) Best linear unbiased prediction in the generalized linear regression model. Journal of the American Statistical Association 57: 369-375
  • Golub GH, Heath M, Wahba G (1979) Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21: 215-223
  • Grafarend E, Schaffrin B (1993) Adjustment Computations in Linear Models (in German). Bibliographisches Institut, Mannheim
  • Hansen PC, O’Leary DP (1993) The use of the L-curve in the regularization of discrete ill-posed problems. SIAM Journal on Scientific Computing 14: 1487-1503
  • Marshall AW, Olkin I (1979) Inequalities: Theory of Majorization and Its Applications. Academic Press, New York
  • Phillips DL (1962) A technique for the numerical solution of certain integral equations of the first kind. Journal of the Association for Computing Machinery 9: 84-96
  • Rao CR (1976) Estimation of parameters in a linear model. Annals of Statistics 4: 1023-1037
  • Rao CR, Kleffe J (1988) Estimation of Variance Components and Applications. North-Holland, Amsterdam/NL
  • Rao CR, Toutenburg H, Shalabh, Heumann C (2008) Linear Models and Generalizations: Least Squares and Alternatives (3rd edition). Springer, Berlin Heidelberg New York
  • Schaffrin B (1983) Estimation of variance-covariance components for heterogeneous replicated measurements (in German). German Geodetic Commission, Publication C-282, Munich/Germany
  • Schaffrin B (1985) The geodetic datum with stochastic prior information (in German). German Geodetic Commission, Publication C-313, Munich/Germany
  • Schaffrin B (1995) A comparison of inverse techniques: Regularization, weight estimation and homBLUP. IUGG General Assembly, IAG Scientific Meeting U7, Boulder/CO
  • Schaffrin B (2005) On the optimal choice of the regularization parameter through variance ratio estimation. 14th International Workshop on Matrices and Statistics, Auckland/NZ
  • Searle SR, Casella G, McCulloch CE (1992) Variance Components. Wiley, New York
  • Tykhonov AN (1963) The regularization of incorrectly posed problems. Soviet Mathematics Doklady 4: 1624-1627


Copyright information

© 2008 Physica-Verlag Heidelberg

About this chapter

Cite this chapter

Schaffrin, B. (2008). On Penalized Least-Squares: Its Mean Squared Error and a Quasi-Optimal Weight Ratio. In: Recent Advances in Linear Models and Related Areas. Physica-Verlag HD. https://doi.org/10.1007/978-3-7908-2064-5_16

