# On the variance parameter estimator in general linear models


## Abstract

In the present note we consider general linear models where the covariates may be both random and non-random, and where the only restrictions on the error terms are that they are independent and have finite fourth moments. For this class of models we analyse the variance parameter estimator. In particular we obtain finite sample size bounds for the variance of the variance parameter estimator which are independent of covariate information regardless of whether the covariates are random or not. For the case with random covariates this immediately yields bounds on the unconditional variance of the variance estimator—a situation which in general is analytically intractable. The situation with random covariates is illustrated in an example where a certain vector autoregressive model which appears naturally within the area of insurance mathematics is analysed. Further, the obtained bounds are sharp in the sense that both the lower and upper bound will converge to the same asymptotic limit when scaled with the sample size. By using the derived bounds it is simple to show convergence in mean square of the variance parameter estimator for both random and non-random covariates. Moreover, the derivation of the bounds for the above general linear model is based on a lemma which applies in greater generality. This is illustrated by applying the used techniques to a class of mixed effects models.

## Keywords

General linear models · Non-Gaussian error terms · Moments of variance parameter estimators · Finite sample size bounds · Random covariates · Unconditional bounds

## 1 The general linear model

*conditional* on \({\varvec{X}}\) and \({\varvec{\Sigma }}\), mean \(\varvec{0}\) and covariance \({\varvec{I}}\), together with common central fourth moments \(\mu _4 \ge 1\) (since \(\mu _4 := {\mathbb {E}}[{\varvec{e}}_i^4] \ge {\text {Var}}({\varvec{e}}_i)^2 = 1\)). Here \({\varvec{I}}\) denotes the \(n \times n\) identity matrix. The standard generalized least squares estimator of \(\varvec{\beta }\), conditional on \({\varvec{X}}\) and \({\varvec{\Sigma }}\), is given by:

*n* and \(p_x\), for instance simply

*n*. It is important to note that when \({\varvec{X}}\) is assumed to be random, it is also assumed that it can be observed perfectly. That is, we are not dealing with an errors-in-variables model, which would lead to problems such as biased estimators. In Example 3 we comment on the situation where the regression coefficients, rather than the covariates, are allowed to be random, i.e. we consider mixed effects models.
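As a numerical companion to the setup above, the following sketch computes the standard GLS estimators in their familiar closed forms, \(\hat{\varvec{\beta }} = ({\varvec{X}}^{\top } {\varvec{\Sigma }}^{-1} {\varvec{X}})^{-1} {\varvec{X}}^{\top } {\varvec{\Sigma }}^{-1} {\varvec{y}}\) and \({\hat{\sigma }}^2\) as the weighted residual sum of squares divided by \(n - p_x\). The function name is illustrative, not taken from the paper:

```python
import numpy as np

def gls_estimators(y, X, Sigma):
    """Textbook GLS sketch: beta_hat solves the weighted normal equations,
    and sigma2_hat is the Sigma-weighted residual sum of squares over n - p_x."""
    n, p = X.shape
    Si = np.linalg.inv(Sigma)
    # Solve (X' Si X) beta = X' Si y rather than inverting X' Si X explicitly.
    beta_hat = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
    r = y - X @ beta_hat                  # GLS residual vector
    sigma2_hat = (r @ Si @ r) / (n - p)   # unbiased conditional on X, Sigma
    return beta_hat, sigma2_hat
```

With noise-free data the residuals vanish, so \({\hat{\sigma }}^2\) returns (numerically) zero and \(\hat{\varvec{\beta }}\) recovers the true coefficients exactly.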

We will henceforth focus on properties of the variance of the variance parameter estimator \({\hat{\sigma }}^2\). One can note that the special case of a sample variance in the non-Gaussian setting, i.e. an intercept-only GLM, was already treated in e.g. Cramér (1946, Eq. 27.4.2). Other similar results are obtained in the theory of minimum variance component estimation, see e.g. Rao (1970, 1971) and the proof of Seber and Lee (2003, Thm. 3.4). More general results can be found in, for instance, Dette et al. (1998), where estimation of the variance parameter in the case of nonparametric regression is treated. Lemma 1 may be seen as a special case of the corresponding expression for the mean squared error (MSE) from Dette et al. (1998, Eq. 6). In Example 3 we discuss extensions to mixed effects models and comment on the results in Li (2012), which extend the analysis in Dette et al. (1998) w.r.t. mixed effects. We will return to these comparisons in more detail below.

The general problem formulation above, of course, relies on the theory of random quadratic forms. For more on this topic, see e.g. Eaton (1983), Mathai et al. (2012) and Seber and Lee (2003) and the references therein.

The results that we obtain for the variance of the variance estimator rely on the fact that \({\text {Var}}({\hat{\sigma }}^2 | {\varvec{X}}, {\varvec{\Sigma }})\) can be calculated explicitly. For the particular situation of interest this variance is obtained using the following result from Plackett (1960, Eq. (2), p. 16), which we state in the following lemma:

### Lemma 1

**N.B.** The last sum in (4) corresponds to the sum of the squared diagonal elements of \({\varvec{W}}\), which should not be confused with \({{\,\mathrm{tr}\,}}({\varvec{W}}^2)\). The proof of Lemma 1 is given in Plackett (1960), and a more general version can be found, without proof, in Atiqullah (1962a); a proof of the latter can be found in Seber and Lee (2003, Thm. 1.6). See also the derivation of the MSE expressions given in Dette et al. (1998) and Li (2012, Lemma 3).
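Since the display of Lemma 1 is not reproduced above, it may help to recall the classical Plackett-type identity it refers to: for a symmetric \({\varvec{W}}\) and independent standardized errors with common fourth moment \(\mu _4\), \({\text {Var}}({\varvec{e}}^{\top } {\varvec{W}} {\varvec{e}}) = (\mu _4 - 3)\sum _i w_{ii}^2 + 2 {{\,\mathrm{tr}\,}}({\varvec{W}}^2)\), which indeed keeps \(\sum _i w_{ii}^2\) and \({{\,\mathrm{tr}\,}}({\varvec{W}}^2)\) as separate terms. A quick Monte Carlo sanity check (all names illustrative) is consistent with this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                      # an arbitrary symmetric test matrix

# Standardized Uniform(-sqrt(3), sqrt(3)) errors: mean 0, variance 1, mu4 = 9/5.
mu4 = 9.0 / 5.0
e = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(200_000, n))
q = np.einsum('ki,ij,kj->k', e, W, e)  # 200,000 draws of the quadratic form e'We

# Classical formula: note sum(diag(W)^2) and tr(W^2) enter separately.
theory = (mu4 - 3) * np.sum(np.diag(W) ** 2) + 2 * np.trace(W @ W)
```

The empirical variance `q.var()` should then agree with `theory` up to Monte Carlo error of a fraction of a percent at this sample size.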

Further, the main objective of the current note is to obtain finite sample bounds for the variance of the variance estimator of the general linear model defined by (1) when the covariates may be both random and non-random. In order to prove such bounds we will make use of the following lemma:

### Lemma 2

Note that Lemma 2 only relies on Lemma 1 through the explicit form of the variance given by (4), a fact which will be exploited further in Example 3 below. The usefulness of Lemma 2 becomes apparent when the decomposition of \({\varvec{W}}\) is in terms of a \({\varvec{U}}\) with \(p_u\) being a constant (much) smaller than *n*. This is what will be exploited in Corollaries 1 and 2, and it motivates expressing the bounds in Lemma 2 in terms of \(p_u\) instead of \(p_w := n - p_u\). Further, note that the split between \(p_u \le n/2\) and \(p_u > n/2\) ensures that all bounds are positive.

### Proof of Lemma 2

We may now state our main result:

### Proposition 1

- (i)
\({\varvec{\Sigma }}\) be a random symmetric almost surely positive definite \(n\times n\) matrix and \({\varvec{X}}\) be a random \(n\times p_x\) matrix of almost surely full column rank,

- (ii)
the error term components defining the random \(n\times 1\) vector \({\varvec{e}}\) be independent with, conditional on \({\varvec{X}}\) and \({\varvec{\Sigma }}\), mean 0, variance 1 and common fourth central moments \(\mu _4\).

The proof of the *conditional* bounds in Proposition 1 is based on the fact that \({\hat{\sigma }}^2\) may be expressed according to (3), together with an application of Lemmas 1 and 2. Moreover, note that the conditional variance bounds for \({\hat{\sigma }}^2\) are deterministic regardless of whether the covariates \({\varvec{X}}\) are random or not. This fact, combined with the conditional unbiasedness of \({\hat{\sigma }}^2\) given \({\varvec{X}}\) and \({\varvec{\Sigma }}\) and an application of the variance decomposition, proves the *unconditional* variance bounds in Proposition 1. The full proof of Proposition 1 is given in the appendix.
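The variance decomposition step can be written out explicitly: since \({\hat{\sigma }}^2\) is unbiased conditional on \({\varvec{X}}\) and \({\varvec{\Sigma }}\), the conditional mean \({\mathbb {E}}[{\hat{\sigma }}^2 \mid {\varvec{X}}, {\varvec{\Sigma }}] = \sigma ^2\) is non-random and contributes nothing, i.e.

```latex
\operatorname{Var}(\hat{\sigma}^2)
  = \mathbb{E}\bigl[\operatorname{Var}(\hat{\sigma}^2 \mid \boldsymbol{X}, \boldsymbol{\Sigma})\bigr]
  + \operatorname{Var}\bigl(\mathbb{E}[\hat{\sigma}^2 \mid \boldsymbol{X}, \boldsymbol{\Sigma}]\bigr)
  = \mathbb{E}\bigl[\operatorname{Var}(\hat{\sigma}^2 \mid \boldsymbol{X}, \boldsymbol{\Sigma})\bigr].
```

Hence any deterministic bound on the conditional variance carries over unchanged to the unconditional variance.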

We can also state a finite sample upper bound on the difference between the conditional and unconditional variances, together with convergence of both, using the bounds in Proposition 1.

### Corollary 1

### Remark 1

*n*. Since Corollary 1 is a convergence result in terms of *n*, where \(p_x\) is a fixed quantity, the primary interest is in the situation where \(p_x \le n/2\).

By Corollary 1 it follows that \({\text {Var}}({\hat{\sigma }}^2) \rightarrow 0\), uniformly as \(n \rightarrow \infty \), and therefore we can state the following result.

### Corollary 2

### Remark 2

Note that \({\hat{\sigma }}^2\) from (2) is a special case of the variance parameter estimator \({\hat{\sigma }}_{ON}^2\) defined for the nonparametric mixed effects models analysed in Li (2012), see their Eq. (11). One can also note that the mixed effects models treated in Li (2012) are an extension of the model class treated in Dette et al. (1998). An alternative proof of the consistency of \({\hat{\sigma }}^2\) stated in Corollary 2 in the case with *fixed covariates* hence follows from Li (2012, Thm. 1) by neglecting the random effects part of their model. Further, as already commented upon above, the focus in Dette et al. (1998) and Li (2012) is on the *MSE* of the variance parameter estimator. Moreover, their MSE expressions are stated in terms of asymptotic equivalence, i.e. using “*o*(1 / *n*)”. Thus, it is not straightforward to use their expressions to obtain bounds similar to those provided by Proposition 1 (or Lemma 2) without carefully inspecting the *o*(1 / *n*) terms.

## 2 Examples

### Example 1

*t* denotes time and \({\varvec{X}}_t\) denotes the amount of payments made in the time interval [0, *t*], see e.g. Mack (1993). More specifically, the Chain–Ladder model assumes that \({\varvec{X}}_t\) is \(n\times 1\), \(({\varvec{X}}_t)_i \ge 0\) and that \({\varvec{\Sigma }}_t := {\mathrm {diag}}({\varvec{X}}_t)\). In this situation we may analyse the variance of the variance parameter estimator, \({{\hat{\sigma }}}_t^2\), both conditional on \({\varvec{X}}_t\) and unconditionally, using Proposition 1 and its corollaries. In either situation the above results provide us with finite sample bounds as well as ascertaining that \({{\hat{\sigma }}}_t^2\) is a (mean square) consistent estimator of \(\sigma _t^2\). A practical application of the finite sample bounds for the Chain–Ladder model concerns the discussion of the appropriateness of using conditional versus unconditional prediction error, see e.g. Buchwalder et al. (2006) and Lindholm et al. (2019). Moreover, the relevance of finite sample bounds is also apparent in many real-world insurance applications using highly aggregated data, where the sample size is often around \(n = 10\) with \(p_x = 1 \ll n\).

Further, the Chain–Ladder model assumes a diagonal structure of \({\varvec{\Sigma }}_t\) which, even though it implies \({\mathrm {diag}}({\varvec{V}}) = {\mathrm {diag}}({\varvec{P}})\), does not make the variance bounds any more explicit.
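Under the weighting \({\varvec{\Sigma }}_t = {\mathrm {diag}}({\varvec{X}}_t)\), the GLS slope estimate reduces to the classical Chain–Ladder development factor and \({{\hat{\sigma }}}_t^2\) to Mack's (1993) weighted residual form. A minimal sketch for a single development period (the function name is illustrative):

```python
import numpy as np

def mack_sigma2(C_t, C_next):
    """Mack's (1993) variance parameter estimator for one development period.

    With weights Sigma_t = diag(C_t), the GLS slope is the Chain-Ladder factor
    f_hat = sum(C_next) / sum(C_t), and sigma2_hat is the C_t-weighted residual
    sum of squares divided by n - 1 (here p_x = 1, a single development factor).
    """
    C_t, C_next = np.asarray(C_t, float), np.asarray(C_next, float)
    n = C_t.size
    f_hat = C_next.sum() / C_t.sum()
    return (C_t * (C_next / C_t - f_hat) ** 2).sum() / (n - 1)
```

If all individual development ratios coincide, the estimator is zero; any dispersion in the ratios yields a strictly positive estimate.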

For more on models other than the distribution-free Chain–Ladder model that are used in an insurance context, see e.g. Kremer (1984) and Lindholm et al. (2017).

### Example 2

### Example 3

*given* that you have somehow arrived at a variance expression of the form (4). Purely as an illustration, consider the following special case of a mixed effects model introduced in Atiqullah (1962b):

*n*, and \({\hat{\sigma }}_\theta ^2\) is, hence, not \(L^2\)-consistent. It is, however, possible to obtain sharper finite sample bounds on \({\text {Var}}({{\hat{\sigma }}}_\theta ^2)\) by using other trace inequalities—a matter not pursued further in the present note. For other examples and a deeper discussion concerning the performance of estimators of variance components, see e.g. Christensen (2019, Ex. 5.1.1 and Sec. 5.4).

Even though the above mixed effects example is only intended for illustration purposes, without any claim of practical relevance, it is still worth commenting on the relation to the results in Li (2012). In Li (2012), mixed effects nonparametric regression is considered w.r.t. the *MSE* of the *total* variance parameter estimator. That is, the situation with \(\sigma _\epsilon = \sigma _\theta \) is considered. Consequently, the decomposition from Atiqullah (1962b) is not covered as a special case. Moreover, by setting \(\sigma _\epsilon = \sigma _\theta \) above, the resulting estimator is not among the estimators covered in Li (2012, Thm. 1). For more on the differences compared with Li (2012) (and Dette et al. 1998), see Remark 2 above.

## Notes

### Acknowledgements

Open access funding provided by Stockholm University. The authors acknowledge beneficial discussion with Rolf Sundberg. The authors are also most grateful to Ronald Christensen for the comments on an earlier version of the paper which resulted in the split of \(p_u \le n/2\) and \(p_u > n/2\) in Lemma 2, and for pointing out that we had missed that \({\varvec{R}}\) from the current Example 3 is idempotent, which made it possible to more easily connect Example 3 to the results in the current paper and to the reference Christensen (2019). We also want to acknowledge the comments from an anonymous reviewer which pointed out a number of typos and provided comments which we believe have improved the paper.

### Compliance with ethical standards

### Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

## References

- Atiqullah M (1962a) The estimation of residual variance in quadratically balanced least-squares problems and the robustness of the F-test. Biometrika 49(1–2):83–91
- Atiqullah M (1962b) On the effect of non-normality on the estimation of components of variance. J R Stat Soc Ser B (Methodol) 24:140–147
- Buchwalder M, Bühlmann H, Merz M, Wüthrich MV (2006) The mean square error of prediction in the chain ladder reserving method (Mack and Murphy revisited). ASTIN Bull 36(2):521–542
- Christensen R (2019) Advanced linear modeling: statistical learning and dependent data, 3rd edn. Springer, New York
- Cramér H (1946) Mathematical methods of statistics. Princeton University Press, Princeton
- Dette H, Munk A, Wagner T (1998) Estimating the variance in nonparametric regression—What is a reasonable choice? J R Stat Soc Ser B (Stat Methodol) 60(4):751–764
- Eaton ML (1983) Multivariate statistics: a vector space approach. Wiley, New York
- Kremer E (1984) A class of autoregressive models for predicting the final claims amount. Insur Math Econ 3(2):111–119
- Li Z (2012) A comparison of error variance estimates in nonparametric mixed models. Commun Stat Theory Methods 41(4):778–790
- Lindholm M, Lindskog F, Wahl F (2017) Valuation of non-life liabilities from claims triangles. Risks 5(3):39
- Lindholm M, Lindskog F, Wahl F (2019) Estimation of conditional mean squared error of prediction for claims reserving. Ann Actuar Sci. https://doi.org/10.1017/S174849951900006X
- Mack T (1993) Distribution-free calculation of the standard error of chain ladder reserve estimates. ASTIN Bull 23(2):213–225
- Mathai AM, Provost SB, Hayakawa T (2012) Bilinear forms and zonal polynomials, vol 102. Springer, Berlin
- Plackett RL (1960) Principles of regression analysis. Clarendon Press, Oxford
- Rao CR (1970) Estimation of heteroscedastic variances in linear models. J Am Stat Assoc 65(329):161–172
- Rao CR (1971) Minimum variance quadratic unbiased estimation of variance components. J Multivar Anal 1:445–456
- Seber G, Lee A (2003) Linear regression analysis, 2nd edn. Wiley, Hoboken
- Wolkowicz H, Styan GPH (1980) Bounds for eigenvalues using traces. Linear Algebra Appl 29:471–506

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.