
Large Sample Theory: Freedom-Equation Hypotheses

  • George A. F. Seber
Part of the Springer Series in Statistics book series (SSS)

Abstract

In this chapter we assume once again that \(\boldsymbol{\theta }\in W\). However, our hypothesis H now takes the form of freedom equations, namely \(\boldsymbol{\theta }=\boldsymbol{\theta }(\boldsymbol{\alpha })\), where \(\boldsymbol{\alpha }= (\alpha _{1},\alpha _{2},\ldots,\alpha _{p-q})'\). We require the following additional notation. Let \(\boldsymbol{\Theta }_{\boldsymbol{\alpha }}\) be the \(p \times (p - q)\) matrix with (i, j)th element \(\partial \theta _{i}/\partial \alpha _{j}\), which we assume to have rank \(p - q\). As before, \(L(\boldsymbol{\theta }) =\log \prod _{i=1}^{n}f(x_{i},\boldsymbol{\theta })\) is the log-likelihood function. Let \(\mathbf{D}_{\boldsymbol{\theta }}L(\boldsymbol{\theta })\) and \(\mathbf{D}_{\boldsymbol{\alpha }}L(\boldsymbol{\theta }(\boldsymbol{\alpha }))\) be the column vectors whose ith elements are \(\partial L(\boldsymbol{\theta })/\partial \theta _{i}\) and \(\partial L(\boldsymbol{\theta }(\boldsymbol{\alpha }))/\partial \alpha _{i}\), respectively. As before, \(\mathbf{B}_{\boldsymbol{\theta }}\) is the p × p information matrix with (i, j)th element
$$\displaystyle{-n^{-1}E_{\boldsymbol{\theta }}\left [\frac{\partial ^{2}L(\boldsymbol{\theta })} {\partial \theta _{i}\partial \theta _{j}} \right ] = -E\left [\frac{\partial ^{2}\log \,f(x,\boldsymbol{\theta })} {\partial \theta _{i}\partial \theta _{j}} \right ],}$$
and we add \(\mathbf{B}_{\boldsymbol{\alpha }}\), the \((p - q) \times (p - q)\) information matrix with (i, j)th element \(-E[\partial ^{2}\log \,f(x,\boldsymbol{\theta }(\boldsymbol{\alpha }))/\partial \alpha _{i}\partial \alpha _{j}]\). To simplify the notation we use \([\cdot ]_{\boldsymbol{\alpha }}\) to denote that the matrix in square brackets is evaluated at \(\boldsymbol{\alpha }\), for example
$$\displaystyle{\mathbf{B}_{\boldsymbol{\alpha }} = [\boldsymbol{\Theta }'\mathbf{B}_{\boldsymbol{\theta }}\boldsymbol{\Theta }]_{\boldsymbol{\alpha }} =\boldsymbol{\Theta }_{\boldsymbol{\alpha }}'\mathbf{B}_{\boldsymbol{\theta }(\boldsymbol{\alpha })}\boldsymbol{\Theta }_{\boldsymbol{\alpha }}.}$$
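For instance (a hypothetical illustration, not from the original text), take p = 2 and q = 1 with the freedom equation \(\boldsymbol{\theta }(\alpha ) = (\alpha,\alpha ^{2})'\); then
$$\displaystyle{\boldsymbol{\Theta }_{\alpha } = \left (\begin{array}{c} 1\\ 2\alpha \end{array} \right ),\qquad \mathbf{B}_{\alpha } =\boldsymbol{\Theta }_{\alpha }'\mathbf{B}_{\boldsymbol{\theta }(\alpha )}\boldsymbol{\Theta }_{\alpha } = b_{11} + 4\alpha b_{12} + 4\alpha ^{2}b_{22},}$$
where \(b_{ij}\) is the (i, j)th element of \(\mathbf{B}_{\boldsymbol{\theta }(\alpha )}\) and we use its symmetry.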
We note that, by the chain rule,
$$\displaystyle{\mathbf{D}_{\boldsymbol{\alpha }}L(\boldsymbol{\theta }(\boldsymbol{\alpha })) =\boldsymbol{\Theta }_{\boldsymbol{\alpha }}'\,[\mathbf{D}_{\boldsymbol{\theta }}L(\boldsymbol{\theta })]_{\boldsymbol{\alpha }}.}$$
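This identity is easy to check numerically. The following is a minimal sketch (not from the chapter) that verifies it by central finite differences for an assumed normal model with mean \(\alpha \) and variance \(\alpha ^{2}\), i.e. the freedom equation \(\boldsymbol{\theta }(\alpha ) = (\alpha,\alpha ^{2})'\) used above; all function names are illustrative.

```python
import numpy as np

# Toy model: x_i ~ N(theta_1, theta_2) with freedom equation
# theta(alpha) = (alpha, alpha^2)', so p = 2 and p - q = 1.
rng = np.random.default_rng(0)
alpha = 1.3
theta = np.array([alpha, alpha**2])                 # (mean, variance)
x = rng.normal(theta[0], np.sqrt(theta[1]), size=200)

def loglik(theta, x):
    """Log likelihood L(theta) of an i.i.d. N(mu, var) sample."""
    mu, var = theta
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def grad(f, t, h=1e-6):
    """Central-difference gradient of a scalar function f at t."""
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    for i in range(t.size):
        e = np.zeros_like(t)
        e[i] = h
        g[i] = (f(t + e) - f(t - e)) / (2 * h)
    return g

# D_theta L(theta), evaluated at theta = theta(alpha)
D_theta = grad(lambda t: loglik(t, x), theta)

# Jacobian Theta_alpha: (i, j)th element is d theta_i / d alpha_j
Theta_alpha = np.array([[1.0], [2 * alpha]])

# Chain rule: D_alpha L(theta(alpha)) = Theta_alpha' D_theta L(theta)
lhs = grad(lambda a: loglik(np.array([a[0], a[0] ** 2]), x), np.array([alpha]))
rhs = Theta_alpha.T @ D_theta
print(np.allclose(lhs, rhs, rtol=1e-4))             # True, up to rounding
```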

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • George A. F. Seber
  1. Department of Statistics, The University of Auckland, Auckland, New Zealand
