Second Order Efficiency

  • Rabi Bhattacharya
  • Manfred Denker
Part of the DMV Seminar book series (OWS, volume 14)


Under the usual regularity conditions an estimator \({\hat{\theta}_n}\) of a real-valued parameter θ is asymptotically efficient if
$$ {\hat{\theta}_n} - \theta = \Bigl(\frac{1}{n}\sum\limits_{j=1}^{n} \frac{\partial \log f(X_j;\theta)}{\partial \theta}\Bigr) \Big/ I(\theta) + \varepsilon_n \qquad (4.1) $$
where \( \sqrt{n}\,\varepsilon_n \to 0 \) in \(P_\theta\)-probability as n → ∞, and I(θ) is Fisher's information
$$ I(\theta) = E_\theta\Bigl(\frac{\partial \log f(X_1;\theta)}{\partial \theta}\Bigr)^{2} = -E_\theta\Bigl(\frac{\partial^2 \log f(X_1;\theta)}{\partial \theta^2}\Bigr). $$
Here X1, X2, ... are i.i.d. observations with values in some measure space (χ, B, μ) and, for each θ in the parameter space Θ = (a, b) (−∞ ≤ a < b ≤ ∞), f(x; θ) is the density of the common distribution of the observations with respect to the sigma-finite measure μ. As before, for each θ ∈ Θ, (Ω, F, \(P_\theta\)) is a probability space on which the \(X_j\)'s are defined. In particular, any consistent solution of the likelihood equation
$$ \sum\limits_{j=1}^{n} \frac{\partial \log f(X_j;\theta')}{\partial \theta'} = 0 \qquad (4.3) $$
is asymptotically efficient. If there is a unique solution to (4.3), this solution is the maximum likelihood estimator. But even in this case of a unique solution there are other estimators which arise naturally and satisfy (4.1). For example, in attempting to solve (4.3), a common procedure is to take an initial estimator \({\tilde \theta _n}\) and use the Newton-Raphson method to obtain a first approximation \(\theta _n^{*}\) to the solution of (4.3):
$$ \theta_n^{*} := {\tilde{\theta}_n} - \left. \left( \frac{\sum\limits_{j=1}^{n} \partial \log f(X_j;\theta') / \partial \theta'}{\sum\limits_{j=1}^{n} \partial^2 \log f(X_j;\theta') / \partial \theta'^2} \right) \right|_{\theta' = \tilde{\theta}_n}. $$
If, in addition to the usual regularity conditions, \( \sqrt{n}({\tilde{\theta}_n} - \theta) \) is stochastically bounded under \(P_\theta\) (i.e., if for every ε > 0 there exists a constant \(A_\varepsilon\) such that \( P_\theta(|\sqrt{n}({\tilde{\theta}_n} - \theta)| > A_\varepsilon) < \varepsilon \) for all n), then \(\theta_n^{*}\) is asymptotically efficient.
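The one-step Newton-Raphson construction above can be sketched numerically. The following is a minimal illustration, not taken from the text: it assumes a Cauchy location model f(x; θ) = 1/(π(1 + (x − θ)²)), for which the likelihood equation has no closed-form solution, and uses the sample median as the √n-consistent initial estimator; the model choice, sample size, and function names are all illustrative assumptions.

```python
import numpy as np

def score(x, theta):
    # d/dtheta log f(x; theta) for the Cauchy location density
    # f(x; theta) = 1 / (pi * (1 + (x - theta)^2))
    u = x - theta
    return 2.0 * u / (1.0 + u**2)

def score_deriv(x, theta):
    # d^2/dtheta^2 log f(x; theta); its sum over the sample is the
    # denominator of the Newton-Raphson step
    u = x - theta
    return 2.0 * (u**2 - 1.0) / (1.0 + u**2) ** 2

def one_step_estimator(x):
    # One Newton-Raphson step from a sqrt(n)-consistent start (the median)
    theta0 = np.median(x)
    return theta0 - score(x, theta0).sum() / score_deriv(x, theta0).sum()

rng = np.random.default_rng(0)
theta_true = 1.5
x = theta_true + rng.standard_cauchy(size=2000)
print(one_step_estimator(x))
```

Although the median is already √n-consistent (so it is stochastically bounded after centering and scaling), the single Newton step upgrades it to full asymptotic efficiency, with asymptotic variance 1/(nI(θ)) = 2/n here since I(θ) = 1/2 for the Cauchy location model.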





Copyright information

© Birkhäuser Verlag Basel 1990

Authors and Affiliations

  • Rabi Bhattacharya (1)
  • Manfred Denker (2)
  1. Department of Mathematics, Indiana University, Bloomington, USA
  2. Institut für Mathematische Stochastik, Göttingen, Germany
