Unbiased Estimation

  • Robert W. Keener
Chapter
Part of the Springer Texts in Statistics book series (STS)

Abstract

Example 3.1 shows that a clean comparison between two estimators is not always possible: if their risk functions cross, one estimator will be preferable for θ in some subset of the parameter space Ω, and the other will be preferable in a different subset of Ω. In some cases this problem will not arise if both estimators are unbiased. We may then be able to identify a best unbiased estimator. These ideas and limitations of the theory are discussed in Sections 4.1 and 4.2. Sections 4.3 and 4.4 concern distribution theory and unbiased estimation for the normal one-sample problem in which data are i.i.d. from a normal distribution. Sections 4.5 and 4.6 introduce Fisher information and derive lower bounds for the variance of unbiased estimators.
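To make the abstract's themes concrete, here is a small Monte Carlo sketch (an illustration, not material from the chapter): for i.i.d. normal data it contrasts the biased maximum likelihood variance estimator with the unbiased one, and checks numerically that the sample mean attains the Cramér–Rao variance bound $\sigma^2/n$ implied by the Fisher information $I(\mu) = n/\sigma^2$. The sample size, $\sigma$, and replication count are arbitrary choices for the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 10, 2.0, 200_000  # illustrative values

# reps independent samples, each of size n, from N(0, sigma^2)
samples = rng.normal(0.0, sigma, size=(reps, n))

# MLE of the variance (divisor n) is biased; divisor n-1 gives the
# unbiased estimator.
var_mle = samples.var(axis=1, ddof=0)
var_unbiased = samples.var(axis=1, ddof=1)

print(var_mle.mean())       # ~ (n-1)/n * sigma^2 = 3.6
print(var_unbiased.mean())  # ~ sigma^2 = 4.0

# Fisher information for the mean of n i.i.d. N(mu, sigma^2) observations
# is n/sigma^2, so the Cramér–Rao lower bound for any unbiased estimator
# of mu is sigma^2/n = 0.4; the sample mean attains it.
print(samples.mean(axis=1).var())  # ~ sigma^2 / n = 0.4
```

The simulated averages match the theory: the divisor-$n$ estimator underestimates $\sigma^2$ by the factor $(n-1)/n$, while the sample mean's variance sits at the lower bound.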

Keywords

Success Probability · Unbiased Estimation · Fisher Information · Variance Bound · Unbiased Estimator
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer New York 2009

Authors and Affiliations

  • Robert W. Keener
  1. Department of Statistics, University of Michigan, Ann Arbor, USA
