Example 3.1 shows that a clean comparison between two estimators is not always possible: if their risk functions cross, one estimator will be preferable for θ in some subset of the parameter space Ω, and the other will be preferable in a different subset of Ω. In some cases this problem does not arise if attention is restricted to unbiased estimators, and we may then be able to identify a best unbiased estimator. These ideas, and the limitations of the theory, are discussed in Sections 4.1 and 4.2. Sections 4.3 and 4.4 concern distribution theory and unbiased estimation for the normal one-sample problem, in which data are i.i.d. from a normal distribution. Sections 4.5 and 4.6 introduce Fisher information and derive lower bounds for the variance of unbiased estimators.
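The crossing-risk phenomenon can be sketched numerically. The comparison below is an illustrative toy example (the specific values of σ², n, and the shrinkage factor c are assumptions, not taken from the text): under squared-error loss, the sample mean X̄ of n i.i.d. N(θ, σ²) observations has constant risk σ²/n, while a shrinkage estimator cX̄ trades variance for bias, so its risk depends on θ and crosses that of X̄.

```python
import numpy as np

# Illustrative toy values (not from the text): estimate a normal mean theta
# from n i.i.d. N(theta, sigma2) observations under squared-error loss.
sigma2, n, c = 1.0, 10, 0.8

def risk_mean(theta):
    """MSE of the sample mean X-bar: variance sigma2/n, zero bias.

    This constant also equals the Cramer-Rao lower bound, since the Fisher
    information about theta in the full sample is n/sigma2.
    """
    return sigma2 / n + 0.0 * np.asarray(theta)

def risk_shrunk(theta):
    """MSE of the shrinkage estimator c * X-bar: variance plus squared bias."""
    return c**2 * sigma2 / n + (c - 1) ** 2 * np.asarray(theta) ** 2

theta = np.linspace(-2.0, 2.0, 401)
# Near theta = 0 the shrinkage estimator has smaller risk; far from 0 it has
# larger risk, so the two risk functions cross and neither estimator
# dominates the other over the whole parameter space.
wins_near_zero = risk_shrunk(0.0) < risk_mean(0.0)
loses_far_out = risk_shrunk(2.0) > risk_mean(2.0)
```

Restricting attention to unbiased estimators removes this ambiguity here: cX̄ is biased for c ≠ 1, and among unbiased estimators X̄ attains the variance lower bound σ²/n discussed in Sections 4.5 and 4.6.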
Keywords: Success Probability · Unbiased Estimation · Fisher Information · Variance Bound · Unbiased Estimator