Some Numerical Illustrations of Fisher’s Theory of Statistical Estimation

  • George Runger
Part of the Lecture Notes in Statistics book series (LNS, volume 1)


In the 1925 paper [CP 42] discussed in the previous lecture, Fisher redefines the efficiency of a statistic T as the limiting value of the ratio I_T/I_S, where I_T is the amount of information contained in T and I_S is that contained in the whole sample. He discusses efficiency in both small and large samples, and shows that if no sufficient statistic exists, some loss of information necessarily ensues when a single estimate is substituted for the original data from which it was computed.
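The ratio I_T/I_S can be illustrated numerically by simulation. The sketch below (an illustrative assumption, not taken from the lecture) compares the sample median with the sample mean as estimators of a normal location parameter: the mean is sufficient, so its efficiency is 1, while the median attains the limiting efficiency 2/π ≈ 0.637. The ratio of the estimators' sampling variances serves as a Monte Carlo stand-in for I_T/I_S.

```python
import math
import random
import statistics

# Monte Carlo sketch: efficiency of the sample median relative to the
# sample mean for a normal location parameter. Since the mean is
# sufficient here, the variance ratio var(mean)/var(median) estimates
# the Fisher efficiency I_T/I_S of the median, whose limiting value
# is 2/pi -- an instance of the information loss Fisher describes
# when no sufficient statistic is used.

random.seed(42)          # fixed seed for reproducibility
n, reps = 100, 2000      # sample size and number of replications

mean_ests, median_ests = [], []
for _ in range(reps):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean_ests.append(statistics.fmean(sample))
    median_ests.append(statistics.median(sample))

var_mean = statistics.pvariance(mean_ests)
var_median = statistics.pvariance(median_ests)
efficiency = var_mean / var_median

print(f"estimated efficiency of the median: {efficiency:.3f}")
print(f"limiting value 2/pi:                {2 / math.pi:.3f}")
```

With a few thousand replications the estimated ratio settles near 0.64, in line with the asymptotic value.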







Copyright information

© Springer-Verlag Berlin Heidelberg 1980
