Part of the book series: Springer Texts in Statistics (STS)

Abstract

Given a parameter of interest, such as a population mean μ or population proportion p, the objective of point estimation is to use a sample to compute a number that represents in some sense a good guess for the true value of the parameter. The resulting number is called a point estimate. In Section 7.1, we present some general concepts of point estimation. In Section 7.2, we describe and illustrate two important methods for obtaining point estimates: the method of moments and the method of maximum likelihood.
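
As a concrete illustration (not from the chapter), the minimal Python sketch below computes two different point estimates of θ from the same sample under an assumed Uniform(0, θ) model: the method-of-moments estimate \( 2\bar{x} \) and the maximum likelihood estimate \( \max(x_i) \). The model, sample size, and seed are illustrative choices rather than the chapter's own examples.

```python
import numpy as np

rng = np.random.default_rng(7)

theta_true = 5.0                            # parameter we pretend not to know
x = rng.uniform(0.0, theta_true, size=50)   # observed sample

# Method of moments: E[X] = theta / 2, so equate x-bar to theta / 2 and solve.
theta_mom = 2.0 * x.mean()

# Maximum likelihood: the likelihood (1/theta)^n, valid only for theta >= max(x_i),
# is decreasing in theta, so it is maximized at the sample maximum.
theta_mle = x.max()

print(f"method of moments:  {theta_mom:.3f}")
print(f"maximum likelihood: {theta_mle:.3f}")
```

The two estimates generally differ, which illustrates why general criteria for comparing estimators (the subject of Section 7.1) are useful.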

Notes

  1. Following earlier notation, we could use \( \hat{\Theta} \) (an uppercase theta) for the estimator, but this is cumbersome to write.

  2. Since ln[g(x)] is a strictly increasing function of g(x), the value of x that maximizes ln[g(x)] also maximizes g(x) itself. In statistics, taking the logarithm frequently turns a product into a sum, which is easier to work with (see the sketch following these notes).

  3. This conclusion requires checking the second derivative, but the details are omitted.
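
Notes 2 and 3 can be checked numerically. The sketch below is not from the text; it assumes a hypothetical exponential sample with density \( \lambda e^{-\lambda x} \). The likelihood \( \lambda^n e^{-\lambda \sum x_i} \) and its logarithm peak at the same value \( \hat{\lambda} = 1/\bar{x} \), and the second derivative of the log-likelihood, \( -n/\lambda^2 \), is negative there, confirming a maximum.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=40)   # hypothetical sample; true lambda = 0.5
n, s = x.size, x.sum()

lam = np.linspace(0.05, 2.0, 2000)        # grid of candidate values of lambda

# The likelihood is a product of densities; its logarithm is a sum:
#   L(lambda)    = prod_i lambda * exp(-lambda * x_i) = lambda^n * exp(-lambda * s)
#   ln L(lambda) = n * ln(lambda) - lambda * s
likelihood = lam**n * np.exp(-lam * s)
log_likelihood = n * np.log(lam) - lam * s

# Because ln is strictly increasing, both curves peak at the same lambda,
# matching the closed-form MLE  lambda_hat = n / sum(x_i) = 1 / x-bar  (Note 2).
print(lam[np.argmax(likelihood)], lam[np.argmax(log_likelihood)], n / s)

# Second-derivative check (Note 3): d^2/dlambda^2 [ln L] = -n / lambda^2 < 0 at lambda_hat,
# so the stationary point is a maximum rather than a minimum.
lam_hat = n / s
print(-n / lam_hat**2)
```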

Author information

Corresponding author

Correspondence to Jay L. Devore.

Copyright information

© 2012 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Devore, J.L., Berk, K.N. (2012). Point Estimation. In: Modern Mathematical Statistics with Applications. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-0391-3_7
