In this chapter, some asymptotic optimality theory of hypothesis testing is developed. We consider testing one sequence of distributions against another (the asymptotic version of testing a simple hypothesis against a simple alternative). It turns out that this problem degenerates if the two sequences are too close together or too far apart. The non-degenerate situation can be characterized in terms of a suitable distance or metric between the distributions of the two sequences. Two such metrics, the total variation and the Hellinger metric, will be introduced below.
We begin by considering some of the basic metrics for probability distributions that are useful in statistics. Fundamental inequalities relating these metrics are developed, from which some large sample implications can be derived. We now recall the definition of a metric space; also see Section A.2 in the appendix.
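For discrete distributions on a common finite support, the two metrics mentioned above can be computed directly. The following sketch (using NumPy; the code and the example distributions are illustrative, not from the text) computes both distances and checks one standard inequality relating them, under the normalization in which the squared Hellinger distance is half the sum of squared differences of root probabilities.

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance: sup_A |P(A) - Q(A)| = (1/2) * sum_i |p_i - q_i|."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q):
    """Hellinger distance with H^2(P,Q) = (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2,
    so that 0 <= H <= 1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Illustrative example: two Bernoulli distributions, written as vectors of
# probabilities over the two outcomes.
p = [0.5, 0.5]
q = [0.9, 0.1]

tv = total_variation(p, q)  # = 0.5 * (|0.5-0.9| + |0.5-0.1|) = 0.4
h = hellinger(p, q)

# A standard pair of inequalities relating the two metrics under this
# normalization: H^2 <= TV <= H * sqrt(2 - H^2).
assert h ** 2 <= tv <= h * np.sqrt(2 - h ** 2)
```

Under this normalization both distances lie in [0, 1], and the displayed inequalities show that a sequence of pairs of distributions is contiguous or separated in one metric exactly when it is in the other, which is why either metric can be used to characterize the non-degenerate testing problem described above.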
© 2005 Springer Science+Business Media, LLC
(2005). Large Sample Optimality. In: Testing Statistical Hypotheses. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/0-387-27605-X_13
Print ISBN: 978-0-387-98864-1
Online ISBN: 978-0-387-27605-2