Convergence

Chapter
Part of the Springer Texts in Statistics book series (STS)

Abstract

From the introductory chapter we recall that the empirical basis of probability theory, upon which the modeling of random phenomena rests, is the stabilization of relative frequencies. In statistics a rule of thumb is to base one's decisions or conclusions on large samples whenever possible, because large samples have a smoothing effect: the wild randomness that is always present in small samples is smeared out. The frequent use of the normal distribution (less so nowadays, since computers can do a lot of numerical work within a reasonable time) rests on the fact that the arithmetic mean of a measurement in a sample is approximately normal when the sample is large. And so on. All of this triggers the notion of convergence. Let $X_1, X_2, \ldots$ be random variables. What can be said about their sum, $S_n$, as the number of summands increases ($n \to \infty$)? What can be said about the largest of them, $\max\{X_1, X_2, \ldots, X_n\}$, as $n \to \infty$? What about the limit of sums of sequences? About functions of converging sequences? In mathematics one discusses pointwise convergence and convergence of integrals. When, if at all, can we assert that the integral of a limit equals the limit of the integrals? And what do such statements amount to in the context of random variables?
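The stabilization of relative frequencies can be made concrete with a short simulation. The following Python sketch (an illustration added here, not part of the chapter) tosses a fair coin and prints the running relative frequency of heads at increasing sample sizes; the output drifts toward 1/2 as $n$ grows, the empirical phenomenon underlying the law of large numbers.

```python
import random

# Illustrative sketch: the relative frequency of heads in n fair-coin
# tosses stabilizes near 1/2 as n grows.
random.seed(0)  # fixed seed for a reproducible illustration

heads, tosses = 0, 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000):
    while tosses < checkpoint:
        heads += random.random() < 0.5  # one fair-coin toss
        tosses += 1
    print(f"n = {tosses:>6}: relative frequency of heads = {heads / tosses:.4f}")
```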

Keywords

Moment Generating Function · Complete Convergence · Continuity Point · Uniform Integrability · Continuity Theorem


Copyright information

© Springer Science+Business Media, Inc. 2005
