Distributions, Moments, and Statistics

Abstract

The moments of probability distributions represent the link between theory and observations, since they are readily accessible to measurement. Rather abstract-looking generating functions have become important as highly versatile concepts and tools for solving specific problems. The probability distributions that are most important in applications are reviewed. Then the central limit theorem and the law of large numbers are presented. The chapter closes with a brief digression into mathematical statistics and shows how to handle real-world samples that cover a part, sometimes only a small part, of sample space.

Everything should be made as simple as possible, but not simpler.

Attributed to Albert Einstein, 1950


Notes

  1.

    A proof is given in [84, pp. 164–166].

  2.

    Since the moments centered around the expectation value will be used more frequently than the raw moments, we denote them by \(\mu_{r}\) and reserve \(\hat{\mu }_{r}\) for the raw moments. The first centered moment vanishes and, since confusion is unlikely, we shall write the expectation value μ instead of \(\hat{\mu }_{1}\). The r-th moment of a distribution is also called the moment of order r.

  3.

    In contrast to the expectation value, variance, and standard deviation, skewness and kurtosis are not uniquely defined, so it is necessary to check an author's definitions carefully when reading the literature.

  4.

    The definition of the Pochhammer symbol is ambiguous [308, p. 414]. In combinatorics, the Pochhammer symbol \((x)_{n}\) is used for the falling factorial,

    $$\displaystyle{ (x)_{n} = x(x - 1)(x - 2)\cdots (x - n + 1) = \frac{\varGamma (x + 1)} {\varGamma (x - n + 1)}, }$$

    whereas the rising factorial is

    $$\displaystyle{ x^{(n)} = x(x + 1)(x + 2)\cdots (x + n - 1) = \frac{\varGamma (x + n)} {\varGamma (x)}. }$$

    We also mention a useful identity relating the two factorials:

    $$\displaystyle{ (-x)^{(n)} = (-1)^{n}\,(x)_{n}. }$$

    In the theory of special functions in physics and chemistry, in particular in the context of the hypergeometric functions, however, \((x)_{n}\) is used for the rising factorial. Here, we shall use the unambiguous symbols from combinatorics and we shall say whether we mean the rising or the falling factorial. Clearly, expressions in terms of Gamma functions are unambiguous.
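
    A minimal Python sketch (the helper names fall_fact and rise_fact are ours, not the book's) can be used to check these relations numerically:

        from math import gamma, prod

        def fall_fact(x, n):
            # falling factorial (x)_n = x (x-1) ... (x-n+1)
            return prod(x - k for k in range(n))

        def rise_fact(x, n):
            # rising factorial x^(n) = x (x+1) ... (x+n-1)
            return prod(x + k for k in range(n))

        x, n = 6.5, 4
        assert abs(fall_fact(x, n) - gamma(x + 1) / gamma(x - n + 1)) < 1e-8
        assert abs(rise_fact(x, n) - gamma(x + n) / gamma(x)) < 1e-8
        # the identity between rising and falling factorials
        assert abs(rise_fact(-x, n) - (-1)**n * fall_fact(x, n)) < 1e-8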

  5.

    The logarithm is taken to base 2 and is commonly called the binary logarithm or logarithmus dualis, \(\log_2 \equiv \mathrm{lb} \equiv \mathrm{ld}\), with the dimensionless unit 1 binary digit (bit). The conventional unit of information in informatics is the byte: 1 byte (B) = 8 bits, tantamount to the coding capacity of an eight-digit binary sequence. Although there is little chance of confusion, one should be aware that in the International System of Units, B is the abbreviation for the acoustical unit ‘bel’, which is the unit for measuring the signal strength of sound.

  6.

    Two remarks are worth noting: (2.25) is Max Planck’s expression for the entropy in statistical mechanics, although it has been carved on Boltzmann’s tombstone, and W is called a probability despite the fact that it is not normalized, i.e., W ≥ 1.

  7.

    An isolated system exchanges neither matter nor energy with its environment. For isolated, closed, and open systems, see also Sect. 4.3.

  8.

    Since we shall often need the derivatives in this section, we shall use the shorthand notations \(\mathrm{d}g(s)/\mathrm{d}s = g'(s)\), \(\mathrm{d}^{2}g(s)/\mathrm{d}s^{2} = g''(s)\), and \(\mathrm{d}^{j}g(s)/\mathrm{d}s^{j} = g^{(j)}(s)\), and for simplicity also \((\mathrm{d}g/\mathrm{d}s)\vert_{s=k} = g'(k)\) and \((\mathrm{d}^{2}g/\mathrm{d}s^{2})\vert_{s=k} = g''(k)\) (\(k \in \mathbb{N}\)).
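
    For instance, evaluating such derivatives at s = 1 yields the moments of a probability generating function; a short sympy sketch for the Poisson pgf \(g(s) = \mathrm{e}^{\alpha(s-1)}\) (our example, using the standard relations \(\mathrm{E} = g'(1)\) and \(\mathrm{var} = g''(1) + g'(1) - g'(1)^{2}\)):

        import sympy as sp

        s, alpha = sp.symbols('s alpha', positive=True)
        g = sp.exp(alpha * (s - 1))          # pgf of a Poisson distribution

        g1 = sp.diff(g, s).subs(s, 1)        # g'(1)
        g2 = sp.diff(g, s, 2).subs(s, 1)     # g''(1)

        print(sp.simplify(g1))                 # mean: alpha
        print(sp.simplify(g2 + g1 - g1**2))    # variance: also alpha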

  9.

    We remark that the same symbol s is used for the Laplace transformed variable and the dummy variable of probability generating functions (Sect. 2.2) in order to be consistent with the literature. We shall point out the difference wherever confusion is possible.

  10.

    The difference between the Fourier transform \(\tilde{f}(k)\) and the characteristic function ϕ(s) of a function f(x), viz.,

    $$\displaystyle{ \tilde{f}(k) = \frac{1} {\sqrt{2\pi }}\int \nolimits _{-\infty }^{+\infty }f(x)\exp (+\mathrm{i}kx)\,\mathrm{d}x\quad \mathrm{and}\quad \phi (s) =\int \nolimits _{ -\infty }^{\infty }f(x)\exp (\mathrm{i}sx)\,\mathrm{d}x, }$$

    is only a matter of the factor \((\sqrt{2\pi })^{-1}\). The Fourier convention used here is the same as the one in modern physics. For other conventions, see, e.g., [568] and Sect. 3.1.6.
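
    With these two definitions, \(\phi(s) = \sqrt{2\pi}\,\tilde{f}(s)\). The factor can be checked numerically for the standard normal density, whose characteristic function is \(\phi(s) = \mathrm{e}^{-s^{2}/2}\) (a scipy sketch; the test points and tolerance are our choices):

        import numpy as np
        from scipy.integrate import quad

        def phi(s):
            # characteristic function of the standard normal, by integration
            f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
            re, _ = quad(lambda x: f(x) * np.cos(s * x), -np.inf, np.inf)
            im, _ = quad(lambda x: f(x) * np.sin(s * x), -np.inf, np.inf)
            return re + 1j * im

        for s in (0.0, 0.5, 2.0):
            assert abs(phi(s) - np.exp(-s**2 / 2)) < 1e-8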

  11.

    The Taylor series \(f(s) =\sum _{ n=0}^{\infty }\dfrac{f^{(n)}(a)} {n!} (s - a)^{n}\) is named after the English mathematician Brook Taylor, who invented the calculus of finite differences in 1715. Earlier series expansions were already in use in the seventeenth century. The MacLaurin series, in particular, is a Taylor expansion centered around the origin, a = 0, named after the eighteenth century Scottish mathematician Colin MacLaurin.

  12.

    In order to be able to solve the problems, note the following basic infinite series:

    $$\displaystyle\begin{array}{rcl} & \mathrm{e} =\sum _{ n=0}^{\infty }\frac{1} {n!}\,,\quad \mathrm{e}^{x} =\sum _{ n=0}^{\infty }\frac{x^{n}} {n!} \,,\;\mathrm{for}\;\vert x\vert < \infty \,,& {}\\ & \mathrm{e} =\lim _{n\rightarrow \infty }\left (1 + \frac{1} {n}\right )^{n},\quad \mathrm{e}^{-\alpha } =\lim _{n\rightarrow \infty }\left (1 - \frac{\alpha } {n}\right )^{n}.& {}\\ \end{array}$$
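
    A quick numerical illustration of the difference in convergence speed (a Python sketch; the cut-offs 20 and 10**6 are arbitrary choices):

        import math

        # the factorial series reaches double precision after ~20 terms ...
        e_series = sum(1 / math.factorial(n) for n in range(20))
        # ... while the product limit (1 + 1/n)^n converges only like 1/n
        e_limit = (1 + 1 / 10**6) ** 10**6

        print(abs(e_series - math.e))   # ~ 1e-16
        print(abs(e_limit - math.e))    # ~ 1.4e-6
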
  13.

    The notation applied here for the normal distribution is as follows: \(\mathcal{N}(\mu,\sigma )\) in general, \(F_{\mathcal{N}}(x;\mu,\sigma )\) for the cumulative distribution, and \(f_{\mathcal{N}}(x;\mu,\sigma )\) for the density. Commonly, the parameters (μ, σ) are omitted when no misinterpretation is possible. For standard stable distributions (Sect. 2.5.9), a variance \(\gamma^{2} = \sigma^{2}/2\) is applied.

  14.

    We remark that erf(x) and erfc(x) are not normalized in the same way as the normal density:

    $$\displaystyle{\lim _{x\rightarrow \infty }\mathrm{erf}(x) = \frac{2} {\sqrt{\pi }}\int _{0}^{\infty }\exp (-u^{2})\,\mathrm{d}u = 1\;,\quad \int _{ 0}^{\infty }\varphi (u)\,\mathrm{d}u = \frac{1} {2}\int _{-\infty }^{+\infty }\varphi (u)\,\mathrm{d}u = \frac{1} {2}\;.}$$
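
    A short scipy check of both normalizations (here φ(u) is taken to be the standard normal density, which matches the stated integrals):

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import erf

        # erf(x) -> 1 as x -> infinity, by its own normalization 2/sqrt(pi)
        assert abs(erf(10.0) - 1.0) < 1e-12

        # whereas the standard normal density integrates to 1/2 over [0, inf)
        phi = lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
        half, _ = quad(phi, 0, np.inf)
        assert abs(half - 0.5) < 1e-10
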
  15.

    The definite integrals are:

    $$\displaystyle{\int _{-\infty }^{+\infty }x^{n}\exp (-x^{2})\,\mathrm{d}x = \left \{\begin{array}{@{}l@{\quad }l@{}} \sqrt{\pi }\;,\qquad n = 0\;, \quad \\ 0\;,\qquad n \geq 1\;,\;\,n\mbox{ odd}\;, \quad \\ \dfrac{(n - 1)!!} {2^{n/2}} \sqrt{\pi }\;,\qquad n \geq 2\;,\;\,n\mbox{ even}\;,\quad \end{array} \right.}$$

    where \((n - 1)!! = 1 \times 3 \times \cdots \times (n - 1)\) is the double factorial.
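
    These integrals are easy to verify numerically (a Python sketch using scipy; the tolerances are our choice):

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import factorial2

        def moment(n):
            # integral of x^n exp(-x^2) over the whole real line
            val, _ = quad(lambda x: x**n * np.exp(-x**2), -np.inf, np.inf)
            return val

        assert abs(moment(0) - np.sqrt(np.pi)) < 1e-10
        assert abs(moment(3)) < 1e-10                    # odd moments vanish
        for n in (2, 4, 6):
            exact = factorial2(n - 1) / 2**(n / 2) * np.sqrt(np.pi)
            assert abs(moment(n) - exact) < 1e-8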

  16.

    It is important to remember that k is a discrete variable on the left-hand side, whereas it is continuous on the right-hand side of (2.52).

  17.

    This differs from the extrapolation performed in Sect. 2.3.2, because the limit \(\lim_{n\rightarrow\infty} B_{k}(n,\alpha/n) = \pi_{k}(\alpha)\) leading to the Poisson distribution was performed for vanishing \(p = \alpha/n\).

  18.

    In computer science, the iterated logarithm of n is commonly written \(\log^{*} n\) and represents the number of times the logarithmic function must be iteratively applied before the result is less than or equal to one:

    $$\displaystyle{\log ^{{\ast}}n\ \doteq\ \left \{\begin{array}{@{}l@{\quad }l@{}} 0\;, \quad &\ \mathrm{if}\ \ n \leq 1, \\ 1 +\log ^{{\ast}}(\log n)\;,\quad &\ \mathrm{if}\ \ n > 1. \end{array} \right.}$$

    The iterated logarithm is well defined for base e, for base 2, and in general for any base greater than \(\mathrm{e}^{1/\mathrm{e}} \approx 1.444667\).
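
    A direct transcription of this definition into Python for base 2 (the function name log_star is ours):

        import math

        def log_star(n):
            # iterated binary logarithm: how often log2 must be
            # applied until the result is <= 1
            count = 0
            while n > 1:
                n = math.log2(n)
                count += 1
            return count

        assert log_star(1) == 0
        assert log_star(2) == 1        # log2(2) = 1
        assert log_star(65536) == 4    # 65536 -> 16 -> 4 -> 2 -> 1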

  19.

    Here and in the following listings for other distributions, ‘kurtosis’ stands for the excess kurtosis \(\gamma_{2} = \beta_{2} - 3 = \mu_{4}/\sigma^{4} - 3\).
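
    In scipy the same distinction appears as a keyword argument: scipy.stats.kurtosis returns the excess kurtosis by default (a sketch; sample size and seed are arbitrary choices):

        import numpy as np
        from scipy.stats import kurtosis, norm

        x = norm.rvs(size=1_000_000, random_state=42)
        print(kurtosis(x, fisher=True))    # excess kurtosis gamma_2, ~ 0
        print(kurtosis(x, fisher=False))   # beta_2 itself, ~ 3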

  20.

    The chi-squared distribution is sometimes written \(\chi^{2}(k)\), but we prefer the subscript, since the number of degrees of freedom, the parameter k, specifies the distribution. Often the random variables \(\mathcal{X}_{i}\) satisfy a conservation relation and then the number of independent variables is reduced to k − 1, and we have \(\chi^{2}_{k-1}\) (Sect. 2.6.2).

  21.

    In mathematical statistics (Sect. 2.6), the quality of measured data is often characterized by scores. The z-score of a sample corresponds to the random variable \(\mathcal{Z}\) (2.79) and is measured in standard deviations from the population mean as units.
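
    A small illustration (a Python sketch; it assumes, consistent with the central limit theorem, that \(\mathcal{Z}\) in (2.79) is the normalized sample mean, and the parameters are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)
        mu, sigma, n = 5.0, 2.0, 100
        sample = rng.normal(mu, sigma, n)

        # z-score of a single observation: deviation in units of sigma
        z_single = (sample[0] - mu) / sigma
        # z-score of the sample mean: deviation in units of sigma/sqrt(n)
        z_mean = (sample.mean() - mu) / (sigma / np.sqrt(n))
        print(z_single, z_mean)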

  22.

    A pivotal quantity or pivot is a function of measurable and unmeasurable parameters whose probability distribution does not depend on the unknown parameters.

  23.

    It is important to distinguish the exponential distribution from the class of exponential families of distributions, which comprises a number of distributions such as the normal, Poisson, binomial, and exponential distributions, among others [142, pp. 82–84]. The common form of the pdf of an exponential family is:

    $$\displaystyle{f_{\vartheta }(x) =\exp {\bigl ( A(\vartheta ) \cdot B(x)\, +\, C(x)\, +\, D(\vartheta )\bigr )},}$$

    where the parameter ϑ can be a scalar or a vector.
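
    For example, the Poisson distribution fits this form with \(A(\vartheta) = \ln\vartheta\), \(B(x) = x\), \(C(x) = -\ln x!\), and \(D(\vartheta) = -\vartheta\), which is easy to verify numerically (a Python sketch):

        import math

        def poisson_pmf(x, theta):
            # the Poisson probability mass function, written directly
            return math.exp(-theta) * theta**x / math.factorial(x)

        def poisson_exp_family(x, theta):
            # the same pmf in the form exp(A*B(x) + C(x) + D)
            A, B = math.log(theta), x
            C, D = -math.log(math.factorial(x)), -theta
            return math.exp(A * B + C + D)

        for x in range(6):
            assert abs(poisson_pmf(x, 2.5) - poisson_exp_family(x, 2.5)) < 1e-12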

  24.

    We remark that memorylessness is not tantamount to independence. Independence requires \(P(\mathcal{T} > s + t\,\vert \,\mathcal{T} > s) = P(\mathcal{T} > s + t)\).
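
    For the exponential distribution, memorylessness, \(P(\mathcal{T} > s + t\,\vert\,\mathcal{T} > s) = P(\mathcal{T} > t)\), is easy to verify numerically (a scipy sketch; the rate and time points are arbitrary choices):

        from scipy.stats import expon

        T = expon(scale=1 / 0.7)    # exponential waiting time with rate 0.7
        s, t = 1.3, 2.1

        # memorylessness: P(T > s+t | T > s) equals P(T > t) ...
        lhs = T.sf(s + t) / T.sf(s)
        assert abs(lhs - T.sf(t)) < 1e-12
        # ... but differs from P(T > s+t), which independence would require
        print(lhs, T.sf(s + t))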

  25.

    As mentioned for the Cauchy distribution (Sect. 2.5.7), the location parameter ϑ defines the center of the distribution and the scale parameter γ determines its width, even in cases where the corresponding moments μ and \(\sigma^{2}\) do not exist.

  26.

    The symbol \(\stackrel{\mathrm{d}}{=}\) means equality in distribution.

  27.

    We remark that, for all stable distributions except the normal distribution, the conventional skewness (Sect. 2.1.2) is undefined.

  28.

    For the reader who is interested in more details on mathematical statistics, we recommend the classic textbook by the Polish mathematician Marek Fisz [179] and the comprehensive treatise by Stuart and Ord [514, 515], which is a new edition of Kendall’s classic on statistics. An account that is useful as a not too elaborate introduction can be found in [257], while the monograph [88] is particularly addressed to experimentalists using statistics, and a wide variety of other, equally suitable texts are, of course, available in the rich literature on mathematical statistics.

  29.

    It is important to note that \(\langle m_{i}\rangle\) is the expectation value of an average over a finite sample, whereas the genuine expectation value refers to the entire sample space. In particular, we find

    $$\displaystyle{\langle m\rangle = \left < \frac{1} {n}\sum _{i=1}^{n}x_{ i}\right > =\mu =\hat{\mu } _{1},}$$

    where μ is the first (raw) moment. For the higher moments, the situation is more complicated and requires some care (see text).
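
    A Monte Carlo sketch of both statements (Python; sample size, number of trials, and distribution parameters are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma, n, trials = 3.0, 1.5, 10, 200_000

        samples = rng.normal(mu, sigma, size=(trials, n))
        m = samples.mean(axis=1)              # sample means
        print(m.mean())                       # ~ mu: <m> = mu holds exactly

        # the 1/n sample variance, in contrast, is biased:
        # E[(1/n) sum (x_i - m)^2] = (n-1)/n * sigma^2
        v = ((samples - m[:, None])**2).mean(axis=1)
        print(v.mean(), (n - 1) / n * sigma**2)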

  30.

    We indicate the expected convergence in the sense of the central limit theorem by choosing the symbol \(X^{2}_{k-1}\) for the finite-n expression, with \(\lim_{n\rightarrow\infty} X^{2}_{k-1}(n) = \chi^{2}_{k-1}\).

  31.

    Recall the claim by Ronald Fisher and others to the effect that Mendel’s data were too good to be true.

  32.

    Variables and parameters of a function are separated by a semicolon as in f(x; p).

  33.

    The prerequisite for asymptotic normality is, of course, that the central limit theorem should be applicable, requiring finite expectation value and finite variance of the distribution \(f(\mathbf{x}\,\vert\,\boldsymbol{\theta})\).

  34.

    The notation \(\mathrm{E}{\bigl (\ldots \vert \theta \bigr )}\) stands for a conditional expectation value. Here the average is taken over the random variable \(\mathcal{X}\) for a given value of θ.

  35.

    The signed curvature of a function y = f(x) is defined by

    $$\displaystyle{k(x) = \frac{\mathrm{d}^{2}f(x)/\mathrm{d}x^{2}} {{\Bigl (1 +\big (\mathrm{d}f(x)/\mathrm{d}x\big)^{2}\Bigr )}^{3/2}}\;.}$$

    If the slope \(\mathrm{d}f(x)/\mathrm{d}x\) is small compared to unity, the curvature is determined by the second derivative \(\mathrm{d}^{2}f(x)/\mathrm{d}x^{2}\). Use of the function \(\kappa(x) = \vert k(x)\vert\) as the (unsigned) curvature is also common.
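
    A symbolic sketch with sympy, applying the definition to \(y = \sin x\) (our example):

        import sympy as sp

        x = sp.symbols('x')
        f = sp.sin(x)

        # signed curvature k(x) = f'' / (1 + f'^2)^(3/2)
        k = sp.diff(f, x, 2) / (1 + sp.diff(f, x)**2)**sp.Rational(3, 2)
        print(sp.simplify(k))
        print(k.subs(x, sp.pi / 2))   # slope 0 at the maximum, so k = f'' = -1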

  36.

    The equivalence \(\sum_{i=1}^{n}(x_{i} -\mu)^{2} = \sum_{i=1}^{n}(x_{i} - m)^{2} + n(m-\mu)^{2}\) is easy to check using the definition of the sample mean \(m = \sum_{i=1}^{n} x_{i}/n\). We use it here because the dependence on the unknown parameter μ is reduced to a single term.
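
    The identity is easy to confirm numerically (a Python sketch; the sample and the value of μ are arbitrary):

        import numpy as np

        rng = np.random.default_rng(7)
        x = rng.normal(size=50)       # an arbitrary sample
        mu = 0.3                      # a hypothetical population mean
        m = x.mean()                  # sample mean

        lhs = np.sum((x - mu)**2)
        rhs = np.sum((x - m)**2) + x.size * (m - mu)**2
        assert abs(lhs - rhs) < 1e-9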

  37.

    Bayesian statistics is described in many monographs, for example, in references [92, 199, 281, 333]. As a brief introduction to Bayesian statistics, we recommend [510].

References

  1. Adams, W.J.: The Life and Times of the Central Limit Theorem. History of Mathematics, vol. 35, 2nd edn. American Mathematical Society and London Mathematical Society, Providence, RI (2009). Articles by A.M. Lyapunov translated from the Russian by Hal McFaden

  2. Aldrich, J.: R.A. Fisher and the making of maximum likelihood 1912–1922. Stat. Sci. 12, 162–176 (1997)

  3. Bergström, H.: On some expansions of stable distribution functions. Ark. Mat. 2, 375–378 (1952)

  4. Chechkin, A.V., Metzler, R., Klafter, J., Gonchar, V.Y.: Introduction to the theory of Lévy flights. In: R. Klages, G. Radons, I.M. Sokolov (eds.) Anomalous Transport: Foundations and Applications, chap. 5, pp. 129–162. Wiley-VCH Verlag GmbH, Weinheim, DE (2008)

  5. Chung, K.L.: A Course in Probability Theory. Probability and Mathematical Statistics, vol. 21, 2nd edn. Academic Press, New York (1974)

  6. Chung, K.L.: Elementary Probability Theory with Stochastic Processes, 3rd edn. Springer, New York (1979)

  7. Cochran, W.G.: The distribution of quadratic forms in normal systems, with applications to the analysis of covariance. Math. Proc. Camb. Philos. Soc. 30, 178–191 (1934)

  8. Conrad, K.: Probability distributions and maximum entropy. Expository paper, University of Connecticut, Storrs, CT (2005)

  9. Cooper, B.E.: Statistics for Experimentalists. Pergamon Press, Oxford (1969)

  10. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley, Hoboken (2006)

  11. Cox, R.T.: The Algebra of Probable Inference. The Johns Hopkins Press, Baltimore (1961)

  12. Cramér, H.: Mathematical Methods of Statistics. Princeton University Press, Princeton (1946)

  13. Eddy, S.R.: What is Bayesian statistics? Nat. Biotechnol. 22, 1177–1178 (2004)

  14. Edgeworth, F.Y.: On the probable errors of frequency-constants. J. R. Stat. Soc. 71, 381–397 (1908)

  15. Edgeworth, F.Y.: On the probable errors of frequency-constants (contd.). J. R. Stat. Soc. 71, 651–678 (1908)

  16. Evans, M., Hastings, N.A.J., Peacock, J.B.: Statistical Distributions, 3rd edn. Wiley, New York (2000)

  17. Feller, W.: The general form of the so-called law of the iterated logarithm. Trans. Am. Math. Soc. 54, 373–402 (1943)

  18. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. I, 3rd edn. Wiley, New York (1968)

  19. Fisher, R.A.: On an absolute criterion for fitting frequency curves. Messeng. Math. 41, 155–160 (1912)

  20. Fisher, R.A.: On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. Lond. A 222, 309–368 (1922)

  21. Fisher, R.A.: Applications of “Student’s” distribution. Metron 5, 90–104 (1925)

  22. Fisher, R.A.: Theory of statistical estimation. Proc. Camb. Philos. Soc. 22, 700–725 (1925)

  23. Fisher, R.A.: Moments and product moments of sampling distributions. Proc. Lond. Math. Soc., Ser. 2, 30, 199–238 (1928)

  24. Fisher, R.A.: The logic of inductive inference. J. R. Stat. Soc. 98, 39–54 (1935)

  25. Fisz, M.: Probability Theory and Mathematical Statistics, 3rd edn. Wiley, New York (1963)

  26. Fisz, M.: Wahrscheinlichkeitsrechnung und mathematische Statistik. VEB Deutscher Verlag der Wissenschaften, Berlin (1989). In German

  27. Fofack, H., Nolan, J.P.: Tail behavior, modes and other characteristics of stable distributions. Extremes 2, 39–58 (1999)

  28. Foster, D.P.: Law of the iterated logarithm. Wikipedia entry, University of Pennsylvania, Philadelphia, PA (2009). Retrieved April 07, 2009 from en.wikipedia.org/wiki/Law_of_the_iterated_logarithm

  29. Galton, F.: The geometric mean in vital and social statistics. Proc. R. Soc. Lond. 29, 365–367 (1879)

  30. Gauß, C.F.: Theoria motus corporum coelestium in sectionibus conicis solem ambientium. Perthes et Besser, Hamburg (1809). English translation: Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections. Little, Brown, Boston, MA (1857). Reprinted by Dover, New York (1963)

  31. Gelman, A., Carlin, J.B., Stern, H.S., Rubin, D.B.: Bayesian Data Analysis, 2nd edn. Texts in Statistical Science. Chapman & Hall/CRC, Boca Raton (2004)

  32. Gray, R.M.: Entropy and Information Theory, 2nd edn. Springer, New York (2011)

  33. Hartman, P., Wintner, A.: On the law of the iterated logarithm. Am. J. Math. 63, 169–173 (1941)

  34. Hogg, R.V., McKean, J.W., Craig, A.T.: Introduction to Mathematical Statistics, 7th edn. Pearson Education, Upper Saddle River (2012)

  35. Hogg, R.V., Tanis, E.A.: Probability and Statistical Inference, 8th edn. Pearson–Prentice Hall, Upper Saddle River (2010)

  36. Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106, 620–630 (1957)

  37. Jaynes, E.T.: Information theory and statistical mechanics. II. Phys. Rev. 108, 171–190 (1957)

  38. Jaynes, E.T.: Probability Theory. The Logic of Science. Cambridge University Press, Cambridge (2003)

  39. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions. Probability and Mathematical Statistics: Applied Probability and Statistics, vol. 1, 2nd edn. Wiley, New York (1994)

  40. Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions. Probability and Mathematical Statistics: Applied Probability and Statistics, vol. 2, 2nd edn. Wiley, New York (1995)

  41. Kenney, J.F., Keeping, E.S.: Mathematics of Statistics, 2nd edn. Van Nostrand, Princeton (1951)

  42. Kenney, J.F., Keeping, E.S.: The k-statistics. In: Mathematics of Statistics, Part I, §7.9, 3rd edn. Van Nostrand, Princeton (1962)

  43. Khinchin, A.Y.: Über einen Satz der Wahrscheinlichkeitsrechnung. Fundam. Math. 6, 9–20 (1924). In German

  44. Knuth, D.E.: Two notes on notation. Am. Math. Monthly 99, 403–422 (1992)

  45. Kolmogorov, A.N.: Über das Gesetz des iterierten Logarithmus. Math. Ann. 101, 126–135 (1929). In German

  46. Kowalski, C.J.: Non-normal bivariate distributions with normal marginals. Am. Statistician 27, 103–106 (1973)

  47. Laplace, P.S.: Mémoire sur la probabilité des causes par les évènemens. Mémoires de Mathématique et de Physique, Presentés à l’Académie Royale des Sciences, par divers Savans & lûs dans ses Assemblées 6, 621–656 (1774). Reprinted in Laplace’s Oeuvres complètes 8, 27–65. English translation: Stat. Sci. 1, 364–378 (1986)

  48. Laplace, P.S.: Théorie analytique des probabilités. Courcier, Paris (1812)

  49. Le Cam, L.: Maximum likelihood: an introduction. Int. Stat. Rev. 58, 153–171 (1990)

  50. Lee, P.M.: Bayesian Statistics, 3rd edn. Hodder Arnold, London (2004)

  51. Leemis, L.: Poisson to normal. College of William & Mary, Department of Mathematics, Williamsburg, VA (2012). URL: www.math.wm.edu/~leemis/chart/UDR/PDFs/PoissonNormal.pdf

  52. Lévy, P.: Calcul des probabilités. Gauthier-Villars, Paris (1925). In French

  53. Limpert, E., Stahel, W.A., Abbt, M.: Log-normal distributions across the sciences: keys and clues. BioScience 51, 341–352 (2001)

  54. Lindeberg, J.W.: Über das Exponentialgesetz in der Wahrscheinlichkeitsrechnung. Ann. Acad. Sci. Fenn. 16, 1–23 (1920). In German

  55. Lindeberg, J.W.: Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung. Math. Z. 15, 211–225 (1922). In German

  56. Lukacs, E.: Characteristic Functions. Hafner Publ. Co., New York (1970)

  57. Lukacs, E.: A survey of the theory of characteristic functions. Adv. Appl. Probab. 4, 1–38 (1972)

  58. Lyapunov, A.M.: Sur une proposition de la théorie des probabilités. Bull. Acad. Imp. Sci. St. Pétersbourg 13, 359–386 (1900)

  59. Lyapunov, A.M.: Nouvelle forme du théorème sur la limite des probabilités. Mem. Acad. Imp. Sci. St. Pétersbourg, Classe Phys. Math. 12, 1–24 (1901)

  60. Mallows, C.: Another comment on O’Cinneide. Am. Statistician 45, 257 (1991)

  61. McAlister, D.: The law of the geometric mean. Proc. R. Soc. Lond. 29, 367–376 (1879)

  62. McKean, Jr., H.P.: Stochastic Integrals. Wiley, New York (1969)

  63. Melnick, E.L., Tenenbein, A.: Misspecifications of the normal distribution. Am. Statistician 36, 372–373 (1982)

  64. Merkle, M.: Jensen’s inequality for medians. Stat. Probab. Lett. 71, 277–281 (2005)

  65. Nolan, J.P.: Stable Distributions: Models for Heavy-Tailed Data. Birkhäuser, Boston (2013). Unfinished manuscript. Online at academic2.american.edu/~jpnolan

  66. Norden, R.H.: A survey of maximum likelihood estimation I. Int. Stat. Rev. 40, 329–354 (1972)

  67. Norden, R.H.: A survey of maximum likelihood estimation II. Int. Stat. Rev. 41, 39–58 (1973)

  68. Park, S.Y., Bera, A.K.: Maximum entropy autoregressive conditional heteroskedasticity model. J. Econom. 150, 219–230 (2009)

  69. Pearson, E.S., Wishart, J.: “Student’s” Collected Papers. Cambridge University Press for the Biometrika Trustees, Cambridge (1942)

  70. Pearson, K.: The problem of the random walk. Nature 72, 294 (1905)

  71. Pearson, K.: Notes on the history of correlation. Biometrika 13, 25–45 (1920)

  72. Pearson, K., Filon, L.N.G.: Contributions to the mathematical theory of evolution. IV. On the probable errors of frequency constants and on the influence of random selection on variation and correlation. Philos. Trans. R. Soc. Lond. A 191, 229–311 (1898)

  73. Pollard, H.: The representation of \(e^{-x^{\lambda }}\) as a Laplace integral. Bull. Am. Math. Soc. 52, 908–910 (1946)

  74. Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, Cambridge (1986)

  75. Price, R.: LII. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, M.A. and F.R.S. Philos. Trans. R. Soc. Lond. 53, 370–418 (1763)

  76. Rao, C.R.: Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 37, 81–89 (1945)

  77. Schilling, M.F., Watkins, A.E., Watkins, W.: Is human height bimodal? Am. Statistician 56, 223–229 (2002)

  78. Seneta, E.: The central limit problem and linear least squares in pre-revolutionary Russia: the background. Math. Scientist 9, 37–77 (1984)

  79. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948)

  80. Shannon, C.E., Weaver, W.: The Mathematical Theory of Communication. University of Illinois Press, Urbana (1949)

  81. Stevens, J.W.: What is Bayesian Statistics? What is …? Hayward Medical Communications, a division of Hayward Group Ltd., London (2009)

  82. Stigler, S.M.: Laplace’s 1774 memoir on inverse probability. Stat. Sci. 1, 359–378 (1986)

  83. Stigler, S.M.: The epic story of maximum likelihood. Stat. Sci. 22, 598–620 (2007)

  84. Stone, J.V.: Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis. Sebtel Press, England (2013)

  85. Stuart, A., Ord, J.K.: Kendall’s Advanced Theory of Statistics. Volume 1: Distribution Theory, 5th edn. Charles Griffin & Co., London (1987)

  86. Stuart, A., Ord, J.K.: Kendall’s Advanced Theory of Statistics. Volume 2: Classical Inference and Relationship, 5th edn. Edward Arnold, London (1991)

  87. Student: The probable error of a mean. Biometrika 6, 1–25 (1908)

  88. Swamee, P.K.: Near lognormal distribution. J. Hydrol. Eng. 7, 441–444 (2007)

  89. Volkenshtein, M.V.: Entropy and Information. Progress in Mathematical Physics, vol. 57. Birkhäuser, Basel (2009). German version: W. Ebeling (ed.) Entropie und Information. Wissenschaftliche Taschenbücher, Band 306, Akademie-Verlag, Berlin (1990). Russian edition: Nauka, Moscow (1986)

  90. Weber, N.A.: Dimorphism of the African Oecophylla worker and an anomaly (Hymenoptera: Formicidae). Ann. Entomol. Soc. Am. 39, 7–10 (1946)

  91. Weisstein, E.W.: Fourier transform. MathWorld – A Wolfram Web Resource. The Wolfram Centre, Long Hanborough, UK. http://www.Mathworld.wolfram.com/FourierTransform.html. Retrieved July 17, 2015


Copyright information

© 2016 Springer International Publishing Switzerland


Cite this chapter

Schuster, P. (2016). Distributions, Moments, and Statistics. In: Stochasticity in Processes. Springer Series in Synergetics. Springer, Cham. https://doi.org/10.1007/978-3-319-39502-9_2
