Abstract
The moments of probability distributions provide the link between theory and observations, since they are readily accessible to measurement. Rather abstract-looking generating functions have become important as highly versatile concepts and tools for solving specific problems. The probability distributions that are most important in applications are reviewed, and the central limit theorem and the law of large numbers are presented. The chapter closes with a brief digression into mathematical statistics, showing how to handle real-world samples that cover a part, sometimes only a small part, of sample space.
Everything should be made as simple as possible, but not simpler.
Attributed to Albert Einstein 1950
Notes
- 1.
A proof is given in [84, pp. 164–166].
- 2.
Since the moments centered around the expectation value will be used more frequently than the raw moments, we denote them by \(\mu_{r}\) and reserve \(\hat{\mu }_{r}\) for the raw moments. The first centered moment vanishes and, since confusion is unlikely, we shall write the expectation value μ instead of \(\hat{\mu }_{1}\). The r th moment of a distribution is also called the moment of order r.
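As a minimal numerical sketch (the sample values are hypothetical, not from the text), both kinds of moments can be estimated from a finite sample; the first centered moment then vanishes by construction:

```python
# Raw moments mu_hat_r = E[X^r] versus centered moments mu_r = E[(X - mu)^r],
# estimated from a hypothetical finite sample.

def raw_moment(xs, r):
    """Estimate the raw moment of order r, E[X^r]."""
    return sum(x**r for x in xs) / len(xs)

def central_moment(xs, r):
    """Estimate the centered moment of order r, E[(X - mu)^r]."""
    mu = raw_moment(xs, 1)            # expectation value mu = mu_hat_1
    return sum((x - mu)**r for x in xs) / len(xs)

xs = [1.0, 2.0, 2.0, 3.0, 4.0]
print(raw_moment(xs, 1))              # sample mean, estimates mu
print(central_moment(xs, 1))          # first centered moment: vanishes
print(central_moment(xs, 2))          # second centered moment: the variance
```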
- 3.
In contrast to the expectation value, variance, and standard deviation, skewness and kurtosis are not uniquely defined, and it is therefore necessary to check the author’s definitions carefully when reading the literature.
- 4.
The definition of the Pochhammer symbol is ambiguous [308, p. 414]. In combinatorics, the Pochhammer symbol \((x)_{n}\) is used for the falling factorial,
$$\displaystyle{ (x)_{n} = x(x - 1)(x - 2)\cdots (x - n + 1) = \frac{\varGamma (x + 1)} {\varGamma (x - n + 1)}, }$$whereas the rising factorial is
$$\displaystyle{ x^{(n)} = x(x + 1)(x + 2)\cdots (x + n - 1) = \frac{\varGamma (x + n)} {\varGamma (x)}. }$$We also mention a useful identity between the partial factorials
$$\displaystyle{ (-x)^{(n)} = (-1)^{n}\,(x)_{n}. }$$In the theory of special functions in physics and chemistry, in particular in the context of the hypergeometric functions, however, \((x)_{n}\) is used for the rising factorial. Here, we shall use the unambiguous symbols from combinatorics and say explicitly whether we mean the rising or the falling factorial. Clearly, expressions in terms of Gamma functions are unambiguous.
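The two conventions can be made concrete in a short sketch (illustrative only): the falling and rising factorials computed as products, checked against the Gamma function expressions and the identity above:

```python
# Falling factorial (x)_n and rising factorial x^(n), with the identity
# (-x)^(n) = (-1)^n (x)_n from the text.

from math import gamma

def falling(x, n):
    """(x)_n = x (x-1) ... (x-n+1) = Gamma(x+1) / Gamma(x-n+1)."""
    result = 1.0
    for k in range(n):
        result *= x - k
    return result

def rising(x, n):
    """x^(n) = x (x+1) ... (x+n-1) = Gamma(x+n) / Gamma(x)."""
    result = 1.0
    for k in range(n):
        result *= x + k
    return result

x, n = 5.0, 3
print(falling(x, n))                     # 5*4*3 = 60
print(rising(x, n))                      # 5*6*7 = 210
print(gamma(x + 1) / gamma(x - n + 1))   # agrees with falling(x, n)
print(rising(-x, n) == (-1)**n * falling(x, n))  # the identity
```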
- 5.
The logarithm is taken to the base 2 and it is commonly called binary logarithm or logarithmus dualis, log2 ≡ lb ≡ ld, with the dimensionless unit 1 binary digit (bit). The conventional unit of information in informatics is the byte: 1 byte (B) = 8 bits being tantamount to the coding capacity of an eight digit binary sequence. Although there is little chance of confusion, one should be aware that in the International System of Units, B is the abbreviation for the acoustical unit ‘bel’, which is the unit for measuring the signal strength of sound.
- 6.
Two remarks are worth noting: (2.25) is Max Planck’s expression for the entropy in statistical mechanics, although it has been carved on Boltzmann’s tombstone, and W is called a probability despite the fact that it is not normalized, i.e., W ≥ 1.
- 7.
An isolated system exchanges neither matter nor energy with its environment. For isolated, closed, and open systems, see also Sect. 4.3
- 8.
Since we shall often need the derivatives in this section, we shall use the shorthand notations \(\mathrm{d}g(s)/\mathrm{d}s = g'(s)\), \(\mathrm{d}^{2}g(s)/\mathrm{d}s^{2} = g''(s)\), and \(\mathrm{d}^{j}g(s)/\mathrm{d}s^{j} = g^{(j)}(s)\), and for simplicity also \((\mathrm{d}g/\mathrm{d}s)\vert_{s=k} = g'(k)\) and \((\mathrm{d}^{2}g/\mathrm{d}s^{2})\vert_{s=k} = g''(k)\) (\(k \in \mathbb{N}\)).
- 9.
We remark that the same symbol s is used for the Laplace transformed variable and the dummy variable of probability generating functions (Sect. 2.2) in order to be consistent with the literature. We shall point out the difference wherever confusion is possible.
- 10.
The difference between the Fourier transform \(\tilde{f}(k)\) and the characteristic function ϕ(s) of a function f(x), viz.,
$$\displaystyle{ \tilde{f}(k) = \frac{1} {\sqrt{2\pi }}\int \nolimits _{-\infty }^{+\infty }f(x)\exp (+\mathrm{i}kx)\,\mathrm{d}x\quad \mathrm{and}\quad \phi (s) =\int \nolimits _{ -\infty }^{\infty }f(x)\exp (\mathrm{i}sx)\,\mathrm{d}x, }$$is only a matter of the factor \((\sqrt{2\pi })^{-1}\). The Fourier convention used here is the same as the one in modern physics. For other conventions, see, e.g., [568] and Sect. 3.1.6
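As a numerical illustration (a sketch, not from the text), the characteristic function of the standard normal density can be approximated by quadrature; since the density is even, the imaginary part vanishes and \(\phi (s) = \mathrm{e}^{-s^{2}/2}\):

```python
# Quadrature sketch: characteristic function of the standard normal density.

from math import cos, exp, pi, sqrt

def char_fn_normal(s, a=8.0, steps=100000):
    """Trapezoidal estimate of phi(s) = int f(x) exp(isx) dx for the
    standard normal density f; by symmetry only the cosine part survives."""
    h = 2 * a / steps
    total = 0.0
    for i in range(steps + 1):
        x = -a + i * h
        f = exp(-x * x / 2) / sqrt(2 * pi)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * f * cos(s * x)
    return total * h

s = 1.3
print(abs(char_fn_normal(s) - exp(-s * s / 2)) < 1e-6)  # phi(s) = e^{-s^2/2}
```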
- 11.
The Taylor series \(f(s) =\sum _{ n=0}^{\infty }\dfrac{f^{(n)}(a)} {n!} (s - a)^{n}\) is named after the English mathematician Brook Taylor who invented the calculus of finite differences in 1715. Earlier series expansions were already in use in the seventeenth century. The MacLaurin series, in particular, is a Taylor expansion centered around the origin a = 0, named after the eighteenth century Scottish mathematician Colin MacLaurin .
- 12.
In order to be able to solve the problems, note the following basic infinite series:
$$\displaystyle\begin{array}{rcl} & \mathrm{e} =\sum_{n=0}^{\infty }\frac{1}{n!}\,,\quad \mathrm{e}^{x} =\sum_{n=0}^{\infty }\frac{x^{n}}{n!}\,,\;\mathrm{for}\;\vert x\vert < \infty \,,& {}\\ & \mathrm{e} =\lim_{n\rightarrow \infty }\left (1 + \frac{1}{n}\right )^{n},\quad \mathrm{e}^{-\alpha } =\lim_{n\rightarrow \infty }\left (1 - \frac{\alpha }{n}\right )^{n}.& {}\\ \end{array}$$
- 13.
The notation applied here for the normal distribution is as follows: \(\mathcal{N}(\mu,\sigma )\) in general, \(F_{\mathcal{N}}(x;\mu,\sigma )\) for the cumulative distribution, and \(f_{\mathcal{N}}(x;\mu,\sigma )\) for the density. Commonly, the parameters (μ, σ) are omitted when no misinterpretation is possible. For standard stable distributions (Sect. 2.5.9), a variance \(\gamma^{2} =\sigma^{2}/2\) is applied.
- 14.
We remark that erf(x) and erfc(x) are not normalized in the same way as the normal density:
$$\displaystyle{\lim_{x\rightarrow \infty }\mathrm{erf}(x) = \frac{2}{\sqrt{\pi }}\int_{0}^{\infty }\exp (-u^{2})\,\mathrm{d}u = 1\;,\quad \int_{0}^{\infty }\varphi (u)\,\mathrm{d}u = \frac{1}{2}\int_{-\infty }^{+\infty }\varphi (u)\,\mathrm{d}u = \frac{1}{2}\;.}$$
- 15.
The definite integrals are:
$$\displaystyle{\int_{-\infty }^{+\infty }x^{n}\exp (-x^{2})\,\mathrm{d}x = \left \{\begin{array}{@{}l@{\quad }l@{}} \sqrt{\pi }\;,\qquad n = 0\;, \quad \\ 0\;,\qquad n \geq 1\;,\;\,n\mbox{ odd}\;, \quad \\ \dfrac{(n - 1)!!}{2^{n/2}} \sqrt{\pi }\;,\qquad n \geq 2\;,\;\,n\mbox{ even}\;,\quad \end{array} \right.}$$where \((n - 1)!! = 1 \times 3 \times \cdots \times (n - 1)\) is the double factorial.
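These integrals are easy to verify numerically; the following sketch (illustrative, simple trapezoidal quadrature) checks the three cases:

```python
# Numerical check of the Gaussian moment integrals int x^n exp(-x^2) dx.

from math import exp, pi, sqrt

def gauss_moment(n, a=8.0, steps=200000):
    """Trapezoidal estimate of the integral of x^n exp(-x^2) over [-a, a];
    the integrand is negligible beyond |x| = a for moderate n."""
    h = 2 * a / steps
    total = 0.0
    for i in range(steps + 1):
        x = -a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * x**n * exp(-x * x)
    return total * h

def double_factorial(m):
    """m!! = m (m-2) (m-4) ... down to 1 or 2."""
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

print(abs(gauss_moment(0) - sqrt(pi)) < 1e-6)   # n = 0: sqrt(pi)
print(abs(gauss_moment(3)) < 1e-6)              # odd n: the integral vanishes
n = 4
exact = double_factorial(n - 1) / 2**(n // 2) * sqrt(pi)
print(abs(gauss_moment(n) - exact) < 1e-6)      # even n: (n-1)!! sqrt(pi) / 2^(n/2)
```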
- 16.
It is important to remember that k is a discrete variable on the left-hand side, whereas it is continuous on the right-hand side of (2.52).
- 17.
This differs from the extrapolation performed in Sect. 2.3.2 because the limit lim n → ∞ B k (n, α∕n) = π k (α) leading to the Poisson distribution was performed for vanishing p = α∕n.
- 18.
In computer science, the iterated logarithm of n is commonly written \(\log^{*}n\) and represents the number of times the logarithmic function must be iteratively applied before the result is less than or equal to one:
$$\displaystyle{\log^{{\ast}}n\ \doteq\ \left \{\begin{array}{@{}l@{\quad }l@{}} 0\;, \quad &\ \mathrm{if}\ \ n \leq 1, \\ 1 +\log^{{\ast}}(\log n)\;,\quad &\ \mathrm{if}\ \ n > 1. \end{array} \right.}$$The iterated logarithm is well defined for base e, for base 2, and in general for any base greater than \(\mathrm{e}^{1/\mathrm{e}} = 1.444667\ldots\).
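The definition translates directly into code; a minimal sketch for base 2, using an iterative loop equivalent to the recursion above:

```python
# Iterated binary logarithm log* n.

from math import log2

def log_star(n):
    """Number of times log2 must be applied before the result is <= 1."""
    count = 0
    while n > 1:
        n = log2(n)
        count += 1
    return count

print(log_star(1))       # 0
print(log_star(2))       # 1
print(log_star(16))      # 3: 16 -> 4 -> 2 -> 1
print(log_star(65536))   # 4: 65536 -> 16 -> 4 -> 2 -> 1
```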
- 19.
Here and in the following listings for other distributions, ‘kurtosis’ stands for excess kurtosis \(\gamma_{2} =\beta_{2} - 3 =\mu_{4}/\sigma^{4} - 3\).
- 20.
The chi-squared distribution is sometimes written \(\chi^{2}(k)\), but we prefer the subscript \(\chi_{k}^{2}\), since the number of degrees of freedom, the parameter k, specifies the distribution. Often the random variables \(\mathcal{X}_{i}\) satisfy a conservation relation; the number of independent variables is then reduced to k − 1 and we have \(\chi_{k-1}^{2}\) (Sect. 2.6.2).
- 21.
- 22.
A pivotal quantity or pivot is a function of measurable and unmeasurable parameters whose probability distribution does not depend on the unknown parameters.
- 23.
It is important to distinguish the exponential distribution from the class of exponential families of distributions, which comprises a number of distributions such as the normal, Poisson, binomial, and exponential distributions, among others [142, pp. 82–84]. The common form of the pdf of an exponential family is:
$$\displaystyle{f_{\vartheta }(x) =\exp {\bigl ( A(\vartheta ) \cdot B(x)\, +\, C(x)\, +\, D(\vartheta )\bigr )},}$$where the parameter ϑ can be a scalar or a vector.
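As an illustration (a sketch, not from the text), the Poisson distribution can be cast in this form with A(ϑ) = log λ, B(x) = x, C(x) = −log x!, and D(ϑ) = −λ:

```python
# The Poisson pmf written in the exponential-family form
# f_theta(x) = exp(A(theta) B(x) + C(x) + D(theta)).

from math import exp, factorial, log

lam = 2.5  # hypothetical Poisson parameter

def poisson_pmf(x, lam):
    """Standard form lam^x e^{-lam} / x!."""
    return lam**x * exp(-lam) / factorial(x)

def poisson_exp_family(x, lam):
    """Same pmf with A = log(lam), B = x, C = -log(x!), D = -lam."""
    A, B = log(lam), x
    C, D = -log(factorial(x)), -lam
    return exp(A * B + C + D)

for x in range(6):
    print(abs(poisson_pmf(x, lam) - poisson_exp_family(x, lam)) < 1e-12)
```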
- 24.
We remark that memorylessness is not tantamount to independence. Independence requires \(P(\mathcal{T} > s + t\,\vert \,\mathcal{T} > s) = P(\mathcal{T} > s + t)\).
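A short numerical sketch (with a hypothetical rate parameter) makes the distinction explicit for the exponential distribution, whose survival function is \(P(\mathcal{T} > t) = \mathrm{e}^{-\lambda t}\):

```python
# Memorylessness P(T > s+t | T > s) = P(T > t) versus the stronger
# (and false) independence condition P(T > s+t | T > s) = P(T > s+t).

from math import exp, isclose

lam = 0.7   # hypothetical rate parameter

def survival(t):
    """P(T > t) for an exponential waiting time with rate lam."""
    return exp(-lam * t)

s, t = 1.3, 2.1
cond = survival(s + t) / survival(s)    # P(T > s+t | T > s)
print(isclose(cond, survival(t)))       # True: memorylessness holds
print(isclose(cond, survival(s + t)))   # False: independence does not
```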
- 25.
As mentioned for the Cauchy distribution (Sect. 2.5.7), the location parameter ϑ defines the center of the distribution and the scale parameter γ determines its width, even in cases where the corresponding moments μ and \(\sigma^{2}\) do not exist.
- 26.
The symbol \(\buildrel \mbox{ $\mathrm{d}$} \over =\) means equality in distribution.
- 27.
We remark that, for all stable distributions except the normal distribution, the conventional skewness (Sect. 2.1.2) is undefined.
- 28.
For the reader who is interested in more details on mathematical statistics, we recommend the classic textbook by the Polish mathematician Marek Fisz [179] and the comprehensive treatise by Stuart and Ord [514, 515], which is a new edition of Kendall’s classic on statistics. A useful and not too elaborate introduction can be found in [257], while the monograph [88] is addressed particularly to experimentalists using statistics; a wide variety of other, equally suitable texts are, of course, available in the rich literature on mathematical statistics.
- 29.
It is important to note that \(\langle m\rangle\) is the expectation value of an average over a finite sample, whereas the genuine expectation value refers to the entire sample space. In particular, we find
$$\displaystyle{\langle m\rangle = \left < \frac{1} {n}\sum _{i=1}^{n}x_{ i}\right > =\mu =\hat{\mu } _{1},}$$where μ is the first (raw) moment. For the higher moments, the situation is more complicated and requires some care (see text).
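A small simulation (seeded; the distribution and all parameters are hypothetical) illustrates that the expectation of the sample mean over many finite samples approaches μ:

```python
# Simulation sketch: the average of sample means m over many finite
# samples approaches the first raw moment mu.

import random

random.seed(42)

mu, sigma, n = 3.0, 1.5, 10   # hypothetical population parameters, sample size
num_samples = 5000

means = []
for _ in range(num_samples):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

avg_of_means = sum(means) / num_samples
print(abs(avg_of_means - mu) < 0.05)   # <m> ~ mu within sampling error
```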
- 30.
We indicate the expected convergence in the sense of the central limit theorem by choosing the symbol \(X_{k-1}^{2}\) for the finite-n expression, with \(\lim_{n\rightarrow \infty }X_{k-1}^{2}(n) =\chi_{k-1}^{2}\).
- 31.
Recall the claim by Ronald Fisher and others to the effect that Mendel’s data were too good to be true.
- 32.
Variables and parameters of a function are separated by a semicolon as in f(x; p).
- 33.
The prerequisite for asymptotic normality is, of course, that the central limit theorem should be applicable, requiring finite expectation value and finite variance of the distribution \(f(\mathbf{x}\,\vert \,\boldsymbol{\theta })\).
- 34.
The notation \(\mathrm{E}{\bigl (\ldots \vert \theta \bigr )}\) stands for a conditioned expectation value. Here the average is taken over the random variable \(\mathcal{X}\) for a given value of θ.
- 35.
The signed curvature of a function y = f(x) is defined by
$$\displaystyle{k(x) = \frac{\mathrm{d}^{2}f(x)/\mathrm{d}x^{2}} {{\Bigl (1 +\big (\mathrm{d}f(x)/\mathrm{d}x\big)^{2}\Bigr )}^{3/2}}\;.}$$If the slope \(\mathrm{d}f(x)/\mathrm{d}x\) is small compared to unity, the curvature is determined by the second derivative \(\mathrm{d}^{2}f(x)/\mathrm{d}x^{2}\). Use of the function \(\kappa (x) = \vert k(x)\vert\) as the (unsigned) curvature is also common.
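The formula can be checked numerically with central differences; in this sketch (illustrative only) the unit circle gives |k| = 1 everywhere and the parabola y = x² gives k = 2 at the origin:

```python
# Numerical signed curvature k(x) = f'' / (1 + f'^2)^(3/2) via central
# differences, tested on the upper unit circle and a parabola.

from math import sqrt

def curvature(f, x, h=1e-5):
    """Signed curvature from second-order central differences."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return d2 / (1 + d1 * d1)**1.5

upper_circle = lambda x: sqrt(1 - x * x)
print(abs(abs(curvature(upper_circle, 0.3)) - 1.0) < 1e-4)  # |k| = 1 on unit circle
print(abs(curvature(lambda x: x * x, 0.0) - 2.0) < 1e-4)    # parabola: k = 2 at x = 0
```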
- 36.
The equivalence \(\sum_{i=1}^{n}(x_{i}-\mu )^{2} =\sum_{i=1}^{n}(x_{i} - m)^{2} + n(m-\mu )^{2}\) is easy to check using the definition of the sample mean \(m =\sum_{i=1}^{n}x_{i}/n\). We use it here because the dependence on the unknown parameter μ is reduced to a single term.
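The identity is easily confirmed numerically for an arbitrary sample and reference value (both hypothetical here):

```python
# Numerical check of sum_i (x_i - mu)^2 = sum_i (x_i - m)^2 + n (m - mu)^2.

xs = [2.0, 3.5, 1.0, 4.5, 3.0]
mu = 2.2                      # hypothetical "true" expectation value
n = len(xs)
m = sum(xs) / n               # sample mean

lhs = sum((x - mu)**2 for x in xs)
rhs = sum((x - m)**2 for x in xs) + n * (m - mu)**2
print(abs(lhs - rhs) < 1e-12)
```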
References
Adams, W.J.: The Life and Times of the Central Limit Theorem, History of Mathematics, vol. 35, 2nd edn. American Mathematical Society and London Mathematical Society, Providence, RI (2009). Articles by A. M. Lyapunov translated from the Russian by Hal McFaden.
Aldrich, J.: R. A. Fisher and the making of the maximum likelihood 1912–1922. Stat. Sci. 12, 162–176 (1997)
Bergström, H.: On some expansions of stable distribution functions. Ark. Mat. 2, 375–378 (1952)
Chechkin, A.V., Metzler, R., Klafter, J., Gonchar, V.Y.: Introduction to the theory of Lévy flights. In: R. Klages, G. Radons, I.M. Sokolov (eds.) Anomalous Transport: Foundations and Applications, chap. 5, pp. 129–162. Wiley-VCH Verlag GmbH, Weinheim, DE (2008)
Chung, K.L.: A Course in Probability Theory, Probability and Mathematical Statistics, vol. 21, 2nd edn. Academic Press, New York (1974)
Chung, K.L.: Elementary Probability Theory with Stochastic Processes, 3rd edn. Springer, New York (1979)
Cochran, W.G.: The distribution of quadratic forms in normal systems, with applications to the analysis of covariance. Math. Proc. Camb. Philos. Soc. 30, 178–191 (1934)
Conrad, K.: Probability distributions and maximum entropy. Expository paper, University of Connecticut, Storrs, CT (2005)
Cooper, B.E.: Statistics for Experimentalists. Pergamon Press, Oxford (1969)
Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley, Hoboken (2006)
Cox, R.T.: The Algebra of Probable Inference. The Johns Hopkins Press, Baltimore (1961)
Cramér, H.: Mathematical Methods of Statistics. Princeton Univ. Press, Princeton (1946)
Eddy, S.R.: What is Bayesian statistics? Nat. Biotechnol. 22, 1177–1178 (2004)
Edgeworth, F.Y.: On the probable errors of frequency-constants. J. R. Stat. Soc. 71, 381–397 (1908)
Edgeworth, F.Y.: On the probable errors of frequency-constants (contd.). J. R. Stat. Soc. 71, 651–678 (1908)
Evans, M., Hastings, N.A.J., Peacock, J.B.: Statistical Distributions, 3rd edn. Wiley, New York (2000)
Feller, W.: The general form of the so-called law of the iterated logarithm. Trans. Am. Math. Soc. 54, 373–402 (1943)
Feller, W.: An Introduction to Probability Theory and Its Application, vol. I, 3rd edn. Wiley, New York (1968)
Fisher, R.A.: On an absolute criterion for fitting frequency curves. Messeng. Math. 41, 155–160 (1912)
Fisher, R.A.: On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. Lond. A 222, 309–368 (1922)
Fisher, R.A.: Applications of “Student’s” distribution. Metron 5, 90–104 (1925)
Fisher, R.A.: Theory of statistical estimation. Proc. Camb. Philos. Soc. 22, 700–725 (1925)
Fisher, R.A.: Moments and product moments of sampling distributions. Proc. Lond. Math. Soc. Ser.2, 30, 199–238 (1928)
Fisher, R.A.: The logic of inductive inference. J. R. Stat. Soc. 98, 39–54 (1935)
Fisz, M.: Probability Theory and Mathematical Statistics, 3rd edn. Wiley, New York (1963)
Fisz, M.: Wahrscheinlichkeitsrechnung und mathematische Statistik. VEB Deutscher Verlag der Wissenschaft, Berlin (1989). In German
Fofack, H., Nolan, J.P.: Tail behavior, modes and other characteristics of stable distributions. Extremes 2, 39–58 (1999)
Foster, D.P.: Law of the iterated logarithm. Wikipedia entry, University of Pennsylvania, Philadelphia, PA (2009). Retrieved April 07, 2009 from en.wikipedia.org/wiki/Law_of_the_iterated_logarithm
Galton, F.: The geometric mean in vital and social statistics. Proc. Roy. Soc. Lond. 29, 365–367 (1879)
Gauß, C.F.: Theoria motus corporum coelestium in sectionibus conicis solem ambientium. Perthes et Besser, Hamburg (1809). English translation: Theory of the Motion of the Heavenly Bodies Moving about the Sun in Conic Sections. Little, Brown. Boston, MA. 1857. Reprinted by Dover, New York (1963)
Gelman, A., Carlin, J.B., Stern, H.S., Rubin, D.B.: Bayesian Data Analysis, 2nd edn. Texts in Statistical Science. Chapman & Hall / CRC, Boca Raton (2004)
Gray, R.M.: Entropy and Information Theory, 2nd edn. Springer, New York (2011)
Hartman, P., Wintner, A.: On the law of the iterated logarithm. Am. J. Math. 63, 169–173 (1941)
Hogg, R.V., McKean, J.W., Craig, A.T.: Introduction to Mathematical Statistics, 7th edn. Pearson Education, Upper Saddle River (2012)
Hogg, R.V., Tanis, E.A.: Probability and Statistical Inference, 8th edn. Pearson – Prentice Hall, Upper Saddle River (2010)
Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106, 620–630 (1957)
Jaynes, E.T.: Information theory and statistical mechanics. II. Phys. Rev. 108, 171–190 (1957)
Jaynes, E.T.: Probability Theory. The Logic of Science. Cambridge University Press, Cambridge (2003)
Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions, Probability and Mathematical Statistics. Applied Probability and Statistics, vol. 1, 2nd edn. Wiley, New York (1994)
Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous Univariate Distributions, Probability and Mathematical Statistics. Applied Probability and Statistics, vol. 2, 2nd edn. Wiley, New York (1995)
Kenney, J.F., Keeping, E.S.: Mathematics of Statistics, 2nd edn. Van Nostrand, Princeton (1951)
Kenney, J.F., Keeping, E.S.: The k-Statistics. In Mathematics of Statistics. Part I, §7.9, 3rd edn. Van Nostrand, Princeton (1962)
Khinchin, A.Y.: Über einen Satz der Wahrscheinlichkeitsrechnung. Fundam. Math. 6, 9–20 (1924). In German
Knuth, D.E.: Two notes on notation. Am. Math. Monthly 99, 403–422 (1992)
Kolmogorov, A.N.: Über das Gesetz des iterierten Logarithmus. Math. Ann. 101, 126–135 (1929). In German
Kowalski, C.J.: Non-normal bivariate distributions with normal marginals. Am. Statistician 27, 103–106 (1973)
Laplace, P.S.: Mémoire sur la probabilité des causes par les évènemens. Mémoires de Mathématique et de Physique, Presentés à l’Académie Royale des Sciences, par divers Savans & lûs dans ses Assemblées 6, 621–656 (1774). Reprinted in Laplace’s Oeuvres complètes 8, 27–65. English translation: Stat. Sci. 1, 364–378 (1986)
Laplace, P.S.: Théorie analytique des probabilités. Courcier, Paris (1812)
Le Cam, L.: Maximum likelihood: An introduction. Int. Stat. Rev. 58, 153–171 (1990)
Lee, P.M.: Bayesian Statistics, 3rd edn. Hodder Arnold, London (2004)
Leemis, L.: Poisson to normal. College of William & Mary, Department of Mathematics, Williamsburg, VA (2012). URL: www.math.wm.edu/~leemis/chart/UDR/PDFs/PoissonNormal.pdf
Lévy, P.: Calcul de probabilités. Gauthier-Villars, Paris (1925). In French
Limpert, E., Stahel, W.A., Abbt, M.: Log-normal distributions across the sciences: Keys and clues. BioScience 51, 341–352 (2001)
Lindeberg, J.W.: Über das Exponentialgesetz in der Wahrscheinlichkeitsrechnung. Ann. Acad. Sci. Fenn. 16, 1–23 (1920). In German
Lindeberg, J.W.: Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung. Math. Z. 15, 211–225 (1922). In German
Lukacs, E.: Characteristic Functions. Hafner Publ. Co., New York (1970)
Lukacs, E.: A survey of the theory of characteristic functions. Adv. Appl. Probab. 4, 1–38 (1972)
Lyapunov, A.M.: Sur une proposition de la théorie des probabilités. Bull. Acad. Imp. Sci. St. Pétersbourg 13, 359–386 (1900)
Lyapunov, A.M.: Nouvelle forme du théorème sur la limite des probabilités. Mem. Acad. Imp. Sci. St. Pétersbourg, Classe Phys. Math. 12, 1–24 (1901)
Mallows, C.: Another comment on O’Cinneide. Am. Statistician 45, 257 (1991)
McAlister, D.: The law of the geometric mean. Proc. R. Soc. Lond. 29, 367–376 (1879)
McKean, Jr., H.P.: Stochastic Integrals. Wiley, New York (1969)
Melnick, E.L., Tenenbein, A.: Misspecifications of the normal distribution. Am. Statistician 36, 372–373 (1982)
Merkle, M.: Jensen’s inequality for medians. Stat. Probab. Lett. 71, 277–281 (2005)
Nolan, J.P.: Stable Distributions: Models for Heavy-Tailed Data. Birkhäuser, Boston (2013). Unfinished manuscript. Online at academic2.american.edu/~jpnolan
Norden, R.H.: A survey of maximum likelihood estimation I. Int. Stat. Rev. 40, 329–354 (1972)
Norden, R.H.: A survey of maximum likelihood estimation II. Int. Stat. Rev. 41, 39–58 (1973)
Park, S.Y., Bera, A.K.: Maximum entropy autoregressive conditional heteroskedasticity model. J. Econ. 150, 219–230 (2009)
Pearson, E.S., Wishart, J.: “Student’s” Collected Papers. Cambridge University Press, Cambridge (1942). Cambridge University Press for the Biometrika Trustees
Pearson, K.: The problem of the random walk. Nature 72, 294 (1905)
Pearson, K.: Notes on the history of correlation. Biometrika 13, 25–45 (1920)
Pearson, K., Filon, L.N.G.: Contributions to the mathematical theory of evolution. IV. On the probable errors of frequency constants and on the influence of random selection on variation and correlation. Philos. Trans. R. Soc. Lond. A 191, 229–311 (1898)
Pollard, H.: The representation of \(e^{-x^{\lambda } }\) as a Laplace integral. Bull. Am. Math. Soc. 52, 908–910 (1946)
Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T.: Numerical Recipes. The Art of Scientific Computing. Cambridge University Press, Cambridge (1986)
Price, R.: LII. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, M.A. and F.R.S. Philos. Trans. R. Soc. Lond. 53, 370–418 (1763)
Rao, C.R.: Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 37, 81–89 (1945)
Schilling, M.F., Watkins, A.E., Watkins, W.: Is human height bimodal? Am. Statistician 56, 223–229 (2002)
Seneta, E.: The central limit problem and linear least squares in pre-revolutionary Russia: The background. Math. Scientist 9, 37–77 (1984)
Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948)
Shannon, C.E., Weaver, W.: The Mathematical Theory of Communication. University of Illinois Press, Urbana (1949)
Stevens, J.W.: What is Bayesian Statistics? What is …? Hayward Medical Communications, a division of Hayward Group Ltd., London (2009)
Stigler, S.M.: Laplace’s 1774 memoir on inverse probability. Stat. Sci. 1, 359–378 (1986)
Stigler, S.M.: The epic story of maximum likelihood. Stat. Sci. 22, 598–620 (2007)
Stone, J.V.: Bayes’ Rule. A Tutorial to Bayesian Analysis. Sebtel Press, England (2013)
Stuart, A., Ord, J.K.: Kendall’s Advanced Theory of Statistics. Volume 1: Distribution Theory, 5th edn. Charles Griffin & Co., London (1987)
Stuart, A., Ord, J.K.: Kendall’s Advanced Theory of Statistics. Volume 2: Classical Inference and Relationship, 5th edn. Edward Arnold, London (1991)
Student: The probable error of a mean. Biometrika 6, 1–25 (1908)
Swamee, P.K.: Near lognormal distribution. J. Hydrol. Eng. 7, 441–444 (2007)
Volkenshtein, M.V.: Entropy and Information, Progress in Mathematical Physics, vol. 57. Birkhäuser Verlag, Basel, CH (2009). German version: W. Ebeling, Ed. Entropie und Information. Wissenschaftliche Taschenbücher, Band 306, Akademie-Verlag, Berlin (1990). Russian Edition: Nauka Publ., Moscow (1986)
Weber, N.A.: Dimorphism of the African oecophylla worker and an anomaly (hymenoptera formicidae). Ann. Entomol. Soc. Am. 39, 7–10 (1946)
Weisstein, E.W.: Fourier Transform. MathWorld - A Wolfram Web Resource. The Wolfram Centre, Long Hanborough, UK. http://www.Mathworld.wolfram.com/FourierTransform.html, retrieved July 17, 2015
Copyright information
© 2016 Springer International Publishing Switzerland
Schuster, P. (2016). Distributions, Moments, and Statistics. In: Stochasticity in Processes. Springer Series in Synergetics. Springer, Cham. https://doi.org/10.1007/978-3-319-39502-9_2
Print ISBN: 978-3-319-39500-5
Online ISBN: 978-3-319-39502-9