Abstract
In Chapter 1 we said that a measurement is determined in part by a “signal” of interest, and in part by unknown factors we may call “noise.” Statistical models introduce probability distributions to describe the variation due to noise, and thereby achieve quantitative expressions of knowledge about the signal—a process we will describe more fully in Chapters 7 and 10.
Notes
1. Additional comments on this method, and its use in the analysis of synaptic plasticity, may be found in Faber and Korn (1991).
2. The derivation of the Poisson distribution as an approximation to the binomial is credited to Siméon D. Poisson, having appeared in his book published in 1837. Bortkiewicz (1898, The Law of Small Numbers) emphasized the importance of the Poisson distribution as a model of rare events.
3. Rutherford et al. (1920, p. 172); cited in Feller (1968).
4. He actually found the “probable error,” which is \(.6745\sigma \), to be 48.4 s. See Stigler (1986) for a discussion of these data.
5. Actually, different authors give somewhat different advice. The acceptability of this or any other approximation must depend on the particular use to which it will be put. For computing the probability that a Poisson random variable will fall within 1 standard deviation of its mean, the normal approximation has an error of less than 10% when \(\lambda = 15\). However, it will not be suitable for calculations that go far out into the tails, or that require several digits of accuracy. In addition, a computational fine point is mentioned in many books. Suppose we wish to approximate a discrete cdf \(F(x)\) by a normal, say \(\tilde{F}(x)\). Then the value \(\tilde{F}(x+.5)\) is generally closer to \(F(x)\) than is \(\tilde{F}(x)\). This is sometimes called a continuity correction.
6. Another reason the exponential distribution is special is that among all distributions on \((0,\infty )\) with mean \(\mu =1/\lambda \), the \({\textit{Exp}}(\lambda )\) distribution has the maximum entropy. See Eq. (4.33).
7. The memoryless property can also be stated analogously for discrete distributions; in the discrete case only the geometric distributions are memoryless.
8. It may be shown that \(\hat{\rho }_{XY|U}\) is equal to the correlation between the pair of residual vectors found from the multiple regressions (see Chapter 12) of \(x\) on \(u\) and \(y\) on \(u\).
9. In fact, \(\hat{\rho }_{XY|U}\) is the maximum likelihood estimate; maximum likelihood estimation is discussed in Chapter 7.
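The continuity correction in note 5 can be checked numerically. The sketch below, using only the standard library, compares the exact Poisson cdf at \(\lambda = 15\) against the plain and corrected normal approximations; the values \(\lambda = 15\) and \(x = 18\) are illustrative choices, not from the text.

```python
import math

def poisson_cdf(x, lam):
    """Exact Poisson cdf F(x) = P(X <= x) by direct summation of pmf terms."""
    term = math.exp(-lam)  # P(X = 0)
    total = term
    for k in range(1, x + 1):
        term *= lam / k    # recurrence: P(X = k) = P(X = k-1) * lam / k
        total += term
    return total

def normal_cdf(x, mu, sigma):
    """Normal cdf computed via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

lam, x = 15, 18                      # illustrative values
F = poisson_cdf(x, lam)
approx_plain = normal_cdf(x, lam, math.sqrt(lam))
approx_corrected = normal_cdf(x + 0.5, lam, math.sqrt(lam))  # continuity correction

# The corrected approximation lies closer to the exact discrete cdf.
assert abs(approx_corrected - F) < abs(approx_plain - F)
```

The correction matters because the discrete cdf jumps at integers while the normal cdf is continuous; evaluating at \(x + 0.5\) splits the difference across the jump.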
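The memoryless property mentioned in note 7 is easy to verify directly from the survival functions. A minimal sketch, with rate and time values chosen only for illustration:

```python
import math

def exp_survival(x, lam):
    """P(T > x) for T ~ Exp(lam)."""
    return math.exp(-lam * x)

lam, s, t = 2.0, 0.7, 1.3  # illustrative rate and times

# Continuous case: P(T > s + t | T > s) = P(T > t)
lhs = exp_survival(s + t, lam) / exp_survival(s, lam)
rhs = exp_survival(t, lam)
assert abs(lhs - rhs) < 1e-12

# Discrete analogue: for N ~ Geometric(p) on {1, 2, ...}, P(N > n) = (1 - p)**n,
# and the same conditional identity holds.
p, m, n = 0.3, 4, 6
lhs_g = (1 - p) ** (m + n) / (1 - p) ** m
rhs_g = (1 - p) ** n
assert abs(lhs_g - rhs_g) < 1e-12
```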
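The identity in note 8 can likewise be checked on a small data set. This sketch handles the single-conditioning-variable case with ordinary least-squares regressions (with intercept); the data values are arbitrary illustrative numbers, not from the text.

```python
import math

def mean(v):
    return sum(v) / len(v)

def corr(a, b):
    """Sample (Pearson) correlation."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def residuals(y, u):
    """Residuals from least-squares regression of y on u (with intercept)."""
    mu, my = mean(u), mean(y)
    beta = sum((a - mu) * (b - my) for a, b in zip(u, y)) / sum((a - mu) ** 2 for a in u)
    alpha = my - beta * mu
    return [b - (alpha + beta * a) for a, b in zip(u, y)]

u = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]          # illustrative data
x = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]
y = [1.0, 2.3, 2.8, 4.1, 5.2, 5.8]

# Partial correlation from the standard formula...
rxy, rxu, ryu = corr(x, y), corr(x, u), corr(y, u)
partial = (rxy - rxu * ryu) / math.sqrt((1 - rxu ** 2) * (1 - ryu ** 2))

# ...equals the correlation between the two residual vectors.
assert abs(partial - corr(residuals(x, u), residuals(y, u))) < 1e-10
```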
© 2014 Springer Science+Business Media New York
Kass, R.E., Eden, U.T., Brown, E.N. (2014). Important Probability Distributions. In: Analysis of Neural Data. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9602-1_5
Print ISBN: 978-1-4614-9601-4
Online ISBN: 978-1-4614-9602-1