Gaussian Signals, Correlation Matrices, and Sample Path Properties
In general, determining the shape of the sample paths of a random signal X(t) requires knowledge of the n-point probabilities P(a_1 < X(t_1) < b_1, ..., a_n < X(t_n) < b_n) for arbitrary n and arbitrary windows a_1 < b_1, ..., a_n < b_n. Usually this information cannot be recovered if the only known characteristic of the signal is the autocorrelation function: the latter depends on the two-point distributions but does not determine them uniquely. In the case of Gaussian signals, however, the autocorrelations determine not only the two-point probability distributions but all of the n-point probability distributions, so that complete information is available within the second-order theory. In practice this means that estimating the means and covariances suffices to specify the model. Moreover, in the Gaussian universe, weak stationarity implies strict stationarity as defined in Chapter 4. For the sake of simplicity, all signals in this chapter are assumed to be real-valued. The chapter ends with a more subtle analysis of sample path properties of stationary signals, such as continuity and differentiability; in the Gaussian case the available information is particularly complete.
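The fact that second-order data is complete information in the Gaussian case can be illustrated with a short simulation. The sketch below is not from the text: it assumes a zero-mean stationary Gaussian signal with the illustrative autocorrelation gamma(tau) = exp(-|tau|), builds the correlation matrix over a grid of sampling instants, draws sample paths from it, and checks that the empirical covariance recovers gamma.

```python
import numpy as np

# A minimal sketch (not from the text): for a zero-mean stationary Gaussian
# signal, the autocorrelation function alone determines every n-point
# distribution, so sample paths can be simulated directly from it.
# gamma(tau) = exp(-|tau|) is an assumed autocorrelation, chosen only
# for illustration.
rng = np.random.default_rng(0)

def gamma(tau):
    return np.exp(-np.abs(tau))

t = np.linspace(0.0, 5.0, 50)        # sampling instants t_1, ..., t_n
C = gamma(t[:, None] - t[None, :])   # correlation matrix C_ij = gamma(t_i - t_j)

# Each row is one sample path of the signal evaluated at the instants t.
paths = rng.multivariate_normal(mean=np.zeros(t.size), cov=C, size=2000)

# The empirical covariance recovers gamma(t_i - t_j): in the Gaussian case
# the second-order characteristics carry complete information.
emp = paths.T @ paths / paths.shape[0]
print(np.abs(emp - C).max())  # small; shrinks as the number of paths grows
```

The same construction works for any valid (positive-semidefinite) autocorrelation function; only the function `gamma` needs to change.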
Keywords: Autocorrelation Function, Sample Path, Correlation Matrices, Random Quantity, Random Signal
- 33. See, e.g., M. Denker and W. A. Woyczyński’s book mentioned in previous chapters.
- 36. This argument relies on the so-called Cauchy criterion of convergence for random quantities with finite variance: a sequence X_n converges in the mean square as n → ∞, that is, there exists a random quantity X such that lim_{n→∞} E(X_n − X)^2 = 0, if and only if lim_{m,n→∞} E(X_n − X_m)^2 = 0. This criterion permits verification of the convergence without knowing what the limit is; see, e.g., Theorem 11.4.2 in W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, 1976.
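The Cauchy criterion in note 36 can be checked numerically on a concrete sequence. The example below is a hypothetical illustration, not from the text: with independent standard normal Z_k, the partial sums X_n = Z_1/1 + ... + Z_n/n satisfy, by independence, E(X_n − X_m)^2 = sum_{k=m+1}^{n} 1/k^2 for m < n; this tends to 0 as m, n → ∞, so the sequence converges in mean square even though the limit has no closed form.

```python
import numpy as np

# Hypothetical illustration of the mean-square Cauchy criterion:
# X_n = Z_1/1 + Z_2/2 + ... + Z_n/n with i.i.d. standard normal Z_k.
# For m < n, independence gives E(X_n - X_m)^2 = sum_{k=m+1}^{n} 1/k^2,
# which vanishes as m, n -> infinity, so X_n converges in mean square
# without our ever exhibiting the limit X explicitly.

def ms_distance(m, n):
    # exact value of E(X_n - X_m)^2 for m < n
    k = np.arange(m + 1, n + 1)
    return float(np.sum(1.0 / k**2))

rng = np.random.default_rng(1)
m, n = 50, 100
Z = rng.standard_normal((100_000, n))
X = np.cumsum(Z / np.arange(1, n + 1), axis=1)          # X[:, k-1] holds X_k
mc = float(np.mean((X[:, n - 1] - X[:, m - 1]) ** 2))   # Monte Carlo estimate

print(ms_distance(m, n), mc)  # exact and simulated values agree closely
```

Evaluating `ms_distance` along m = n/2 shows the mean-square increments shrinking like 1/n, exactly the behavior the criterion requires.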