The goal of this chapter is to prove an ergodic theorem for the sample entropy of finite-alphabet random processes. The result is sometimes called the ergodic theorem of information theory or the asymptotic equipartition theorem, but it is best known as the Shannon-McMillan-Breiman theorem. It provides a common foundation for many of the results of both ergodic theory and information theory. Shannon first developed the result as convergence in probability for stationary ergodic Markov sources. McMillan proved L¹ convergence for stationary ergodic sources, and Breiman proved almost-everywhere convergence for stationary ergodic sources. Billingsley extended the result to stationary nonergodic sources. Jacobs extended it to processes dominated by a stationary measure and hence to two-sided AMS processes. Gray and Kieffer extended it to processes asymptotically dominated by a stationary measure and hence to all AMS processes. The generalizations to AMS processes build on the Billingsley theorem for the stationary mean. Following generalizations of the definitions of entropy and information, corresponding generalizations of the entropy ergodic theorem will be considered in Chapter 8.
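For orientation, the theorem in its classical form can be sketched as follows; the notation (the process distribution μ and the entropy rate H̄(μ)) is assumed here for illustration and is fixed precisely later in the chapter. For a stationary ergodic source {X_n} with finite alphabet and process distribution μ,

\[
-\frac{1}{n}\,\log \mu\bigl(X_0, X_1, \ldots, X_{n-1}\bigr)
\;\xrightarrow[n\to\infty]{}\; \bar{H}(\mu)
\quad \text{a.e. and in } L^1 ,
\]

where H̄(μ) denotes the entropy rate of the source. The extensions surveyed above weaken the hypotheses on the source (from stationary ergodic to AMS) while preserving this convergence of sample entropy.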
Keywords: Sample Entropy · Error Index · Entropy Rate · Finite Alphabet · Markov Approximation