Abstract
The goal of this chapter is to prove an ergodic theorem for the sample entropy of finite-alphabet random processes. The result is sometimes called the ergodic theorem of information theory or the asymptotic equipartition property, but it is best known as the Shannon-McMillan-Breiman theorem. It provides a common foundation for many of the results of both ergodic theory and information theory. Shannon [129] first developed the result as convergence in probability for stationary ergodic Markov sources. McMillan [103] proved L1 convergence for stationary ergodic sources, and Breiman [19], [20] proved almost everywhere convergence for stationary ergodic sources. Billingsley [15] extended the result to stationary nonergodic sources. Jacobs [67], [66] extended it to processes dominated by a stationary measure and hence to two-sided AMS processes. Gray and Kieffer [54] extended it to processes asymptotically dominated by a stationary measure and hence to all AMS processes. The generalizations to AMS processes build on the Billingsley theorem for the stationary mean. Following generalizations of the definitions of entropy and information, corresponding generalizations of the entropy ergodic theorem are considered in Chapter 8.
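The theorem asserts that the per-symbol sample entropy -(1/n) log P(X_1, ..., X_n) converges to the entropy rate of the source. The following is a minimal numerical sketch of this convergence for the simplest special case, an i.i.d. Bernoulli(p) source; the function names are illustrative only, and the chapter itself treats far more general (stationary, ergodic, and AMS) sources.

```python
# Illustration of the Shannon-McMillan-Breiman theorem (asymptotic
# equipartition property) for an i.i.d. Bernoulli(p) source: the
# per-symbol sample entropy -(1/n) log2 P(x_1, ..., x_n) converges
# almost surely to the entropy rate H(p) as n grows.
import math
import random


def sample_entropy_rate(bits, p):
    """Per-symbol sample entropy -(1/n) log2 P(x_1..x_n) under Bernoulli(p)."""
    n = len(bits)
    ones = sum(bits)
    log_prob = ones * math.log2(p) + (n - ones) * math.log2(1 - p)
    return -log_prob / n


def binary_entropy(p):
    """Entropy rate H(p) of an i.i.d. Bernoulli(p) source, in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


if __name__ == "__main__":
    random.seed(0)
    p = 0.3
    n = 100_000
    bits = [1 if random.random() < p else 0 for _ in range(n)]
    # The two printed values should be close for large n.
    print(sample_entropy_rate(bits, p))
    print(binary_entropy(p))
```

For a fair coin (p = 1/2) every length-n string has probability 2^(-n), so the sample entropy equals H(1/2) = 1 bit exactly for every realization; for biased sources the convergence is only asymptotic.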
© 1990 Springer Science+Business Media New York
Gray, R.M. (1990). The Entropy Ergodic Theorem. In: Entropy and Information Theory. Springer, New York, NY. https://doi.org/10.1007/978-1-4757-3982-4_3
Print ISBN: 978-1-4757-3984-8
Online ISBN: 978-1-4757-3982-4