Universal Source Coding

  • Lee D. Davisson
Part of the International Centre for Mechanical Sciences book series (CISM, volume 166)


The basic purpose of data compression is to process a data stream so as to reduce the average bit rate required for transmission or storage, by removing unwanted redundancy and/or unnecessary precision. A mathematical formulation of data compression, providing figures of merit and bounds on optimal performance, was developed by Shannon [1,2], both for the case where a perfect compressed reproduction is required and for the case where a specified average distortion is allowable. Unfortunately, Shannon's probabilistic approach requires precise advance knowledge of the statistical description of the process to be compressed, a demand rarely met in practice. Moreover, the coding theorems apply, or are even meaningful, only when the source is stationary and ergodic.
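Shannon's noiseless bound can be made concrete with a small sketch: for a memoryless source, the entropy of the symbol distribution lower-bounds the average rate (in bits per symbol) of any lossless code. The following is a minimal illustration, not part of the original text; the function name and example stream are chosen for exposition only.

```python
from collections import Counter
from math import log2

def empirical_entropy(data):
    """Empirical entropy in bits per symbol: for an i.i.d. source with
    this symbol distribution, Shannon's lower bound on the average rate
    of any lossless (perfect-reproduction) code."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A highly redundant stream needs far fewer than 8 bits per byte:
stream = b"aaaaabbbcc"
print(round(empirical_entropy(stream), 3))  # -> 1.485
```

When the true distribution is unknown, this bound cannot be computed in advance, which is exactly the gap that universal source coding addresses.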

Here we present a tutorial description of numerous recent approaches and results generalizing the Shannon approach to unknown statistical environments. Simple examples and empirical results are given to illustrate the essential ideas.
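One of the simplest universal schemes in this spirit is a two-part code for binary blocks: first transmit the block's weight (its number of ones), then the index of the block within its weight class. No source statistics are assumed, yet the per-symbol rate approaches the binary entropy as the block length grows. The sketch below counts the bits such a code would use; it is an illustrative assumption of ours, not a construction taken from the text.

```python
from math import ceil, comb, log2

def universal_rate(block):
    """Bits used by a two-part universal code for a binary block:
    ceil(log2(n+1)) bits name the weight class, then ceil(log2 C(n, w))
    bits index the block within that class. No probabilities needed."""
    n, w = len(block), sum(block)
    weight_bits = ceil(log2(n + 1))               # which weight class
    index_bits = ceil(log2(comb(n, w)))           # position in the class
    return weight_bits + index_bits

block = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]      # n = 12, w = 3
print(universal_rate(block), "bits for", len(block), "symbols")  # 12 bits
```

The weight-class overhead is only O(log n) bits, so it is negligible per symbol for long blocks; this is the flavor of the universal noiseless coding results surveyed in [6].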


Keywords: Stationary source · Average mutual information · Source block · Message block · Average distortion
(These keywords were added by machine and not by the authors; the process is experimental and the keywords may be updated as the learning algorithm improves.)



References

  1. SHANNON, C.E., "The Mathematical Theory of Communication", University of Illinois Press, Urbana, Illinois, 1949.
  2. SHANNON, C.E., "Coding Theorems for a Discrete Source with a Fidelity Criterion", IRE Nat. Conv. Rec., pt. 4, pp. 142–163, 1959.
  3. ROZANOV, YU., "Stationary Random Processes", Holden-Day, San Francisco, 1967.
  4. GRAY, R.M., and DAVISSON, L.D., "Source Coding Theorems without the Ergodic Assumption", IEEE Trans. Inform. Theory, July 1974.
  5. GRAY, R.M., and DAVISSON, L.D., "The Ergodic Decomposition of Discrete Stationary Sources", IEEE Trans. Inform. Theory, September 1974.
  6. DAVISSON, L.D., "Universal Noiseless Coding", IEEE Trans. Inform. Theory, Vol. IT-19, pp. 783–795, November 1973.
  7. GRAY, R.M., NEUHOFF, D., and SHIELDS, P., "A Generalization of Ornstein's d̄ Metric with Applications to Information Theory", Annals of Probability (to be published).
  8. NEUHOFF, D., GRAY, R.M., and DAVISSON, L.D., "Fixed Rate Universal Source Coding with a Fidelity Criterion", submitted to IEEE Trans. Inform. Theory.
  9. PURSLEY, M.B., "Coding Theorems for Non-Ergodic Sources and Sources with Unknown Parameters", USC Technical Report, February 1974.
  10. ZIV, J., "Coding of Sources with Unknown Statistics, Part I: Probability of Encoding Error; Part II: Distortion Relative to a Fidelity Criterion", IEEE Trans. Inform. Theory, Vol. IT-18, No. 4, July 1972, pp. 460–473.
  11. BLAHUT, R.E., "Computation of Channel Capacity and Rate-Distortion Functions", IEEE Trans. Inform. Theory, Vol. IT-18, No. 4, July 1972, pp. 460–473.
  12. GALLAGER, R.G., "Information Theory and Reliable Communication", Wiley, New York, 1968, ch. 9.
  13. BERGER, T., "Rate Distortion Theory: A Mathematical Basis for Data Compression", Prentice-Hall, Englewood Cliffs, New Jersey, 1971.
  14. NEUHOFF, D., Ph.D. Research, Stanford University, 1973.
  15. GRAY, R.M., and DAVISSON, L.D., "A Mathematical Theory of Data Compression (?)", USCEE Report, September 1974.

Copyright information

© Springer-Verlag Wien 1975

Authors and Affiliations

  • Lee D. Davisson
  1. Department of Electrical Engineering, University of Southern California, Los Angeles, USA
