Universal Coding of Non-Prefix Context Tree Sources

  • Yuri M. Shtarkov


The efficiency of data compression by universal coding depends on the model, or set of models, of the source that is used. By expanding the set of models and/or increasing their complexity, we can better approximate the statistical properties of messages. However, this entails higher redundancy and (usually) higher coding complexity. For this reason, the development of comparatively simple models that improve the statistical description of messages is of great importance, and this problem has attracted much attention.
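The tradeoff described above can be illustrated with a small sketch (not from the paper): sequential Krichevsky–Trofimov (KT) estimation, the per-context estimator used in the context-tree weighting literature. The function name, padding convention, and test sequence are illustrative choices, not the author's construction. A richer model class (here, order-1 contexts instead of a memoryless model) pays extra parameter redundancy per context, but can dramatically shorten the code length when the source really has that structure.

```python
import math

def kt_codelength(bits, order=0):
    # Sequential Krichevsky-Trofimov (KT) code length in bits, with one
    # KT estimator per length-`order` context (order=0 is memoryless).
    counts = {}  # context -> [zeros seen, ones seen]
    total = 0.0
    for t, b in enumerate(bits):
        ctx = tuple(bits[max(0, t - order):t])
        if len(ctx) < order:
            # Pad the first few contexts with zeros (an arbitrary choice).
            ctx = (0,) * (order - len(ctx)) + ctx
        c = counts.setdefault(ctx, [0, 0])
        p = (c[b] + 0.5) / (c[0] + c[1] + 1.0)  # KT sequential estimate
        total -= math.log2(p)
        c[b] += 1
    return total

# An alternating sequence: an order-1 context model captures it almost
# perfectly, while a memoryless model sees only a fair coin.
seq = [0, 1] * 64
print(kt_codelength(seq, order=0))  # about n = 128 bits plus KT redundancy
print(kt_codelength(seq, order=1))  # far shorter: mostly per-context model cost
```

With more contexts than the source needs, the same per-context redundancy would instead be pure overhead, which is exactly why choosing the model set carefully matters.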


Keywords: Conditional Probability, Markov Chain Model, Minimal Description Length, Sequential Estimation, Conditional Probability Distribution




Copyright information

© Springer Science+Business Media New York 2000

Authors and Affiliations

  • Yuri M. Shtarkov
    1. Institute for Problems of Information Transmission, RAS, Moscow, Russia
