Abstract
In this chapter, data compression as it relates to multimedia information is studied from the point of view of lossless algorithms, in which the input data are exactly recoverable from the compressed data. Lossy algorithms, for which this is not the case, are presented in Chapter 8. Here we introduce the fundamentals of information theory and algorithms whose goal is a savings in bitrate given the entropy, especially Huffman Coding and its adaptive version. We then study Dictionary-based Coding (as in WinZip) and go on to a detailed discussion of Arithmetic Coding. Finally, Lossless Image Compression is examined specifically.
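The greedy tree construction behind Huffman coding, named above as a central topic, can be sketched in a few lines of Python. This is a standard textbook construction under illustrative names, not the chapter's exact presentation:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table from symbol frequencies (illustrative sketch)."""
    freq = Counter(text)
    # Heap entries are (weight, tiebreak, tree); a tree is either a symbol
    # or a (left, right) pair. The tiebreak keeps tuple comparison well-defined.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:  # degenerate single-symbol source still needs one code bit
        return {heap[0][2]: "0"}
    # Repeatedly merge the two least-frequent subtrees.
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    # Read codewords off the tree: left edge = "0", right edge = "1".
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

Because the two rarest symbols are merged first, frequent symbols end up near the root with short codewords, and the resulting code is prefix-free by construction.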
Notes
1. Since we have chosen 2 as the base for logarithms in the above definition, the unit of information is the bit, naturally also the most appropriate unit for the binary code representation used in digital computers. If the log base is 10, the unit is the hartley; if the base is \(e\), the unit is the nat.
2. An information source is independently distributed when the value of the current symbol does not depend on the values of previously appearing symbols.
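The units in Note 1 differ only in the base of the logarithm, which a short sketch makes concrete (function names here are illustrative, not from the chapter):

```python
import math

def information_content(p, base=2):
    """Self-information of an event with probability p, in units set by the
    log base: base 2 -> bits, base 10 -> hartleys, base e -> nats."""
    return math.log(1.0 / p, base)

def entropy(probs, base=2):
    """Shannon entropy of a source with the given symbol probabilities."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

# A fair coin carries exactly 1 bit of information per toss;
# the same quantity expressed in nats is ln 2.
print(entropy([0.5, 0.5]))               # 1.0 (bits)
print(entropy([0.5, 0.5], base=math.e))  # ~0.693 (nats)
```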
© 2014 Springer International Publishing Switzerland
Cite this chapter
Li, ZN., Drew, M.S., Liu, J. (2014). Lossless Compression Algorithms. In: Fundamentals of Multimedia. Texts in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-05290-8_7
Print ISBN: 978-3-319-05289-2
Online ISBN: 978-3-319-05290-8