The Huffman algorithm is based on the probabilities of the individual data symbols. These probabilities become a statistical model of the data, so the compression produced by this method depends on how good that model is. Dictionary-based compression methods are different. They do not use a statistical model of the data, nor do they employ variable-length codes. Instead, they select strings of symbols from the input and employ a dictionary to encode each string as a token. The dictionary holds strings of symbols, and it may be static or dynamic (adaptive). A static dictionary is permanent; it may allow strings to be added, but none to be deleted. A dynamic dictionary holds strings previously found in the input, and strings can be added to it and deleted from it as new input is read.
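The adaptive case can be illustrated with a minimal sketch in the style of LZW, a well-known dictionary method: the dictionary starts with all single symbols, the encoder repeatedly finds the longest dictionary match at the current position, emits its token, and adds the match extended by one symbol. This is an illustrative sketch, not code from this chapter, and it omits practical concerns such as limiting dictionary growth.

```python
def lzw_encode(data: str) -> list[int]:
    """Encode a string as a list of integer tokens, LZW-style."""
    # The dictionary starts with every single-byte symbol.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""          # longest match found so far
    output = []
    for symbol in data:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate                 # keep extending the match
        else:
            output.append(dictionary[current])  # emit token for longest match
            dictionary[candidate] = next_code   # adaptive step: add new string
            next_code += 1
            current = symbol
    if current:
        output.append(dictionary[current])
    return output


def lzw_decode(codes: list[int]) -> str:
    """Rebuild the original string; the decoder grows the same dictionary."""
    dictionary = {i: chr(i) for i in range(256)}
    next_code = 256
    prev = dictionary[codes[0]]
    pieces = [prev]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:
            entry = prev + prev[0]  # the one code the decoder hasn't seen yet
        pieces.append(entry)
        dictionary[next_code] = prev + entry[0]
        next_code += 1
        prev = entry
    return "".join(pieces)
```

On repetitive input the token stream is shorter than the input: `lzw_encode("abababab")` emits 5 tokens for 8 symbols, and decoding the tokens recovers the original string exactly.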
© 2008 Springer-Verlag London Limited
(2008). Dictionary Methods. In: A Concise Introduction to Data Compression. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-84800-072-8_3
Publisher Name: Springer, London
Print ISBN: 978-1-84800-071-1
Online ISBN: 978-1-84800-072-8
eBook Packages: Computer Science (R0)