Abstract
We saw in Chapter 1 how to encode information so that decoding is unique or instantaneous. In either case the basic requirement, given by Kraft’s or McMillan’s inequality, is that we should use sufficiently long code-words. This raises the question of efficiency: if the code-words are too long, then storage may be difficult and transmission may be slow. We therefore need to strike a balance between using words which are long enough to allow effective decoding, and short enough for economy. From this point of view, the best codes available are those called optimal codes, the instantaneous codes with least average word-length. We will prove that they exist, and we will examine Huffman’s algorithm for constructing them. For simplicity, we will concentrate mainly on the binary case (r = 2), though we will briefly outline how these ideas extend to non-binary codes.
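As a taste of the chapter's central algorithm, here is a minimal sketch of binary Huffman coding: repeatedly merge the two least probable symbols, then read each codeword off the resulting merge tree. The symbol names and probabilities below are illustrative, not taken from the text.

```python
import heapq
from itertools import count

def huffman(probs):
    """Binary Huffman coding: repeatedly merge the two least probable
    symbols into one node, then label the tree edges with 0 and 1."""
    tiebreak = count()  # avoids comparing tree tuples when probabilities tie
    heap = [(p, next(tiebreak), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))
    _, _, tree = heap[0]
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse on both children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: record the codeword
            codes[node] = prefix or "0"      # edge case: a one-symbol source
    walk(tree, "")
    return codes

# Illustrative source with four symbols
probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
codes = huffman(probs)
avg_len = sum(probs[s] * len(codes[s]) for s in probs)   # average word-length
kraft = sum(2 ** -len(w) for w in codes.values())        # Kraft sum, at most 1
```

For this source the algorithm yields word-lengths 1, 2, 3, 3, giving average word-length 1.9, and the Kraft sum equals 1, so the code is instantaneous and no shorter prefix-free code exists for these probabilities.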
Copyright information
© 2000 Springer-Verlag London
Cite this chapter
Jones, G.A., Mary Jones, J. (2000). Optimal Codes. In: Information and Coding Theory. Springer Undergraduate Mathematics Series. Springer, London. https://doi.org/10.1007/978-1-4471-0361-5_2
Publisher Name: Springer, London
Print ISBN: 978-1-85233-622-6
Online ISBN: 978-1-4471-0361-5
eBook Packages: Springer Book Archive