Abstract
As we have seen in Section 2.5, information theory provides theoretical upper bounds on the information rates that can be attained over physical channels. While these bounds can be computed for a wide range of channels, the theory gives little indication of how they may be approached in practice. It is fortuitous that, at about the same time information theory was conceived, a theory of error-correcting and error-detecting codes emerged. By the systematic injection of redundant bits into the encoding of information, the reliability of transmission can be improved. Hopes that codes achieving the bounds could be easily obtained proved to be illusory; nevertheless, the theory has developed steadily, and codes have become indispensable components of many communications systems, yielding considerable performance gains. The remarkable series of pictures received from deep-space probes attests to the power of coding. A recent breakthrough, trellis coding, which is a direct outgrowth of the theory and which is covered in Section 5.8, has put the Shannon bound within reach for bandlimited channels.
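The central idea — systematically adding redundant bits so that transmission errors can be detected and corrected — can be illustrated with the classic (7,4) Hamming code. The sketch below is not taken from the chapter; it is a minimal illustration in which three parity bits protect four data bits, allowing any single-bit error in the seven-bit codeword to be located and corrected:

```python
# Illustrative (7,4) Hamming code (not the chapter's own construction).
# Bit positions 1..7; parity bits sit at positions 1, 2, 4.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit in c, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck each parity group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed error position, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1                     # simulate a channel error at position 5
assert hamming74_decode(codeword) == data
```

The syndrome computation reads the error position directly because each parity group covers exactly those positions whose binary index has the corresponding bit set — the design choice that makes Hamming codes so easy to decode.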
In writing this chapter, significant use was made of the reference texts [1]–[4], particularly the first of these. We also benefited from the insightful review articles [5]–[8]. In keeping with the focus of the text, we shall consider codes in a communications context; however, a significant application of the technique lies in data storage and recording. (See [9] for an example of the latter.)
References
J. G. Proakis, Digital Communications, McGraw-Hill, New York, 1983.
S. Lin and D. J. Costello, Error-Control Coding, Prentice-Hall, 1983.
W. W. Peterson and E. J. Weldon, Error-Correcting Codes, Second Edition, MIT Press, Cambridge, Mass., 1972.
G. C. Clark and J. B. Cain, Error-Correction Coding for Digital Communications, Plenum, 1981.
V. J. Bhargava, “Forward-Error Correction for Digital Communications,” IEEE Communications Society Magazine, Vol. 21, No. 1, pp. 11–19, January 1983.
S. Lin et al., “Automatic-Repeat-Request Error Control Schemes,” IEEE Communications Society Magazine, Vol. 22, No. 12, pp. 5–16, December 1984.
E. R. Berlekamp et al., “The Application of Error Control to Communications,” IEEE Communications Society Magazine, Vol. 25, No. 4, pp. 44–57, April 1987.
A. J. Viterbi, “Letter to the Editor,” IEEE Communications Society Magazine, Vol. 25, June 1987.
J. B. H. Peek, “Communications Aspects of the Compact Disc Digital Audio System,” IEEE Communications Society Magazine, Vol. 23, No. 2, pp. 7–15, February 1985.
D. Slepian, “A Class of Binary-Signaling Alphabets,” Bell System Technical Journal, Vol. 35, pp. 203–234, 1956.
N. J. A. Sloane, “The Packing of Spheres,” Scientific American, January 1984.
A. Gill, Linear Sequential Circuits, McGraw-Hill, 1967.
A. J. Viterbi, “Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm,” IEEE Transactions on Information Theory, Vol. IT-13, pp. 260–269, April 1967.
J. K. Omura, “Optimal Receiver Design for Convolutional Codes and Channels with Memory Via Control Theoretical Concepts,” Information Science, Vol. 3, pp. 243–266, July 1971.
R. E. Bellman, Dynamic Programming, Princeton University Press, 1957.
S. E. Dreyfus, Dynamic Programming and the Calculus of Variations, Academic Press, 1965.
J. F. Hayes, “The Viterbi Algorithm Applied to Digital-Data Transmission,” IEEE Communications Society Magazine, Vol. 13, No. 2, pp. 5–16, March 1975.
J. P. Odenwalder, “Optimal Decoding of Convolutional Codes,” Ph.D. dissertation, Department of Systems Science, University of California, Los Angeles, 1970.
K. J. Larsen, “Short Convolutional Codes with Maximal Free Distance for Rates 1/2, 1/3 and 1/4,” IEEE Transactions on Information Theory, Vol. IT-19, pp. 371–372, May 1973.
E. Paaske, “Short Binary Convolutional Codes with Maximal Free Distance for Rates 2/3 and 3/4,” IEEE Transactions on Information Theory, Vol. IT-20, pp. 683–689, September 1974.
D. G. Daut, J. W. Modestino, and L. D. Wismer, “New Short Constraint Length Convolutional Code Constructions for Selected Rational Rates,” IEEE Transactions on Information Theory, Vol. IT-28, pp. 793–799, September 1982.
R. M. Fano, “A Heuristic Discussion of Probabilistic Decoding,” IEEE Transactions on Information Theory, Vol. IT-9, pp. 64–74, April 1963.
K. Zigangirov, “Some Sequential-Decoding Procedures,” Probl. Peredachi Inf., Vol. 2, pp. 13–25, 1966.
F. Jelinek, “A Fast Sequential-Decoding Algorithm Using a Stack,” IBM Journal of Research and Development, Vol. 13, pp. 675–685, November 1969.
D. Haccoun, “A Branching Process Analysis of the Average Number of Computations in the Stack Algorithm,” IEEE Transactions on Information Theory, Vol. IT-30, No. 3, pp. 497–508, May 1984.
S. B. Weinstein, “In Galois Fields,” IEEE Transactions on Information Theory, Vol. 17, p. 220, March 1971.
© 1992 Springer Science+Business Media New York
Gitlin, R.D., Hayes, J.F., Weinstein, S.B. (1992). Error Correcting and Detecting Codes. In: Data Communications Principles. Applications of Communications Theory. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-3292-7_3
Print ISBN: 978-1-4613-6448-1
Online ISBN: 978-1-4615-3292-7