Abstract
This chapter, which summarizes background material, assumes that the reader has some familiarity with linear algebra and basic probability. The basic model of information theory and of error-correcting block codes is introduced, and the Hamming [7,4,3] code is presented in detail as a running example.
Ironically, coding theory has interesting open questions even among these basic background topics. For example, for a given length and dimension, which code is the best 2-error-correcting code? For another, see Manin's Theorem 19 and the closely related Conjecture 22 below.
Notes
1. From the publication point of view, Hamming published only the binary [7,4,3] code, and Golay published the other binary and nonbinary Hamming codes. However, it has been shown that Hamming knew all the binary codes prior to Shannon's publication and had circulated them in an interdepartmental memorandum several months before the submission date of Golay's one-page paper [Tho].
2. Here "GF" stands for Galois field, named after the French mathematician Évariste Galois, who died after a duel at the age of 20. See http://en.wikipedia.org/wiki/Evariste_Galois for more details on his life.
3. In other words, add the entries placed in each circle mod 2. If this sum is \(\equiv1\pmod{2}\), then we say that the circle fails the parity check; otherwise, it passes. See Example 5.
4. The appendix Sect. 7.2 gives further details on finite fields.
5. "Short exact" means (a) the arrow G is injective, i.e., G is a full-rank k×n matrix, (b) the arrow H is surjective, and (c) image(G) = kernel(H).
6. If e > 1 is allowed to vary with n, then more can be said, but we omit that case.
7. See, however, Chap. 4, where some interesting but conjectural results use quadratic residue code-like constructions to find related codes which might have very good parameters.
8. We follow [MS], Sects. 16.4–16.5.
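The parity checks of note 3 and the exactness condition of note 5 can be verified concretely for the Hamming [7,4,3] code. The sketch below uses one standard systematic choice of generator matrix G and parity-check matrix H (the chapter's own matrices may differ by a column permutation), and the helper names `encode` and `syndrome` are illustrative, not taken from the text.

```python
from itertools import product

# One standard systematic choice of G (4x7) and H (3x7) for the
# Hamming [7,4,3] code over GF(2); an illustrative assumption, since
# equivalent codes differ by column permutations.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def encode(msg):
    """Encode a 4-bit message: c = msg * G over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def syndrome(word):
    """Each entry is one parity ('circle') check, computed mod 2."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

# Exactness (note 5): every codeword, i.e., the image of G,
# lies in the kernel of H, so every parity check passes.
codewords = [encode(list(m)) for m in product([0, 1], repeat=4)]
assert all(syndrome(c) == [0, 0, 0] for c in codewords)

# Minimum distance: smallest Hamming weight of a nonzero codeword.
d = min(sum(c) for c in codewords if any(c))
print(d)  # 3, so the code corrects any single bit error
```

Flipping any single bit of a codeword produces a nonzero syndrome, which is how the decoder detects (and locates) the error.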
References
Aftab, Cheung, Kim, Thakkar, Yeddanapudi: Information theory. Student term project in a course at MIT http://web.mit.edu/6.933/www/. Preprint (2001). Available: http://web.mit.edu/6.933/www/Fall2001/Shannon2.pdf
Ash, R.: Information Theory. Dover, New York (1965)
Ball, S.: On large subsets of a finite vector space in which every subset of basis size is a basis. Preprint, Dec. (2010). http://www-ma4.upc.es/~simeon/jems-mds-conj-revised.pdf
Barg, A.: Coding theory webpage. http://www.ece.umd.edu/~abarg/
Berrou, C., Glavieux, A., Thitimajshima, P.: Near Shannon limit error-correcting coding and decoding: Turbo-codes. In: Proc. IEEE International Conference on Communications (ICC '93), Geneva, vol. 2, pp. 1064–1070 (1993). Available: http://www-classes.usc.edu/engr/ee-s/568/org/Original.pdf
Bierbrauer, J.: Introduction to Coding Theory. Chapman & Hall/CRC, New York (2005)
Conway, F., Siegelman, J.: Dark Hero of the Information Age. MIT Press, New York (2005)
de Launey, W., Gordon, D.: A remark on Plotkin’s bound. IEEE Trans. Inf. Theory 47, 352–355 (2001). Available: http://www.ccrwest.org/gordon/plotkin.pdf
Gaborit, P., Zemor, G.: Asymptotic improvement of the Gilbert–Varshamov bound for linear codes. IEEE Trans. Inf. Theory 54(9), 3865–3872 (2008). Available: http://arxiv.org/abs/0708.4164
Hill, R.: A First Course in Coding Theory. Oxford Univ Press, Oxford (1986)
Hirschfeld, J.W.P.: The main conjecture for MDS codes. In: Cryptography and Coding. Lecture Notes in Computer Science, vol. 1025. Springer, Berlin (1995). Available: http://www.maths.sussex.ac.uk/Staff/JWPH/RESEARCH/research.html
Huffman, W.C., Pless, V.: Fundamentals of Error-Correcting Codes. Cambridge Univ. Press, Cambridge (2003)
Jiang, T., Vardy, A.: Asymptotic improvement of the Gilbert–Varshamov bound on the size of binary codes. IEEE Trans. Inf. Theory 50, 1655–1664 (2004)
Joyner, D., Kreminski, R., Turisco, J.: Applied Abstract Algebra. Johns Hopkins Univ. Press, Baltimore (2004)
MacWilliams, F., Sloane, N.: The Theory of Error-Correcting Codes. North-Holland, Amsterdam (1977)
MacKay, D.J.C., Neal, R.M.: Near Shannon limit performance of low density parity check codes. Electronics Letters, July (1996). Available: http://www.inference.phy.cam.ac.uk/mackay/abstracts/mncEL.html
Niven, I.: Coding theory applied to a problem of Ulam. Math. Mag. 61, 275–281 (1988)
The SAGE Group: SAGE: Mathematical software, Version 4.6. http://www.sagemath.org/
Shokranian, S., Shokrollahi, M.A.: Coding Theory and Bilinear Complexity. Scientific Series of the International Bureau, vol. 21. KFA Jülich (1994)
Thompson, T.: From Error-Correcting Codes Through Sphere Packings to Simple Groups. Cambridge Univ. Press, Cambridge (2004)
van Lint, J.: Introduction to Coding Theory, 3rd edn. Springer, Berlin (1999)
Viterbi, A.J., Viterbi, A.M., Sindhushayana, N.T.: Interleaved concatenated codes: New perspectives on approaching the Shannon limit. Proc. Natl. Acad. Sci. USA 94, 9525–9531 (1997)
Voloch, F.: Computing the minimum distance of cyclic codes. Preprint. Available: http://www.ma.utexas.edu/users/voloch/preprint.html
Ward, H.N.: Quadratic residue codes and symplectic groups. J. Algebra 29, 150–171 (1974)
© 2011 Springer Science+Business Media, LLC
Joyner, D., Kim, JL. (2011). Background on Information Theory and Coding Theory. In: Selected Unsolved Problems in Coding Theory. Applied and Numerical Harmonic Analysis. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-0-8176-8256-9_1
Print ISBN: 978-0-8176-8255-2
Online ISBN: 978-0-8176-8256-9