
Good codes based on very sparse matrices

  • David J. C. MacKay
  • Radford M. Neal
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1025)

Abstract

We present a new family of error-correcting codes for the binary symmetric channel. These codes are designed to encode a sparse source, and are defined in terms of very sparse invertible matrices, in such a way that the decoder can treat the signal and the noise symmetrically. The decoding problem involves only very sparse matrices and sparse vectors, and so is a promising candidate for practical decoding.
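To make the "very sparse matrix" idea concrete, the sketch below constructs a random square binary matrix with a fixed small number of 1s per column and checks its invertibility over GF(2) by row reduction. The column weight t, the function names, and the construction routine are illustrative assumptions, not the paper's own definition of the code family.

import numpy as np

def sparse_binary_matrix(n, t, rng):
    """Return an n x n binary matrix with exactly t ones in each column."""
    A = np.zeros((n, n), dtype=np.uint8)
    for col in range(n):
        rows = rng.choice(n, size=t, replace=False)  # t distinct row positions
        A[rows, col] = 1
    return A

def invertible_gf2(A):
    """Test invertibility over GF(2) by Gaussian elimination (row reduction)."""
    M = (A.copy() % 2).astype(np.uint8)
    n = M.shape[0]
    for col in range(n):
        pivots = np.nonzero(M[col:, col])[0]
        if pivots.size == 0:
            return False                     # rank deficient: no pivot in this column
        pivot = col + pivots[0]
        M[[col, pivot]] = M[[pivot, col]]    # move the pivot row into place
        below = np.nonzero(M[col + 1:, col])[0] + col + 1
        M[below] ^= M[col]                   # clear the 1s below the pivot (mod-2 addition)
    return True

rng = np.random.default_rng(seed=0)
A = sparse_binary_matrix(96, t=3, rng=rng)   # "very sparse": 3 ones per column of 96
print(A.sum(axis=0).max(), invertible_gf2(A))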

It can be proved that these codes are ‘very good’, in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit.
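For reference, the Shannon limit referred to here is the capacity of the binary symmetric channel with crossover probability f:

\[
  C(f) \;=\; 1 - H_2(f),
  \qquad
  H_2(f) \;=\; -f\log_2 f \;-\; (1-f)\log_2(1-f).
\]

"Very good" is used in the usual sense: for any rate R < C(f) there exist codes in the family whose probability of decoding error, under optimal decoding, can be made arbitrarily small as the block length grows.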

We give experimental results using a free energy minimization algorithm and a belief propagation algorithm for decoding, demonstrating practical performance superior to that of both Bose-Chaudhuri-Hocquenghem codes and Reed-Muller codes over a wide range of noise levels.
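As an illustration of the second kind of decoder mentioned (belief propagation only, and not the authors' implementation), the following is a minimal sum-product decoder for a code defined by a sparse parity-check matrix H, used on a binary symmetric channel with crossover probability f. The toy matrix, the clipping constant, and the stopping rule are assumptions made for the sketch.

import numpy as np

def bp_decode(H, r, f, max_iters=50):
    """Sum-product decoding of received hard bits r from a BSC with flip probability f."""
    m, n = H.shape
    mask = H.astype(bool)
    L_ch = (1 - 2 * r) * np.log((1 - f) / f)       # channel log-likelihood ratios
    M_vc = np.where(mask, L_ch, 0.0)               # initial variable-to-check messages
    x_hat = r.copy()
    for _ in range(max_iters):
        # Check-node update (tanh rule), excluding each target edge from the product.
        T = np.where(mask, np.tanh(M_vc / 2.0), 1.0)
        T = np.where(np.abs(T) < 1e-12, 1e-12, T)  # guard: never divide by an exactly-zero message
        ext = np.where(mask, T.prod(axis=1, keepdims=True) / T, 0.0)
        M_cv = np.where(mask, 2.0 * np.arctanh(np.clip(ext, -0.999999, 0.999999)), 0.0)
        # Variable-node update, posterior LLRs, and tentative hard decision.
        total = L_ch + M_cv.sum(axis=0)
        M_vc = np.where(mask, total - M_cv, 0.0)
        x_hat = (total < 0).astype(int)
        if not (H @ x_hat % 2).any():              # all parity checks satisfied: stop
            break
    return x_hat

# Toy example: a tiny parity-check matrix, the all-zero codeword, one channel flip.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
received = np.zeros(6, dtype=int)
received[2] = 1                                    # the channel flips bit 2
print(bp_decode(H, received, f=0.1))               # expected output: [0 0 0 0 0 0]

The example flips one bit of the all-zero codeword and recovers it; real experiments of the kind reported in the paper would of course use much longer blocks and sparse data structures rather than dense arrays.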

Keywords

Information Rate, Sparse Matrices, Symbol Rate, Code Family, Belief Propagation Algorithm



Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • David J. C. MacKay, Cavendish Laboratory, Cambridge, UK
  • Radford M. Neal, Depts. of Statistics and Computer Science, Univ. of Toronto, Canada
