Detection of Markov Chains with Known Parameters

  • Bernard C. Levy


Keywords: Markov chain · Markov chain model · Convolutional code · Survivor path · Viterbi algorithm
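The chapter's central topic is detecting the state sequence of a Markov chain with known parameters, for which the Viterbi algorithm (refs. 13–15) is the standard dynamic-programming solution. As a hedged illustration only (not code from the chapter; the function name, argument layout, and toy parameters below are assumptions), a minimal log-domain Viterbi decoder with survivor-path traceback might look like:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """MAP state-sequence estimate for an HMM with known parameters.

    obs: sequence of observation indices
    pi:  initial state probabilities, shape (S,)
    A:   state transition matrix, shape (S, S)
    B:   emission probabilities, shape (S, num_symbols)
    """
    S, T = A.shape[0], len(obs)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]       # path metrics at t = 0
    psi = np.zeros((T, S), dtype=int)          # survivor (traceback) pointers
    for t in range(1, T):
        cand = delta[:, None] + logA           # metric of each transition i -> j
        psi[t] = np.argmax(cand, axis=0)       # best predecessor of each state
        delta = cand[psi[t], np.arange(S)] + logB[:, obs[t]]
    # trace back along the survivor paths from the best final state
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy two-state example: sticky chain, reliable emissions.
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
print(viterbi([0, 0, 1, 1], pi, A, B))
```

Working in the log domain turns products of probabilities into sums, which avoids numerical underflow on long observation sequences; the `psi` array stores, per time and state, the surviving predecessor, so memory grows linearly with the block length (refs. 18–20 discuss survivor-memory management in hardware).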


  1. G. Ungerboeck, “Trellis-coded modulation with redundant signal sets, Parts I and II,” IEEE Communications Magazine, vol. 25, pp. 5–21, 1987.
  2. C. Douillard, M. Jézéquel, C. Berrou, A. Picart, P. Didier, and A. Glavieux, “Iterative correction of intersymbol interference: Turbo-equalization,” European Trans. Telecommun., vol. 6, pp. 507–511, Sept.–Oct. 1995.
  3. R. Koetter, A. C. Singer, and M. Tüchler, “Turbo equalization,” IEEE Signal Processing Mag., vol. 21, pp. 67–80, Jan. 2004.
  4. G. Ferrari, G. Colavolpe, and R. Raheli, Detection Algorithms for Wireless Communications With Applications to Wired and Storage Systems. Chichester, England: J. Wiley & Sons, 2004.
  5. K. Chugg, A. Anastasopoulos, and X. Chen, Iterative Detection: Adaptivity, Complexity Reduction, and Applications. Boston: Kluwer Acad. Publ., 2001.
  6. L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proc. IEEE, vol. 77, pp. 257–286, Feb. 1989.
  7. L. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition. Englewood Cliffs, NJ: Prentice Hall, 1993.
  8. J. G. Proakis, Digital Communications, Fourth Edition. New York: McGraw-Hill, 2000.
  9. R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, UK: Cambridge Univ. Press, 1985.
  10. R. G. Gallager, Discrete Stochastic Processes. Boston: Kluwer Acad. Publ., 1996.
  11. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge, UK: Cambridge Univ. Press, 1994.
  12. S.-I. Amari and H. Nagaoka, Methods of Information Geometry. Providence, RI: American Mathematical Soc., 2000.
  13. A. J. Viterbi, “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm,” IEEE Trans. Informat. Theory, vol. 13, pp. 260–269, 1967.
  14. G. D. Forney, Jr., “Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference,” IEEE Trans. Informat. Theory, vol. 18, pp. 363–378, May 1972.
  15. G. D. Forney, Jr., “The Viterbi algorithm,” Proc. IEEE, vol. 61, pp. 268–278, Mar. 1973.
  16. R. Bellman, “The theory of dynamic programming,” Proc. Nat. Acad. Sci., vol. 38, pp. 716–719, 1952.
  17. R. Bellman, Dynamic Programming. Princeton, NJ: Princeton Univ. Press, 1957. Reprinted by Dover Publ., Mineola, NY, 2003.
  18. C. Rader, “Memory management in a Viterbi decoder,” IEEE Trans. Commun., vol. 29, pp. 1399–1401, Sept. 1981.
  19. R. Cypher and C. B. Shung, “Generalized trace-back technique for survivor memory management in the Viterbi algorithm,” J. VLSI Signal Proc., vol. 5, pp. 85–94, 1993.
  20. G. Feygin and P. G. Gulak, “Architectural tradeoffs for survivor sequence memory management in Viterbi decoders,” IEEE Trans. Commun., vol. 41, pp. 425–429, Mar. 1993.
  21. H.-L. Lou, “Implementing the Viterbi algorithm,” IEEE Signal Processing Magazine, vol. 12, pp. 42–52, Sept. 1995.
  22. J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communications, Third Edition. New York: Springer Verlag, 2003.
  23. K. Ouahada and H. C. Ferreira, “Viterbi decoding of ternary line codes,” in Proc. 2004 IEEE Internat. Conf. on Communications, vol. 2, (Paris, France), pp. 667–671, June 2004.
  24. S. Verdú, “Maximum likelihood sequence detection for intersymbol interference channels: A new upper bound on error probability,” IEEE Trans. Informat. Theory, vol. 33, pp. 62–68, Jan. 1987.
  25. L. E. Baum, T. Petrie, G. Soules, and N. Weiss, “A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains,” Annals of Mathematical Statistics, vol. 41, pp. 164–171, Feb. 1970.
  26. L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal decoding of linear codes for minimizing symbol error rate,” IEEE Trans. Informat. Theory, vol. 20, pp. 284–287, Mar. 1974.
  27. C. Berrou and A. Glavieux, “Near optimum error correcting coding and decoding: turbo codes,” IEEE Trans. Commun., vol. 44, pp. 1261–1271, Oct. 1996.
  28. J. Hagenauer and P. Hoeher, “A Viterbi algorithm with soft-decision outputs and its applications,” in Proc. IEEE Globecom Conf., (Houston, TX), pp. 793–797, Nov. 1989.
  29. M. P. C. Fossorier, F. Burkert, S. Lin, and J. Hagenauer, “On the equivalence between SOVA and max-log-MAP decoding,” IEEE Communications Letters, vol. 2, pp. 137–139, May 1998.
  30. G. Battail, “Pondération des symboles décodés par l’algorithme de Viterbi,” Annales des Télécommunications, pp. 31–38, Jan. 1987.
  31. G. M. Vachula and F. S. Hill, “On optimal detection of band-limited PAM signals with excess bandwidth,” IEEE Trans. Commun., vol. 29, pp. 886–890, June 1981.
  32. K. M. Chugg and A. Polydoros, “MLSE for an unknown channel – Part I: Optimality considerations,” IEEE Trans. Commun., vol. 44, pp. 836–846, July 1996.
  33. D. Bertsimas and J. Tsitsiklis, Introduction to Linear Optimization. Belmont, MA: Athena Scientific, 1997.
  34. S. Natarajan, “Large deviations, hypothesis testing, and source coding for finite Markov chains,” IEEE Trans. Informat. Theory, vol. 31, pp. 360–365, May 1985.
  35. V. Anantharam, “A large deviations approach to error exponents in source coding and hypothesis testing,” IEEE Trans. Informat. Theory, vol. 36, July 1990.
  36. L. D. Davisson, G. Longo, and A. Sgarro, “The error exponent for the noiseless encoding of finite ergodic Markov sources,” IEEE Trans. Informat. Theory, vol. 27, pp. 431–438, July 1981.
  37. F. den Hollander, Large Deviations. Providence, RI: American Mathematical Soc., 2000.
  38. G. J. Foschini, “Performance bound for maximum-likelihood reception of digital data,” IEEE Trans. Informat. Theory, vol. 21, pp. 47–50, Jan. 1975.
  39. A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. Royal Stat. Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
  40. L. R. Welch, “The Shannon lecture: Hidden Markov models and the Baum-Welch algorithm,” IEEE Information Theory Soc. Newsletter, vol. 53, Dec. 2003.

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Bernard C. Levy, University of California, Davis, USA