Introduction

Markov Chains

Part of the book series: International Series in Operations Research & Management Science ((ISOR,volume 189))

Abstract

Markov chains are named after Prof. Andrei A. Markov (1856–1922). He was born on June 14, 1856, in Ryazan, Russia, and died on July 20, 1922, in St. Petersburg, Russia. Markov enrolled at the University of St. Petersburg, where he earned a master's degree and a doctorate. He was a professor at St. Petersburg and a member of the Russian Academy of Sciences.




Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Cite this chapter

Ching, W.K., Huang, X., Ng, M.K., Siu, T.K. (2013). Introduction. In: Markov Chains. International Series in Operations Research & Management Science, vol 189. Springer, Boston, MA. https://doi.org/10.1007/978-1-4614-6312-2_1
