Journal of Statistical Theory and Practice, Volume 11, Issue 3, pp 407–417

On a reward rate estimation for the finite irreducible continuous-time Markov chain

  • Alexander Andronov


A continuous-time homogeneous irreducible Markov chain {X(t)}, t ∈ [0, ∞), taking values in N = {1, ..., k}, k < ∞, is considered. The matrix λ = (λij) of the intensities λij of transition from state i to state j is known. A unit of sojourn time in state i earns reward βi, so the total reward during time t is \(Y(t) = \int_0^t \beta_{X(s)}\,ds\). The reward rates {βi} are unknown and must be estimated. For that purpose, the following statistical data on r observations are at our disposal: (1) t, the observation time; (2) i, the initial state X(0); (3) j, the final state X(t); and (4) y, the acquired reward Y(t). Two estimation methods are used: the weighted least-squares method and the saddle-point method applied to the Laplace transform of the reward. A simulation study illustrates the suggested approaches.
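As a rough sketch of the data-generating mechanism described above (not of the authors' estimation procedures), a single observation (t, X(0), X(t), Y(t)) can be simulated as follows. The intensity matrix and reward rates used here are illustrative, not taken from the paper.

```python
import random


def simulate_ctmc_reward(lam, beta, t, i0, rng):
    """Simulate an irreducible CTMC with intensity matrix lam on states
    0..k-1, starting from i0, and accumulate the total reward
    Y(t) = integral over [0, t] of beta_{X(s)} ds.
    Returns (final_state, total_reward)."""
    k = len(lam)
    i, clock, y = i0, 0.0, 0.0
    while True:
        # Total exit rate of the current state i (irreducibility => rate > 0).
        rate = sum(lam[i][j] for j in range(k) if j != i)
        stay = rng.expovariate(rate)  # exponential sojourn time in state i
        if clock + stay >= t:
            # Observation window ends during this sojourn.
            y += beta[i] * (t - clock)
            return i, y
        y += beta[i] * stay
        clock += stay
        # Jump to state j with probability lam[i][j] / rate.
        candidates = [j for j in range(k) if j != i and lam[i][j] > 0.0]
        u = rng.random() * rate
        for j in candidates[:-1]:
            u -= lam[i][j]
            if u <= 0.0:
                i = j
                break
        else:
            i = candidates[-1]


if __name__ == "__main__":
    rng = random.Random(0)
    lam = [[0.0, 2.0], [1.0, 0.0]]  # illustrative 2-state intensity matrix
    beta = [3.0, 1.0]               # illustrative reward rates
    j, y = simulate_ctmc_reward(lam, beta, 10.0, 0, rng)
    print("observation: t=10.0, X(0)=0, X(t)=%d, Y(t)=%.3f" % (j, y))
```

Repeating this r times with varying t yields exactly the data set (t, i, j, y) that the estimation methods in the paper take as input; note that the occupation times themselves are discarded, since the estimators may only use the endpoints and the accumulated reward.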


Methods of point estimation; Markov chain; simulation

AMS Subject Classification

62M05; 62F10





Copyright information

© Grace Scientific Publishing 2017

Authors and Affiliations

  1. Mathematical Methods and Modeling, Transport and Telecommunication Institute, Riga, Latvia
