
Abstract

In this chapter, we consider reward processes of an irreducible continuous-time block-structured Markov chain. Using RG-factorizations, we provide a unified algorithmic framework for deriving expressions for the conditional distributions and conditional moments of the reward processes. As an important example, we study the reward processes of an irreducible continuous-time level-dependent QBD process with either finitely many or infinitely many levels. We also provide a brief introduction to the reward processes of an irreducible discrete-time block-structured Markov chain.
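To illustrate the kind of quantity a reward process yields, the following is a minimal numerical sketch (not the chapter's RG-factorization algorithm): for an irreducible continuous-time Markov chain with generator `Q` and reward-rate vector `r`, the long-run average reward equals `pi @ r`, where `pi` is the stationary distribution solving `pi Q = 0` with `sum(pi) = 1`. The function name and the two-state example are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def long_run_average_reward(Q, r):
    """Long-run average reward pi @ r for an irreducible CTMC with
    generator Q and reward-rate vector r (a sketch, not the chapter's
    RG-factorization-based method)."""
    n = Q.shape[0]
    # Stationary distribution: solve pi Q = 0 with sum(pi) = 1 by
    # replacing one balance equation with the normalization condition.
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    return pi @ r

# Example: a two-state up/down chain earning reward at rate 1 while up.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
r = np.array([1.0, 0.0])
print(long_run_average_reward(Q, r))  # 2/3: the chain is up 2/3 of the time
```

The conditional distributions and moments treated in the chapter refine this single long-run number to transient, level-dependent information.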





Copyright information

© 2010 Tsinghua University Press, Beijing and Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Li, QL. (2010). Markov Reward Processes. In: Constructive Computation in Stochastic Models with Applications. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11492-2_10

