Abstract
In this chapter, we consider reward processes of an irreducible continuous-time block-structured Markov chain. Using the RG-factorizations, we provide a unified algorithmic framework for deriving expressions for the conditional distributions and conditional moments of the reward processes. As an important example, we study the reward processes of an irreducible continuous-time level-dependent QBD process with either finitely many or infinitely many levels. We also give a brief introduction to the reward processes of an irreducible discrete-time block-structured Markov chain.
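The chapter derives such quantities via RG-factorizations; as a minimal toy illustration (not the chapter's method), the simplest reward functional, the expected accumulated reward E[∫₀ᵀ f(X_s) ds] of a two-state continuous-time Markov chain, can be sketched in Python. The rates `a`, `b`, horizon `T`, and the up/down reward rates are all assumed values chosen for the example:

```python
import math

# Hypothetical 2-state machine: state 0 = up (reward rate 1), state 1 = down (rate 0).
# Generator Q = [[-a, a], [b, -b]] with assumed failure rate a and repair rate b.
a, b = 0.5, 2.0
T = 4.0
p0_up = 1.0  # start in the up state

# Closed form: p_up(t) = pi + (p0_up - pi) * exp(-(a+b) t) with pi = b/(a+b), so
# E[uptime on [0,T]] = pi*T + (p0_up - pi) * (1 - exp(-(a+b) T)) / (a+b).
pi = b / (a + b)
expected_uptime = pi * T + (p0_up - pi) * (1.0 - math.exp(-(a + b) * T)) / (a + b)

# Cross-check by integrating the forward Kolmogorov equation p' = pQ with a
# fine Euler scheme while accumulating the reward rate of the current state.
dt = 1e-5
p_up, acc = p0_up, 0.0
for _ in range(int(T / dt)):
    acc += p_up * dt                              # reward rate 1 while up, 0 while down
    p_up += (-a * p_up + b * (1.0 - p_up)) * dt   # Kolmogorov forward equation

assert abs(acc - expected_uptime) < 1e-3
print(round(expected_uptime, 4))  # → 3.28
```

For block-structured chains and QBD processes the transient distribution no longer has such a simple closed form, which is where the RG-factorization machinery of this chapter comes in.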
© 2010 Tsinghua University Press, Beijing and Springer-Verlag Berlin Heidelberg
Cite this chapter
Li, QL. (2010). Markov Reward Processes. In: Constructive Computation in Stochastic Models with Applications. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11492-2_10
Print ISBN: 978-3-642-11491-5
Online ISBN: 978-3-642-11492-2
eBook Packages: Mathematics and Statistics (R0)