In the previous chapter, we presented the Generalized Semi-Markov Process (GSMP) framework as a means of modeling stochastic DES. By allowing event clocks to tick at varying speeds, we also provided an extension to the basic GSMP. In addition, we introduced the Poisson process as a basic building block for a class of stochastic DES which possess the Markov (memoryless) property. Thus, we obtained the class of stochastic processes known as Markov chains, which we will study in some detail in this chapter. It should be pointed out that the analysis of Markov chains provides a rich framework for studying many DES of practical interest, ranging from gambling and the stock market to the design of “high-tech” computer systems and communication networks.
The main characteristic of Markov chains is that their stochastic behavior is described by transition probabilities of the form P[X(t_{k+1}) = x' | X(t_k) = x] for all state values x, x' and t_k ≤ t_{k+1}. Given these transition probabilities and a distribution for the initial state, it is possible to determine the probability of being in any state at any time instant. Describing precisely how to accomplish this, and appreciating the difficulties involved in the process, are the main objectives of this chapter.
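For a homogeneous chain in discrete time, this computation reduces to matrix algebra: collect the transition probabilities into a matrix P and propagate the initial distribution forward by repeated multiplication. The following is a minimal sketch; the two-state chain and its numerical values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative two-state transition probability matrix (each row sums to 1):
# entry P[i, j] = probability of moving from state i to state j in one step.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Initial state distribution: start in state 0 with probability 1.
pi0 = np.array([1.0, 0.0])

def state_distribution(pi0, P, k):
    """Distribution over states after k steps: pi_k = pi_0 @ P^k."""
    return pi0 @ np.linalg.matrix_power(P, k)

pi3 = state_distribution(pi0, P, 3)  # probability of each state after 3 steps
```

Here pi3 gives the probability of finding the chain in each state after three transitions, which is exactly the kind of transient analysis the chapter develops.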
Keywords: Markov chain · Markov chain model · Transition probability matrix · State transition diagram · Homogeneous Markov chain