Abstract
In this chapter we generalize the situation considered in the last chapter. Recall that there Peter and Paul were gambling, and at any time during their play their finances were in a certain condition, or state, the word usually used. Thus we had a “system” that could be in any one of a number of states, and it moved from state to state as time went on. The motion was in discrete steps, each step corresponding to one game. We retain those concepts here. Since the amount bet was fixed at one dollar, at each step Peter either gained a dollar or lost a dollar. We now generalize and allow the possibility of change in a single step from any state to any state.
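The system described above can be sketched in code: a Markov chain is specified by a transition matrix whose row for the current state gives the probabilities of moving to each state in one step. The sketch below is illustrative only (the matrix `P`, the function `step`, and the five-state chain are assumptions for this example, not taken from the chapter); it encodes the gambler's-ruin play of the last chapter, where each step moves the fortune up or down one dollar, as a special case of a general transition matrix.

```python
import random

def step(state, P):
    """Advance one step: choose the next state according to row `state`
    of the transition matrix P (each row sums to 1)."""
    r = random.random()
    cumulative = 0.0
    for next_state, p in enumerate(P[state]):
        cumulative += p
        if r < cumulative:
            return next_state
    return len(P) - 1  # guard against floating-point round-off

# Gambler's-ruin chain on states 0..4 (Peter's fortune in dollars).
# States 0 and 4 are absorbing: the play ends when either player is ruined.
# Interior states move up or down one dollar with probability 1/2 each,
# matching the fixed one-dollar bets of the previous chapter.
P = [
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
]

state = 2          # Peter starts with two dollars
for _ in range(100):
    state = step(state, P)
print(state)       # after many steps, typically an absorbing state (0 or 4)
```

The generalization of this chapter amounts to allowing any stochastic matrix `P`: a single step may carry the system from any state to any other, not merely to an adjacent fortune.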
Copyright information
© 1997 Springer Science+Business Media New York
About this chapter
Cite this chapter
Gordon, H. (1997). Markov Chains. In: Discrete Probability. Undergraduate Texts in Mathematics. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1966-8_9
DOI: https://doi.org/10.1007/978-1-4612-1966-8_9
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-7359-2
Online ISBN: 978-1-4612-1966-8