
Discrete-Time Markov Chains

Understanding Markov Chains

Part of the book series: Springer Undergraduate Mathematics Series ((SUMS))

Abstract

In this chapter we begin the general study of discrete-time Markov chains by focusing on the Markov property and on the role played by transition probability matrices. We also include a complete study of the time evolution of the two-state chain, which is the simplest example of a Markov chain.


Author information

Correspondence to Nicolas Privault.

Exercises

Exercise 4.1

Consider a symmetric random walk \((S_n)_{n\in {\mathord {\mathbb N}}}\) on \({\mathord {\mathbb Z}}\) with independent increments \(\pm 1\) chosen with equal probability 1/2, started at \(S_0 = 0\).

(a) Is the process \(Z_n := 2S_n+1\) a Markov chain?

(b) Is the process \(Z_n := (S_n)^2\) a Markov chain?
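As a minimal pure-Python sketch (not part of the original exercise), one can simulate the walk and the two candidate processes to experiment with these questions numerically; the step count and seed are arbitrary choices:

```python
import random

def walk(n, seed=0):
    """Simulate n steps of the symmetric random walk S on Z with S_0 = 0."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(n):
        path.append(path[-1] + rng.choice((-1, 1)))  # +1 or -1, each w.p. 1/2
    return path

S = walk(1000)
Z_a = [2 * s + 1 for s in S]   # candidate process of part (a)
Z_b = [s * s for s in S]       # candidate process of part (b)
```

Comparing the empirical distribution of the next step of \(Z_b\) across different histories ending at the same value of \(Z_b\) is one way to probe part (b) experimentally.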

Exercise 4.2

Consider the Markov chain \((Z_n)_{n\ge 0}\) with state space \(\mathbb {S}= \{1,2\}\) and transition matrix

(a) Compute \(\mathbb {P}( Z_7 = 1 \text{ and } Z_5 = 2 \mid Z_4 = 1 \text{ and } Z_3 = 2)\).

(b) Compute \(\mathbb {E}[ Z_2 \mid Z_1 = 1]\).

Exercise 4.3

Consider a transition probability matrix P of the form

$$ P = \left[ P_{i, j} \right] _{ 0 \le i , j \le N } = \begin{bmatrix} \pi _0 & \pi _1 & \pi _2 & \pi _3 & \cdots & \pi _N \\ \pi _0 & \pi _1 & \pi _2 & \pi _3 & \cdots & \pi _N \\ \pi _0 & \pi _1 & \pi _2 & \pi _3 & \cdots & \pi _N \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ \pi _0 & \pi _1 & \pi _2 & \pi _3 & \cdots & \pi _N \end{bmatrix} , $$

where \(\pi = [\pi _0,\pi _1,\ldots , \pi _N] \in [0,1]^{N+1}\) is a vector such that \(\pi _0+\pi _1+\cdots + \pi _N=1\).

(a) Compute \(P^n\) for all \(n\ge 2\).

(b) Show that the vector \(\pi \) is an invariant (or stationary) distribution for P.

(c) Show that if \(\mathbb {P}( Z_0 = i ) = \pi _i\), \(i = 0,1 , \ldots , N\), then \(Z_n\) is independent of \(Z_k\) for all \(0\le k < n\), and \((Z_n)_{n\in {\mathord {\mathbb N}}}\) is an i.i.d. sequence of random variables with distribution \(\pi =[\pi _0,\pi _1,\ldots ,\pi _N]\) over \(\{0,1,\ldots , N \}\).
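A quick numerical check of parts (a) and (b), with an illustrative (assumed) vector \(\pi\) for \(N = 3\):

```python
import numpy as np

pi = np.array([0.1, 0.2, 0.3, 0.4])   # assumed values; any pi summing to 1 works
P = np.tile(pi, (4, 1))               # every row of P equals pi

P2 = P @ P                            # P^2
P5 = np.linalg.matrix_power(P, 5)     # P^5
invariant = pi @ P                    # candidate stationary check
```

Numerically, \(P^n = P\) for every \(n \ge 1\) and \(\pi P = \pi\), consistent with what parts (a) and (b) ask to prove in general.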

Exercise 4.4

Consider a \(\{0,1\}\)-valued “hidden” two-state Markov chain \((X_n)_{n\in {\mathord {\mathbb N}}}\) with transition probability matrix

$$ P = \begin{bmatrix} P_{0,0} & P_{0,1} \\ P_{1,0} & P_{1,1} \end{bmatrix} = \begin{bmatrix} \mathbb {P}( X_1 =0 \mid X_0 = 0) & \mathbb {P}( X_1 =1 \mid X_0 = 0) \\ \mathbb {P}( X_1 =0 \mid X_0 = 1) & \mathbb {P}( X_1 =1 \mid X_0 = 1) \end{bmatrix} , $$

and initial distribution

$$ \pi = [ \pi _0 , \pi _1 ] = [\mathbb {P}( X_0 = 0), \mathbb {P}( X_0 = 1)]. $$

We observe a process \((O_k)_{k\in {\mathord {\mathbb N}}}\) whose state \(O_k\in \{a, b\}\) at every time \(k\in {\mathord {\mathbb N}}\) has a conditional distribution given \(X_k \in \{0,1\}\) denoted by

$$ M = \begin{bmatrix} m_{0,a} & m_{0,b} \\ m_{1,a} & m_{1,b} \end{bmatrix} = \begin{bmatrix} \mathbb {P}( O_k=a \mid X_k=0) & \mathbb {P}( O_k=b \mid X_k=0) \\ \mathbb {P}( O_k=a \mid X_k=1) & \mathbb {P}( O_k=b \mid X_k=1) \end{bmatrix} , $$

called the emission probability matrix.

(a) Using elements of \(\pi \), P and M, compute \(\mathbb {P}(X_0=1,X_1=1)\) and the probability

    $$ \mathbb {P}( (O_0,O_1)=(a, b ) \text{ and } ( X_0, X_1) = (1,1) )$$

    of observing the sequence \((O_0,O_1)=(a, b )\) when \(( X_0, X_1) = (1,1)\).

    Hint: By independence, the conditional probability of observing \((O_0, O_1)=(a, b)\) given that \((X_0,X_1)=(1,1)\) splits as

    $$ \mathbb {P}( (O_0, O_1)=(a, b) \mid (X_0,X_1)=(1,1) ) = \mathbb {P}( O_0=a \mid X_0 = 1) \mathbb {P}( O_1=b \mid X_1 = 1). $$
(b) Find the probability \(\mathbb {P}( (O_0,O_1)=(a, b) )\) that the observed sequence is \((a, b)\).

    Hint: Use the law of total probability based on all possible values of \((X_0,X_1)\).

(c) Compute the probabilities

    $$\mathbb {P}( X_1=1 \mid (O_0,O_1)=(a, b) ), \quad \text{ and } \quad \mathbb {P}( X_1=0 \mid (O_0,O_1)=(a, b) ). $$
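The computations of (a)-(c) can be organized as below; the numeric values of \(\pi\), P and M are assumptions for illustration only, since the exercise keeps them symbolic:

```python
import numpy as np

pi = np.array([0.6, 0.4])            # assumed [P(X0=0), P(X0=1)]
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])           # assumed hidden-chain transition matrix
M = np.array([[0.9, 0.1],
              [0.4, 0.6]])           # assumed emission matrix; columns: a, b
a, b = 0, 1                          # column indices of the observations

# (a) joint probability of (X0, X1) = (1, 1) and observing (a, b)
p_X11 = pi[1] * P[1, 1]
p_joint = p_X11 * M[1, a] * M[1, b]

# (b) total probability of observing (a, b), summing over all (X0, X1)
p_ab = sum(pi[i] * M[i, a] * P[i, j] * M[j, b]
           for i in (0, 1) for j in (0, 1))

# (c) posterior P(X1 = 1 | (O0, O1) = (a, b)) by Bayes' rule
p_X1_is_1 = sum(pi[i] * M[i, a] * P[i, 1] * M[1, b] for i in (0, 1)) / p_ab
p_X1_is_0 = 1.0 - p_X1_is_1
```

The sum in (b) is exactly the law of total probability suggested in the hint, and (c) divides the matching terms of that sum by the total.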

Exercise 4.5

Consider a two-dimensional random walk \((S_n)_{n\in {\mathord {\mathbb N}}}\) on \({\mathord {\mathbb Z}}^2\) started at \(S_0 = (0,0)\), where, starting from a location \(S_n = (i, j)\), the chain can move to any of the points \((i + 1 , j + 1 )\), \((i + 1 , j - 1 )\), \((i - 1 , j + 1 )\), \((i - 1 , j - 1 )\) with equal probability 1/4.

(a) Suppose in addition that the random walk cannot visit any site more than once, as in a snake game. Is the resulting system a Markov chain? Justify your answer.
(b) Let \(S_n = (X_n, Y_n)\) denote the coordinates of \(S_n\) at time n and let \(Z_n := X_n^2 + Y_n^2\). Is \((Z_n)_{n\in {\mathord {\mathbb N}}}\) a Markov chain? Justify your answer.

Hint: Use the fact that the same value of \(Z_n\) may correspond to different locations of \((X_n, Y_n)\) on a circle, for example \((X_n, Y_n) = (5,0)\) and \((X_n, Y_n) = (4,3)\) when \(Z_n=25\).

Questions (a) and (b) above are independent.
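The hint for part (b) can be checked directly: listing the four possible next values of \(Z\) from the two locations \((5,0)\) and \((4,3)\), both with \(Z_n = 25\), is a short computation (a sketch, not part of the original exercise):

```python
def next_Z_values(x, y):
    """Possible next values of Z = X^2 + Y^2 after one diagonal step."""
    moves = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    return sorted((x + dx) ** 2 + (y + dy) ** 2 for dx, dy in moves)

z_from_5_0 = next_Z_values(5, 0)   # from location (5, 0), where Z_n = 25
z_from_4_3 = next_Z_values(4, 3)   # from location (4, 3), where Z_n = 25
```

Comparing the two lists of attainable values shows how the hint is meant to be used.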

Exercise 4.6

The Elephant Random Walk \((S_n)_{n\in {\mathord {\mathbb N}}}\) [ST08] is a discrete-time \({\mathord {\mathbb Z}}\)-valued random walk

$$ S_n := X_1+\cdots +X_n, \qquad n\in {\mathord {\mathbb N}}, $$

whose increments \(X_k=S_k-S_{k-1}\), \(k\ge 1\), are recursively defined as follows:

  • At time \(n=1\), \(X_1\) is a Bernoulli \(\{-1,+1\}\)-valued random variable with

    $$\mathbb {P}(X_1=+1)=p \quad \text{ and } \quad \mathbb {P}(X_1=-1)=q=1-p \in (0,1). $$
  • At any subsequent time \(n\ge 2\), one randomly draws an integer time index \(k\in \{1,\dots , n-1\}\) with uniform probability, and sets \(X_n:=X_k\) with probability p, and \(X_n:=-X_k\) with probability \(q=1-p\).

Does the Elephant Random Walk \((S_n)_{n\in {\mathord {\mathbb N}}}\) have the Markov property?
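A minimal simulation sketch of the Elephant Random Walk, following the recursive definition above (the parameter values and seed are arbitrary choices):

```python
import random

def elephant_walk(n, p, seed=0):
    """Simulate S_0, ..., S_n of the Elephant Random Walk with memory p."""
    rng = random.Random(seed)
    X = [1 if rng.random() < p else -1]            # X_1
    for _ in range(2, n + 1):
        past = rng.choice(X)                       # X_k for a uniform k < n
        X.append(past if rng.random() < p else -past)
    S = [0]
    for x in X:
        S.append(S[-1] + x)
    return S

S = elephant_walk(500, p=0.75)
```

Simulating many paths and tabulating the next-step distribution for different histories that share the same endpoint is one way to form a conjecture before proving the answer.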

Exercise 4.7

Consider a Markov chain \((X_n)_{n\ge 0}\) with state space \(\mathbb {S}= \{0,1\}\) and transition matrix

where \(a, b>0\), and define a new stochastic process \((Z_n)_{n\ge 1}\) by \(Z_n = (X_{n-1}, X_n)\), \(n\ge 1\). Argue that \((Z_n)_{n\ge 1}\) is a Markov chain and write down its transition matrix. Start by determining the state space of \((Z_n)_{n\ge 1}\).
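The transition matrix of \((X_n)\) did not survive in this copy of the text; the sketch below therefore assumes the usual two-state form \(P = \begin{bmatrix} 1-a & a \\ b & 1-b \end{bmatrix}\) (an assumption, with arbitrary values of a and b) and builds the transition matrix of the pair chain \(Z_n = (X_{n-1}, X_n)\):

```python
import numpy as np

a, b = 0.3, 0.5                     # assumed parameter values
P = np.array([[1 - a, a],
              [b, 1 - b]])          # assumed two-state matrix for (X_n)

# State space of Z_n = (X_{n-1}, X_n): all ordered pairs
states = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Z moves from (i, j) to (j, k) with probability P[j, k], and cannot
# move to any pair whose first entry differs from j
Q = np.zeros((4, 4))
for r, (i, j) in enumerate(states):
    for c, (j2, k) in enumerate(states):
        if j2 == j:
            Q[r, c] = P[j, k]
```

The construction makes the structural point explicit: the second coordinate of the current pair determines the whole next-step distribution, which is why \((Z_n)\) is again a Markov chain.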

Exercise 4.8

Given \(p\in [0,1)\), consider the Markov chain \((X_n)_{n\ge 0}\) on the state space \(\{0,1,2 \}\) having the transition matrix

with \(q:=1-p\).

(a) Give the probability distribution of the first hitting time

$$ T_2 : = \inf \big \{ n \ge 0 \ : \ X_n = 2 \big \} $$

of state 2 starting from \(X_0 = 0\).

Hint: The sum \(Z=X_1+\cdots + X_d\) of d independent geometric random variables on \(\{1,2,\ldots \}\) has the negative binomial distribution

$$\mathbb {P}( Z = k ) = {\left( {\begin{array}{c}k-1\\ k-d\end{array}}\right) } (1-p)^dp^{k-d}, \qquad k \ge d. $$
(b) Compute the mean hitting time \(\mathbb {E}[ T_2 \mid X_0 = 0]\) of state 2 starting from \(X_0 = 0\).

    Hint: We have

    $$ \sum _{k=1}^\infty k p^{k-1} = \frac{1}{(1-p)^2} \quad \text{ and } \quad \sum _{k=2}^\infty k(k-1) p^{k-2} = \frac{2}{(1-p)^3}, \qquad 0\le p < 1. $$

Exercise 4.9

Bernoulli–Laplace chain. Consider two boxes and a total of 2N balls made of N red balls and N green balls. At time 0, a number \(k=X_0\) of red balls and a number \(N-k\) of green balls are placed in the first box, while the remaining \(N-k\) red balls and k green balls are placed in the second box.

 


At each unit of time, one ball is chosen randomly out of N in each box, and the two balls are interchanged. Write down the transition matrix of the Markov chain \((X_n)_{n\in {\mathord {\mathbb N}}}\) with state space \(\{ 0, 1, 2, \ldots , N\}\), representing the number of red balls in the first box. Start for example from \(N=5\).
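Since each box yields a red ball with probability proportional to its red count, one can check that from state k the chain moves down with probability \((k/N)^2\), up with probability \(((N-k)/N)^2\), and stays put with probability \(2k(N-k)/N^2\). A sketch building the matrix exactly (with fractions) for \(N = 5\):

```python
from fractions import Fraction

def bernoulli_laplace(N):
    """Transition matrix of the Bernoulli-Laplace chain on {0, ..., N}."""
    P = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    for k in range(N + 1):
        if k >= 1:
            P[k][k - 1] = Fraction(k, N) ** 2          # red out, green in
        if k <= N - 1:
            P[k][k + 1] = Fraction(N - k, N) ** 2      # green out, red in
        P[k][k] = Fraction(2 * k * (N - k), N * N)     # same colours swapped
    return P

P5 = bernoulli_laplace(5)
```

Using exact fractions makes the row-sum check \((k + (N-k))^2 / N^2 = 1\) visible without floating-point noise.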

Exercise 4.10

(a) After winning k dollars, a gambler either receives \(k+1\) dollars with probability p, or has to quit the game and lose everything with probability \(q=1-p\). Starting from one dollar, find a model for the time evolution of the wealth of the player using a Markov chain whose transition probability matrix P will be described explicitly along with its powers \(P^n\) of all orders \(n\ge 1\).

(b) (Success runs Markov chain). We modify the model of Question (a) by allowing the gambler to start playing again and win with probability p after reaching state 0. Write down the corresponding transition probability matrix P, and compute \(P^n\) for all \(n\ge 2\).
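A sketch of the two chains, truncated to a finite state set \(\{0,\ldots,M\}\) so that the matrices can be examined numerically. The truncation, the boundary behaviour at M, the reading of state 0 as absorbing in part (a), and the parameter values are all assumptions of this sketch; the true chains live on all of \({\mathord {\mathbb N}}\):

```python
import numpy as np

def gambler_chain(p, M, restart=False):
    """Truncated transition matrix on {0, ..., M}: from k >= 1 move up to
    k+1 with probability p, or fall to 0 with probability q = 1 - p."""
    q = 1 - p
    P = np.zeros((M + 1, M + 1))
    for k in range(1, M):
        P[k, k + 1] = p
        P[k, 0] = q
    P[M, M] = p          # crude boundary choice at the truncation level M
    P[M, 0] = q
    if restart:          # part (b): play again after losing everything
        P[0, 1] = p
        P[0, 0] = q
    else:                # part (a): the ruined gambler quits for good
        P[0, 0] = 1.0
    return P

Pa = gambler_chain(0.5, 6)                  # part (a)
Pb = gambler_chain(0.5, 6, restart=True)    # part (b)
```

Computing `np.linalg.matrix_power(Pb, n)` for small n is a useful way to conjecture the closed form of \(P^n\) before proving it.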

Exercise 4.11

Let \((X_k)_{k\in {\mathord {\mathbb N}}}\) be the Markov chain with transition matrix

$$ P = \begin{bmatrix} 1/4 & 0 & 1/2 & 1/4 \\ 0 & 1/5 & 0 & 4/5 \\ 0 & 1 & 0 & 0 \\ 1/3 & 1/3 & 0 & 1/3 \end{bmatrix} . $$

A new process is defined by letting

$$ Z_n := \left\{ \begin{array}{ll} 0 &{} \text{ if } X_n = 0 \text{ or } X_n = 1, \\ \\ X_n &{} \text{ if } X_n = 2 \text{ or } X_n = 3, \end{array} \right. $$

i.e.

$$ Z_n = X_n \mathbbm {1}_{\{ X_n \in \{ 2,3 \} \} }, \qquad n \ge 0. $$
(a) Compute

$$ \mathbb {P}( Z_{n+1} = 2 \mid Z_n = 0 \text{ and } Z_{n-1}=2) \quad \text{ and } \quad \mathbb {P}( Z_{n+1} = 2 \mid Z_n = 0 \text{ and } Z_{n-1}=3), $$

\(n \ge 1\).

(b) Is \((Z_n)_{n\in {\mathord {\mathbb N}}}\) a Markov chain?
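Part (a) can be verified numerically by conditioning on the hidden value \(X_n \in \{0,1\}\) behind \(Z_n = 0\) (a checking sketch; the exercise of course asks for the computation by hand):

```python
import numpy as np

P = np.array([[1/4, 0, 1/2, 1/4],
              [0, 1/5, 0, 4/5],
              [0, 1, 0, 0],
              [1/3, 1/3, 0, 1/3]])

def prob_next_2(prev):
    """P(Z_{n+1} = 2 | Z_n = 0, Z_{n-1} = prev) for prev in {2, 3}:
    Z_{n-1} = prev pins down X_{n-1} = prev, while Z_n = 0 only
    restricts X_n to {0, 1}."""
    row = P[prev]
    mass = row[0] + row[1]               # P(X_n in {0,1} | X_{n-1} = prev)
    return sum(row[j] / mass * P[j, 2] for j in (0, 1))

p_given_2 = prob_next_2(2)
p_given_3 = prob_next_2(3)
```

Comparing `p_given_2` with `p_given_3` is exactly the comparison that part (b) turns on.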

Exercise 4.12

[OSA+09]

Abeokuta, one of the major towns of the defunct Western Region of Nigeria, has recently seen an astronomical increase in vehicular activity. The intensity of vehicle traffic at the Lafenwa intersection, which consists of the Ayetoro, Old Bridge and Ita-Oshin routes, is modeled using the three states L/M/H = \(\{\text{Low / Moderate / High}\}\).

(a) During year 2005, low-intensity incoming traffic was observed at the Lafenwa intersection for \(\eta _L = 50 \%\) of the time, moderate traffic for \(\eta _M = 40 \%\) of the time, and high traffic for \(\eta _H = 10 \%\) of the time.

Given the correspondence table

$$ \begin{array}{c|c} \text{incoming traffic} & \text{vehicles per hour} \\ \hline \text{L (low intensity)} & 360 \\ \hline \text{M (moderate intensity)} & 505 \\ \hline \text{H (high intensity)} & 640 \end{array} $$

compute the average incoming traffic per hour in year 2005.

(b) The analysis of incoming daily traffic volumes at the Lafenwa intersection between years 2004 and 2005 shows that the probability of switching states within \(\{ \text{L, M, H} \}\) is given by the Markov transition probability matrix

$$ P = \begin{bmatrix} 2/3 & 1/6 & 1/6 \\ 1/3 & 1/2 & 1/6 \\ 1/6 & 2/3 & 1/6 \end{bmatrix} . $$

Based on the knowledge of P and \(\eta = [\eta _L , \eta _M , \eta _ H]\), give a projection of the respective proportions of traffic in the states L/M/H for year 2006.

(c) Based on the result of Question (b), give a projected estimate of the average incoming traffic per hour in year 2006.

(d) By solving the equation \(\pi = \pi P\) for the invariant (or stationary) probability distribution \(\pi = [ \pi _L , \pi _M , \pi _H ]\), give a long-term projection of steady traffic at the Lafenwa intersection. Hint: we have \(\pi _L = 11/24\).
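All four questions reduce to short linear-algebra computations; a sketch using only the data stated in the exercise:

```python
import numpy as np

eta = np.array([0.50, 0.40, 0.10])       # 2005 proportions [L, M, H]
vehicles = np.array([360, 505, 640])     # vehicles per hour in each state

avg_2005 = eta @ vehicles                # (a) average hourly traffic, 2005

P = np.array([[2/3, 1/6, 1/6],
              [1/3, 1/2, 1/6],
              [1/6, 2/3, 1/6]])

eta_2006 = eta @ P                       # (b) projected proportions for 2006
avg_2006 = eta_2006 @ vehicles           # (c) projected hourly traffic, 2006

# (d) stationary distribution: pi (P - I) = 0 together with sum(pi) = 1,
# solved as a least-squares system (exact here, since the system is consistent)
A = np.vstack([(P - np.eye(3)).T, np.ones(3)])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
```

Part (a) evaluates to 446 vehicles per hour, and the solver reproduces the hint \(\pi_L = 11/24\).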


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Privault, N. (2018). Discrete-Time Markov Chains. In: Understanding Markov Chains. Springer Undergraduate Mathematics Series. Springer, Singapore. https://doi.org/10.1007/978-981-13-0659-4_4
