1 Introduction

The gambler’s ruin problem for two players is solved using recursion when the bets are even money. The solution gives the expected time until one player is ruined and the probability that each player ends up with all the money. The multiplayer problem presents more difficulties. Consider a three-player game and let the amounts of money (or chips) of the players be \(S_1\), \(S_2\), \(S_3\). Let \(S = S_1 + S_2 + S_3\). In each time step, a game is played, with winners and losers. Suppose that each game involves exactly two players, each having a \(50\,\%\) chance of winning the bet. The model is as follows: \(f_{ij}\) is chosen randomly with probability 1/6, where \(i \ne j\), \(i, j = 1, 2, 3\), and

$$\begin{aligned} f_{ij}: (S_i, S_j) \mapsto (S_i + b, S_j - b) . \end{aligned}$$
(1)

If the bet is fixed at \(b = 1\) and the stacks are positive integers, the resulting model is the 3-tower game. The 3-tower model is loosely based on the Tower of Hanoi problem, with no constraints on the order in which the chips are stacked. One application of this model is tournaments that involve the accumulation of chips or wealth, for example, poker tournaments. In such cases, a partial information game may be modeled as a zero information game to determine players’ equities independent of skill. Other forms of the gambler’s ruin for three players are the symmetric problem [8, 9] and the C-centric game [4]. The time until a player is ruined in the 3-tower problem has been solved [1, 2, 10, 11] and is given by

$$\begin{aligned} T = \frac{3 S_1 S_2 S_3}{S}. \end{aligned}$$
(2)
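As a quick sanity check of (2), the game is easy to simulate directly. The sketch below is our own illustration (the function names are not from the references): it plays unit-bet rounds between a uniformly chosen ordered pair of players until one stack reaches zero.

```python
import random

def ruin_time_formula(s1, s2, s3):
    # Expected time until first ruin, equation (2)
    return 3 * s1 * s2 * s3 / (s1 + s2 + s3)

def simulate_ruin_time(stacks, trials=50_000, seed=1):
    """Monte Carlo estimate of the expected number of rounds until the
    first player is ruined (unit bets, each ordered pair equally likely)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = list(stacks)
        rounds = 0
        while min(s) > 0:
            i, j = rng.sample(range(3), 2)  # player i wins the bet against j
            s[i] += 1
            s[j] -= 1
            rounds += 1
        total += rounds
    return total / trials
```

For \((S_1, S_2, S_3) = (2, 1, 1)\), formula (2) gives \(3 \cdot 2 \cdot 1 \cdot 1 / 4 = 3/2\), and the simulated mean should agree to within sampling error.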

The probability that each player is ruined first is an open problem [4]. Ferguson used Brownian motion to numerically approximate the probability of ruin [3], and Kim improved on this by using numerical solutions to Markov processes [6]. An alternative method for calculating placing probabilities is the Independent Chip Model (ICM), credited to Malmuth and Harville [7], although no proof of the method has been published. Let n be the number of players, \(X_i\) be the random variable denoting the placing of Player i, and \(S_i\) be the current stack size of Player i. Then the probability of Player i placing 1st is

$$\begin{aligned} P(X_i = 1) = S_i/S \end{aligned}$$
(3)

where \(S = S_1 + S_2 + \ldots + S_n\). To obtain the probability of placing 2nd, conditional probabilities are used for each opponent finishing 1st; given that opponent j finishes 1st, the probability that Player i places 2nd (which is essentially 1st among the remaining \(n - 1\) players) is the proportion of Player i’s stack to the total stack excluding the stack of the conditional 1st place finisher. Thus,

$$\begin{aligned} P(X_i = 2) = \sum \limits _{j \ne i} P(X_j = 1)P(X_i = 2|X_j = 1) = \sum \limits _{j \ne i} \frac{S_j}{S}\frac{S_i}{S-S_j} . \end{aligned}$$
(4)

Continuing, the probability of Player i finishing 3rd is

$$\begin{aligned} P(X_i = 3) = \sum \limits _{j \not = i} \sum \limits _{k \not = i,j} \frac{S_j}{S}\frac{S_k}{S-S_j}\frac{S_i}{S-S_j-S_k} . \end{aligned}$$
(5)
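The Malmuth-Harville recursion in (3)-(5) amounts to giving each full finishing order the product of "first among the remaining stacks" factors and summing the marginals. A minimal sketch (the function name is ours):

```python
from itertools import permutations

def icm_place_probs(stacks):
    """Malmuth-Harville (ICM) placing probabilities.

    Returns probs with probs[i][k] = P(Player i+1 finishes in place k+1),
    computed by summing over all n! finishing orders."""
    n, S = len(stacks), sum(stacks)
    probs = [[0.0] * n for _ in range(n)]
    for order in permutations(range(n)):
        p, remaining = 1.0, S
        for i in order:
            p *= stacks[i] / remaining  # i places 1st among those remaining
            remaining -= stacks[i]
        for place, i in enumerate(order):
            probs[i][place] += p
    return probs
```

For stacks (5, 3, 1) this reproduces the ICM 3rd-place values quoted later in the text, approximately (0.0972, 0.2083, 0.6944).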

It should be emphasized that ICM does not make any assumptions about the bet amount, and hence differs slightly from the 3-tower problem.

In the 3-tower model, the probability of finishing 1st is easily solved by recursion. The result is exactly the same as (3). A method of calculating 3rd place probabilities (and hence, 2nd place probabilities) for the 3-tower problem is presented here using Markov chains constructed from directed multigraphs with loops.

2 Methods

Consider a 3-player model where the players are involved in even money bets. We define a state as an ordered triple \((x, y, z)\) with \(x \ge y \ge z \ge 0\). Because the games in each round are all fair and random, each player’s probability of being ruined first depends only on the amount of money each player has in a given state. We also define chip position (or simply, position) to be the amount of money (or number of chips) a player has in a given state. Let us also define the function \(p(u|v, w)\) to be the probability that a player with chip position u in the state \((u, v, w)\) will finish 3rd, i.e. become ruined first. If the state is understood from context, we will simply write this as p(u). If v and w are positions in the same state such that \(v = w\), then it is assumed that \(p(v) = p(w)\).

A terminal state is one wherein the probabilities of placing 3rd are known. There are two types of terminal states.

  1. If one of the three positions is zero, i.e. \(z = 0\);

  2. If all three positions are equal, i.e. \(x = y = z\).

Note that for the terminal state \((x, y, 0)\), \(p(x) = p(y) = 0\) and \(p(0) = 1\). For the terminal state \((x, x, x)\), \(p(x) = 1/3\) by the previous assumption.

A state \((x, y, z)\) is adjacent to a state \((u, v, w)\) if the former state can move to the latter state in one round. A state that is adjacent to a terminal state is called a near-terminal state.

Lemma 1

A state \((x, y, z)\) with \(x \ge y \ge z\) that satisfies one of the following is a near-terminal state:

  (i) \(z = 1\)

  (ii) \(x = y + 1\) and \(z = y - 1\).

In constructing the multigraph, all possible states of S are represented by nodes. The transitions between adjacent states are given by directed edges. The states are arranged such that all states \((x, y, z)\) with a fixed value of z are aligned vertically, with the highest value of x in the topmost position, in decreasing order going down (i.e. from North to South), while y is increasing at the same time. All states \((x, y, z)\) with fixed x are aligned horizontally, with y in decreasing order from left to right (i.e. West to East) and z increasing at the same time. Consequently, all states with fixed y are aligned diagonally, with x decreasing and z increasing as the states move from Northwest to Southeast. Examples of the resulting multigraphs are given in Figs. 1 and 2. From the construction, it is clear that for a given node, its adjacent nodes are the ones located to its immediate top, bottom, left, right, top left and bottom right positions (i.e. North, South, East, West, Northwest and Southeast). A state may be adjacent to itself if the following holds:

Lemma 2

A state \((x, y, z)\) is adjacent to itself if \(y = x - 1\) and/or \(z = y - 1\).

In the “and” case in Lemma 2, the state is doubly adjacent to itself. A state is also doubly adjacent to itself if it is of the form \((x, x, x-1)\) or \((x, x-1, x-1)\). This and the following Lemma can be proved using the definition of adjacent nodes and (1) with \(b = 1\).

Lemma 3

A state of the form \((x, y, y)\) or \((x, x, z)\) is doubly adjacent to its adjacent nodes.

There is always at least one edge from a non-terminal state to its adjacent state. A state that is doubly adjacent to another state has two edges going to that other state. If a state A is doubly adjacent to a non-terminal state B, it does not follow that B is doubly adjacent to A.
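The state space and adjacency structure described above are easy to enumerate mechanically. The sketch below is our own helper code (not part of the paper's formalism): a state is a sorted triple, and the six outcomes of a round are listed with multiplicity, so double adjacency appears as a repeated successor and self-adjacency as the state reappearing in its own successor list.

```python
def all_states(S):
    """Sorted triples (x, y, z) with x >= y >= z >= 0 and x + y + z = S."""
    return [(x, y, S - x - y)
            for x in range(S + 1)
            for y in range(x + 1)
            if 0 <= S - x - y <= y]

def is_terminal(state):
    x, y, z = state
    return z == 0 or x == y == z

def successors(state):
    """The six equally likely next states (as sorted triples) after one
    unit bet; repeats correspond to multiple edges in the multigraph,
    and a repeat of the state itself to a loop."""
    out = []
    for i in range(3):
        for j in range(3):
            if i != j:
                s = list(state)
                s[i] += 1   # player i wins the bet
                s[j] -= 1   # player j loses it
                out.append(tuple(sorted(s, reverse=True)))
    return out
```

For S = 9 this yields six non-terminal states, and successors((2, 1, 1)) contains (2, 1, 1) twice, matching the double self-adjacency of states of the form \((x, x-1, x-1)\) noted after Lemma 2.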

3 Results and Discussion

Based on the multigraph, we construct the Markov chain. Let \(\tau \) be a relation that maps a position in one state to a position in another non-terminal state. We can think of \(\tau \) as directed edges that connect specific positions within states to positions in other states (or possibly within the same state). Let the function \(\phi \) give the probability that a position is ruined first as the result of a single move; this includes moves into a terminal state of the form \((x, x, x)\), weighted by the ruin probability 1/3 there. Note that the unique non-terminal positions are exactly the transient states in the Markov chain, while \(\phi \) gives the probability of absorption to the first-ruined state. The Theorem below follows from the previous Lemmas.

Theorem 1

Let the state corresponding to a non-terminal vertex be denoted by \((x, y, z)\) where \(x \ge y \ge z\) such that \(z \ge 1\). Let \(S = x + y + z\) and let \((u, v, w)\) be an adjacent non-terminal vertex.

  (i) If \(z = y - 1\), then \(\tau (x) \rightarrow u, \tau (y) \rightarrow w, \tau (z) \rightarrow v\)

  (ii) If \(y = x - 1\), then \(\tau (x) \rightarrow v, \tau (y) \rightarrow u, \tau (z) \rightarrow w\)

  (iii) If \(z = y\), then \(\tau (x) \rightarrow u\) twice, \(\tau (y) \rightarrow v\) and \(\tau (y) \rightarrow w\)

  (iv) If \(y = x\), then \(\tau (z) \rightarrow w\) twice, \(\tau (x) \rightarrow u\) and \(\tau (x) \rightarrow v\).

In all other cases, the transitions are \(\tau (x) \rightarrow u\), \(\tau (y) \rightarrow v\) and \(\tau (z) \rightarrow w\).
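The case analysis in Theorem 1 can be cross-checked by tracking one player explicitly. The helper below is our own sketch, not part of the paper's formalism: for the player currently holding a given entry of a sorted state, it returns all six (new position, new sorted state) outcomes.

```python
def position_moves(state, k):
    """All six outcomes for the player holding state[k]: pairs of
    (that player's new chip position, the new sorted state)."""
    # Put the tracked player's stack first; the others follow.
    triple = (state[k],) + tuple(v for i, v in enumerate(state) if i != k)
    out = []
    for i in range(3):
        for j in range(3):
            if i != j:
                s = list(triple)
                s[i] += 1   # player i wins one chip from player j
                s[j] -= 1
                # s[0] is always the tracked player's stack
                out.append((s[0], tuple(sorted(s, reverse=True))))
    return out
```

For the state (6, 2, 1), which satisfies \(z = y - 1\), tracking the position 6 gives the non-terminal targets 7, 6, 5 and 5, in agreement with case (i) of the Theorem.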

Remark 1

If the state is self-adjacent, then the corresponding transitions of positions within the same state in Theorem 1(i), (ii) are given by

  (i) \(\tau (x) \rightarrow x, \tau (y) \rightarrow z, \tau (z) \rightarrow y\)

  (ii) \(\tau (x) \rightarrow y, \tau (y) \rightarrow x, \tau (z) \rightarrow z\).

Each mapping of positions by \(\tau \) gives a transition probability of 1/6, except when the mapping occurs twice, as in cases (iii) and (iv) of the Theorem, in which case the transition probability is 2/6. These values are then used to generate the transient matrix \(\mathbf Q \) of the Markov chain. For the absorption probabilities, we use the following:

Theorem 2

Given a position u, \(\phi (u) = 1/3\) when \(u = 1\). If \(u > 1\), then \(\phi (u) = 0\); the only exception is the state \((u+1, u, u-1)\), wherein \(\phi (u+1) = \phi (u) = \phi (u-1) = 1/18\).
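Theorem 2, together with the additive combination of its two cases, can be written as a small exact function. This is our own sketch (the function name is hypothetical); state is a sorted non-terminal triple and u the chip position in question.

```python
from fractions import Fraction

def phi(state, u):
    """One-move first-ruin absorption probability of position u in the
    given sorted non-terminal state, following Theorem 2."""
    x, y, z = state
    # A lone chip plays in 2 of the 6 games and is lost on a loss: 2/6.
    p = Fraction(1, 3) if u == 1 else Fraction(0)
    if x == y + 1 and z == y - 1:
        # One of the six games leads to (y, y, y), where each position is
        # ruined first with probability 1/3: contributes 1/6 * 1/3 = 1/18.
        p += Fraction(1, 18)
    return p
```

For the state (3, 2, 1) this gives \(\phi(1) = 1/3 + 1/18 = 7/18\), the combined value used below.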

If \(S = 6\), then in the state (3, 2, 1), \(\phi (1) = 1/3 + 1/18\), by combining the two cases in Theorem 2. The values of \(\phi \) on the various positions are then used to construct the vector r. Finally, we solve the system

$$\begin{aligned} (\mathbf I -\mathbf Q ) \mathbf p = \mathbf r \end{aligned}$$
(6)

where the vector p gives the probabilities of first ruin for each position and I is the identity matrix with the same dimension as Q. It is easy to show that the matrix \(\mathbf I -\mathbf Q \) is invertible [5].

Fig. 1. Multigraph with loops for \(S = 4\).

Example 1

To illustrate the method, we first consider the simplest case \(S = 4\), with only one non-terminal state (2, 1, 1) and two terminal states (3, 1, 0) and (2, 2, 0). The multigraph is shown in Fig. 1. Note that by Lemma 2, (2, 1, 1) is adjacent to itself, and by Lemma 3, it is doubly adjacent to its adjacent nodes, including itself. We use a combination of Remark 1 and Theorem 1(iii) to get \(\tau (2) \rightarrow 1\) twice, \(\tau (1) \rightarrow 2\) once and \(\tau (1) \rightarrow 1\) once. This generates our transient matrix Q. Using Theorem 2, \(\phi (2) = 0\) and \(\phi (1) = 1/3\). This produces the vector r. Thus

$$\begin{aligned} \mathbf Q = \frac{1}{6}\left( \begin{array}{cc} 0 & 2 \\ 1 & 1 \end{array} \right) , \qquad \mathbf r = \left( \begin{array}{c} 0 \\ 1/3 \end{array} \right) . \end{aligned}$$
(7)

Substituting (7) in (6), we obtain the solution \(\mathbf p = (1/7, 3/7)^T\), i.e. the probabilities of being ruined first are \(p(2) = 1/7\) and \(p(1) = 3/7\). The 2nd place probabilities are thus 10/28 and 9/28 for positions 2 and 1, respectively. In comparison, [3] obtained (0.35790, 0.32105) using a Brownian motion model, which are good approximations of the true values obtained by our method. The computed ICM values are (1/3, 1/3), which are slightly different from the 3-tower values.
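The 2 × 2 system of Example 1 can be checked with exact rational arithmetic; the following minimal sketch solves (6) with the matrices of (7) via Cramer's rule.

```python
from fractions import Fraction as F

# Q and r from (7)
Q = [[F(0, 6), F(2, 6)],
     [F(1, 6), F(1, 6)]]
r = [F(0), F(1, 3)]

# A = I - Q
A = [[(1 if i == j else 0) - Q[i][j] for j in range(2)] for i in range(2)]

# Cramer's rule for the 2x2 system A p = r
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
p = [(r[0] * A[1][1] - r[1] * A[0][1]) / det,
     (r[1] * A[0][0] - r[0] * A[1][0]) / det]
```

This yields p = (1/7, 3/7) exactly, and the 2nd place probability of position 2 follows as 1 − 1/2 − 1/7 = 10/28.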

Fig. 2. Directed multigraph (with loops) of state transitions for \(S = 9\).

Example 2

Figure 2 illustrates the various states for \(S = 9\). In this example, there are 6 non-terminal states and a total of 15 unique positions. For labeling purposes, letters are affixed to a position value when two or more unique states share that value. Starting from the top left non-terminal state and moving downwards (or South), the non-terminal states are (7, 1a, 1a), (6, 2a, 1b), (5a, 3a, 1c), (4a, 4a, 1d), (5b, 2b, 2b), and (4b, 3b, 2c). The unique positions are then arranged starting from the x positions in each of the states above, then the y positions, and then the z positions, skipping the non-unique positions as needed. Thus our indices correspond to 7, 6, 5a, 4a, 5b, 4b, 1a, 2a, 3a, 2b, 3b, 1b, 1c, 1d and 2c, respectively. For example, index \(i = 1\) of Q, r and p corresponds to values for position 7 from the state (7, 1, 1), while index \(i=15\) corresponds to the state-position 2c from the state (4, 3, 2). In constructing Q, note that by Theorem 1(iii), \(\tau (7) \rightarrow 6\) twice, and using the 1- and 2-index for positions 7 and 6, respectively, we have \(Q_{12} = 2/6\). Because all other transitions of 7 are towards terminal states, \(Q_{1j} = 0\) for \(j \ne 2\). From Theorem 2, \(r_1 = \phi (7) = 0\) because 7 is not a near-terminal position. For position 6, we have \(\tau (6) \rightarrow \{7, 6, 5a, 5b\}\) using Theorem 1(i), hence \(Q_{21} = Q_{22} = Q_{23} = Q_{25} = 1/6\). For position 5a in the state (5a, 3a, 1c), the ‘regular’ case of Theorem 1 applies, hence \(\tau (5a) \rightarrow \{6, 5b, 4a, 4b\}\). The rest of the entries are computed similarly, using Theorems 1 and 2. The transient matrix Q and absorption vector r in (6) are obtained as follows:

$$\begin{aligned} \mathbf Q = \frac{1}{6}\left( \begin{array}{ccccccccccccccc} 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 2 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 \\ \end{array} \right) \end{aligned}$$
(8)

and

$$\begin{aligned} \mathbf r = \left( 0 \,\,\, 0 \,\,\, 0 \,\,\, 0 \,\,\, 0 \,\,\, 1/18 \,\,\, 1/3 \,\,\, 0 \,\,\, 0 \,\,\, 0 \,\,\, 1/18 \,\,\, 1/3 \,\,\, 1/3 \,\,\, 1/3 \,\,\, 1/18 \right) ^T. \end{aligned}$$
(9)

The solution of (6) using (8) and (9) then produces the 3rd place probabilities of the various positions. For example, given the state (5, 3, 1), the probabilities of placing 3rd are \(p(5) = 0.075227\), \(p(3) = 0.196310\), and \(p(1) = 0.728463\). The results are similar to the values obtained using numerical solutions to Markov processes, which were given to 4 decimal places [6]. The corresponding ICM values are (0.0972, 0.2083, 0.6944).
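These figures can be cross-checked without any of the position-labeling bookkeeping by solving the full first-ruin chain over ordered triples, one unknown per composition of S into three positive stacks, with exact rational arithmetic. The brute-force sketch below is our own cross-check, not the paper's reduced construction; the symmetric state (3, 3, 3) is simply left transient here.

```python
from fractions import Fraction

def first_ruin_probs(S):
    """p[(a, b, c)] = exact probability that the player holding a is ruined
    first, over all ordered triples of positive stacks summing to S."""
    states = [(a, b, S - a - b) for a in range(1, S - 1)
              for b in range(1, S - a) if S - a - b >= 1]
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    # Augmented system (I - Q | r) for player 1's first-ruin probability.
    A = [[Fraction(0)] * (n + 1) for _ in range(n)]
    for s in states:
        row = A[idx[s]]
        row[idx[s]] += 1
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                t = list(s)
                t[i] += 1   # player i wins one chip from player j
                t[j] -= 1
                t = tuple(t)
                if min(t) == 0:
                    if t[0] == 0:            # player 1 is ruined first
                        row[n] += Fraction(1, 6)
                else:
                    row[idx[t]] -= Fraction(1, 6)
    # Gauss-Jordan elimination over the rationals.
    for c in range(n):
        piv = next(k for k in range(c, n) if A[k][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for k in range(n):
            if k != c and A[k][c] != 0:
                f = A[k][c]
                A[k] = [a - f * b for a, b in zip(A[k], A[c])]
    return {s: A[idx[s]][n] for s in states}
```

For S = 4 this returns 1/7 and 3/7 for the two positions of (2, 1, 1), and for S = 9 the three first-ruin probabilities of the state (5, 3, 1) sum to exactly 1.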

Unlike ICM or other methods, the actual number of chips of each player, rather than just the proportion of the chips to those of the other players, affects that player’s probability of first ruin. The accuracy of the method allows very minute differences in probabilities to be observed. As an illustration, the 3rd place probabilities for states in multiples of (3, 2, 1) are shown in Table 1. As x, y and z increase, the ruin probabilities approach a limiting value. The ICM values are shown for comparison.

Table 1. Probabilities of placing 3rd for given states \((x, y, z)\), where \(x : y : z\) are in the ratio 3 : 2 : 1. The ICM values are shown for comparison.

The time until first ruin can also be calculated from the model. For this purpose, we adjust the transient matrix Q when \(S\text { mod } 3 = 0\) by regarding the state \((S/3, S/3, S/3)\) as non-terminal. The time until ruin is then given by the row sums of \(\mathbf N = (\mathbf I - \mathbf Q )^{-1}\). In Example 1, for the state (2, 1, 1), with Q given by (7) and I the \(2 \times 2\) identity, we obtain

$$\begin{aligned} \mathbf N = \frac{1}{14}\left( \begin{array}{cc} 15 & 6 \\ 3 & 18 \end{array} \right) \end{aligned}$$
(10)

and the row sum for both positions is 3/2, exactly the value given by the time-until-ruin formula (2).
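The same brute-force style of cross-check works for the expected time: solving \((\mathbf I - \mathbf Q)\mathbf t = \mathbf 1\) over ordered triples of positive stacks reproduces formula (2). This is our own sketch (the function name is hypothetical), again with exact rationals.

```python
from fractions import Fraction

def expected_ruin_time(S):
    """t[(a, b, c)] = exact expected number of rounds until the first
    ruin, for all ordered triples of positive stacks summing to S."""
    states = [(a, b, S - a - b) for a in range(1, S - 1)
              for b in range(1, S - a) if S - a - b >= 1]
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    A = [[Fraction(0)] * (n + 1) for _ in range(n)]
    for s in states:
        row = A[idx[s]]
        row[idx[s]] += 1
        row[n] += 1                      # one round always elapses
        for i in range(3):
            for j in range(3):
                if i != j:
                    t = list(s)
                    t[i] += 1            # i wins one chip from j
                    t[j] -= 1
                    if min(t) > 0:       # successor still transient
                        row[idx[tuple(t)]] -= Fraction(1, 6)
    # Gauss-Jordan elimination over the rationals.
    for c in range(n):
        piv = next(k for k in range(c, n) if A[k][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for k in range(n):
            if k != c and A[k][c] != 0:
                f = A[k][c]
                A[k] = [x - f * y for x, y in zip(A[k], A[c])]
    return {s: A[idx[s]][n] for s in states}
```

For (2, 1, 1) this gives exactly 3/2, and for S = 6 it confirms both \(T(3, 2, 1) = 3\) and \(T(2, 2, 2) = 4\) from (2); the equal-stacks state needs no special handling in this direct chain.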

4 Conclusion

In this paper, a method of solving players’ first-ruin probabilities in the 3-tower problem, or gambler’s ruin with three players, was presented. The assumptions were that each player started with some nonzero number of chips and that in each round, two randomly selected players made an even-money bet, with a randomly chosen winner taking one chip from the loser. One application of this is computing equities in a partial information game (e.g. poker tournaments) modeled as a random game. A multigraph of the various states given S total chips was constructed. The method specified how to obtain the state transitions and absorption probabilities, as given by Theorems 1 and 2. The resulting linear system of the Markov chain was then used to solve for the 3rd place (and thus 2nd place) probabilities of any state in S. Although a closed-form formula was not derived for the probabilities, the method produces exact solutions instead of numerical approximations. This made it possible to show subtle differences in probabilities of first ruin as S was increased while the relative chip ratios were preserved. In contrast, other methods, such as ICM or Brownian models, depend only on the proportion of chips each player has, and thus are independent of any scaling factor. The calculated results were similar to previous numerical approximations using Brownian motion, but differed from ICM by up to \(15\,\%\), although as mentioned, ICM and the 3-tower problem do not use the same assumptions.

The 3-tower model may be extended to one wherein bet sizes are not fixed. The multigraph of such a model would be much more complex because the number of edges and adjacent nodes would no longer be limited to six. The model may also be applied to other forms of the three-player gambler’s ruin, such as player-centric and symmetric games. An extension to an N-tower problem may be done, but the increase in the complexity of the graph is expected to be significant.