## Abstract

The tug-of-war in which single players repeatedly compete in all-pay auctions is known to have a non-cooperative Markov-perfect equilibrium in which neither player expends positive effort and the tug-of-war remains unresolved, provided that the winner and loser prizes, the tie-breaking rule, the lead required for victory, and the initial state are chosen appropriately. In this paper, we show that such peaceful equilibria do not exist if the tug-of-war is between teams with pairwise matched players. The reason for this phenomenon is that the members of the teams can externalize future effort costs while the single players cannot. For a restricted number of states, our analysis also highlights the impact of the discount factor on the expected trajectory of the tug-of-war, the dynamics of the expected effort, and the equilibrium utility.


## Notes

- 1.
- 2.
In a setting without a loser prize and with a tie-breaking rule that favors the player with the higher continuation value, Konrad and Kovenock (2005) show that the concern about high future effort may discourage players who have accumulated a disadvantage from trying to win and pushing the process toward their preferred endpoint. This results in an equilibrium with an interior tipping state, which is the only state in which positive effort occurs.

- 3.
- 4.
This difference is immaterial for our analysis. In particular, the logic of the proof and the intuition for their eternal-peace result, which we present in Proposition 1, are unaffected.

- 5.
Agastya and McAfee (2006) also consider the case in which the winner prize is greater than the loser prize, and they allow for terminal prizes that are asymmetric between the players. In these cases, a peaceful equilibrium might also exist, yet the number of states required might be greater than five. We focus on the range of win and loss parameters satisfying (2) because this is the range in which the total surplus from conflict resolution is negative. Hence, it is the range in which peaceful continuation along interior states forever maximizes the sum of all players’ rents, and in which non-existence of a peaceful equilibrium is potentially more surprising.

- 6.
See also Konrad and Kovenock (2015) for a more general analysis of the static all-pay contest when prizes are withheld in case of ties.

- 7.
To obtain this result from Theorem 1 in McAfee (2000), note that the functions \({\varPsi }_s\) and \({\varPhi }_s\) defined in (16) and (17) in McAfee (2000) intersect at \(s=3\), that \({\varPsi }_3<0\), and that \({\varPsi }_s\) (\({\varPhi }_s\)) is decreasing (increasing) in *s*. The discussion after Theorem 1 in McAfee (2000) makes it clear that the case labeled (*b*) in Theorem 1 applies. The existence of exactly two equilibria continues to hold for any number of states, where for either of the two equilibria described in the following, the respective characterizations and the logic of the respective proofs extend in a straightforward way.

- 8.
With more than five states, there are still only two states in which the players expend positive effort, yet there is a connected set of interior states that are eternally peaceful.

- 9.
We conjecture that a similar construction works for any number of states in the tug-of-war.

- 10.
- 11.
For a corresponding tug-of-war with more than five states, the equilibrium analysis would require determining not just one such consistent winning probability, but a whole vector of consistent winning probabilities. While existence might still be established along the lines pursued hereafter, obtaining the corresponding comparative statics would require analyzing a system of multivariate polynomials rather than the cubic equations in *p* as in our setup (compare, for example, the proof of Lemma 4).

- 12.
- 13.
Parties with an equal number of players are not necessary for any of the results in this section, yet the assumption facilitates some of the notation.

## References

Agastya, M., McAfee, R.P.: Continuing Wars of Attrition, SSRN Working Paper (2006)

Arce, D.G., Kovenock, D., Roberson, B.: Weakest-link attacker-defender games with multiple attack technologies. Naval Research Logistics **59**, 457–469 (2012)

Baye, M.R., Kovenock, D., de Vries, C.G.: The all-pay auction with complete information. Econ. Theory **8**(2), 291–305 (1996)

Clark, D.J., Konrad, K.A.: Asymmetric conflict: weakest link against best shot. J. Conflict Resol. **51**(3), 457–469 (2007)

Esteban, J.M., Ray, D.: Collective action and the group size paradox. Am. Polit. Sci. Rev. **95**(3), 663–672 (2001)

Fu, Q., Lu, J., Pan, Y.: Team contests with multiple pairwise subcontests. Am. Econ. Rev. **105**(7), 2120–2140 (2015)

Fudenberg, D., Tirole, J.: Game Theory. MIT Press, Cambridge (1991)

Gelder, A.: From Custer to Thermopylae: last stand behavior in multi-stage contests. Games Econ. Behav. **87**, 442–466 (2014)

Häfner, S.: A tug-of-war team contest. Games Econ. Behav. **104**, 372–391 (2017)

Häfner, S., Konrad, K.A.: Eternal Peace in the Tug-of-War?, Working Paper of the Max Planck Institute for Tax Law and Public Finance No. 2016-09 (2016)

Harris, C., Vickers, J.: Racing with uncertainty. Rev. Econ. Stud. **54**(1), 1–21 (1987)

Hirshleifer, J.: From weakest link to best shot: the voluntary provision of public goods. Public Choice **41**(3), 371–386 (1983)

Klumpp, T., Polborn, M.K.: Primaries and the New Hampshire effect. J. Public Econ. **90**(6–7), 1073–1114 (2006)

Konrad, K.A., Kovenock, D.: Equilibrium and Efficiency in the Tug-of-War, CESifo Working Paper No. 1564 (2005)

Konrad, K.A., Kovenock, D.: Multi-battle contests. Games Econ. Behav. **66**(1), 256–274 (2009)

Konrad, K.A., Kovenock, D.: Interest Groups, Influence Activities and Politicians with Incomplete Commitment, unpublished manuscript, presented at the 2015 SAET meeting in Cambridge, UK (2015)

Maskin, E., Tirole, J.: Markov perfect equilibrium: I. Observable actions. J. Econ. Theory **100**(2), 191–219 (2001)

McAfee, R.P.: Continuing Wars of Attrition, unpublished manuscript (2000)

Nitzan, S.: Collective rent dissipation. Econ. J. **101**(409), 1522–1534 (1991)

Plemmons, R.J.: M-matrix characterizations. I. Nonsingular M-matrices. Linear Algebra Appl. **18**(2), 175–188 (1977)

## Author information

### Affiliations

### Corresponding author

## Additional information

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

I am indebted to Kai A. Konrad, who initially suggested this project and with whom I developed many of the ideas in this paper (cf. Häfner and Konrad 2016). Further thanks go to Igor Letina, Georg Nöldeke, Anja Shortland, and three anonymous referees for very helpful comments. The seminar audiences at Heidelberg, Munich, Oslo, Stockholm, Zurich, and the SAET in Rio provided invaluable feedback.

## Appendices

### The static all-pay auction with a zero-tie-prize

As argued in the text, the subcontests in the tug-of-war can be interpreted as static all-pay auctions. Consider a complete information all-pay auction between two players \(i=A,B\) in which the players simultaneously expend bids \(x_1,x_2 \ge 0\) and have the following payoffs \(U_i(x_1,x_2)\):

This payoff function describes an all-pay contest in which players \(i=A,B\) compete for a winner prize that is worth \(W_i \ge 0\) to player *i* and receive a loser prize \(L_i \le W_i\) (either positive or negative) in case of losing when the opponent has chosen a strictly positive bid. Ties with strictly positive bids are resolved by a fair coin toss, yet both players *i* receive a zero-effort-tie prize \(T_i\) (either positive or negative) in case both players expend zero bids.
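As the displayed payoff expression is omitted above, the verbal description can be collected in a short numerical sketch. Python is used purely for illustration; the function below is a hypothetical helper, not part of the paper:

```python
import random

def payoff(x_i, x_j, W_i, L_i, T_i, rng=random.Random(0)):
    """Payoff of player i in the static all-pay auction with a
    zero-effort-tie prize, following the verbal description above
    (W_i winner prize, L_i loser prize, T_i zero-effort-tie prize)."""
    if x_i > x_j:
        return W_i - x_i          # i outbids j and wins
    if x_i < x_j:
        return L_i - x_i          # j's bid is strictly positive, i loses
    if x_i == 0.0:
        return T_i                # both players expend zero bids
    # tie at strictly positive bids: resolved by a fair coin toss
    return (W_i if rng.random() < 0.5 else L_i) - x_i

# Example with W_i = 1, L_i = -0.5 (a negative loser prize), T_i = 0.2:
win  = payoff(0.3, 0.1, 1.0, -0.5, 0.2)   # 1 - 0.3
lose = payoff(0.1, 0.3, 1.0, -0.5, 0.2)   # -0.5 - 0.1
tie0 = payoff(0.0, 0.0, 1.0, -0.5, 0.2)   # T_i
```

Note that the bid is sunk in every branch except the joint-zero-bid one, which is what makes the contest an all-pay auction.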

The following lemmas describe the equilibrium for the prize ranges that are relevant for our analysis. The equilibrium strategies are given by cumulative distribution functions \(F_i\) over the feasible bids \([0,{\bar{x}}]\) for either player \(i=A,B\). Atoms at a bid \(x \in [0,{\bar{x}}]\) are denoted by \(\alpha _i(x) \in (0,1]\). The first result is immediate; the second result can be checked more formally by following along the lines of the analysis in Baye et al. (1996).

### Lemma 5

If \(W_i=T_i\) for both players \(i=A,B\), then there is an equilibrium in which both players bid zero with probability one, i.e., \(\alpha _i(0)=1\) holds for \(i=A,B\).

For the second result, we denote the net value of winning of player *i* by \(Z_i = W_i-L_i\).

### Lemma 6

Suppose \(Z_j \ge Z_i>0\). Further, suppose \(W_i \ge T_i\) and \(W_j \ge T_j\), where the inequality is strict for at least one of the players *i* and *j*. Then, there is a unique equilibrium with the equilibrium strategies given by
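The displayed strategy profile is not reproduced above. As a sketch of what the omitted display should contain (not a verbatim reproduction), the standard two-player all-pay auction characterization in Baye et al. (1996) for net values \(Z_j \ge Z_i > 0\), consistent with the explicit instance used in the proof of Proposition 3 below, reads:

```latex
\[
F_j(x) = \frac{x}{Z_i}, \qquad
F_i(x) = 1 - \frac{Z_i}{Z_j} + \frac{x}{Z_j},
\qquad x \in [0, Z_i],
\]
```

where the weaker player *i* places an atom \(\alpha _i(0) = 1 - Z_i/Z_j\) at zero whenever \(Z_j > Z_i\), earns a rent equal to the loser prize \(L_i\), and the stronger player *j* earns \(L_j + Z_j - Z_i\).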

### Proofs

### Proofs for Sect. 2

### Proof of Proposition 1

As observed in the text, we need to establish that neither of the players *A* and *B* has an incentive to deviate from the strategies prescribed in the profile \((F_{A},F_{B})\) at any single subcontest, provided that \((F_{A},F_{B})\) is played throughout the tug-of-war. As the strategies are stationary, this amounts to showing that neither player has an incentive for a one-shot deviation in any of the interior states.

We start with state \(s=2\) (state \(s=4\) follows by an analogous argument). In the terms used in Appendix A, the winning prize for player *A* at \(s=2\) under \((F_{A},F_{B})\) is \(\delta V_{A}^3\) and hence equal to zero (as \(V_{A}^3=0\)), the losing prize is \(\delta V_{A}^1 = -\delta L\), and the zero-effort-tie prize \(\delta V_{A}^2\) is negative (because under \((F_{A},F_{B})\) player *A* may never win more than zero utility, and hence \(V_{A}^2 \le 0\)). The net value of winning for player *A* is \(\delta [V_{A}^3-V_{A}^1]=\delta L>0\). On the other hand, both the winning prize and the net value of winning for player *B* are equal to \(\delta W\), while the zero-effort-tie prize is bounded above by \(\delta ^2 W\). Because \(W<L\) holds, the strategies \(F_{A}\) and \(F_{B}\) are thus mutual best replies according to Lemma 6, so that neither player has an incentive for a one-shot deviation.

Next we turn to state \(s=3\). Using the insights of Baye et al. (1996) we find that the continuation value for player *A* of reaching state \(s=4\) under \((F_{A},F_{B})\) is zero, while the continuation value for player *B* of reaching state \(s=2\) is also zero. That is, the winning prizes of both players in \(s=3\) are zero. Together with the fact that staying in \(s=3\) yields a continuation value of zero to both players, and hence the zero-effort-tie prize is also zero, it follows by Lemma 5 that both players playing zero with probability one constitutes mutual best responses and hence neither player has an incentive for a one-shot deviation. \(\square \)

### Proof of Proposition 2

We may assume symmetry in the strategies, i.e., \(F_{A}^2=F_{B}^4\), \(F_{A}^3=F_{B}^3\), and \(F_{A}^4=F_{B}^2\). Symmetry implies symmetric continuation values, and we hence define \(V_{A}^s=V_{B}^{6-s} \equiv V_s\) for all \(s \in \{1,...,5\}\). Assuming for the moment that \(V_5>V_4>V_3>V_2>V_1\) holds, the strategies in Lemma 6 give (cf. also Baye et al. 1996)

where \(Z_4 = \delta [V_5-V_3] = \delta [W-V_3]>0\) is the net value of winning for player *A* in \(s=4\) or, equivalently, the net value of winning for player *B* in \(s=2\), and analogously \(Z_2 = \delta [V_3+L]>0\) is the net value of winning for player *B* in \(s=4\) or, equivalently, the net value of winning for player *A* in \(s=2\). We have to consider two cases: (i) \(Z_4 \ge Z_2\) and (ii) \(Z_4 < Z_2\).

First, consider \(Z_4 \ge Z_2\). In this case, we obtain \(V_2 = \delta (-L)\), \(V_3=\delta ^2(-L)\), \(Z_4 = \delta [W+\delta ^2L]\), \(Z_2 = \delta L(1-\delta ^2)\), and hence

and as a condition for \(Z_4 \ge Z_2\),

Further, we obtain \(Z_3 = \delta ^2[W-L(1-\delta ^2)+L] = \delta ^2[W+\delta ^2L]\), and it is obvious that \(V_5>V_4>V_3>V_2>V_1\). Consequently, plugging the prizes \(Z_4\) and \(Z_2\) into the expressions in Lemma 6 yields mutual best responses in every interior state *s*. Because these strategies are stationary, the one-shot deviation principle gives that they form an MPE, which together with (24) gives us the first part of the proposition.
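The displayed expressions for case (i) can be checked numerically. The sketch below uses assumed illustrative parameters \(W=1\), \(L=2\) (so \(W<L\)) and discount factors for which \(Z_4 \ge Z_2\) holds, and verifies the identities \(Z_4=\delta [W-V_3]\), \(Z_2=\delta [V_3+L]\), \(Z_3=\delta Z_4\), together with the ordering \(V_5>V_3>V_2>V_1\):

```python
# Consistency check for case (i) of Proposition 2 (Z4 >= Z2).
# Illustrative parameters (assumed): W < L as required by the model.
W, L = 1.0, 2.0

for delta in (0.6, 0.8, 0.95):
    V1, V5 = -L, W
    V2 = delta * (-L)             # V2 = -delta*L
    V3 = delta**2 * (-L)          # V3 = -delta^2*L
    Z4 = delta * (W + delta**2 * L)
    Z2 = delta * L * (1 - delta**2)
    Z3 = delta**2 * (W + delta**2 * L)
    assert abs(Z4 - delta * (W - V3)) < 1e-12   # Z4 = delta*(V5 - V3)
    assert abs(Z2 - delta * (V3 + L)) < 1e-12   # Z2 = delta*(V3 - V1)
    assert abs(Z3 - delta * Z4) < 1e-12
    assert Z4 >= Z2 and V5 > V3 > V2 > V1
```

The expression for \(V_4\) is not displayed for this case, so only the remaining continuation values are ordered here.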

Next, consider \(Z_4 < Z_2\). For \(V_2\) we obtain

and as a condition for \(Z_4 < Z_2\),

Further, we obtain \(V_3=-\delta ^2W/(1-2\delta ^2)\), \(V_4=-\delta ^3W/(1-2\delta ^2)\), \(Z_4=\delta W(1-\delta ^2)/(1-2\delta ^2)\), \(Z_3=\delta ^2W(1-\delta ^2)/(1-2\delta ^2)\), and \(Z_2 = \delta [L-\delta ^2W/(1-2\delta ^2)]\). Also, it is obvious that \(V_5>V_4>V_3>V_2>V_1\). Consequently, plugging the prizes \(Z_4\) and \(Z_2\) into the expressions in Lemma 6 yields mutual best responses in every interior state *s*. Again, by the one-shot deviation principle these strategies form an MPE, which together with (25) gives us the second part of the proposition. \(\square \)
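The expressions for case (ii) admit an analogous numerical check. The sketch below uses assumed illustrative parameters \(W=1\), \(L=2\) and discount factors small enough that \(1-2\delta ^2>0\) and \(Z_4<Z_2\), and verifies \(Z_4=\delta [W-V_3]\), \(V_4=\delta V_3\), and \(Z_3=\delta Z_4\):

```python
# Consistency check for case (ii) of Proposition 2 (Z4 < Z2).
# Illustrative parameters (assumed): W < L, delta small enough that
# the displayed expressions are well defined (1 - 2*delta^2 > 0).
W, L = 1.0, 2.0

for delta in (0.3, 0.4):
    V5 = W
    V3 = -delta**2 * W / (1 - 2 * delta**2)
    V4 = -delta**3 * W / (1 - 2 * delta**2)
    Z4 = delta * W * (1 - delta**2) / (1 - 2 * delta**2)
    Z3 = delta**2 * W * (1 - delta**2) / (1 - 2 * delta**2)
    Z2 = delta * (L + V3)         # Z2 = delta*[L - delta^2*W/(1-2*delta^2)]
    assert abs(Z4 - delta * (W - V3)) < 1e-12   # Z4 = delta*(V5 - V3)
    assert abs(V4 - delta * V3) < 1e-12
    assert abs(Z3 - delta * Z4) < 1e-12
    assert Z4 < Z2 and V5 > V4 > V3
```

The check also confirms that \(Z_4<Z_2\) indeed obtains at these parameter values, so they fall in case (ii).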

### Proofs for Sect. 3

### Proof of Proposition 3

The proof is by contradiction: Suppose \(s=2\) is a peaceful state. In this case both players choose \(x_{A,2}=x_{B,2}=0\) now and at every future visit to this state, and the expected present value payoff of players (*A*, 2) and (*B*, 2) is zero. However, given the choice \(x_{A,2}=0\), player (*B*, 2) may choose, for instance, \(x_{B,2}=\frac{\delta W}{2}\). This choice moves the process to \(s=1\) with probability 1 and causes a payoff for (*B*, 2) of

The existence of this strategy contradicts the assumption that \(x_{B,2}=0\) maximizes (*B*, 2)’s payoff. The same argument can be made at \(s=4\), with player (*B*, 2) replaced by player (*A*, 4).

Finally, consider \(s=3\). If this state is peaceful forever, this implies \(x_{A,3}=x_{B,3}=0\), and yields a present value of payoffs to players (*A*, 3) and (*B*, 3) of zero. Suppose this is an equilibrium. Then, a one-stage deviation at *t* by one of the players cannot yield a higher payoff to this player. We show, however, that such a profitable deviation exists.

To show this, let us first consider the optimal play of players (*A*, 2) and (*B*, 2), should the process move from \(s=3\) to \(s=2\). Players (*A*, 2) and (*B*, 2) anticipate that moving from \(s=2\) to \(s=1\) yields final payoffs of \(-\delta L\) for (*A*, 2) and \(\delta W\) for (*B*, 2), whereas moving from \(s=2\) to \(s=3\) leads into the peaceful state at which the process rests forever. Hence, the present value of payoffs for both of them at \(s=3\) is zero. This implies that (*A*, 2) and (*B*, 2) are pitted against each other in an all-pay contest with strictly positive net prizes \(Z_{A,2}=\delta L\) and \(Z_{B,2}=\delta W\), and winner prizes given by \(\delta W\) for player (*B*, 2) and 0 for player (*A*, 2). Because the zero-effort-tie prize for player (*B*, 2) is bounded above by \(\delta ^2 W < \delta W\) and the zero-effort-tie prize of player (*A*, 2) is not greater than zero, it follows by Lemma 6 that the unique mutual best responses are \(F_{A,2}(x)=x/(\delta W)\) and \(F_{B,2}(x)=1-(W/L)+x/(\delta L)\) on the support \(x\in [0,\delta W]\). This implies that in the conjectured equilibrium the process moves from \(s=2\) to \(s=1\) with probability \(\frac{1}{2}\frac{W}{L}\) and it moves to \(s=3\) with probability \(1-\frac{1}{2}\frac{W}{L}\). Taking this into consideration, a one-stage deviation of (*B*, 3) from \(x_{B,3}=0\) to, for instance, \(x_{B,3}=\varepsilon >0\) increases (*B*, 3)’s payoff from zero to \(\frac{1}{2}\frac{W}{L}\delta W-\varepsilon \), which is positive for all \(0<\varepsilon <\frac{1}{2}\frac{W}{L}\delta W\). This profitable one-stage deviation implies that \(s=3\) cannot be a peaceful state in an MPE in stationary strategies. \(\square \)
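The transition probability \(\frac{1}{2}\frac{W}{L}\) derived above can be confirmed by simulation. The sketch below (assumed illustrative parameters \(W=1\), \(L=2\), \(\delta =0.9\)) samples from the stated equilibrium distributions \(F_{A,2}(x)=x/(\delta W)\) and \(F_{B,2}(x)=1-(W/L)+x/(\delta L)\) and estimates the probability that player (*B*, 2) wins the subcontest, i.e., that the process moves to \(s=1\):

```python
import random

# Illustrative parameters (assumed): W < L as in the model.
W, L, delta = 1.0, 2.0, 0.9
rng = random.Random(42)

def draw_A():
    # F_A(x) = x/(delta*W): uniform on [0, delta*W]
    return rng.uniform(0.0, delta * W)

def draw_B():
    # F_B has an atom of mass 1 - W/L at zero, then uniform density
    # 1/(delta*L) on (0, delta*W]
    u = rng.random()
    return 0.0 if u < 1 - W / L else (u - (1 - W / L)) * delta * L

n = 200_000
b_wins = sum(draw_B() > draw_A() for _ in range(n)) / n
# theoretical probability that the process moves to s = 1: W/(2L)
assert abs(b_wins - W / (2 * L)) < 0.01
```

Ties occur with probability zero here because \(F_{A,2}\) is atomless, so the comparison of raw draws suffices.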

### Proof of Lemma 1

We start by arguing that \(p>0\) must hold in any stationary MPE of the form considered. Suppose not, i.e., suppose that \(p=0\) holds. Then, the tug-of-war state remains in the set \(\{2,3,4\}\) forever. Now, consider the players in state \(s=3\). Because the tug-of-war state remains in the set \(\{2,3,4\}\), both the value of winning at state \(s=3\) and the zero-effort-tie prizes are zero for both players, and hence, by Lemma 5, they both optimally expend zero effort. But this implies that \(\alpha _{A,3}(0)=\alpha _{B,3}(0)=1\), contradicting our Proposition 3.

Next, we consider the prizes and the continuation values in the states \(s \in S\) separately, focusing on the players of team *A*, in order to determine the unique Markov strategies as a function of the prizes. The argument for the players of team *B* is analogous. Throughout we write \(V_{A,s}^{s'}\) for the value that player (*A*, *s*) attaches to the tug-of-war being in state \(s' \in S^+\). As stated in equation (3), the prize \(Z_{A,s}\) that player (*A*, *s*) fights for when the tug-of-war is in state *s* is \(\delta [V_{A,s}^{s+1}-V_{A,s}^{s-1}]\). In terms of Lemma 6, the zero-effort-tie prize is \(\delta V_{A,s}^s\) and the winner prize is \(\delta V_{A,s}^{s+1}\), so that it suffices to establish that \(Z_{A,s}>0\) and \(V_{A,s}^s < V_{A,s}^{s+1}\) hold for all \(s \in \{2,3,4\}\).

Let us start with player (*A*, 4). Because \(V_{A,4}^5=W\) and it holds that both \(V_{A,4}^4,V_{A,4}^3 \le \delta W\), it immediately follows that \(Z_{A,4}>0\) and \(V_{A,4}^4<V_{A,4}^5\). So, we turn to player (*A*, 3). We compute the relevant continuation values as

so that we get \(Z_{A,3}=\delta ^2 p (W+L)>0\) as \(p>0\) holds in equilibrium. Further,

where the weak inequality follows because \(V_{A,3}\) is bounded above by \(\delta ^2p\frac{1-r}{2}W\) and the term in the inner square bracket before \(V_{A,3}\) is negative. By symmetry, these results imply that the equilibrium strategies in state \(s=3\) are given by (5)–(6) and hence, because neither player’s strategy has an atom at zero, that \(r=0\).

It remains to look at player (*A*, 2). The relevant continuation values are

Because \(V_{A,2}^3\) is bounded below by \(-\delta ^2 L\), we immediately get \(Z_{A,2} =\delta [V_{A,2}^3-V_{A,2}^1] > 0\) as desired. Further, combining (26)–(27), we get

so that we can compute

Next, we observe that \(V_{A,2}^2\) is bounded above by \(\delta ^3\frac{1}{2}p(1-p-q)W\) and that the term in the square brackets before \(V_{A,2}^2\) on the right-hand side above is negative. To see the latter, observe that the sign of the term is equal to the sign of

giving us the result. Together, these observations give us that

which can be restated as

where the strict inequality holds because the sign of the term in the outer square brackets is equal to the sign of

which is greater than

Taken together, this gives us \(Z_{A,s}>0\) and \(V_{A,s}^s < V_{A,s}^{s+1}\) for all \(s \in S\). Symmetry and Lemma 6 then together imply that the strategies (5)–(6) are the unique mutual best replies in all \(s \in S\). To complete the proof, we note that from these strategies it also follows that \(p < 1\) and \(q=0\). \(\square \)
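Under the equilibrium structure just established, the state process can be sketched numerically. On our reading of the equilibrium, the leader wins with probability *p* in states 2 and 4, and each team wins with probability 1/2 in the symmetric state 3 (no atoms, \(r=0\)). Writing \(h_s\) for the probability of eventually reaching state 5, the system \(h_2=(1-p)h_3\), \(h_3=\tfrac{1}{2}h_2+\tfrac{1}{2}h_4\), \(h_4=p+(1-p)h_3\) gives \(h_3=1/2\) and \(h_4=(1+p)/2\); the simulation below (assumed illustrative value \(p=0.3\)) cross-checks this:

```python
import random

# Illustrative transition structure (our assumed reading of the MPE):
# state 4: -> 5 w.p. p (leader A wins), -> 3 otherwise
# state 3: -> 2 or 4 w.p. 1/2 each (symmetric subcontest, no atoms)
# state 2: -> 1 w.p. p (leader B wins), -> 3 otherwise
p = 0.3
rng = random.Random(7)

def absorb_at_5(s):
    while s not in (1, 5):
        if s == 3:
            s = 2 if rng.random() < 0.5 else 4
        elif s == 4:
            s = 5 if rng.random() < p else 3
        else:  # s == 2
            s = 1 if rng.random() < p else 3
    return s == 5

n = 100_000
est = sum(absorb_at_5(4) for _ in range(n)) / n
# closed form derived above: h_4 = (1 + p)/2
assert abs(est - (1 + p) / 2) < 0.01
```

Starting from the middle state the process is absorbed at either endpoint with probability 1/2 each, as symmetry demands.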

### Proof of Lemma 2

Combining (7) with the expressions for \(V_{A,2}^2\) and \(V_{B,2}^2\) in (10)–(11) yields

with

Because \(b(p,\delta )<1\), it is straightforward to verify that Equations (28)–(29) have a unique solution \(Z_{A,2},Z_{B,2}>0\) given by (12)–(13) in the statement. \(\square \)

### Proof of Lemma 3

We only need to show (b). We start by showing that for fixed \(\delta \in (0,1)\) any \(p \in P(\delta )\) corresponds to a symmetric MPE profile \((F_A,F_B)\): Having \(\delta \in (0,1)\) and \(p \in P(\delta )\), we can construct candidate equilibrium prizes using (4) in Lemma 1 and (12)–(13) in Lemma 2, which in turn can be used to construct a candidate MPE strategy profile \(({\hat{F}}_A,{\hat{F}}_B)\) by using (5)–(6) in Lemma 1. Because, by construction, no player has an incentive to deviate from his or her strategy in \(({\hat{F}}_A,{\hat{F}}_B)\), it follows from the one-shot deviation principle (Theorem 4.2 in Fudenberg and Tirole 1991) that we have an equilibrium.

Uniqueness then follows straight away from the fact that any \((p,\delta )\) yields unique prizes by (4) in Lemma 1 and (12)–(13) in Lemma 2, and the mutually optimal strategies in the subcontests are uniquely determined by (5)–(6) in Lemma 1. \(\square \)

### Proof of Proposition 4

For fixed \(\delta \in (0,1)\), we define

so that any \(p \in (0,1)\) solving \(p=g(p)\) satisfies \(p \in P(\delta )\). From (15), we see that \(b(p,\delta )<1\) when \(p \in [0,1]\), so that the function *g* maps [0, 1] to [0, 1]. We first show that for given \(\delta \) the mapping \(g:[0,1]\rightarrow [0,1]\) has a fixed point \(p^* \in [0,1]\). To do so, we observe (i) that the set [0, 1] is convex and compact and (ii) that *g*(*p*) is continuous, because from \({\varDelta }_{3,2}(p,\delta ),{\varDelta }_{3,5}(p,\delta )\) lying in [0, 1] and being continuous in \(p \in [0,1]\), together with a strictly positive discount factor \(\delta > 0\), it follows that the terms \(a(p,\delta )\), \(b(p,\delta )\), \(c(p,\delta )\), \(d(p,\delta )\) in (14)–(17) are continuous in \(p \in [0,1]\). Consequently, when keeping \(\delta \in (0,1)\) fixed, it follows by the Brouwer fixed point theorem that *g*(*p*) has a fixed point \(p^*=g(p^*)\), \(p^* \in [0,1]\). To finish the proof, we observe that \(c(p,\delta ) \ge \delta W\) holds for any \(p \in [0,1]\), and also that for any \(p \in [0,1]\) we have

giving us that *g*(*p*) is strictly bounded away from 0 and 1 for any *p*. From this it follows that we must have \(p^* \in (0,1)\), and hence that \(P(\delta )\) is non-empty. \(\square \)
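Since the terms (14)–(17) defining *g* are not reproduced above, the fixed-point argument can be illustrated with a stand-in map: any continuous \(g:[0,1]\rightarrow [0,1]\) whose values are bounded away from 0 and 1 has an interior fixed point, which bisection on \(g(p)-p\) locates. The map below is purely hypothetical and is not the paper's *g*:

```python
def g(p):
    # Hypothetical continuous self-map of [0,1] with range in [0.2, 0.8];
    # it stands in for the equilibrium map defined via (14)-(17).
    return 0.2 + 0.6 * p * p

def fixed_point(g, lo=0.0, hi=1.0, tol=1e-12):
    # g(lo) - lo > 0 and g(hi) - hi < 0 because g is bounded away from
    # 0 and 1, so bisection on h(p) = g(p) - p finds an interior root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_star = fixed_point(g)
assert 0 < p_star < 1 and abs(g(p_star) - p_star) < 1e-9
```

The bisection plays the role of Brouwer's theorem in this one-dimensional setting: the sign change of \(g(p)-p\) between the endpoints guarantees an interior root.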

### Proofs for Sect. 3.3

The proofs in this and the next section rely on the following corollary to Lemma 2, giving a useful alternative characterization of \(P(\delta )\):

### Corollary 3

The mapping \(P(\delta )\) can be written as

where

### Proof of Corollary 3

First, suppose that \(p \le 1/2\) holds. It is a consequence of (18) that in equilibrium \(p \le 1/2\) is equivalent to \(Z_{A,2} \ge Z_{B,2}\). Further, from (12)–(13) we see that \(Z_{A,2} \ge Z_{B,2}\) is equivalent to \(a(p,\delta ) \ge c(p,\delta )\). Hence, when \(p \le 1/2\), the prizes are given by

As regards \(Z_{B,2}\) we get by inserting \({\varDelta }_{3,5}(p,\delta )\) and \({\varDelta }_{3,2}(p,\delta )\) from (8) and (9)

which is equivalent to \(Z^{\le }_{B,2}(p,\delta )\) in the statement. As regards \(Z_{A,2}\) we compute

which corresponds to \(Z^{\le }_{A,2}(p,\delta )\) in the statement.

We turn to \(p > 1/2\). It follows from (18) that in equilibrium \(p > 1/2\) is equivalent to \(Z_{A,2} > Z_{B,2}\). Further, from (12)–(13) we see that \(Z_{A,2} > Z_{B,2}\) is equivalent to \(a(p,\delta ) > c(p,\delta )\). Hence, when \(p > 1/2\), the prizes are given by

As regards \(Z_{A,2}\) we get by inserting \({\varDelta }_{3,5}(p,\delta )\) and \({\varDelta }_{3,2}(p,\delta )\) from (8) and (9)

which is equivalent to \(Z^{>}_{A,2}(p,\delta )\) in the statement. As regards \(Z_{B,2}\) we compute

which can be rewritten to arrive at \(Z^{>}_{B,2}(p,\delta )\) in the statement. \(\square \)

### Proof of Lemma 4

It proves helpful to consider the level sets of *P* as characterized in (30), i.e.,

To show the claim that \((W/(2L),{\bar{p}}) \subseteq P((0,1))\), it suffices to show that \(P^{-1}(p) \ne \emptyset \) for \(p \in (W/(2L),{\bar{p}})\). Doing so will also reveal the other properties of \(P^{-1}\) stated in the lemma.

We distinguish \(p \le 1/2\) and \(p>1/2\). We start with \(p \le 1/2\). For given \(p \le 1/2\) the level set \(P^{-1}(p)\) is non-empty if there is \(\delta \in (0,1)\) such that

holds. Using Lemma 2, we can rewrite (31) as

Rewriting this again and substituting \(d = \delta ^2\), we get that the above equality holds iff

Equation (32) is a cubic equation in *d* which has at most two roots. For \(d=1\), the left-hand side is equal to

For \(d=0\), on the other hand, the left-hand side is

which is strictly greater than zero if and only if \(p > W/(2L)\), where \(W/(2L) < 1/2\) by our assumption that \(L > W\). This implies that for any \(p \in (W/(2L),1/2]\) we can find a unique \(\delta \) such that (31) holds.

We continue with \(p > 1/2\). For given \(p > 1/2\) the level set \(P^{-1}(p)\) is non-empty if there is \(\delta \in (0,1)\) such that

holds. Using Lemma 2, we can rewrite (33) as

Rewriting this again and substituting \(d = \delta ^2\), we get that the above equality holds iff

Equation (34) is a cubic equation in *d* which has at most two roots. For \(d=1\) the left-hand side of the equation is equal to

which is strictly positive if and only if \(p \in [1/2,{\overline{p}})\) with \({\overline{p}}\) defined as in the proposition. Together with the fact that for \(d=0\) the left-hand side of our cubic equation is

which is strictly less than zero because \(p > 1/2\) and \(W<L\), this implies that for any \(p \in [1/2,{\overline{p}})\) we can find a unique \(\delta \) such that (33) holds.

Together, the arguments for \(p \le 1/2\) and \(p>1/2\) just given imply

and, furthermore, that \(P^{-1}(p)\) is single-valued for all \(p \in (W/(2L),{\bar{p}})\), and that

holds. \(\square \)

### Proof of Proposition 5

First, suppose \(p \le 1/2\). When \(p \le 1/2\), it must hold that

where the equality follows from Corollary 3 and the inequality follows from (18). For \(\delta \rightarrow 1\) we see that the sign of the above difference is equal to the sign of

contradicting \(p \le 1/2\). Hence, \(p>1/2\) when \(\delta \rightarrow 1\). Second, suppose \(p \ge 1/2\). When \(p \ge 1/2\), it must hold that

where the equality follows from Corollary 3 and the inequality follows from (18). For \(\delta \rightarrow 0\) we see that the sign of the above difference is equal to the sign of

contradicting \(p \ge 1/2\). Hence, \(p<1/2\) when \(\delta \rightarrow 0\). \(\square \)

### Proof of Proposition 6

Observe that we have

The identities in (35) and (37) follow from (10)–(11) and (36) follows from symmetry, which implies that the prizes of the two players (*A*, 3) and (*B*, 3) are symmetric, leaving zero net utility from the contest, so that the value of the tug-of-war being in state \(s=3\) is for both players equal to the continuation value of losing the subcontest. It is easy to see from (36) that for any *p* and \(\delta \) we have \(V_{i,3}^3 < 0\).

So, we now turn to \(V_{A,2}^2\). If \(Z_{A,2} < Z_{B,2}\) in equilibrium, corresponding to \(p > 1/2\), then clearly \(V_{A,2}^2 = -\delta L < 0\). If, on the other hand, we have \(p \le 1/2\), then inserting \(Z_{A,2}^{\le }(p,\delta )\) and \(Z_{B,2}^{\le }(p,\delta )\) from Corollary 3 yields

Consider now \(V_{A,4}^4\). If \(Z_{A,2} \ge Z_{B,2}\) in equilibrium, corresponding to \(p \le 1/2\), then clearly \(V_{A,4}^4 < 0\). If, on the other hand, we have \(p > 1/2\), then we can proceed as follows. The choice \(x_{B,2}=Z_{A,2}\) is in the support of the equilibrium strategy of player (*B*, 2) and yields \(\delta W\) with probability 1. The continuation value is thus \(V_{B,2}^2 = \delta W-Z_{A,2}\). Inserting \(Z_{A,2}^{>}(p,\delta )\) from Corollary 3 and using \(V_{A,4}^4=V_{B,2}^2\) then gives us

Equation (38) yields that \(V_{A,4}^4<0\) holds if and only if (23) holds, as stated in the proposition.

On the other hand, it holds that \(V_{A,4}^4>0\) iff

Now, observe that it follows from the characterization (20) of \(P^{-1}\) given in Lemma 4, that for any \(\delta \) sufficiently close to 1 there exists a symmetric MPE in which the leader winning probability is sufficiently close to \({\bar{p}}\) so that (39) is satisfied whenever \(\frac{W}{L} > {\bar{p}}\) holds, giving us the second part of the proposition. \(\square \)

### Proofs for Sect. 3.4

### Proof of Proposition 7

We proceed in two steps. First, we compute the discount factor \(\delta \) for which \(p=1/2\) holds in equilibrium, and then we argue that \(P(\delta )\) is single-valued and increasing, where single-valuedness implies that there is indeed a unique symmetric stationary MPE.

*Step 1:* \(p=1/2\). First, we compute the unique discount factor \(\delta \) such that \(p=1/2\) holds in equilibrium. To this end, we note that for \(p=1/2\) to hold in equilibrium it must be that we have \(a(1/2,\delta )=c(1/2,\delta )\), where *a*(., .) and *c*(., .) are defined in (14) and (16) in Lemma 2. That is, it must hold that

If the equilibrium leader winning probability is \(p=1/2\), the discounted probabilities \({\varDelta }_{3,2}(1/2,\delta )\) and \({\varDelta }_{3,5}(1/2,\delta )\) are

Inserting and rearranging yield

*Step 2: Single-valued and increasing* \(P(\delta )\). We now want to establish that \(P(\delta )\) is single-valued and strictly increasing. To do so, we use the fact, established in Lemma 4, that \(P^{-1}(p)\) is single-valued on \((W/(2L),{\bar{p}})\) and satisfies \(\lim _{p \rightarrow W/(2L)}P^{-1}(p)=0\) and \(\lim _{p \rightarrow {\bar{p}}}P^{-1}(p)=1\). From this it follows that if \(P^{-1}(p)\) is strictly increasing on \((W/(2L),{\bar{p}})\), then \(P(\delta )\) is single-valued and strictly increasing on (0, 1), and furthermore \(P((0,1))=(W/(2L),{\bar{p}})\).

We distinguish \(p>1/2\) and \(p \le 1/2\). We first consider the case where the equilibrium leader winning probability is \(p>1/2\). If \(p>1/2\), then the leader winning probability *p* and the discount factor \(\delta \) must be such that they solve (34) where \(d=\delta ^2\). We denote the left-hand side of (34) as *f*(*p*, *d*) and note that \([P^{-1}(p)]^2=\{ d \in (0,1) : f(p,d)=0 \}\) for \(p>1/2\). We can express *f*(*p*, *d*) as a polynomial of third degree in *p* with coefficients \(c_i(d)\), \(i=0,1,2,3\), i.e.,

where

Clearly \(c_3(d)<0\) for all \(d \in (0,1)\). As regards \(c_2(d)\) it is easy to see that \(c_2(d)<0\) for all \(d \in (0,1)\), too. Because \(c_3(d)\) and \(c_2(d)\) are strictly smaller than zero, the function *f*(*p*, *d*) is strictly concave in *p*. Together with the fact that *f*(*p*, *d*) is cubic in *p*, we get that for fixed *d*, the function *f*(*p*, *d*) has exactly one root. Furthermore, it follows from \(c_3(d)<0\) for all *d* that the derivative of *f* with respect to the first argument \(f_1(p,d)=\partial f(p,d)/\partial p\) satisfies \(f_1(p,d) < 0\) at that root. Further, we know from the proof of Lemma 4 that for any \(p>1/2\) there exists unique \(d \in (0,1)\) such that \(f(p,d)=0\) and, furthermore, that \(f_2(p,d)=\partial f(p,d)/\partial d>0\) holds at that point. Hence, it follows by the implicit function theorem that for any \(p > 1/2\) and \(d \in (0,1)\) where \(f(p,d)=0\) there are open sets *U* and *V* around *p* and *d*, respectively, and a unique differentiable and strictly increasing function \(g:U \rightarrow V\) satisfying \(f(p,g(p))=0\).

We know from Lemma 4 that for \(d = 1\) we must have \(p={\bar{p}}\) such that \(f(p,d)=0\), and from the above we know that for \(d=d^*\) we must have \(p=1/2\) such that \(f(p,d)=0\). Consequently, there is a unique and strictly increasing function *g*(*p*) on \((1/2,{\bar{p}})\) such that \(f(p,g(p))=0\), where *g*(*p*) satisfies \(\lim _{p\rightarrow 1/2} g(p)=d^*\) and \(\lim _{p\rightarrow {\bar{p}}}g(p)=1\). Because it holds that \([P^{-1}(p)]^2=g(p)\) on \((1/2,{\bar{p}})\), we get that \(P^{-1}(p)\) is strictly increasing on \((1/2,{\bar{p}})\).

We next consider \(p \le 1/2\). Here we consider (32) and denote the left-hand side of (32) by *f*(*p*, *d*). Again, we write *f*(*p*, *d*) as a polynomial of third degree in *p*, where the coefficients are

We first observe that \(c_3(d)>0\) for all \(d \in (0,1)\) and that \(c_2(d)>0\) for \(d \in (0,{\tilde{d}})\) where we have defined

We restrict attention to \(d \in (0,{\tilde{d}})\) in the following and assume that \(d^*<{\tilde{d}}\) (where \(d^*\) is defined in (40) above), which corresponds to \(L < 4W\), i.e., the assumption stated in the proposition.

Because \(c_2(d),c_3(d)>0\) for all \(d \in (0,{\tilde{d}})\), the function *f*(*p*, *d*) is strictly convex in *p* for all \(d \in (0,{\tilde{d}})\). Together with the fact that *f*(*p*, *d*) is cubic in *p*, we get that for fixed *d*, the function *f*(*p*, *d*) has exactly one root. Furthermore, it follows from \(c_3(d)>0\) for all *d* that the derivative of *f* with respect to the first argument satisfies \(f_1(p,d) > 0\) at that root. From the proof of Lemma 4 we know that for any \(p\le 1/2\) there is a \(d \in (0,1)\) such that \(f(p,d)=0\) and that \(f_2(p,d)<0\) at that point. Together, it follows by the implicit function theorem that for any \(p \le 1/2\) and \(d \in (0,{\tilde{d}})\) where \(f(p,d)=0\) there are open sets *U* and *V* around *p* and *d*, respectively, and a unique differentiable and strictly increasing function \(h:U \rightarrow V\) satisfying \(f(p,h(p))=0\).

We know from Lemma 4 that for \(d = 0\) we must have \(p = W/(2L)\) such that \(f(p,d)=0\). From the above we know that for \(d=d^*\) we must have \(p=1/2\) such that \(f(p,d)=0\). Consequently, there is a unique strictly increasing function *h*(*p*) on \((W/(2L),1/2]\) such that \(f(p,h(p))=0\), where *h*(*p*) satisfies \(h(1/2)=d^*\) and \(\lim _{p \rightarrow W/(2L)}h(p)=0\). Because it holds that \([P^{-1}(p)]^2=h(p)\) on (*W*/(2*L*), 1/2], we get that \(P^{-1}(p)\) is strictly increasing on (*W*/(2*L*), 1/2].

Because \(h(1/2)=g(1/2)\) we have that \(P^{-1}(p)\) is strictly increasing on \((W/(2L),{\bar{p}})\). Consequently, there is a unique symmetric MPE for any \(\delta \in (0,1)\), and also \(P((0,1))=(W/(2L),{\bar{p}})\) holds as claimed. \(\square \)

### The general impossibility of eternal peace between teams

In this appendix we present a generalized team tug-of-war model and derive a non-existence result which generalizes Proposition 3. The generalized tug-of-war is between two parties, which we label *A* and *B*. Each party has \(m \ge 1\) players, collected in the sets \(N_A = \{A_1,...,A_m\}\) and \(N_B = \{ B_1,...,B_m\}\), which together make up the set \(N = N_A \cup N_B\) of all players in the tug-of-war.^{Footnote 13} The tug-of-war consists of a set of states \(S^{+}=\{1,2,...,n\}\) which we conceive as ordered along a line. The set of states \(S^+\) can be decomposed into the set \(S=\{2,3,...,n-1\}\) of interior states and a set \(\{1,n\}\) of terminal states.

In every round \(t=0,1,...\) and whenever the tug-of-war is in an interior state \(s \in S\), two players (one from each party) compete in a battle. Before the respective battles are fought, nature determines and publicly announces which players \(A_{i}\) and \(B_{j}\) from the respective parties are called upon to play. The probability that player \(K_{i} \in N_K\) with \(K=A,B\) is called upon to play in a given round at state *s* is time-invariant and equal to \(\eta _{K_{i}}^{s}\in [0,1]\), where \(\sum _{i=1}^{m}\eta _{A_{i}}^{s}=1\) and \(\sum _{j=1}^{m}\eta _{B_{j}}^{s}=1\). The fighting probabilities for a player \(K_i\) are taken together in the vector \(\eta _{K_i} = (\eta _{K_i}^2,...,\eta _{K_i}^{n-1})\), and those for team *K* are taken together in the vector \(\eta _{K}=(\eta _{K_1},...,\eta _{K_m})\). When called upon to play in battle when the tug-of-war is in round *t* and its state is \(s_t\), the players choose effort levels \(x_{A_{i}}^{s_t}(t) \in [0,{\bar{x}}]\) and \(x_{B_{j}}^{s_t}(t) \in [0,{\bar{x}}]\), and the transition of the state again follows our law of motion (1), continuing until an absorbing state in \(\{1,n\}\) is reached. The terminal payoffs *W* and *L* are as in the team tug-of-war above, i.e., they are paid out to every player in the respective team and they satisfy \(L>W>0\), as stated in (2).

When entering the battlefield in round *t*, the two active players are informed about the current state and the identities of the active players, as well as about the effort expended in the previous rounds, the identities of the players in the previous rounds, and the prior states. To formalize this, we write \(h_t = \{ (s_t,A^t,B^t), \{a_\tau \}_{\tau =0}^{t-1}\}\) for a history of the tug-of-war consisting of the current state \(s_t \in S\) and the identities \((A^t,B^t) \in N_A \times N_B\) of the players in the current battle and of a sequence \(\{a_\tau \}_{\tau =0}^{t-1}\) of lists \(a_\tau =(s_\tau ,A^\tau ,B^\tau ,x_{A}^{\tau },x_{B}^{\tau })\) collecting for every round \(\tau = 0,...,t-1\) the state \(s_\tau \) as well as the identities \((A^\tau ,B^\tau )\) of the players along with their effort levels \(x_{K}^{\tau }\). We write the history at the beginning of the battle in round \(t=0\) as \(h_0 = \{(s_0,A^0,B^0),\emptyset \}\), and let \(H_t\) denote the set of all feasible histories in round *t*.

A strategy of an agent \(K_i\) of team \(K=A,B\) maps histories to probability distributions over the feasible effort levels \([0,{\overline{b}}]\), i.e., \(\sigma _{K_i}:H_t \rightarrow {\varDelta }([0,{\overline{b}}])\). Given a strategy profile \(\sigma = (\sigma _{A_1},...,\sigma _{A_m},\sigma _{B_1},...,\sigma _{B_m})\) and a history \(h_t\in H_t\), we write

for the expected utility in the battle of round *t* for the active player \(K_i\) choosing a distribution over effort in \([0,{\overline{b}}]\) according to his strategy \(\sigma _{K_i}\) when the other players follow their strategies in \(\sigma _{-K_i}\).

The start of any battle in any round \(t \ge 1\) after any history \(h_t \in H_t\) is the initial node of a proper subgame (Fudenberg and Tirole 1991) of the tug-of-war. Consequently, the strategy profile \(\sigma \) is a subgame-perfect equilibrium in the tug-of-war if, for every round \(t \ge 0\) and for every feasible history \(h_t \in H_t\), the strategy profile \(\sigma \) is a Nash equilibrium in the corresponding battle, given that all other players follow their strategies in the strategy profile \(\sigma \) (cf. Maskin and Tirole 2001):

### Definition 1

*(Subgame-Perfect Equilibrium, SPE)*. A strategy profile \(\sigma \) is a SPE for a tug-of-war if, for all \(t \ge 0\) and any feasible history \(h_t \in H_t\), we have

for both active players \(K_i\) and for any alternative strategy \(\sigma _{K_i}'\).

We focus on Markov strategy profiles. To this end, let \(N_{K}^s \subseteq N_K\) be the set of players from team *K* that might be active in state *s* (i.e., for which \(\eta _{K_i}^s>0\)), and let \(F=(F_A,F_B)\) be a vector with elements \(F_K=(F_{K_1},...,F_{K_m})\), where each \(F_{K_i}\) collects the effort distributions for player \(K_i\) in the states in which he is active against the respective possible opponents \({\hat{K}}_j\), i.e.,

An atom in the distribution \(\smash {F_{K_i,{\hat{K}}_j}^s}\) at \(x \in [0,{\overline{b}}]\) is denoted by \(\smash {\alpha _{K_i,{\hat{K}}_j}^s(x)} \in (0,1]\). Let \(\smash {H_t^{(s,A_i,B_j)}} \subset H_t\) be the set of histories such that the tug-of-war state in round \(t \ge 0\) is \(s \in S\) and the active players are \((A_i,B_j) \in N_A^s \times N_B^s\). Then, the strategy profile \(\sigma \) is *Markov* if there is a vector *F* such that for all rounds \(t \ge 0\), for all states \(s \in S\), and for all active player combinations \((A_i,B_j) \in N_A^s \times N_B^s\), it holds that

The strategies in the Markov profile \(\sigma \) depend on the history \(h_t\) only through the current state *s* and the identity of the opponent. If the context is clear, we refer to \(F=(F_A,F_B)\) as a Markov strategy profile, where it is understood that we implicitly refer to the corresponding Markov strategy profile \(\sigma \). We use the concept of Markov-perfect equilibrium (cf. Maskin and Tirole 2001):

### Definition 2

*(Markov-Perfect Equilibrium, MPE)*. A strategy profile \(\sigma \) is a MPE for a tug-of-war if it is both a SPE and Markov.

Any Markov strategy profile, together with the fighting probabilities \((\eta _A,\eta _B)\), determines the stationary continuation values of the players for any interior state *s*. For every player \(K_i\), we collect these values in the vector \(V_{K_i}=(V_{K_i}^1,...,V_{K_i}^n)\), where \(V_{K_i}^s\) denotes the value to player \(K_i\) of the tug-of-war being at state \(s \in S^+\). Under Markov strategies \(\sigma \), the expected utility \(U_{K_i}(\sigma |h_t)\) of the active player \(K_i\) after history \(h_t\) depends only on the opponent identity \({\hat{K}}_j\) and the state *s* and can thus be written as \(U_{K_i,{\hat{K}}_j}^s(\sigma )\). Hence, \(V_{K_i}^s\) is given by

with boundary conditions \(V_{A_i}^1=V_{B_j}^n=-L\) and \(V_{A_i}^n=V_{B_j}^1=W\). Here, \(p_{s,s'}\) is the probability that under the strategy profile \(\sigma \) the tug-of-war moves from state *s* to state \(s'\) conditional on player \(K_i\) not being active in the respective round.
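To make such a stationary value recursion concrete, here is a minimal numerical sketch under assumed, simplified dynamics — a five-state chain in which the state moves one step down with probability \(p\) and one step up otherwise, with no effort costs. This is an illustration only, not the paper's law of motion (1):

```python
# Toy stationary-value recursion for one player on states 1..n with
# boundary conditions V[1] = -L and V[n] = W (hypothetical dynamics:
# the state moves down with prob. p and up with prob. 1 - p each round).
delta, p, L, W, n = 0.9, 0.5, 2.0, 1.0, 5

V = [0.0] * (n + 1)          # 1-indexed; V[0] unused
V[1], V[n] = -L, W
for _ in range(5000):        # fixed-point iteration on the interior states
    for s in range(2, n):
        V[s] = delta * (p * V[s - 1] + (1 - p) * V[s + 1])

# Values increase monotonically toward the preferred terminal state:
assert all(V[s] < V[s + 1] for s in range(1, n))
```

The inner update is a contraction for \(\delta < 1\), so the iteration converges to the unique stationary value vector for these toy dynamics.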

Again, the one-shot deviation principle applies to our setup, giving us that a profile \((F_A,F_B)\) is a MPE if and only if no player has an incentive to deviate from the strategy specified in \((F_A,F_B)\) in any battle along the course of the tug-of-war in which that player has a nonzero probability of being active. This allows us to show the following result; the proof is given below.

### Proposition 8

Consider the generalized tug-of-war with \(n \ge 5\) states, terminal payoffs \(L>W>0\), \(m \ge 2\) players in each team, and fighting probabilities \(\eta _{K_i}^s \in [0,1]\) satisfying

There is no MPE in stationary strategies \((F_A,F_B)\) with a state \(s \in S\) and players \((A_i,B_j) \in N_A^s \times N_B^s\) such that \(\alpha _{A_i,B_j}^s(0)=\alpha _{B_j,A_i}^s(0)=1\). Consequently, there is no interior peaceful state in which the process rests forever.

Although rather cumbersome to prove, the non-existence of eternal peace is intuitive, following the same logic as for the simple team tug-of-war setup analyzed in the main part of the paper. Condition (43) implies that any player in the team tug-of-war that is active in a given state has a positive probability of being the bystander in any other of the tug-of-war states. This has the effect that moving the process by one step toward the favored absorbing state of her party is always attractive for a player, because future effort costs are not borne by the current player alone. The net value of winning is always strictly positive, and hence, expending a positive amount of effort is always worthwhile.

### Proof of Proposition 8

Fix any MPE profile \(\sigma \). Let the set of peaceful states be \(R=\{r_{1},r_{2},...,r_{k}\}\), which is a subset of the set of interior states \(S = \{2,...,n-1\}\). For any state \(r \in R\), \(V_{K_i}^{r}=0\) holds for all players \(K_i \in N\). Let \(r_{1}\) be the peaceful interior state that is closest to \(s=1\) among all states in *R*. All states *s* satisfying \(1<s<r_{1}\) are non-peaceful interior states, which we collect in the set \(S_{r_{1}} \equiv \{2,...,r_{1}-1\}\).

We now first present two auxiliary lemmas characterizing the continuation values in any peaceful MPE, which we subsequently use to show that *R* must be empty in any MPE.

### Lemma 7

Suppose there is a MPE with a nonempty set of peaceful states \(R=\{r_{1},r_{2},...,r_{k}\} \subset S\), where \(S_{r_{1}} \equiv \{2,...,r_{1}-1\}\) is also nonempty. For all \(s \in S_{r_1}\) and all \(i,j \in \{1,...,m\}\) it holds that

In particular, for all \(s>s'\) with \(s,s' \in S_{r_1}\) and all \(i,j \in \{1,...,m\}\) it holds that

### Proof

Fix a round \(t\ge 1\), and a feasible history \(h_t \in H_t\). Consider player \(A_i\). Because player \(A_i\) can never win a positive prize (the closest the state of the tug-of-war can come to \(s=n\) is \(s=r_1\), where it stays forever), it must hold that \(V_{A_i}^s \le 0\) for all \(s \in S_{r_1}\).

Next, we establish that \(V_{A_i}^2 \ge \delta (-L) = \delta V_{A_i}^1\). We proceed by contradiction, and suppose not, i.e., we suppose \(V_{A_i}^2 < \delta (-L)\). The stationary continuation values \((V_{A_i}^2,...,V_{A_i}^{r_1-1})\) under the Markov profile \(\sigma \) for player \(A_i\) satisfy

where

and \(p_{s,s'}\) is the probability that the tug-of-war state moves from *s* to \(s'\) under profile \(\sigma \) conditional on player \(A_i\) not being active. If we write the above system as \(V = PV + Y\) with \(V=(V_{A_i}^2,...,V_{A_i}^{r_1-1})\) and \(Y=X+\smash {((1-\eta _{A_i}^2)\delta p_{2,1}(-L),0,\ldots ,0)^T}\), then we see that the solution is \(V=\smash {(I-P)^{-1}Y}\), where it holds for the elements \(a_{ij}\) of the \((r_1-2) \times (r_1-2)\)-matrix \((I-P)\) that \(a_{ij} \le 0\) for all \(i \ne j\). Because the row sums in *P* are all strictly below 1, the row sums in \((I-P)\) are positive, and hence, condition \(K_{35}\) of Theorem 1 in Plemmons (1977) is satisfied, giving us that \((I-P)\) is a non-singular *M*-matrix, and hence \((I-P)^{-1} \ge 0\) (corresponding to condition \(F_{15}\) in Plemmons 1977). Consequently, if \(V=(V_{A_i}^2,...,V_{A_i}^{r_1-1})\) is a solution to the above system for some vector \(X=(X_2,...,X_{r_1-1})\), then the solution \({\hat{V}}=({\hat{V}}_{A_i}^2,...,{\hat{V}}_{A_i}^{r_1-1})\) for \({\hat{X}}=({\hat{X}}_2,...,{\hat{X}}_{r_1-1})\) satisfies \(V \ge {\hat{V}} \iff X \ge {\hat{X}}\), where \(X \ge {\hat{X}}\) means that \(X_i \ge {\hat{X}}_i\) for all \(i = 2,...,r_1-1\).
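The nonnegativity of \((I-P)^{-1}\) and the resulting monotonicity of the solution in \(Y\) can be checked numerically. The matrix below is an arbitrary substochastic example chosen for illustration, not one derived from the model:

```python
import numpy as np

# Sketch of the monotone-solution argument: V = P V + Y  =>  V = (I - P)^{-1} Y.
# When P >= 0 has row sums strictly below 1, (I - P) is a non-singular M-matrix,
# so (I - P)^{-1} is entrywise nonnegative and V is monotone in Y.
P = np.array([[0.0, 0.4, 0.0],   # hypothetical substochastic transition matrix
              [0.3, 0.0, 0.3],
              [0.0, 0.4, 0.0]])
assert P.min() >= 0 and P.sum(axis=1).max() < 1

inv = np.linalg.inv(np.eye(3) - P)
assert inv.min() >= -1e-12       # (I - P)^{-1} >= 0 entrywise

Y = np.array([1.0, 0.5, 0.2])            # arbitrary payoff vector
Y_hat = Y - np.array([0.1, 0.0, 0.3])    # pointwise smaller payoff vector
assert np.all(inv @ Y >= inv @ Y_hat)    # Y >= Y_hat implies V >= V_hat
```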

Now, consider a strategy \(\sigma _{A_i}'\) to not expend any effort in any state in the future when active. Because \(\sigma _{A_i}\) is part of a MPE profile and \(\sigma _{A_i}'\) is a stationary strategy, it must be that

and hence, by the arguments given above and the fact that the other players’ strategies (and hence *P*) remain unchanged, this strategy must yield a continuation value \({\hat{V}}_{A_i}^2 \le V_{A_i}^2 < \delta (-L)\). Yet, if player \(A_i\) were not to expend any effort in the future, the worst that could happen is that the tug-of-war directly moves from \(s=2\) to \(s=1\), yielding \(\delta (-L)\) (any other path either yields \((-L)\) at a later date, or zero in case the path never ends in \(s=1\)). That is, it must be that \({\hat{V}}_{A_i}^2 \ge \delta (-L)\), which is a contradiction.

Next we establish \(V_{A_i}^3 \ge \delta V_{A_i}^2\). Again we proceed by contradiction and suppose not, i.e., we suppose \(V_{A_i}^3 < \delta V_{A_i}^2\). For the following argument it is convenient to express the stationary state values \((V_{A_i}^3,...,V_{A_i}^{r_1-1})\) under the Markov profile \(\sigma \) for player \(A_i\) as

where

and the boundary condition \(V_{A_i}^2\) corresponds to the value of being in state \(s=2\) under profile \(\sigma \). Again, if \(V=(V_{A_i}^3,...,V_{A_i}^{r_1-1})\) is a solution to the above system for some vector \(X=(X_3,...,X_{r_1-1})\), then the solution \({\hat{V}}=({\hat{V}}_{A_i}^3,...,{\hat{V}}_{A_i}^{r_1-1})\) for \({\hat{X}}=({\hat{X}}_3,...,{\hat{X}}_{r_1-1})\) satisfies \(V \ge {\hat{V}} \iff X \ge {\hat{X}}\), where \(X \ge {\hat{X}}\) means that \(X_i \ge {\hat{X}}_i\) for all \(i = 3,...,r_1-1\).

Now, consider a strategy \(\sigma _{A_i}'\) to not expend any effort in any state in the future until the tug-of-war is in \(s=2\) for the first time and then revert to the conjectured Markov equilibrium strategies in \(\sigma \). Because \(\sigma _{A_i}\) is part of a MPE profile and \(\sigma _{A_i}'\) is a stationary strategy as long as the state does not hit \(s=2\), the solution \({\hat{V}}\) to

with

corresponds to the vector of continuation values under \(\sigma _{A_i}'\) as long as the state has not yet left \(\{3,...,r_1-1\}\). (Here, \(U_{A_i,B_j}^s(\sigma _{A_i}',\sigma _{-A_i})\) corresponds to the value from battle under \(\sigma _{A_i}'\) in state *s* conditional on the state not yet having hit \(s=2\) and the value of hitting \(s=2\) being \(V_{A_i}^2\), which is the state value of being in state \(s=2\) under the original profile \(\sigma \), as mentioned above.) Because \({\hat{X}} \le X\), this vector satisfies \({\hat{V}} \le V\). Yet, if player \(A_i\) were not to expend any effort until the state hits \(s=2\) for the first time, the worst that could happen is that the tug-of-war directly moves from \(s=3\) to \(s=2\), yielding \(\delta V_{A_i}^2 \le 0\) (any other path either yields \(V_{A_i}^2 \le 0\) at a later date, or zero in case the path never ends in \(s=2\)). That is, we have \({\hat{V}}_{A_i}^3 \ge \delta V_{A_i}^2\), which is a contradiction.

By analogous arguments it follows that \(V_{A_i}^s \ge \delta V_{A_i}^{s-1}\) for all \(s \in S_{r_1}\). But from this, together with \(V_{A_i}^s \le 0\) for all \(s \in S_{r_1}\) as observed above, both the claims (i) that \(V_{A_i}^s \ge \delta ^{s-1}(-L)\) holds for all \(s \in S_{r_1}\) and (ii) that for all \(s>s'\) with \(s,s' \in S_{r_1}\) we have \(V_{A_i}^s \ge \delta ^{s-s'}V_{A_i}^{s'}\) immediately follow.

Now, consider player \(B_j\). The prize *W* is obtained only in \(s=1\), and hence, it must be that \(V_{B_j}^s \le \delta ^{s-1}W\), because this is the present value of the prize if the tug-of-war were to move directly from state *s* to state 1 without any effort expended by player \(B_j\). On the other hand, it must hold that \(V_{B_j}^s \ge 0\). To see this, suppose not, i.e., suppose \(V_{B_j}^s < 0\). Consider a strategy \(\sigma _{B_j}'\) to not expend any effort in any state in the future when active. Because we are in a SPE, by (41) and (42) and by an entirely analogous argument as above, this strategy must yield a continuation value \({\hat{V}}_{B_j}^s \le V_{B_j}^s < 0\). However, if player \(B_j\) were not to expend any effort in the future, the worst that could happen is that the tug-of-war at some point moves to \(s=r_1\) or remains in a state in \(\{s,...,r_1-1\}\) forever, which yields no less than zero. That is, \({\hat{V}}_{B_j}^s \ge 0\), contradicting \({\hat{V}}_{B_j}^s < 0\). But then, \(V_{B_j}^s \le \delta V_{B_j}^{s-1}\) for all \(s \in S_{r_1}\) follows immediately because \(\delta V_{B_j}^{s-1} \ge 0\) is the present value if the tug-of-war were to move directly from state *s* to state \(s-1\) without any effort expended by player \(B_j\). From this it also immediately follows that for all \(s>s'\) with \(s,s' \in S_{r_1}\) we have \(V_{B_j}^s \le \delta ^{s-s'}V_{B_j}^{s'}\). \(\square \)

Next, recall that \(N_{K}^s\) is the set of players from team *K* that might be active in state *s*. Let \(\smash {p_{s,s-1}^{(A_i,B_j)}}\) for \(A_i \in N_A^s\) and \(B_j \in N_B^s\) be the equilibrium probability that the state moves from *s* to \(s-1\) when players \(A_i\) and \(B_j\) are called upon to compete, and define \({\underline{p}}_{s,s-1}\) as the lower bound on the equilibrium probability for the state to move from state *s* to the neighboring state \(s-1\), i.e.,

The next lemma establishes that if for a given \(s \in S_{r_1}\) it holds that \({\underline{p}}_{k,k-1}\) is strictly bounded away from zero for all \(2\le k\le s\), then we can obtain bounds on the continuation values of state *s* for the active players in state \(s+1\) which are tighter than those obtained in Lemma 7:

### Lemma 8

Suppose there is a MPE with a nonempty set of peaceful states \(R=\{r_{1},r_{2},...,r_{k}\} \subset S\), where \(S_{r_{1}} \equiv \{2,...,r_{1}-1\}\) is also nonempty. Fix \(s \in S_{r_1}\) and suppose \({\underline{p}}_{k,k-1}>0\) for all \(2\le k \le s\). Then it holds for all players \(A_i \in N_A^{s+1}\) and \(B_j \in N_B^{s+1}\) that

### Proof

We proceed by contradiction. Fix some state \(s \in S_{r_1}\), consider any player \(B_j \in N_B^{s+1}\), and suppose to the contrary that

Now, consider a strategy \(\sigma _{B_j}'\) to not expend any effort in any state in the future and denote the continuation value under this strategy by \(\smash {{\hat{V}}_{B_j}^s}\). Because we are in a SPE, by (41)–(42) and by an argument analogous to that used in the proof of the preceding lemma, the strategy \(\smash {\sigma _{B_j}'}\) must yield a continuation value \(\smash {{\hat{V}}_{B_j}^s} \le \smash {V_{B_j}^s}\). Under strategy \(\smash {\sigma _{B_j}'}\), any path of states either yields zero (if it ends in \(s=r_1\) or stays in \(\smash {S_{r_1}}\) forever) or *W* if it ends in \(s=1\). The net present value of the prize *W* when the process moves directly from state *s* to state 1 is \(\delta ^{s-1}W\). Such a path has a probability of at least \(\prod _{k=0}^{k=s-2}(1-\eta _{B_j}^{s-k}){\underline{p}}_{s-k,s-k-1}\), which is strictly positive because every \({\underline{p}}_{s-k,s-k-1}\) in the product is strictly positive by assumption and because every \((1-\eta _{B_j}^{s-k})\) in the product is strictly positive by our condition (43) on the fighting probabilities. But this gives

contradicting that (44) holds.

Next, consider any player \(A_i \in N_A^{s+1}\). Under any strategy, any path of states either yields zero (if it ends in \(s=r_1\) or stays in \(S_{r_1}\) forever) or \(-L<0\) if it ends in \(s=1\), gross of potential effort costs. The most direct path from state *s* to state 1 yields a present value of \(\delta ^{s-1}(-L)<0\) (again gross of potential effort costs), and this path occurs with probability at least \(\prod _{k=0}^{k=s-2}(1-\eta _{A_i}^{s-k}){\underline{p}}_{s-k,s-k-1}\), which is strictly positive because every \({\underline{p}}_{s-k,s-k-1}\) in the product is strictly positive by assumption and because every \((1-\eta _{A_i}^{s-k})\) in the product is strictly positive by condition (43) on the fighting probabilities. As any other path yields a non-positive payoff, this gives an upper bound on \(V_{A_i}^s\), or

as desired. \(\square \)
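For concreteness, the two path-probability bounds from this proof can be sanity-checked numerically. All parameter values below (\(\delta\), *W*, *L*, the \(\eta\)'s, and the lower bounds \({\underline{p}}\)) are hypothetical placeholders, not values from the model:

```python
from math import prod

# Hypothetical parameters (illustration only)
delta, W, L = 0.9, 1.0, 2.0
s = 4                                   # current state
eta = {2: 0.3, 3: 0.5, 4: 0.4}          # eta_{B_j}^k: prob. the player is active in state k
p_lo = {2: 0.2, 3: 0.25, 4: 0.3}        # lower bounds p_{k,k-1}, assumed > 0

# Probability of the direct path s -> s-1 -> ... -> 1 with the player never active:
#   prod_{k=0}^{s-2} (1 - eta^{s-k}) * p_{s-k, s-k-1}
path_prob = prod((1 - eta[k]) * p_lo[k] for k in range(2, s + 1))

lower_bound_B = delta ** (s - 1) * W * path_prob     # V_{B_j}^s is at least this, > 0
upper_bound_A = delta ** (s - 1) * (-L) * path_prob  # V_{A_i}^s is at most this, < 0

assert lower_bound_B > 0 > upper_bound_A
```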

With Lemmas 7 and 8 in hand, we now set out to establish that \({\underline{p}}_{s,s-1}>0\) holds for all \(s \in S_{r_1}\). We do so by induction and first establish that \(\smash {{\underline{p}}_{2,1}>0}\) holds (and hence that \(S_{r_1}\) is non-empty in any MPE). Either \(s=2\) is in \(S_{r_1}\) or it is not. In the latter case we have \(V_{A_i}^3=V_{B_j}^3=0\), and in the former case Lemma 7 gives lower and upper bounds for the players’ continuation values in \(s=3\), so that we can conclude that, irrespective of whether \(s = 2\in S_{r_1}\) or not, the battle taking place between \(A_i\) and \(B_j\) in state \(s=2\) is about the following stakes

Furthermore, if \(s = 2 \in S_{r_1}\), then by Lemma 7, we have \(V_{B_j}^2 \le \delta V_{B_j}^1\) and hence \(V_{B_j}^1-V_{B_j}^2 \ge (1-\delta )V_{B_j}^1>0\). On the other hand, if \(s = 2 \notin S_{r_1}\) then \(V_{B_j}^1-V_{B_j}^2>0\) as \(V_{B_j}^2=0\). Analogously, by Lemma 7 we have \(V_{A_i}^3 \ge \delta V_{A_i}^2\) and hence \(V_{A_i}^3-V_{A_i}^2 \ge (\delta -1)V_{A_i}^2 \ge 0\) (again irrespective of whether \(s = 2 \in S_{r_1}\) or not). Thus, we obtain by Lemma 6 that for any encounter between any two players \(A_i\) and \(B_j\) that might be active in \(s=2\) the two players have unique mutually optimal strategies and that the probability that the process moves to \(s=1\) equals

So, because \(p_{2,1}^{(A_i,B_j)}>0\) holds for any combination of players \((A_i,B_j)\) that might be active in \(s=2\) we have \({\underline{p}}_{2,1}>0\) as claimed. Crucially, this also implies that \(S_{r_1}\) is non-empty.

As for the inductive step, we show that for any \(s \in S_{r_1}\) it follows from \({\underline{p}}_{k,k-1}>0\) for all \(2\le k<s\) that \({\underline{p}}_{s,s-1}>0\). The battle taking place in state *s* between \(A_i\) and \(B_j\) is about the following stakes

From Lemma 8 we obtain \(V_{B_j}^{s-1} > 0\), and hence it holds that \(V_{B_j}^{s-1}-V_{B_j}^{s+1} \ge (1-\delta ^{2})V_{B_j}^{s-1}>0\) (if \(s+1 \in S_{r_1}\) the weak inequality follows from Lemma 7 and if \(s+1 \notin S_{r_1}\) the strict inequality follows directly from \(V_{B_j}^{s+1}=0\)). Additionally, we observe \(V_{B_j}^{s-1}-V_{B_j}^s > 0\), which holds because \(V_{B_j}^{s-1}-V_{B_j}^s \ge (1-\delta )V_{B_j}^{s-1} > 0\) by Lemmas 7–8. Together, we have that the net value of winning is strictly positive and that the zero-effort-tie prize is strictly smaller than the winner prize. As regards player \(A_i\), we make similar observations: By Lemma 8 we have \(V_{A_i}^{s-1}<0\), and hence, it holds that \(V_{A_i}^{s+1}-V_{A_i}^{s-1} \ge (\delta ^{2}-1)V_{A_i}^{s-1} > 0\) (again, if \(s+1 \in S_{r_1}\) then the first inequality follows from Lemma 7 and if \(s+1 \notin S_{r_1}\) the strict inequality follows directly because \(V_{A_i}^{s+1}=0\)). Further, we have \(V_{A_i}^s \le 0\) by Lemma 7 and consequently \(V_{A_i}^{s+1}-V_{A_i}^s \ge (\delta -1)V_{A_i}^{s} \ge 0\). That is, for player \(A_i\) the net value of winning is strictly positive, too, and the zero-effort-tie prize is weakly smaller than the winner prize.

Consequently, we obtain by Lemma 6 that for any encounter between any two players \(A_i\) and \(B_j\) in *s* the two players have unique mutually optimal strategies and that the probability that the process moves to the state \(s-1\) equals

So, because \(p_{s,s-1}^{(A_i,B_j)}>0\) holds for any combination of players \((A_i,B_j)\) that might be active in *s* we have \({\underline{p}}_{s,s-1}>0\) as claimed.

Now, the fact that \({\underline{p}}_{s,s-1}>0\) holds for all \(s \in S_{r_1}\) together with Lemma 8 implies that

holds for all players \(B_j\) that are active in \(s=r_1\) with positive probability. But this implies that, given \(x_{A_i}^{r_1}=0\), player \(B_j\) has a strict incentive to deviate from \(x_{B_j}^{r_1}=0\) to some \(x_{B_j}^{r_1} = \epsilon >0\) in order to drive the state to \(r_1-1\) and obtain \(V_{B_j}^{r_1-1}\) rather than \(V_{B_j}^{r_1}=0\). This violates the assumption that \(r_{1}\) is a peaceful state, thus concluding the proof. \(\square \)

## About this article

### Cite this article

Häfner, S. Eternal peace in the tug-of-war?.
*Econ Theory* (2020). https://doi.org/10.1007/s00199-020-01287-9

### Keywords

- Contests
- Teams
- Tug-of-War

### JEL Classification

- D74
- D72