
On Equilibrium Properties of the Replicator–Mutator Equation in Deterministic and Random Games

  • Manh Hong Duong
  • The Anh Han
Open Access
S.I. : In memory of Chris Cannings

Abstract

In this paper, we study the number of equilibria of the replicator–mutator dynamics for both deterministic and random multi-player two-strategy evolutionary games. For deterministic games, using Descartes’ rule of signs, we provide a formula to compute the number of equilibria in multi-player games via the number of sign changes in the coefficients of a polynomial. For two-player social dilemmas (namely the Prisoner’s Dilemma, Snow Drift, Stag Hunt and Harmony), we characterize (stable) equilibrium points and analytically calculate the probability of having a certain number of equilibria when the payoff entries are uniformly distributed. For multi-player random games whose pay-offs are independently distributed according to a normal distribution, by employing techniques from random polynomial theory, we compute the expected or average number of internal equilibria. In addition, we perform extensive simulations by sampling and averaging over a large number of possible payoff matrices to compare with and illustrate analytical results. Numerical simulations also suggest several interesting behaviours of the average number of equilibria when the number of players is sufficiently large or when the mutation is sufficiently small. In general, we observe that introducing mutation results in a larger average number of internal equilibria than when mutation is absent, implying that mutation leads to larger behavioural diversity in dynamical systems. Interestingly, this number is largest when mutation is rare rather than when it is frequent.

Keywords

Evolutionary game theory · Replicator–mutator dynamics · Multi-player multi-strategy games · Social dilemmas

1 Introduction

The replicator–mutator dynamics has become a powerful mathematical framework for the modelling and analysis of complex biological, economic and social systems. It has been employed in the study of, among other applications, population genetics [14], autocatalytic reaction networks [33], language evolution [23], the evolution of cooperation [18] and dynamics of behaviour in social networks [24]. Suppose that in an infinite population there are n types/strategies \(S_1,\ldots , S_n\) whose frequencies are, respectively, \(x_1,\ldots , x_n\). These types undergo selection; that is, the reproduction rate of each type, \(S_i\), is determined by its fitness or average pay-off, \(f_i\), which is obtained from interacting with other individuals in the population. The interaction of the individuals in the population is carried out within randomly selected groups of d participants (for some integer d). That is, they play and obtain their pay-offs from a d-player game, defined by a payoff matrix. We consider here symmetric games where the pay-offs do not depend on the ordering of the players in a group. Mutation is included by adding the possibility that individuals spontaneously change from one strategy to another, which is modelled via a mutation matrix, \(Q=(q_{ji}), j,i\in \{1,\ldots ,n\}\). The entry \(q_{ji}\) denotes the probability that a player of type \(S_j\) changes its type or strategy to \(S_i\). The mutation matrix Q is a row-stochastic matrix, i.e.
$$\begin{aligned} \sum _{i=1}^n q_{ji}=1, \quad 1\le j\le n. \end{aligned}$$
The replicator–mutator equation is then given by, see, e.g. [19, 20, 21, 25],
$$\begin{aligned} {\dot{x}}_i=\sum _{j=1}^n x_j f_j({\mathbf {x}})q_{ji}- x_i {\bar{f}}({\mathbf {x}})=:g_i(x),\qquad i=1,\ldots , n, \end{aligned}$$
(1)
where \({\mathbf {x}}= (x_1, x_2, \dots , x_n)\) and \({\bar{f}}({\mathbf {x}})=\sum _{i=1}^n x_i f_i({\mathbf {x}})\) denotes the average fitness of the whole population. The replicator dynamics is a special instance of (1) when the mutation matrix is the identity matrix.
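As a quick numerical sanity check of (1), the following Python sketch (the function name rm_rhs and the random payoff matrix are our own illustration, not part of the analysis) evaluates the right-hand side for a two-player game, where \(f_i({\mathbf {x}})=(A{\mathbf {x}})_i\); row-stochasticity of Q guarantees \(\sum _i g_i({\mathbf {x}})=0\), so the simplex of frequencies is invariant.

```python
import numpy as np

def rm_rhs(x, A, Q):
    """g(x) of Eq. (1) for a two-player game with payoff matrix A:
    f_i(x) = (A x)_i and g_i = sum_j x_j f_j(x) q_{ji} - x_i fbar(x)."""
    f = A @ x                            # fitness of each strategy
    fbar = x @ f                         # average population fitness
    return Q.T @ (x * f) - x * fbar

rng = np.random.default_rng(0)
n, q = 3, 0.2
A = rng.normal(size=(n, n))              # arbitrary (made-up) payoffs
Q = np.full((n, n), q / (n - 1))         # uniform mutation model, cf. Eq. (3)
np.fill_diagonal(Q, 1 - q)
x = np.array([0.5, 0.3, 0.2])
g = rm_rhs(x, A, Q)                      # the components of g sum to zero
```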
In this paper, we are interested in properties of the equilibrium points of the replicator–mutator dynamics (1). Note that we are concerned with dynamic equilibria almost exclusively. There might be a dynamic equilibrium which is not a Nash equilibrium of the game. These dynamic equilibrium points are solutions of the following system of polynomial equations:
$$\begin{aligned} {\left\{ \begin{array}{ll} g_i(x)=0, \quad i=1,\ldots , n-1,\\ \sum _{i=1}^n x_i=1. \end{array}\right. } \end{aligned}$$
(2)
The second condition in (2), that is the preservation of the sum of the frequencies, is due to the term \(x_i{\bar{f}}({\mathbf {x}})\) in (1). The first condition imposes relations on the fitnesses. We consider both deterministic and random games where the entries of the payoff matrix are, respectively, deterministic and random variables. Typical examples of deterministic games include pairwise social dilemmas and public goods games that have been studied intensively in the literature, see, e.g. [15, 16, 27, 32, 35]. On the other hand, random evolutionary games are suitable for modelling social and biological systems in which very limited information is available, or where the environment changes so rapidly and frequently that one cannot describe the pay-offs of their inhabitants’ interactions [9, 10, 11]. Simulations and analysis of random games are also helpful for the prediction of the bifurcation of the replicator–mutator dynamics [20, 21, 25]. Here, we are mainly interested in the number of equilibria in deterministic games and the expected number of equilibria in random games, which allow predicting the levels of social and biological diversity as well as the overall complexity in a dynamical system. As in [20, 21, 25], we consider an independent mutation model that corresponds to a uniform random probability of mutating to alternative strategies as follows:
$$\begin{aligned} q_{ij}=\frac{q}{n-1},~~i\ne j,~~q_{ii}=1-q,~~1\le i,j\le n. \end{aligned}$$
(3)
In particular, for two-strategy games (i.e. when \(n=2\)), the above relations read
$$\begin{aligned} q_{12}=q_{21}=q,~~ q_{11}=q_{22}=1-q. \end{aligned}$$
The parameter q represents the strength of mutation and ranges from 0 to \(1-\frac{1}{n}\). The two boundary values have interesting interpretations in the context of the dynamics of learning [21]: for \(q=0\) (which corresponds to the replicator dynamics), learning is perfect and learners always end up speaking the grammar of their teachers. In this case, the vertices of the unit hypercube in \({\mathbb {R}}^n\) are always equilibria. On the other hand, for \(q=\frac{n-1}{n}\), the chance for the learner to pick any grammar is the same for all grammars and is independent of the teacher’s grammar. In this case, there always exists a uniform equilibrium \({\mathbf {x}}=(1/n,\ldots , 1/n)\) (cf. Remark 1).

Equilibrium properties of the replicator dynamics, particularly the probability of observing the maximal number of equilibrium points, and the attainability and stability of the patterns of evolutionarily stable strategies, have been studied intensively in the literature [2, 3, 12, 13, 17]. More recently, we have provided explicit formulas for the computation of the expected number and the distribution of internal equilibria for the replicator dynamics with multi-player games by employing techniques from both classical and random polynomial theory [4, 5, 6, 7].

For the replicator dynamics, that is when there is no mutation, the first condition in (2) means that all the strategies have the same fitness, which is also the average fitness of the whole population. This benign property is no longer valid in the presence of mutation, making the mathematical analysis harder. In a general d-player n-strategy game, each \(g_i\) is a multivariate polynomial of degree \(d+1\); thus, (2) is a system of multivariate polynomial equations. In particular, for a two-player two-strategy game, which is the simplest case, (2) reduces to a cubic equation whose coefficients depend on the payoff entries and the mutation strength.
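The mutation model (3) is straightforward to set up numerically. The following sketch (the helper name mutation_matrix is ours) constructs Q; its rows sum to one, the two-strategy case reduces to the matrix displayed above, and at \(q=\frac{n-1}{n}\) every entry equals \(1/n\).

```python
import numpy as np

def mutation_matrix(n, q):
    """Uniform mutation model of Eq. (3): q/(n-1) off the diagonal, 1-q on it."""
    Q = np.full((n, n), q / (n - 1))
    np.fill_diagonal(Q, 1 - q)
    return Q

Q4 = mutation_matrix(4, 0.3)     # rows sum to 1 (row-stochastic)
Q2 = mutation_matrix(2, 0.25)    # the two-strategy case displayed above
```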
For larger d and n, solving (2) analytically is generally impossible according to Abel’s impossibility theorem. Nevertheless, there has been a considerable effort to study equilibrium properties of the replicator–mutator dynamics in deterministic two-player games, see for instance [19, 20, 21, 25]. In particular, with the mutation strength q as the bifurcation parameter, bifurcations and limit cycles have been shown for various classes of fitness matrices [19, 25]. However, equilibrium properties for multi-player games and for random games are much less understood although in the previously mentioned papers, random games were employed to detect and predict certain behaviour of (1).

In this paper, we explore further connections between classical/random polynomial theory and evolutionary game theory developed in [4, 5, 6, 7] to study equilibrium properties of the replicator–mutator dynamics. For deterministic games, by using Descartes’ rule of signs and its recent developments, we are able to fully characterize the equilibrium properties for social dilemmas. In addition, we provide a method to compute the number of equilibria in multi-player games via the sign changes of the coefficients of a polynomial. For two-player social dilemma games, we calculate the probability of having a certain number of equilibria when the payoff entries are uniformly distributed. For multi-player two-strategy random games whose pay-offs are independently distributed according to a normal distribution, we obtain explicit formulas to compute the expected number of equilibria by relating it to the expected number of positive roots of a random polynomial. Interestingly, due to mutation, the coefficients of the random polynomial become correlated as opposed to the replicator dynamics where they are independent. The case \(q=0.5\) turns out to be special and needs different treatment. We also perform extensive simulations by sampling and averaging over a large number of possible payoff matrices, to compare with and illustrate analytical results. Moreover, numerical simulations also show interesting behaviour of the expected number of equilibria when the number of players tends to infinity or when the mutation goes to zero. It would be challenging to analyse these asymptotic behaviours rigorously, and we leave it for future work.

The rest of the paper is organized as follows. In Sect. 2, we study deterministic games. In Sect. 3, we consider random games. Finally, we provide further discussions and outlook in Sect. 4.

2 Properties of Equilibrium Points: Deterministic Games

In this section, we study properties of equilibrium points of deterministic games. We start with some preliminary results on the roots of a general polynomial that will be used in the subsequent sections. We then focus on two-player games, particularly the social dilemmas. Finally, by employing Descartes’ rule of signs and its recent improvement [1], we derive a formula to compute the number of equilibria of multi-player games.

2.1 Preliminaries

This section presents some preliminary results on the roots of a polynomial that will be used in the subsequent sections. The following lemma is an elementary characterization of stability of equilibrium points of a dynamical system where the right-hand side is a polynomial.

Lemma 1

Consider a dynamical system \({\dot{x}}=P(x)=a_n x^n+\cdots + a_1x+ a_0\) where \(a_0,\ldots , a_n\) are real coefficients. Suppose that P has n real roots \(x_1<x_2<\cdots <x_n\). Then, the stability of these equilibrium points alternates: for every \(i=1,\ldots, n-1\), if \(x_i\) is stable then \(x_{i+1}\) is unstable and vice versa. In particular, consider the dynamics \({\dot{x}}=P(x)=Ax^3+Bx^2+Cx+D\). Suppose that P(x) has three real roots \(x_1<x_2<x_3\). Then,
  1. (i)

    If \(A>0\), then \(x_2\) is stable; \(x_1\) and \(x_3\) are unstable.

     
  2. (ii)

    If \(A<0\), then \(x_2\) is unstable; \(x_1\) and \(x_3\) are stable.

     

Proof

We prove the general case since the cubic case is a direct consequence. Since P has n real roots \(x_1,\ldots , x_n\), we have \( P(x)=a_n\prod _{i=1}^n(x-x_i)\). Thus,
$$\begin{aligned} P'(x)=a_n\sum _{i=1}^n\prod _{j\ne i} (x-x_j). \end{aligned}$$
Therefore, for any \(i=1,\ldots , n\), we obtain
$$\begin{aligned} P'(x_i)=a_n \prod _{j\ne i}(x_i-x_j). \end{aligned}$$
Since \(x_1<\cdots <x_n\), we have, for any \(i=1,\ldots , n-1\),
$$\begin{aligned} {\mathrm {sign}}(P'(x_i))={\mathrm {sign}}\Big (a_n (-1)^{n-i}\Big )\quad \text {and}\quad {\mathrm {sign}}(P'(x_{i+1}))={\mathrm {sign}}\Big (a_n (-1)^{n-i-1}\Big )=-{\mathrm {sign}}(P'(x_i)), \end{aligned}$$
which implies that \(P'(x_i)\) and \(P'(x_{i+1})\) have opposite signs. Hence, the stability of consecutive equilibria alternates. \(\square \)
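The alternation of stability can be illustrated numerically. The sketch below (an illustration of ours, not part of the proof) evaluates \(P'\) at the roots of the cubic \(P(x)=x(x-1)(x-2)\), for which \(A=1>0\), so by part (i) of the lemma the middle root is the stable one.

```python
import numpy as np

# Build P(x) = x(x-1)(x-2) = x^3 - 3x^2 + 2x from its roots (ascending coeffs)
P = np.polynomial.Polynomial(np.polynomial.polynomial.polyfromroots([0.0, 1.0, 2.0]))
dP = P.deriv()
# Sign of P' at each equilibrium; P'(x_i) < 0 means x_i is stable
signs = [float(np.sign(dP(r))) for r in (0.0, 1.0, 2.0)]
```

The signs come out as \(+,-,+\): only the middle equilibrium \(x=1\) is stable, and the stability indeed alternates.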

The following lemma specifies the location of roots of a quadratic equation whose proof is omitted.

Lemma 2

Consider a quadratic polynomial \(f(x)=ax^2+bx+c\) with \(a\ne 0\). Define \(\Delta =b^2-4ac\). Then,
  1. (i)

    Exactly one of the roots lies in a given interval \((m_1,m_2)\) if \(f(m_1)f(m_2)<0\).

     
  2. (ii)
    Both roots are greater than a given number m if
    $$\begin{aligned} \Delta \ge 0,\quad -\frac{b}{2a}>m\quad \text {and}\quad a f(m)>0. \end{aligned}$$
     
  3. (iii)
    Both roots are less than a given number m if
    $$\begin{aligned} \Delta \ge 0,\quad -\frac{b}{2a}<m\quad \text {and}\quad a f(m)>0. \end{aligned}$$
     
  4. (iv)
    Both roots lie in a given interval \((m_1,m_2)\) if
    $$\begin{aligned} \Delta \ge 0,\quad m_1<-\frac{b}{2a}<m_2,\quad af(m_1)>0\quad \text {and}\quad af(m_2)>0. \end{aligned}$$
     
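The criteria of Lemma 2 are easy to check numerically. The sketch below (the helper name both_roots_in is ours) implements criterion (iv) and tests it on a quadratic with both roots in (0, 1) and on one with a root outside that interval.

```python
def both_roots_in(a, b, c, m1, m2):
    """Criterion (iv) of Lemma 2: both roots of a*x^2 + b*x + c lie in (m1, m2)."""
    f = lambda x: a * x**2 + b * x + c
    disc = b**2 - 4 * a * c
    return disc >= 0 and m1 < -b / (2 * a) < m2 and a * f(m1) > 0 and a * f(m2) > 0

# (x - 0.3)(x - 0.6) = x^2 - 0.9x + 0.18: both roots in (0, 1)
ok = both_roots_in(1.0, -0.9, 0.18, 0.0, 1.0)
# (x - 0.3)(x - 1.5) = x^2 - 1.8x + 0.45: the root 1.5 lies outside (0, 1)
not_ok = both_roots_in(1.0, -1.8, 0.45, 0.0, 1.0)
```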

2.2 Two-Player Games

We first consider the case of two-player games. Let \(\{a_{jk}\}_{j,k=1}^n\) be the payoff matrix where j is the strategy of the focal player and k is that of the opponent. Then, the average pay-offs of strategy j and of the whole population are given respectively by:
$$\begin{aligned} f_j({\mathbf {x}})=\sum _{k=1}^n x_k a_{jk}\quad \text {and}\quad {\bar{f}}({\mathbf {x}})=\sum _{j=1}^n x_j f_j({\mathbf {x}})=\sum _{j,k=1}^n a_{jk}x_jx_k. \end{aligned}$$
(4)
Substituting (4) into (1), we obtain
$$\begin{aligned} {\dot{x}}_i=\sum _{j,k=1}^n q_{ji} x_jx_k a_{jk}-x_i\sum _{j,k=1}^n a_{jk}x_jx_k. \end{aligned}$$
(5)
In particular, for two-player two-strategy games the replicator–mutator equation is
$$\begin{aligned} {\dot{x}}= & {} q_{11}a_{11}x^2+q_{11}x(1-x)a_{12}+q_{21}x(1-x)a_{21}+q_{21}a_{22}(1-x)^2 \nonumber \\&-x\Big (a_{11}x^2+(a_{12}+a_{21})x(1-x)+a_{22}(1-x)^2\Big ), \end{aligned}$$
(6)
where x is the frequency of the first strategy and \(1-x\) is the frequency of the second one. Using the identities \(q_{11}=q_{22}=1-q, \quad q_{12}=q_{21}=q\), Eq. (6) becomes
$$\begin{aligned} {\dot{x}}&=\Big (a_{12}+a_{21}-a_{11}-a_{22}\Big )x^3+ \Big (a_{11}-a_{21}-2(a_{12}-a_{22})+q(a_{22}+a_{12}-a_{11}-a_{21})\Big )x^2\nonumber \\&\quad +\Big (a_{12}-a_{22}+q(a_{21}-a_{12}-2a_{22})\Big )x+q a_{22}. \end{aligned}$$
(7)
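For concreteness, the coefficients of the cubic in (7) can be assembled directly from the payoff entries. The following sketch (our own illustration; the function name is ours) recovers, for \(q=0\) and payoffs \(a_{11}=1, a_{22}=0, a_{12}=S, a_{21}=T\), the well-known replicator equilibria \(x=0\), \(x=1\) and \(x=S/(S+T-1)\).

```python
import numpy as np

def cubic_coeffs(a11, a12, a21, a22, q):
    """Coefficients (descending powers) of the cubic on the RHS of Eq. (7)."""
    A = a12 + a21 - a11 - a22
    B = a11 - a21 - 2 * (a12 - a22) + q * (a22 + a12 - a11 - a21)
    C = a12 - a22 + q * (a21 - a12 - 2 * a22)
    D = q * a22
    return [A, B, C, D]

# q = 0 (no mutation): equilibria should be x = 0, x = 1 and x = S/(S+T-1)
S, T = -0.5, 0.5
roots = np.roots(cubic_coeffs(1.0, S, T, 0.0, 0.0))
```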
The properties of equilibrium points for the case \(q=0\) are well understood, see, e.g. [13]. Thus, we consider \(0<q\le 1/2\). In addition, equilibria of (7) and their stability for the case \(a_{11}=a_{22}=1, a_{12}\le a_{21}\le 1\), have been studied in [19].
Two-Player Social Dilemma Games We first consider two-player social dilemma games. We adopt the following parameterized payoff matrix to study the full space of two-player social dilemma games, where the first strategy is cooperation and the second is defection [32, 35]: \(a_{11} = 1\); \(a_{22} = 0\); \(0 \le a_{21} = T \le 2\) and \(-1 \le a_{12} = S \le 1\). This parameterization covers the following games:
  1. (i)

    the Prisoner’s Dilemma (PD) game: \(2\ge T> 1> 0 > S\ge -1\),

     
  2. (ii)

    the Snow Drift (SD) game: \(2\ge T> 1> S > 0\),

     
  3. (iii)

    the Stag Hunt (SH) game: \(1> T> 0 > S\ge -1\),

     
  4. (iv)

    the Harmony (H) game: \(1> T\ge 0, 1\ge S > 0\).

     
Note that in the SD-game: \(S+T>1\) and in the SH-game: \(S+T<1\). By simplifying the right-hand side of (7), equilibria of a social dilemma game are roots in the interval [0, 1] of the following cubic equation:
$$\begin{aligned} \Big (T+S-1\Big )x^3+\Big (1-T-2S+q(S-1-T)\Big )x^2+\Big (S+q(T-S)\Big )x = 0.\qquad \end{aligned}$$
(8)
It follows that \(x = 0\) is always an equilibrium. If \(q=\frac{1}{2}\), then the above equation has two further solutions \(x_1=\frac{1}{2}\) and \(x_2=\frac{T+S}{T+S-1}\). In the PD-, SD- and H-games, \(x_2\not \in (0,1)\); thus, they have two equilibria \(x_0=0\) and \(x_1=\frac{1}{2}\). In the SH-game: if \(T+S<0\), then the game has three equilibria \(x_0=0, x_1=\frac{1}{2}\) and \(0<x_2<1\); if \(T+S\ge 0\), then the game has only two equilibria \(x_0=0\) and \(x_1=\frac{1}{2}\).
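The special case \(q=\frac{1}{2}\) is easy to verify numerically. In the sketch below (the PD-type parameter values \(T=1.4, S=-0.6\) are our own choice), the cubic in (8) has roots \(x=0\), \(x=\frac{1}{2}\) and \(x=\frac{T+S}{T+S-1}=-4\), the latter lying outside (0, 1) as claimed.

```python
import numpy as np

# Coefficients of Eq. (8) in descending powers, at q = 1/2
T, S, q = 1.4, -0.6, 0.5
coeffs = [T + S - 1,
          1 - T - 2 * S + q * (S - 1 - T),
          S + q * (T - S),
          0.0]
roots = np.sort(np.roots(coeffs).real)   # expected: -4, 0, 1/2
```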
We consider \(q\ne \frac{1}{2}\). For nonzero equilibrium points, we solve the following quadratic equation:
$$\begin{aligned} h(x):=(T+S-1)x^2+(1-T-2S+q(S-1-T))x+S+q(T-S)=:ax^2+bx+c=0.\qquad \quad \end{aligned}$$
(9)
Note that we have \(h(1)=-q<0\) for all the above games. In the SD-game, since \(T+S-1>0\) and \(h(0)=S+q(T-S)=qT+S(1-q)>0\), h is a quadratic and has two positive roots \(0<x_1<1<x_2\). Thus, the SD-game always has two equilibria: an unstable one \(x_0=0\) and a stable one \(0<x_1<1\). For the H-game,
  1. (i)

    If \(S+T=1\), then h becomes \(h(x)=-(2Tq+1-T)x+(2qT+1-T)-q\) and has a root \(x=1-\frac{q}{2Tq+1-T}\). If \(q(1-2T)<1-T\), then \(x\in (0,1)\) and the game has two equilibria: an unstable one \(x_0=0\) and a stable one \(0<x_1<1\). If \(q(1-2T)\ge 1-T\), then \(x\le 0\) and the game has only one equilibrium \(x_0=0\).

     
  2. (ii)

    if \(S+T>1\), then since \(h(0)=S+q(T-S)=qT+S(1-q)>0\), h has two roots \(0<x_1<1<x_2\); thus, the game has two equilibria: an unstable one \(x_0=0\) and a stable one \(0<x_1<1\).

     
  3. (iii)

    if \(S+T<1\), then since \(h(0)=S+q(T-S)=qT+S(1-q)>0\), h has two roots \(x_2<0<x_1<1\); thus, the game has two equilibria: an unstable one \(x_0=0\) and a stable one \(0<x_1<1\).

     
Thus, the H-game has either one equilibrium or two equilibria. The analysis for the SH-game and the PD-game is more involved since we do not know the sign of h(0).
SH-Game Since \(T+S<1\), h is always a quadratic polynomial. Define
$$\begin{aligned} \Delta&=(1-T-2S+q(S-1-T))^2-4(T+S-1)(S+q(T-S)), \end{aligned}$$
(10)
$$\begin{aligned} m&:=-\frac{b}{2a}=\frac{T+2S-1-q(S-T-1)}{2(T+S-1)}=1+ \frac{1-T+q(T+1-S)}{2(T+S-1)}. \end{aligned}$$
(11)
Since \(T+S-1<0\) and \(1-T+q(T+1-S)>0\), we have \(m<1\). Applying Lemma 2 results in the following cases:
  1. (i)

    If \(\Delta <0\), then the game has only one equilibrium \(x_0=0\) which is stable if \(S+q(T-S)<0\) and is unstable if \(S+q(T-S)>0\).

     
  2. (ii)

    If \(\Delta \ge 0\) and \(h(0)>0\), then the game has two equilibria: an unstable one \(x_0=0\) and a stable one \(0<x_1<1\).

     
  3. (iii)

    If \(\Delta \ge 0\) and \(h(0)<0\) and \(-\frac{b}{2a}>0\), then the game has three equilibria \(x_0=0<x_1<x_2<1\) where \(x_0\) and \(x_2\) are stable while \(x_1\) is unstable.

     
  4. (iv)

    If \(\Delta \ge 0\) and \(h(0)<0\) and \(-\frac{b}{2a}<0\), then the game has only one stable equilibrium \(x_0=0\).

     
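The four SH cases above translate directly into a small decision procedure. The following sketch (the function name is ours; the boundary cases \(h(0)=0\) and \(\Delta =0\) are not treated) returns the number of equilibria and reproduces, for instance, the three-equilibria case at \(T=0.5\), \(S=-0.5\), \(q=0.1\), where the quadratic \(h(x)=-x^2+1.3x-0.4\) has roots \(0.5\) and \(0.8\).

```python
def sh_equilibria_count(T, S, q):
    """Number of equilibria of the SH-game (1 > T > 0 > S >= -1, q != 1/2),
    mirroring cases (i)-(iv) above for h(x) = a*x^2 + b*x + c."""
    a = T + S - 1
    b = 1 - T - 2 * S + q * (S - 1 - T)
    c = S + q * (T - S)
    if b**2 - 4 * a * c < 0:
        return 1                  # (i): only x0 = 0
    if c > 0:                     # h(0) > 0
        return 2                  # (ii): x0 = 0 plus one root in (0, 1)
    if -b / (2 * a) > 0:          # h(0) < 0 and positive vertex
        return 3                  # (iii): x0 = 0 plus two roots in (0, 1)
    return 1                      # (iv): only x0 = 0

n3 = sh_equilibria_count(0.5, -0.5, 0.1)   # h(x) = -x^2 + 1.3x - 0.4
n2 = sh_equilibria_count(0.9, -0.1, 0.4)   # here h(0) = 0.3 > 0
```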
PD-Game It remains to consider the PD-game. If \(S+T=1\), then h becomes \(h(x)=-(2Tq+1-T)x+(2qT+1-T)-q\) and has a root \({\bar{x}}=1-\frac{q}{2Tq+1-T}\). Thus, the game has only one equilibrium \(x_0=0\) if \({\bar{x}}\not \in (0,1)\) and has two equilibria if \({\bar{x}}\in (0,1)\). If \(S+T\ne 1\), then h is a quadratic polynomial. Let \(\Delta \) and m be defined as in (10)–(11). According to Lemma 2, we have the following cases:
  1. (i)

    If \(\Delta <0\), then h has no real roots. Thus, the game only has one equilibrium \(x_0=0\).

     
  2. (ii)

    If \(\Delta \ge 0\) and \(h(0)=qT+S(1-q)>0\), then h has exactly one root in (0, 1). Thus, the game has two equilibria.

     
  3. (iii)

    If \(\Delta \ge 0,\quad 0<\frac{T+2S-1-q(S-T-1)}{2(T+S-1)}<1, ah(0)=(T+S-1)(qT+S(1-q))>0, \quad \text {and}\quad ah(1)=-q(T+S-1)>0\), then h has two roots in (0, 1). Thus, the game has three equilibria.

     
  4. (iv)

    In the remaining cases, h has real roots but none of them lies in (0, 1). Thus, the game has only one equilibrium at \(x_0=0\).

     
For comparison, we consider the case \(q=0\). Equation (8) becomes
$$\begin{aligned} (T+S-1)x^3+(1-T-2S)x^2+Sx= x(1-x)(S-(T+S-1)x)=0, \end{aligned}$$
which implies
$$\begin{aligned} x_0=0,~~x_1=1,~~x_2=\frac{S}{T+S-1}. \end{aligned}$$
The condition \(0<x_2<1\) is equivalent to
$$\begin{aligned} S(S+T-1)>0\quad \text {and}\quad (1-T)(S+T-1)<0, \end{aligned}$$
which is satisfied in the SD-game and the SH-game but is violated in the PD-game and the H-game. In the SD-game, \(S+T>1\) and \(0=x_0<x_2<x_1=1\); thus, \(x_2\) is stable while \(x_0\) and \(x_1\) are unstable. In the SH-game, \(S+T<1\) and \(0=x_0<x_2<x_1=1\); thus, \(x_2\) is unstable while \(x_0\) and \(x_1\) are stable. The PD-game and the H-game have only two equilibria: for the PD-game, \(x_0=0\) (stable) and \(x_1=1\) (unstable); for the H-game, \(x_0=0\) (unstable) and \(x_1=1\) (stable).
General Games Now, we consider a general two-player two-strategy game where there is no ranking on the coefficients. An equilibrium point is a root \(x\in (0,1)\) of the cubic on the right-hand side of (6)
$$\begin{aligned}&\Big (a_{12}+a_{21}-a_{11}-a_{22}\Big )x^3+\Big (a_{11}-a_{21} -2(a_{12}-a_{22})+q(a_{22}+a_{12}-a_{11}-a_{21})\Big )x^2 \nonumber \nonumber \\&\quad +\Big (a_{12}-a_{22}+q(a_{21}-a_{12}-2a_{22})\Big )x+q a_{22}=0. \end{aligned}$$
We define \(t:=\frac{x}{1-x}\). Dividing the above equation by \((1-x)^3\) and using the relation \(\frac{1}{1-x}=1+t\), the above equation can be written in the t-variable as
$$\begin{aligned} P_3(t)&=-a_{11}qt^3 +(a_{11}-a_{21}+q(a_{21}-a_{11}-a_{12}))t^2\\&\quad +(a_{12}-a_{22} +q(a_{21}+a_{22}-a_{12}))t+a_{22}q \\ {}&:=a t^3+bt^2+ct+d. \end{aligned}$$
The number of equilibria of the \(2\times 2\)-game is equal to the number of positive roots of the cubic \(P_3\). Applying Sturm’s theorem, see for instance [34, Theorem 1.4], to the polynomial \(P_3\) for the interval \((0,+\,\infty )\), where the sign at \(+\,\infty \) of a polynomial is the same as the sign of its leading coefficient, we obtain the following result.

Lemma 3

Let \(s_1\) and \(s_2\) be, respectively, the number of changes of signs in the following sequences:
$$\begin{aligned}&\Big \{d,c,\frac{bc-9ad}{a}, \Delta \Big \}, \\ {}&\Big \{a, \frac{b^2-3ac}{a},\Delta \Big \}, \end{aligned}$$
where \(\Delta :=a\big (18 abcd-4b^3 d+b^2c^2-4ac^3-27a^2d^2\big )\) is the discriminant of \(P_3\) multiplied by its leading coefficient. Then, \(P_3\) has exactly \(s_1-s_2\) positive roots. As consequences,
  1. (i)
    \(P_3\) has three distinct real positive roots (thus the game has three equilibria) if and only if
    $$\begin{aligned} {\left\{ \begin{array}{ll} \Delta> 0,\\ ab<0, \\ ac>0,\\ ad<0. \end{array}\right. } \end{aligned}$$
     
  2. (ii)
    If there is no change of sign in the sequence of polynomial’s coefficients, then there is no positive root. That is, if
    $$\begin{aligned} {\left\{ \begin{array}{ll} ab>0\\ bc>0\\ cd>0 \end{array}\right. } \end{aligned}$$
    then \(P_3\) has no positive root. (Thus the game has no equilibria.)
     
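Lemma 3 can be checked computationally. The sketch below (our own helper, assuming \(a\ne 0\)) counts the sign changes in the two sequences and returns \(s_1-s_2\); it correctly reports three positive roots for \((t-1)(t-2)(t-3)\) and one for \((t-1)(t^2-t+1)\), whose quadratic factor has complex roots.

```python
def positive_roots_cubic(a, b, c, d):
    """s1 - s2 of Lemma 3 for P3(t) = a*t^3 + b*t^2 + c*t + d (a != 0)."""
    Delta = a * (18 * a * b * c * d - 4 * b**3 * d
                 + b**2 * c**2 - 4 * a * c**3 - 27 * a**2 * d**2)

    def sign_changes(seq):
        s = [x for x in seq if x != 0]      # zeros are disregarded
        return sum(s[i] * s[i + 1] < 0 for i in range(len(s) - 1))

    s1 = sign_changes([d, c, (b * c - 9 * a * d) / a, Delta])
    s2 = sign_changes([a, (b**2 - 3 * a * c) / a, Delta])
    return s1 - s2

r3 = positive_roots_cubic(1, -6, 11, -6)   # (t-1)(t-2)(t-3)
r1 = positive_roots_cubic(1, -2, 2, -1)    # (t-1)(t^2 - t + 1)
```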

Remark 1

In this remark, we show that in the case \(q=\frac{n-1}{n}\) the point \({\mathbf {x}}=(1/n,\ldots , 1/n)\) is always an equilibrium of the general replicator–mutator dynamics regardless of the type of games and of the payoff functions. In fact, since \(q=\frac{n-1}{n}\), we have
$$\begin{aligned} q_{ji}=\frac{q}{n-1}=\frac{1}{n}, ~q_{ii}=1-q=\frac{1}{n}. \end{aligned}$$
Substituting this into the formula of \(g_i\) in (1), we obtain
$$\begin{aligned} g_i({\mathbf {x}})=\frac{1}{n}\sum _{j=1}^n x_j f_j({\mathbf {x}})-x_i{\bar{f}}({\mathbf {x}})=(1/n- x_i){\bar{f}}({\mathbf {x}}). \end{aligned}$$
Thus, the replicator–mutator dynamics always has a uniform equilibrium \({\mathbf {x}}=(1/n,\ldots , 1/n)\); see [25] for the bifurcation analysis of this equilibrium point for the case \(d=2\) and \(n\ge 3\).
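Remark 1 is easy to confirm numerically. In the sketch below (a two-player game with random, made-up payoffs), the right-hand side of (1) vanishes at the uniform point when \(q=\frac{n-1}{n}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))        # arbitrary payoffs
q = (n - 1) / n
Q = np.full((n, n), 1 / n)         # q/(n-1) = 1-q = 1/n: every entry is 1/n
x = np.full(n, 1 / n)              # the uniform point
f = A @ x
g = Q.T @ (x * f) - x * (x @ f)    # right-hand side of Eq. (1); should vanish
```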

2.3 Multi-Player Games

In this section, we focus on the replicator–mutator equation for d-player two-strategy games with a symmetric mutation matrix \(Q=(q_{ji})\) (with \(j,i\in \{1,2\}\)) so that
$$\begin{aligned} q_{11}=q_{22}=1-q \quad \text {and}\quad q_{12}=q_{21}=q, \end{aligned}$$
for some constant \(0\le q\le 1/2\). Note that this is a direct consequence of Eq. (3) and is not an additional restriction/assumption. Let x be the frequency of \(S_1\). Thus, the frequency of \(S_2\) is \(1-x\). The interaction of the individuals in the population is in randomly selected groups of d participants, that is they play and obtain their fitness from d-player games. Let \(a_k\) (resp., \(b_k\)) be the pay-off of an \(S_1\)-strategist (resp., \(S_2\)) in a group containing other k\(S_1\) strategists (i.e. \(d-1-k\)\(S_2\) strategists). Here, we consider symmetric games where the pay-offs do not depend on the ordering of the players. In this case, the average pay-offs of \(S_1\) and \(S_2\) are, respectively,
$$\begin{aligned} f_1(x)= \sum \limits _{k=0}^{d-1}a_k\begin{pmatrix} d-1\\ k \end{pmatrix}x^k (1-x)^{d-1-k}\quad \text {and}\quad f_2(x)= \sum \limits _{k=0}^{d-1}b_k\begin{pmatrix} d-1\\ k \end{pmatrix}x^k (1-x)^{d-1-k}. \end{aligned}$$
(12)
The replicator–mutator equation (1) then becomes
$$\begin{aligned} {\dot{x}}&=x f_1(x)(1-q)+(1-x) f_2(x)q-x(x f_1(x)+(1-x)f_2(x))\nonumber \\ {}&=q\Big [(1-x)f_2(x)-x f_1(x)\Big ]+x(1-x)(f_1(x)-f_2(x)). \end{aligned}$$
(13)
Note that when \(q=0\), we recover the usual replicator equation (i.e. without mutation). In contrast to the replicator equation, \(x=0\) and \(x=1\) are no longer equilibrium points of the system for \(q\ne 0\). In addition, according to Remark 1 if \(q=\frac{1}{2}\) then \(x=\frac{1}{2}\) is always an equilibrium point.
Equilibrium points are those points \(0\le x\le 1\) that make the right-hand side of (13) vanish, that is
$$\begin{aligned} q\Big [(1-x)f_2(x)-x f_1(x)\Big ]+x(1-x)(f_1(x)-f_2(x))=0. \end{aligned}$$
(14)
Using (12), Eq. (14) becomes
$$\begin{aligned}&q\bigg [\sum _{k=0}^{d-1}b_k\begin{pmatrix} d-1\\ k \end{pmatrix} x^k(1-x)^{d-k}-\sum _{k=0}^{d-1}a_k\begin{pmatrix} d-1\\ k \end{pmatrix} x^{k+1}(1-x)^{d-1-k}\bigg ] \nonumber \\&\qquad +\sum _{k=0}^{d-1}\beta _k \begin{pmatrix} d-1\\ k \end{pmatrix} x^{k+1}(1-x)^{d-k}=0, \end{aligned}$$
(15)
where \(\beta _k:=a_k-b_k\). Now, setting \(t:=\frac{x}{1-x}\) then dividing (15) by \((1-x)^{d+1}\) and using the relation that \((1+t)=\frac{1}{1-x}\), we obtain
$$\begin{aligned} q(1+t)\Big [\sum _{k=0}^{d-1}b_k\begin{pmatrix} d-1\\ k \end{pmatrix}t^k-\sum _{k=0}^{d-1}a_k\begin{pmatrix} d-1\\ k \end{pmatrix}t^{k+1}\Big ]+ \sum _{k=0}^{d-1}\beta _k \begin{pmatrix} d-1\\ k \end{pmatrix} t^{k+1}=0.\qquad \end{aligned}$$
(16)
By regrouping terms and changing the sign, we obtain the following polynomial equation in t-variable:
$$\begin{aligned} P(t):=\sum _{k=0}^{d+1}c_k t^k=0, \end{aligned}$$
(17)
where the coefficient \(c_k\) for \(k=0,\ldots , d+1\) is given by:
$$\begin{aligned} c_k:={\left\{ \begin{array}{ll} -qb_0~~&{}\text {for}~~k=0,\\ (q-1)(a_0-b_0)-q(d-1)b_1~~&{}\text {for}~~k=1,\\ q a_{k-2}\begin{pmatrix} d-1\\ k-2 \end{pmatrix}+(q-1)(a_{k-1}-b_{k-1})\begin{pmatrix} d-1\\ k-1 \end{pmatrix}-qb_k\begin{pmatrix} d-1\\ k \end{pmatrix}~~&{}\text {for}~~ k=2,\ldots , d-1,\\ (q-1)(a_{d-1}-b_{d-1})+qa_{d-2}(d-1)~~&{}\text {for}~~k=d,\\ qa_{d-1}~~&{}\text {for}~~ k=d+1. \end{array}\right. } \end{aligned}$$
(18)
Thus, the number of equilibria of d-player two-strategy games is the same as the number of positive roots of the polynomial P. We now use Descartes’ rule of signs to count the latter. Let \({\mathbf {c}}:=\{c_0,c_1,\ldots , c_{d+1}\}\) be the sequence of coefficients given in (18). Applying Descartes’ rule of signs, we obtain the following result.

Lemma 4

The number of positive roots of P, which is also the number of equilibria of the d-player two-strategy replicator–mutator dynamics, is either equal to the number of sign changes of \({\mathbf {c}}\) or less than it by an even number.
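The coefficients (18) and the sign-change count of Lemma 4 can be computed directly. In the following sketch (helper names are ours; \(d\ge 2\) is assumed), the two-player case \(a=(a_0,a_1)=(S,1)\), \(b=(b_0,b_1)=(0,T)\) with \(T=0.5\), \(S=-0.5\), \(q=0.1\) yields the positive roots \(t=1\) and \(t=4\), i.e. the internal equilibria \(x=t/(1+t)=0.5\) and \(0.8\), matching the Descartes count.

```python
import numpy as np
from math import comb

def rm_coeffs(a, b, q):
    """Coefficients c_0, ..., c_{d+1} of P(t) in Eqs. (17)-(18); a[k], b[k]
    are the pay-offs a_k, b_k for k = 0, ..., d-1 (requires d >= 2)."""
    d = len(a)
    c = np.zeros(d + 2)
    c[0] = -q * b[0]
    c[1] = (q - 1) * (a[0] - b[0]) - q * (d - 1) * b[1]
    for k in range(2, d):
        c[k] = (q * a[k - 2] * comb(d - 1, k - 2)
                + (q - 1) * (a[k - 1] - b[k - 1]) * comb(d - 1, k - 1)
                - q * b[k] * comb(d - 1, k))
    c[d] = (q - 1) * (a[d - 1] - b[d - 1]) + q * a[d - 2] * (d - 1)
    c[d + 1] = q * a[d - 1]
    return c

def sign_changes(c):
    s = [x for x in c if x != 0]          # zeros are disregarded
    return sum(s[i] * s[i + 1] < 0 for i in range(len(s) - 1))

c = rm_coeffs([-0.5, 1.0], [0.0, 0.5], 0.1)   # d = 2, T = 0.5, S = -0.5
roots = np.roots(c[::-1])                      # np.roots expects descending order
pos = np.sort(roots.real[(abs(roots.imag) < 1e-9) & (roots.real > 1e-9)])
```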

In [28], the author has employed a similar approach to study the number of equilibria for the standard replicator dynamics, in which P turns out to be a Bernstein polynomial and many useful properties of Bernstein polynomials were exploited. In the following remark, we show that the polynomial P can also be written in the form of a Bernstein polynomial.

Remark 2

Using the identities B.4 and B.5 in [29], we can write \(q[(1-x)f_2(x)-xf_1(x)]\) as a polynomial in Bernstein form of degree d (call it \(P_1(x)\)), and similarly \(x(1-x)(f_1(x)-f_2(x))\) as a polynomial in Bernstein form of degree \(d+1\) (call it \(P_2(x)\)), as follows:
$$\begin{aligned} P_1(x)= & {} q[(1-x)f_2(x)-xf_1(x)] = q \left( \sum \limits _{k=0}^{d}\frac{b_k (d-k) - k a_{k-1}}{d} \begin{pmatrix} d\\ k \end{pmatrix}x^k (1-x)^{d-k} \right) , \\ P_2(x)= & {} x(1-x)(f_1(x)-f_2(x)) = \sum \limits _{k=0}^{d+1}\frac{k(d+1-k)( a_{k-1} - b_{k-1} )}{d(d+1)} \begin{pmatrix} d+1\\ k \end{pmatrix}x^k (1-x)^{d+1-k}. \end{aligned}$$
Using the following identity, obtained by multiplying the polynomial by \(((1-x) + x)\),
$$\begin{aligned} \sum \limits _{k=0}^{d}c_k\begin{pmatrix} d\\ k \end{pmatrix}x^k (1-x)^{d-k} = \sum \limits _{k=0}^{d+1} \frac{ (d+1-k) c_k + k c_{k-1} }{d+1}\begin{pmatrix} d+1\\ k \end{pmatrix}x^k (1-x)^{d+1-k}, \end{aligned}$$
we have
$$\begin{aligned} P_1(x)= & {} q \Big ( \sum \limits _{k=0}^{d+1} \frac{ (d+1-k) [(d-k)b_k - k a_{k-1}] + k [b_{k-1} (d+1-k) - (k-1) a_{k-2}] }{d(d+1)}\nonumber \\&\times \begin{pmatrix} d+1\\ k \end{pmatrix}x^k (1-x)^{d+1-k} \Big ). \end{aligned}$$
Combining the above computations, we have converted \(P_1(x) + P_2(x)\) into a polynomial in Bernstein form:
$$\begin{aligned} P_1(x) + P_2(x)= \frac{1}{d(d+1)} \sum \limits _{k=0}^{d+1} \rho _k\begin{pmatrix} d+1\\ k \end{pmatrix}x^k (1-x)^{d+1-k}, \end{aligned}$$
where
$$\begin{aligned} \rho _k&= k(d+1-k)( a_{k-1} - b_{k-1} ) + q \Big ((d+1-k) [(d-k)b_k - k a_{k-1}] \\&\qquad + k [b_{k-1} (d+1-k) - (k-1) a_{k-2}] \Big )\\&= q (d+1-k)(d-k)b_k + (1-q) (d+1-k)k (a_{k-1} - b_{k-1}) - q k (k-1) a_{k-2}. \end{aligned}$$
Direct computations show that (note that we have changed the sign of \(c_k\) for notation convenience in the subsequent sections)
$$\begin{aligned} \rho _k=-\frac{c_k\, d(d+1)}{\begin{pmatrix} d+1\\ k \end{pmatrix}}. \end{aligned}$$
Having written P in the form of a Bernstein polynomial, similar general results on the equilibrium points of the replicator–mutator dynamics as in [28] could be, in principle, obtained using the link between the sign pattern of the sequence \(\varvec{\rho }=\{\rho _0,\ldots , \rho _{d+1}\}\) and the sign pattern and number of roots of the polynomial P. We do not go into further details here and leave this interesting topic for future research.
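The identity between \(\rho _k\) and \(c_k\) can be verified numerically. The sketch below (with made-up pay-offs for \(d=3\), and the convention that out-of-range pay-offs vanish) checks it for all k.

```python
from math import comb

d, q = 3, 0.2
a = [0.3, -1.2, 0.7]      # a_0, a_1, a_2 (made-up values)
b = [0.5, 0.4, -0.8]      # b_0, b_1, b_2 (made-up values)

def at(v, k):             # convention: out-of-range pay-offs are zero
    return v[k] if 0 <= k < d else 0.0

# rho_k from the last display of Remark 2
rho = [q * (d + 1 - k) * (d - k) * at(b, k)
       + (1 - q) * (d + 1 - k) * k * (at(a, k - 1) - at(b, k - 1))
       - q * k * (k - 1) * at(a, k - 2)
       for k in range(d + 2)]

# c_0, ..., c_4 from Eq. (18) with d = 3
c = [-q * b[0],
     (q - 1) * (a[0] - b[0]) - q * (d - 1) * b[1],
     q * a[0] * comb(2, 0) + (q - 1) * (a[1] - b[1]) * comb(2, 1)
         - q * b[2] * comb(2, 2),
     (q - 1) * (a[2] - b[2]) + q * a[1] * (d - 1),
     q * a[2]]
```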

For a (real) polynomial P, we denote by S(P) the number of changes of signs in the sequence of coefficients of P disregarding zeros and by R(P) the number of positive roots of P counted with multiplicities. Descartes’ rule of signs only provides an upper bound for R(P) in terms of S(P). Recently, it has been shown that R(P) can be computed exactly as S(PQ) for some polynomial Q or as a limit of \(S((t+1)^n P(t))\) as n tends to infinity.

Theorem 1

[1] Let P be a nonzero real polynomial.
  1. (i)

    There exists a real polynomial Q with all non-negative coefficients such that \(S(PQ)=R(P)\).

     
  2. (ii)

    The sequence \(S((t+1)^n P(t))\) is monotone decreasing with limit equal to R(P).

     
The polynomial Q in part (i) involves all the roots of P (even the imaginary ones), which are not known in general; hence, part (i) is practically inefficient. The sequence \(\{S((t+1)^n P(t))\}_{n}\) can be easily computed, but it can only be used to approximate R(P). Note that for \(P(t)=c_{d+1} t^{d+1}+\cdots +c_1 t+c_0\), we have
$$\begin{aligned} (t+1)^n P(t)=\sum _{j=0}^n\sum _{i=0}^{d+1} c_i\begin{pmatrix} n\\ j \end{pmatrix}t^{i+j}=\sum _{k=0}^{n+d+1}\sum _{i=0}^{d+1} c_i\begin{pmatrix} n\\ k-i \end{pmatrix}t^k. \end{aligned}$$
Thus, the k-th coefficient of \((t+1)^n P(t)\) is
$$\begin{aligned} a^k_n=\sum _{i=0}^{d+1} c_i \begin{pmatrix} n\\ k-i \end{pmatrix}. \end{aligned}$$
(19)

Corollary 1

Let \(s_n\) be the number of changes of signs in the sequence \(\{a^k_n\}_{k=0}^{n+d+1}\) defined in (19). Then, the number N of equilibria of a d-player two-strategy game is
$$\begin{aligned} N=R(P)=\lim _{n\rightarrow \infty } s_n. \end{aligned}$$
(20)
Fig. 1

Plot of \(s_n\) for some randomly chosen payoff matrices. (We adopted \(q = 0.1\) in all cases.) We indicate the number of players d in the game, the payoff matrix used (shown only for small d, given the large size of the payoff matrices for large d), and the number of internal equilibria, N. For sufficiently large n, \(s_n\) decreases to the corresponding value of N

Corollary 1 provides us with a simple method to calculate the number of equilibria, N, for a given d-player two-strategy game. In Fig. 1, we show a number of examples. The value of n at which \(s_n\) reaches N varies significantly across games and is usually (very) large. It would be interesting to find the smallest n satisfying \(s_n=N\); an upper bound for such an n would also be helpful. This is still an open problem [1]. However, in the particular case when P has no positive root, we have the following theorem.
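The procedure behind Corollary 1 is a short computation: the coefficients (19) of \((t+1)^n P(t)\) are a discrete convolution of the coefficients of P with the n-th row of Pascal's triangle, after which one counts sign changes. A minimal sketch (function names are ours), assuming the coefficients of P are given in increasing degree order:

```python
import numpy as np

def sign_changes(coeffs):
    """Number of sign changes in a sequence, ignoring zero entries."""
    signs = [np.sign(c) for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def s_n(c, n):
    """S((t+1)^n P(t)) for P with coefficients c = [c_0, ..., c_{d+1}];
    the coefficients of the product are the convolution in (19)."""
    row = [1]
    for _ in range(n):                    # build the n-th row of Pascal's triangle
        row = [x + y for x, y in zip([0] + row, row + [0])]
    return sign_changes(np.convolve(row, c))
```

By Theorem 1(ii), \(s_n\) decreases to \(R(P)\); for instance, for \(P(t)=t^2-t+1\) (no positive roots) a single step already suffices, since \((t+1)P(t)=t^3+1\) has no sign change.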

Theorem 2

[30] Let \(P(t)=c_{d+1} t^{d+1}+\cdots +c_1 t+c_0\). If \(R(P)=0\), then \(S((t+1)^{n_0} P(t))=0\), where
$$\begin{aligned} n_0=\left\lceil \begin{pmatrix} d+1\\ 2 \end{pmatrix}\frac{\max _{0\le i\le d+1}\big \{c_i/\begin{pmatrix} d+1\\ i \end{pmatrix}\big \}}{\min _{\lambda \in [0,1]}\big \{(1-\lambda )^{d+1} f(\frac{\lambda }{1-\lambda })\big \}}-d-1\right\rceil . \end{aligned}$$

Corollary 2

If \(S((t+1)^{n_0}P(t))\ge 1\), then \(R(P)\ge 1\).

3 Properties of Equilibrium Points: Random Games

In this section, we study random games. For two-player social dilemma games, we calculate the probability of having a certain number of equilibria when S and T are uniformly distributed. For multi-player games, we compute the expected number of equilibria when the payoff entries are normally distributed.

3.1 Probability of Having a Certain Number of Equilibria in Social Dilemma Games

We consider two-player social dilemma games in Sect. 2.2, but T and S are now random variables uniformly distributed in the corresponding intervals. In this section, \(p^G_k\), where \(G \in \{\mathrm{SD}, \mathrm{H}, \mathrm{SH}, \mathrm{PD}\}\) and \(k\in \{1,2,3\}\), denotes the probability of a game G having k equilibria. According to the analysis of Sect. 2.2, all of the games have at least one equilibrium at the origin. In addition, the SD-game always has two equilibria, that is
$$\begin{aligned} p_1^\mathrm{SD}=p_3^\mathrm{SD}=0, \quad p_2^\mathrm{SD}=1. \end{aligned}$$
We also know that the H-game has either one or two equilibria. The probability that it has one equilibrium is at most the probability that \(S+T=1\), which is zero since \(S+T\) has a continuous density function; hence \(p_1^\mathrm{H}=0\). Thus, we also have
$$\begin{aligned} p_1^\mathrm{H}=p_3^\mathrm{H}=0,\quad p_2^\mathrm{H}=1. \end{aligned}$$
For the SH-game and the PD-game, we can calculate the probability of having two equilibria explicitly, since the relevant condition on T and S is simple: it depends only on a convex combination of T and S. The conditions for these games to have one or three equilibria are much more complex, since they involve \(\Delta \) defined in (10), which is a nonlinear function of S and T.
SH-Game Suppose that \(S\sim U([-1,0]),~~T\sim U([0,1])\). Then,
$$\begin{aligned}&qT\sim U([0,q]), \quad f_{qT}(x)={\left\{ \begin{array}{ll}\frac{1}{q} &{}\quad \text {if}\quad 0\le x\le q,\\ 0 &{}\quad \text {otherwise} \end{array}\right. }; \\ {}&(1-q) S\sim U([q-1,0]),\quad f_{(1-q)S}(y)={\left\{ \begin{array}{ll}\frac{1}{(1-q)}&{}\quad \text {if}\quad q-1\le y\le 0,\\ 0 &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
We now compute \(p_2^\mathrm{SH}\) explicitly. The probability that the SH-game has two equilibria, \(p_2^\mathrm{SH}\), is the probability that \(h(0)h(1)<0\). Since \(h(1)<0\), we have
$$\begin{aligned} p_2^\mathrm{SH}={\mathrm {Prob}}(h(0)>0)={\mathrm {Prob}}(qT+S(1-q)>0)=\int _0^\infty f_Z^\mathrm{SH}(x)\,\mathrm{d}x, \end{aligned}$$
(21)
where \(f_Z^\mathrm{SH}\) is the probability density function of the random variable \(Z:=qT+(1-q)S\), which is given by:
$$\begin{aligned} f_Z^\mathrm{SH}(x)&=(f_{qT}*f_{(1-q)S})(x)=\int _{-\infty }^\infty f_{qT}(x-y) f_{(1-q)S}(y)\,\mathrm{d}y \\ {}&=\frac{1}{1-q}\int _{q-1}^0 f_{qT}(x-y)\,\mathrm{d}y \\ {}&\overset{(*)}{=}\frac{1}{1-q}{\left\{ \begin{array}{ll} \int _{q-1}^{x}\frac{1}{q}\,\mathrm{d}y&{}\quad \text {if}~~q-1\le x\le 2q-1,\\ \int _{x-q}^{x}\frac{1}{q}\,\mathrm{d}y&{}\quad \text {if}~~2q-1\le x\le 0,\\ \int _{x- q}^0\frac{1}{q}\,\mathrm{d}y&{}\quad \text {if}~~0\le x\le q,\\ 0&{}\quad \text {otherwise} \end{array}\right. } \\ {}&=\frac{1}{1-q}{\left\{ \begin{array}{ll} \frac{x+1-q}{q}&{}\quad \text {if}~~q-1\le x\le 2q-1,\\ 1&{}\quad \text {if}~~2q-1\le x\le 0,\\ \frac{q-x}{q}&{}\quad \text {if}~~0\le x\le q,\\ 0&{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
Note that to obtain \((*)\), we use the fact that \(f_{qT}(x-y)\) equals 1/q if \(0\le x-y\le q\) and is zero otherwise. Thus, the domain of the integral is restricted to
$$ \begin{aligned} D=\{(x,y):~~ q-1\le y\le 0~ \& ~~0\le x-y\le q\}, \end{aligned}$$
which gives rise to the cases in \((*)\). Substituting the formula of \(f_Z\) into (21), we obtain
$$\begin{aligned} p_2^\mathrm{SH}=\int _0^\infty f^\mathrm{SH}_Z(x)\,\mathrm{d}x=\frac{1}{1-q}\int _0^q \frac{q-x}{q}\,\mathrm{d}x=\frac{q}{2(1-q)}. \end{aligned}$$
It follows that \(q\mapsto p_2^\mathrm{SH}\) is an increasing function. We plot this function in Fig. 2.
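A quick Monte Carlo cross-check of this closed form (our sketch, not the paper's Mathematica code): sample \(T\sim U([0,1])\) and \(S\sim U([-1,0])\) and estimate \(\mathrm {Prob}(qT+(1-q)S>0)\) directly.

```python
import random

def p2_SH_exact(q):
    # closed form derived above: q / (2(1 - q))
    return q / (2 * (1 - q))

def p2_SH_mc(q, n=200_000, seed=1):
    """Monte Carlo estimate of Prob(qT + (1-q)S > 0),
    with T ~ U([0,1]) and S ~ U([-1,0])."""
    rng = random.Random(seed)
    hits = sum(q * rng.random() - (1 - q) * rng.random() > 0
               for _ in range(n))
    return hits / n
```

With \(2\times 10^5\) samples, the empirical frequency agrees with \(q/(2(1-q))\) to within the Monte Carlo error.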
Fig. 2

Probability of having two equilibrium points for the Prisoner's Dilemma (PD) and Stag Hunt (SH) games, according to the analytical results obtained in Sect. 3. Both functions are increasing; \(p_2^\mathrm{PD}\) is always bigger than \(p_2^\mathrm{SH}\); the maximum of \(p_2^\mathrm{PD}\) is 1, while the maximum of \(p_2^\mathrm{SH}\) is 1/2. These results also corroborate the sampling-based simulation results in Fig. 3

PD-Game Suppose that \(T\sim U([1,2])\) and \(S\sim U([-1,0])\). Then,
$$\begin{aligned}&qT\sim U([q,2q]),\quad f_{qT}={\left\{ \begin{array}{ll} \frac{1}{q}&{}\quad \text {if}\quad q\le x\le 2q,\\ 0&{}\quad \text {otherwise}; \end{array}\right. } \\ {}&(1-q) S\sim U([q-1,0]),\quad f_{(1-q)S}(y)={\left\{ \begin{array}{ll}\frac{1}{(1-q)}&{}\quad \text {if}\quad q-1\le y\le 0,\\ 0&{} \quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
Similarly as in (21), we have
$$\begin{aligned} p_2^\mathrm{PD}=\int _0^\infty f_Z^\mathrm{PD}(x)\,\mathrm{d}x, \end{aligned}$$
where \(f_Z^\mathrm{PD}\) is the probability density function of \(Z=qT+(1-q)S\). To calculate this function, we need to consider two different cases \(0< q\le 1/3\) (hence \(q-1\le -2q\le -q<0\)) and \(1/3\le q\le 1/2\) (hence \(-2q\le q-1\le -q<0\)). For \(0< q\le 1/3\), we have
$$\begin{aligned} f_Z^\mathrm{PD}(x)&=(f_{qT}*f_{(1-q)S})(x)=\int _{-\infty }^\infty f_{qT}(x-y) f_{(1-q)S}(y)\,\mathrm{d}y \\ {}&=\frac{1}{1-q}\int _{q-1}^0 f_{qT}(x-y)\,\mathrm{d}y \\ {}&=\frac{1}{1-q}{\left\{ \begin{array}{ll} \int _{q-1}^{x-q}\frac{1}{q}\,\mathrm{d}y&{}\quad \text {if}\quad 2q-1\le x\le 3q-1,\\ \int _{x-2q}^{x-q}\frac{1}{q}\,\mathrm{d}y&{}\quad \text {if}\quad 3q-1\le x\le q,\\ \int _{x-2q}^0\frac{1}{q}\,\mathrm{d}y&{}\quad \text {if}\quad q\le x\le 2q,\\ 0&{}\quad \text {otherwise} \end{array}\right. } \\ {}&=\frac{1}{1-q}{\left\{ \begin{array}{ll} \frac{x+1-2q}{q}&{}\quad \text {if}\quad 2q-1\le x\le 3q-1,\\ 1&{}\quad \text {if}\quad 3q-1\le x\le q,\\ \frac{2q-x}{q}&{}\quad \text {if}\quad q\le x\le 2q,\\ 0&{}\quad \text {otherwise} \end{array}\right. } \end{aligned}$$
Hence for \(0\le q\le 1/3\), we have
$$\begin{aligned} p_2^\mathrm{PD}&=\int _0^\infty f_Z^\mathrm{PD}(x)\,\mathrm{d}x=\int _0^{q}f_Z^\mathrm{PD}(x)\,\mathrm{d}x+\int _{q}^{2q}f_Z^\mathrm{PD}(x)\, \mathrm{d}x \\&= \frac{1}{1-q}\Big (\int _0^q 1\,\mathrm{d}x+ \int _q^{2q}\frac{2q-x}{q}\,\mathrm{d}x\Big ) \\ {}&=\frac{3q}{2(1-q)}. \end{aligned}$$
For \(1/3\le q\le 1/2\), the same computation yields exactly the same formula for \(f_Z^\mathrm{PD}\); the only difference is that the breakpoint \(3q-1\) is now non-negative, so the integration over \([0,\infty )\) also picks up the first branch of the density.
Hence for \(1/3\le q\le 1/2\), we have
$$\begin{aligned} p_2^\mathrm{PD}&=\int _0^\infty f_Z^\mathrm{PD}(x)\,\mathrm{d}x=\int _0^{3q-1}f_Z^\mathrm{PD}(x)\,\mathrm{d}x+\int _{3q-1}^{q}f_Z^\mathrm{PD}(x)\,\mathrm{d}x+\int _q^{2q}f_Z^\mathrm{PD}(x)\,\mathrm{d}x\\&=\frac{1}{1-q}\Big (\int _0^{3q-1}\frac{x+1-2q}{q} \,\mathrm{d}x+ \int _{3q-1}^q 1\,\mathrm{d}x+\int _q^{2q}\frac{2q-x}{q}\,\mathrm{d}x\Big )\\&=3-\frac{1}{2q(1-q)}. \end{aligned}$$
Fig. 3

Probabilities of observing a certain number of equilibrium points for each social dilemma game, for different mutation strengths, q. S and T are drawn from uniform distributions. The results are averaged over sampling \(10^6\) pairs of S and T drawn from the corresponding ranges in a social dilemma. All results are obtained using Mathematica

In summary, we obtain
$$\begin{aligned} p_2^\mathrm{PD}={\left\{ \begin{array}{ll} \frac{3 q}{2(1-q)}&{}\quad \text {if}\quad 0<q\le 1/3,\\ 3-\frac{1}{2q(1-q)}&{}\quad \text {if}\quad 1/3\le q\le 1/2. \end{array}\right. } \end{aligned}$$
It follows that \(q\mapsto p_2^\mathrm{PD}\) is also increasing. We also plot this function in Fig. 2. Moreover, in Fig. 3, we numerically compute the probability of having a certain number of equilibria for each game by averaging over \(10^6\) samples of T and S. The numerical results are in accordance with the analytical computations. In the H-game, \(p_2=1\) (hence \(p_1=p_3=0\)) for all values of q. In the SD-game, \(p_3=1\) when \(q=0\) (hence \(p_1=p_2=0\)), but \(p_2=1\) (hence \(p_1=p_3=0\)) for all \(q>0\). In the PD-game, \(p_2=1\) when \(q=0\) (hence \(p_1=p_3=0\)), but when \(0<q<1/2\), all of \(p_1\), \(p_2\) and \(p_3\) are positive, although \(p_3\) is very small; \(p_2\) is increasing and attains its maximum 1 at \(q=1/2\). In the SH-game, \(p_3=1\) when \(q=0\) (hence \(p_1=p_2=0\)); when \(0<q<1/2\), the picture is more diverse: all of \(p_1\), \(p_2\) and \(p_3\) are non-negligible, and \(p_2\) is increasing and attains its maximum 1/2 at \(q=1/2\). Moreover, note that for \(q > 0\) there is always one equilibrium at \(x = 0\), and the remaining equilibria are internal. By contrast, when \(q = 0\), the PD- and H-games always have two non-internal equilibria (at \(x = 0\) and \(x = 1\)), while the SH- and SD-games have three equilibria (two non-internal and one internal). With mutation (\(q > 0\)), \(x = 1\) is no longer an equilibrium in any of the games. Therefore, the SD-game keeps the same number of internal equilibria (one), while the H-game gains one internal equilibrium. In the PD-game, the probability of having at least one internal equilibrium increases with q. In the SH-game, the probability of having two internal equilibria (i.e. gaining one compared to the no-mutation case) is high. In short, except for the SD-game, introducing mutation creates a positive probability of gaining an additional internal equilibrium (thus increasing behavioural diversity) in all social dilemmas. This probability is 100% in the H-game, increases with q in the PD-game (reaching 100% at \(q = 0.5\)) and is roughly 40–60% in the SH-game.
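The piecewise formula for \(p_2^\mathrm{PD}\) can be checked the same way by sampling \(T\sim U([1,2])\) and \(S\sim U([-1,0])\) (our sketch; note that the two branches agree at \(q=1/3\), where both give 3/4):

```python
import random

def p2_PD_exact(q):
    # piecewise closed form obtained above
    return 3 * q / (2 * (1 - q)) if q <= 1 / 3 else 3 - 1 / (2 * q * (1 - q))

def p2_PD_mc(q, n=200_000, seed=2):
    """Monte Carlo estimate of Prob(qT + (1-q)S > 0),
    with T ~ U([1,2]) and S ~ U([-1,0])."""
    rng = random.Random(seed)
    hits = sum(q * (1 + rng.random()) - (1 - q) * rng.random() > 0
               for _ in range(n))
    return hits / n
```
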

3.2 Expected Number of Equilibria of Multi-Player Two-Strategy Games

We recall that finding an equilibrium point of the replicator–mutator dynamics for d-player two-strategy games is equivalent to finding a positive root of the polynomial (17) with coefficients given in (18). In this section, by employing techniques from random polynomial theory, we provide explicit formulas for the computation of the expected number of internal equilibrium points of the replicator–mutator dynamics where the entries of the payoff matrix are random variables, thus extending our previous results for the replicator dynamics [4, 5, 6, 7]. We will apply the following result on the expected number of positive roots of a general random polynomial.

Theorem 3

[8, Theorem 3.1] Consider a random polynomial
$$\begin{aligned} Q(x)=\sum _{k=0}^n \alpha _k x^k, \end{aligned}$$
where the coefficients \(\{\alpha _k\}_{0\le k\le n}\) are jointly normally distributed with mean zero and covariance matrix C. Then, the expected number of positive roots of Q is given by:
$$\begin{aligned} E_Q=\frac{1}{\pi }\int _{0}^\infty \Big (\frac{\partial ^2}{\partial x\partial y}\big (\log v(x)^\mathrm{T} C v(y)\big )\big \vert _{y=x=t}\Big )^\frac{1}{2}\,\mathrm{d}t, \end{aligned}$$
(22)
where
$$\begin{aligned} v(x)=\begin{pmatrix} 1\\ x\\ \vdots \\ x^n \end{pmatrix},\quad v(y)=\begin{pmatrix} 1\\ y\\ \vdots \\ y^n \end{pmatrix}. \end{aligned}$$
Defining
$$\begin{aligned} H(x,y)=\sum _{i,j=0}^n C_{ij} x^i y^j, \quad M(t)=H(t,t),\quad A(t)=\partial ^2_{xy} H(x,y)\vert _{y=x=t}, \quad B(t)=\partial _x H(x,y)\vert _{y=x=t}, \end{aligned}$$
then \(E_Q\) can be written as:
$$\begin{aligned} E_Q=\frac{1}{\pi }\int _0^\infty \frac{\sqrt{A(t)M(t)-B(t)^2}}{M(t)}\,\mathrm{d}t. \end{aligned}$$
(23)
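Formula (23) is straightforward to evaluate numerically. The sketch below (ours) maps \((0,\infty )\) to \((0,1)\) via the substitution \(t=u/(1-u)\) and applies a midpoint rule. As a sanity check, for a degree-one polynomial with i.i.d. standard normal coefficients (\(C=I_2\)) the integrand reduces to \(1/(1+t^2)\), giving \(E_Q=1/2\); this matches the elementary observation that \(\alpha _0+\alpha _1 x\) has a positive root exactly when \(\alpha _0\) and \(\alpha _1\) have opposite signs.

```python
import numpy as np

def expected_positive_roots(C, m=20000):
    """Evaluate formula (23): E_Q = (1/pi) * integral of sqrt(A M - B^2)/M.

    C is the (n+1) x (n+1) covariance matrix of the coefficients.
    The integral over (0, inf) is mapped to (0, 1) via t = u/(1-u)
    and approximated by the midpoint rule with m points.
    """
    n = C.shape[0] - 1
    k = np.arange(n + 1)
    def integrand(t):
        v = t ** k                                # v(t) = (1, t, ..., t^n)
        dv = k * t ** np.clip(k - 1, 0, None)     # v'(t)
        M = v @ C @ v                             # M(t) = H(t, t)
        A = dv @ C @ dv                           # A(t)
        B = dv @ C @ v                            # B(t)
        return np.sqrt(max(A * M - B * B, 0.0)) / M
    u = (np.arange(m) + 0.5) / m
    return sum(integrand(x / (1 - x)) / (1 - x) ** 2 for x in u) / m / np.pi
```
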
We now apply Theorem 3 to the random polynomial P given in (17) and obtain formulas for the expected number of equilibria of the replicator–mutator dynamics for d-player two-strategy games. It turns out that the case \(q=0.5\) needs special treatment since, according to Remark 1, \(x=1/2\) is always an equilibrium point.

3.2.1 The Case \(q\ne 0.5\)

Suppose that \(a_k\) and \(b_k\) are independent standard normally distributed random variables with mean zero. Then, for \(q\ne \frac{1}{2}\), the random vector \({\mathbf {c}}=\{c_0,\ldots , c_{d+1}\}\) defined in (18) has a (symmetric) covariance matrix \(C=(C_{ij})_{0\le i,j\le d+1}\) given by:
$$\begin{aligned} C_{kk}&={\left\{ \begin{array}{ll} q^2~~&{}\text {for}~~k=0,\\ 2 (q-1)^2+q^2(d-1)^2~~&{}\text {for}~~k=1,\\ q^2 \begin{pmatrix} d-1\\ k-2 \end{pmatrix}^2+2(q-1)^2\begin{pmatrix} d-1\\ k-1 \end{pmatrix}^2+q^2\begin{pmatrix} d-1\\ k \end{pmatrix}^2~~&{}\text {for}~~k=2,\ldots , d-1,\\ 2(q-1)^2+ q^2(d-1)^2~~&{}\text {for}~~k=d,\\ q^2 ~~&{}\text {for}~~k={d+1}; \end{array}\right. } \\ C_{k k+1}&={\left\{ \begin{array}{ll} q(q-1)~~&{}\text {for}~~k=0,\\ q(q-1)+q(q-1)(d-1)^2~~&{}\text {for}~~k=1,\\ q(q-1)\begin{pmatrix} d-1\\ k-1 \end{pmatrix}^2+q(q-1)\begin{pmatrix} d-1\\ k \end{pmatrix}^2~~&{}\text {for}~~k=2,\ldots , d-2,\\ q(q-1)(d-1)^2+q(q-1)~~&{}\text {for}~~k=d-1,\\ q(q-1)~~&{}\text {for}~~k=d; \end{array}\right. } \\ C_{i j}&=0~~\text {for}~~0\le i<j\le d+1~: j-i\ge 2. \end{aligned}$$
Using the convention that whenever \(k<0\) or \(k>n\) then \( \begin{pmatrix} n\\ k \end{pmatrix}=0 \), we can simplify C as:
$$\begin{aligned} C_{kk}&=q^2 \begin{pmatrix} d-1\\ k-2 \end{pmatrix}^2+2(q-1)^2\begin{pmatrix} d-1\\ k-1 \end{pmatrix}^2+q^2\begin{pmatrix} d-1\\ k \end{pmatrix}^2~~\text {for}~~ k=0,\ldots , d+1, \end{aligned}$$
(24)
$$\begin{aligned} C_{kk+1}&=q(q-1)\begin{pmatrix} d-1\\ k-1 \end{pmatrix}^2+q(q-1)\begin{pmatrix} d-1\\ k \end{pmatrix}^2, ~~\text {for}~~ k=0,\ldots , d, \end{aligned}$$
(25)
$$\begin{aligned} C_{i j}&=0~~\text {for}~~0\le i<j\le d+1~: j-i\ge 2. \end{aligned}$$
(26)
Applying Theorem 3, we obtain the following result.

Proposition 1

Suppose that \(a_k\) and \(b_k\) are independent standard normally distributed random variables with mean zero and that \(q\ne 0.5\). We define
$$\begin{aligned} H(x,y)&=\sum _{k=0}^{d+1}C_{kk}x^k y^k+\sum _{k=0}^d C_{kk+1}(x^k y^{k+1}+x^{k+1}y^k),\\ M(t)&=H(t,t),\quad A(t)=\partial ^2_{xy}H(x,y)\big \vert _{y=x=t},\quad B(t)=\partial _{x}H(x,y)\big \vert _{y=x=t}, \end{aligned}$$
where the coefficients \(C_{ij}\), \(0\le i,j \le d+1\), are given in (24), (25) and (26). Then, the expected number of equilibria of a d-player two-strategy replicator–mutator dynamics is given by
$$\begin{aligned} E=\frac{1}{\pi }\int _0^\infty \frac{\sqrt{A(t)M(t)-B^2(t)}}{M(t)}\,\mathrm{d}t. \end{aligned}$$

3.2.2 The Case \(q=0.5\)

The case \(q=0.5\) needs to be treated differently since, according to Remark 1, \(x=1/2\) is always an equilibrium. By Remark 1, the other equilibrium points are roots of the average fitness of the whole population, \({\bar{f}}(x)=0\); that is
$$\begin{aligned} 0={\bar{f}}(x)= & {} x f_1(x)+(1-x)f_2(x)\overset{(12)}{=}\sum _{k=0}^{d-1}a_k\begin{pmatrix} d-1\\ k \end{pmatrix} x^{k+1} (1-x)^{d-1-k} \\&+\sum _{k=0}^{d-1}b_k\begin{pmatrix} d-1\\ k \end{pmatrix} x^{k} (1-x)^{d-k}. \end{aligned}$$
Since \(x=1\) is not a solution, dividing the right-hand side of the above equation by \((1-x)^d\) and letting \(t:=\frac{x}{1-x}\), we obtain the following equation:
$$\begin{aligned} P(t)&=\sum _{k=0}^{d-1}a_k\begin{pmatrix} d-1\\ k \end{pmatrix} t^{k+1}+\sum _{k=0}^{d-1}b_k\begin{pmatrix} d-1\\ k \end{pmatrix} t^{k} \\ {}&=\sum _{k=0}^d\left[ a_{k-1}\begin{pmatrix} d-1\\ k-1 \end{pmatrix}+b_k\begin{pmatrix} d-1\\ k \end{pmatrix}\right] t^k \\ {}&=:\sum _{k=0}^d c_k t^k, \end{aligned}$$
where
$$\begin{aligned} c_k=a_{k-1}\begin{pmatrix} d-1\\ k-1 \end{pmatrix}+b_k\begin{pmatrix} d-1\\ k \end{pmatrix},\quad \text {for}~k=0,\ldots , d. \end{aligned}$$
Suppose that \(a_k\) and \(b_k\) are independent standard normally distributed random variables with mean zero. Then, the random vector \({\mathbf {c}}=\{c_0,\ldots , c_{d}\}\) has a (symmetric) covariance matrix \(C=(C_{ij})_{0\le i,j\le d}\) given by:
$$\begin{aligned} C_{ij}=\left( \begin{pmatrix} d-1\\ i-1 \end{pmatrix}^2+\begin{pmatrix} d-1\\ i \end{pmatrix}^2\right) \, \delta _{ij}, \end{aligned}$$
where \(\delta _{ij}\) is the Kronecker delta. Applying Theorem 3 and noticing that \(x=1/2\) is always an equilibrium, we obtain the following result.

Proposition 2

Suppose that \(a_k\) and \(b_k\) are independent standard normally distributed random variables with mean zero and that \(q=0.5\). We define
$$\begin{aligned} H(x,y)&=\sum _{k=0}^{d}\left( \begin{pmatrix} d-1\\ k-1 \end{pmatrix}^2+\begin{pmatrix} d-1\\ k \end{pmatrix}^2\right) x^k y^k,\\ M(t)&=H(t,t),\quad A(t)=\partial ^2_{xy}H(x,y)\big \vert _{y=x=t},\quad B(t)=\partial _{x}H(x,y)\big \vert _{y=x=t}, \end{aligned}$$
Then, the expected number of equilibria of a d-player two-strategy replicator–mutator dynamics is given by:
$$\begin{aligned} E=1+\frac{1}{\pi }\int _0^\infty \frac{\sqrt{A(t)M(t)-B^2(t)}}{M(t)}\,\mathrm{d}t. \end{aligned}$$
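The same quadrature applies to Proposition 2, where the covariance is diagonal (our sketch). As a hand check, for \(d=2\) the variances are (1, 2, 1), so \(H(x,y)=(1+xy)^2\), the integrand reduces to \(\sqrt{2}/(1+t^2)\), and \(E=1+1/\sqrt{2}\approx 1.71\).

```python
import math
import numpy as np

def expected_equilibria_q_half(d, m=20000):
    """Proposition 2 (q = 0.5): E = 1 + (1/pi) * integral, diagonal covariance."""
    def bn(j):
        return math.comb(d - 1, j) if 0 <= j <= d - 1 else 0
    var = np.array([bn(k - 1)**2 + bn(k)**2 for k in range(d + 1)], float)
    kk = np.arange(d + 1)
    def integrand(t):
        v = t ** kk
        dv = kk * t ** np.clip(kk - 1, 0, None)
        M = var @ (v * v)             # H(t, t)
        A = var @ (dv * dv)
        B = var @ (dv * v)
        return math.sqrt(max(A * M - B * B, 0.0)) / M
    u = (np.arange(m) + 0.5) / m      # midpoint rule after t = u/(1-u)
    return 1 + sum(integrand(x/(1 - x)) / (1 - x)**2 for x in u) / m / math.pi
```
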
In Fig. 4, we show that the results obtained from the analytical formulas of E agree with those obtained from numerical simulations by averaging over a large number of randomly generated payoff matrices. Figure 4 also reveals that the expected number of equilibria exhibits several interesting behaviours. We elaborate on this point in Sect. 4.
Fig. 4

(Left panel) Analytical vs. simulation results for the average number of internal equilibrium points (E) for varying q and for different values of d. The solid lines are generated from the analytical (A) formulas of E. The solid diamonds capture simulation (S) results obtained by averaging over \(10^6\) samples of the payoff entries (normal distribution). Analytical and simulation results are in accordance with each other. (Right panel) Plot of E for increasing d and for different values of q. In general, E increases with d. E is always larger when \(q > 0\) than when \(q = 0\). Also, E is largest when q is close to 0 (i.e. rare mutation). All results are obtained using Mathematica

Fig. 5

Plot of \(\log (E)/\log (d+1)\) for varying d. For different values of q, this quantity converges to the same value. All results are obtained using Mathematica

4 Conclusion and Outlook

Understanding equilibrium properties of the replicator–mutator dynamics for multi-player multi-strategy games is a difficult problem due to its complexity: to find an equilibrium, one needs to solve a system of multivariate polynomials. In this paper, employing techniques from classical and random polynomial theory, we study the number of equilibria for both deterministic and random two-strategy games. For deterministic games, using Descartes’ rule of signs and its recent developments, we provide a method to compute the number of equilibria via the sign changes of the coefficients of a polynomial. For two-player social dilemma games, we compute the probability of observing a certain number of equilibria when the payoff entries are uniformly distributed. For multi-player two-strategy random games whose payoffs are independently distributed according to a normal distribution, we obtain explicit formulas to compute the expected number of equilibria by relating it to the expected number of positive roots of a random polynomial. We also perform numerical simulations to compare with and to illustrate our analytical results. We observe that E is always larger in the presence of mutation (i.e. when \(q > 0\)) than when mutation is absent (i.e. when \(q = 0\)), implying that mutation leads to larger behavioural diversity in a dynamical system (see again Fig. 4). Interestingly, E is largest when q is close to 0 (i.e. rare mutation), rather than when it is large. In general, our findings might have important implications for the understanding of social and biological diversities, where biological mutations and behavioural errors are present, i.e. in the study of evolution of cooperative behaviour and population fitness distribution [22, 26, 31]. Furthermore, numerical simulations also suggest a number of open problems that we leave for future work.

Asymptotic Behaviour of the Expected Number of Equilibria When the Number of Players Tends to Infinity In [5], we proved that
$$\begin{aligned} \lim \limits _{d\rightarrow \infty }\frac{\ln E(d)}{\ln (d-1)}=\frac{1}{2}, \end{aligned}$$
(27)
where E(d) is the expected number of internal equilibria of the replicator dynamics for d-player two-strategy games in which the payoff entries are randomly distributed. To obtain (27), we utilized several useful connections to Legendre polynomials. In Fig. 5, we plot \(\frac{\ln E(q,d)}{\ln (d+1)}\), where E(q, d) is the expected number of equilibria of the replicator–mutator dynamics, as a function of d for various values of q. We observe that these quantities all converge to the same limit as d tends to infinity, but in different manners: for \(q=0\), the quantity increases towards the limit, while for sufficiently small \(q > 0\), it first decreases and then, for sufficiently large d, also increases towards the limit. Thus, a phase transition is expected; proving this rigorously would be an interesting problem. The method used in [5] does not seem to apply here, since there is no direct connection to Legendre polynomials.
Asymptotic Behaviour of the Expected Number of Equilibria When the Mutation Tends to Zero The classical replicator dynamics is obtained from the replicator–mutator dynamics by setting the mutation to be zero. Thus, it is a natural question to ask how a certain quantity (such as the expected number of equilibria) behaves when the mutation tends to zero. Both Figs. 4 and  5 demonstrate that the expected number of equilibria changes significantly when the mutation is turned on. In addition, using explicit formulas of the probability of observing two equilibria for the SH-game and the PD-game obtained in Sect. 3, we clearly see a jump when q approaches zero:
$$\begin{aligned} \lim _{q\rightarrow 0}p_2^{q,\mathrm{G}}=0\ne 1=p_2^{0,\mathrm{G}},\quad \mathrm{G}\in \{\mathrm{SH},\mathrm{PD}\}. \end{aligned}$$
Both observations suggest that these quantities exhibit singular behaviour at \(q=0\). Characterizing this behaviour would be a challenging problem for future work.

Bifurcation Phenomena of the Replicator–Mutator Dynamics for Multi-Player Games In [25], the authors proved Hopf bifurcations for the replicator–mutator dynamics with \(d=2\) and \(n\ge 3\) and characterized the existence of stable limit cycles using an analytical derivation of the Hopf bifurcations points and the corresponding first Lyapunov coefficients. In addition, they also showed that the limiting behaviours are tied to the structure of the fitness model. Another interesting topic for further research would be to extend the results of [25] to multi-player games.


Acknowledgements

We would like to thank anonymous referees for useful suggestions which help us improve the presentation of the paper; in particular, Remark 2 was suggested to us by one of the referees. The Anh Han also acknowledges support from Future of Life Institute (Grant RFP2-154).

References

  1. Avendaño M (2010) Descartes' rule of signs is exact! J Algebra 324(10):2884–2892
  2. Broom M, Cannings C, Vickers GT (1997) Multi-player matrix games. Bull Math Biol 59(5):931–952
  3. Broom M (2000) Bounds on the number of ESSs of a matrix game. Math Biosci 167(2):163–175
  4. Duong MH, Han TA (2016) On the expected number of equilibria in a multi-player multi-strategy evolutionary game. Dyn Games Appl 6(3):324–346
  5. Duong MH, Han TA (2016) Analysis of the expected density of internal equilibria in random evolutionary multi-player multi-strategy games. J Math Biol 73(6):1727–1760
  6. Duong MH, Tran HM, Han TA (2019) On the expected number of internal equilibria in random evolutionary games with correlated payoff matrix. Dyn Games Appl 9(2):458–485
  7. Duong MH, Tran HM, Han TA (2019) On the distribution of the number of internal equilibria in random evolutionary games. J Math Biol 78(1):331–371
  8. Edelman A, Kostlan E (1995) How many zeros of a random polynomial are real? Bull Am Math Soc (NS) 32(1):1–37
  9. Fudenberg D, Harris C (1992) Evolutionary dynamics with aggregate shocks. J Econ Theory 57:420–441
  10. Galla T, Farmer JD (2013) Complex dynamics in learning complicated games. Proc Natl Acad Sci 110(4):1232–1236
  11. Gross T, Rudolf L, Levin SA, Dieckmann U (2009) Generalized models reveal stabilizing factors in food webs. Science 325(5941):747–750
  12. Gokhale CS, Traulsen A (2010) Evolutionary games in the multiverse. Proc Natl Acad Sci USA 107(12):5500–5504
  13. Gokhale CS, Traulsen A (2014) Evolutionary multiplayer games. Dyn Games Appl 4(4):468–488
  14. Hadeler KP (1981) Stable polymorphisms in a selection model with mutation. SIAM J Appl Math 41(1):1–7
  15. Hauert C, De Monte S, Hofbauer J, Sigmund K (2002) Volunteering as Red Queen mechanism for cooperation in public goods games. Science 296:1129–1132
  16. Han TA, Pereira LM, Lenaerts T (2017) Evolution of commitment and level of participation in public goods games. Auton Agent Multi Agent Syst 31(3):561–583
  17. Han TA, Traulsen A, Gokhale CS (2012) On equilibrium properties of evolutionary multi-player games with random payoff matrices. Theor Popul Biol 81(4):264–272
  18. Imhof LA, Fudenberg D, Nowak MA (2005) Evolutionary cycles of cooperation and defection. Proc Natl Acad Sci 102(31):10797–10800
  19. Komarova NL, Levin SA (2010) Eavesdropping and language dynamics. J Theor Biol 264(1):104–118
  20. Komarova NL, Niyogi P, Nowak MA (2001) The evolutionary dynamics of grammar acquisition. J Theor Biol 209(1):43–59
  21. Komarova NL (2004) Replicator–mutator equation, universality property and population dynamics of learning. J Theor Biol 230(2):227–239
  22. Levin SA (2000) Multiple scales and the maintenance of biodiversity. Ecosystems 3(6):498–506
  23. Nowak MA, Komarova NL, Niyogi P (2001) Evolution of universal grammar. Science 291(5501):114–118
  24. Olfati-Saber R (2007) Evolutionary dynamics of behavior in social networks. In: 46th IEEE conference on decision and control, pp 4051–4056
  25. Pais D, Caicedo-Núnez C, Leonard N (2012) Hopf bifurcations and limit cycles in evolutionary network dynamics. SIAM J Appl Dyn Syst 11(4):1754–1784
  26. Peña J (2012) Group-size diversity in public goods games. Evolution 66(3):623–636
  27. Perc M, Jordan JJ, Rand DG, Wang Z, Boccaletti S, Szolnoki A (2017) Statistical physics of human cooperation. Phys Rep 687:1–51
  28. Peña J, Lehmann L, Nöldeke G (2014) Gains from switching and evolutionary stability in multi-player matrix games. J Theor Biol 346:23–33
  29. Peña J, Nöldeke G, Lehmann L (2015) Evolutionary dynamics of collective action in spatially structured populations. J Theor Biol 382:122–136
  30. Powers V, Reznick B (2001) A new bound for Pólya's theorem with applications to polynomials positive on polyhedra. J Pure Appl Algebra 164(1):221–229
  31. Santos FC, Pinheiro FL, Lenaerts T, Pacheco JM (2012) The role of diversity in the evolution of cooperation. J Theor Biol 299:88–96
  32. Santos FC, Pacheco JM, Lenaerts T (2006) Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proc Natl Acad Sci USA 103:3490–3494
  33. Stadler PF, Schuster P (1992) Mutation in autocatalytic reaction networks. J Math Biol 30(6):597–632
  34. Sturmfels B (2002) Solving systems of polynomial equations. CBMS regional conference series, no. 97. American Mathematical Society, Providence
  35. Wang Z, Kokubo S, Jusup M, Tanimoto J (2015) Universal scaling for the dilemma strength in evolutionary games. Phys Life Rev 14:1–30

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Mathematics, University of Birmingham, Birmingham, UK
  2. School of Computing and Digital Technologies, Teesside University, Middlesbrough, UK
