Biology and Evolutionary Games

  • Mark Broom
  • Vlastimil Křivan
Reference work entry


This chapter surveys some evolutionary games used in biological sciences. These include the Hawk–Dove game, the Prisoner’s Dilemma, Rock–Paper–Scissors, the war of attrition, the Habitat Selection game, predator–prey games, and signaling games.


Keywords: Battle of the Sexes · Foraging games · Habitat Selection game · Hawk–Dove game · Prisoner’s Dilemma · Rock–Paper–Scissors · Signaling games · War of attrition

1 Introduction

Evolutionary game theory (EGT), as conceived by Maynard Smith and Price (1973), was motivated by evolution. Several authors at that time (e.g., Lorenz 1963; Wynne-Edwards 1962) argued that animal behavior patterns were “for the good of the species” and that natural selection acts at the group level. This point of view was at odds with the Darwinian viewpoint, where natural selection operates at the individual level. In particular, adaptive mechanisms that maximize a group benefit do not necessarily maximize individual benefit. This led Maynard Smith and Price (1973) to develop a mathematical model of animal behavior, called the Hawk–Dove game, that clearly shows the difference between group selection and individual selection. We thus start this chapter with the Hawk–Dove game.

Today, evolutionary game theory is one of the milestones of evolutionary ecology, as it put the concept of Darwinian evolution on solid mathematical grounds. Evolutionary game theory spread quickly in behavioral and evolutionary biology, with many influential models that changed the way scientists look at evolution today. As evolutionary game theory is noncooperative, with each individual maximizing its own fitness, it seemed that it could not explain cooperative or altruistic behavior, which was easy to explain on the grounds of the group selection argument. Perhaps the most influential model in this respect is the Prisoner’s Dilemma (Poundstone 1992), where the evolutionarily stable strategy leads to a collective payoff that is lower than the maximal payoff the two individuals could achieve if they cooperated. Several models within evolutionary game theory have been developed that show how mutual cooperation can evolve. We discuss some of these models in Sect. 3. A popular game played by human players across the world, which can also be used to model some biological populations, is the Rock–Paper–Scissors game (RPS; Sect. 4). All of these games are single-species matrix games with a finite number of strategies, so that their payoffs are linear. An example of a game that cannot be described by a matrix and that has a continuum of strategies is the war of attrition in Sect. 5.1 (or alternatively the Sir Philip Sidney game mentioned in Sect. 10). A game with nonlinear payoffs which examines an important biological phenomenon is the sex-ratio game in Sect. 5.2.

Although evolutionary game theory started with consideration of a single species, it was soon extended to two interacting species. This extension was not straightforward, because the crucial mechanism of a (single-species) EGT, that is, the negative frequency dependence that stabilizes phenotype frequencies at an equilibrium, is missing when individuals of one species interact with individuals of another species. These games are asymmetric, because the two contestants are in different roles (such asymmetric games also occur within a single species). Such games that can be described by two matrices are called bimatrix games. Representative examples include the Battle of the Sexes (Sect. 6.1) and the Owner–Intruder game (Sect. 6.2). An animal spatial distribution that is evolutionarily stable is called the Ideal Free Distribution (IFD; Sect. 7). We discuss the IFD first for a single species and then for two species. The resulting model is described by four matrices, so it is no longer a bimatrix game. The IFD, as an outcome of animal dispersal, is related to the question of under which conditions animal dispersal can evolve (Sect. 8). Section 9 focuses on foraging games. We discuss two models that use EGT. The first, which uses decision trees, derives the diet selection model of optimal foraging. This model asks what the optimal diet of a generalist predator is in an environment with two (or more) prey species. We show that this problem can be solved using the so-called agent normal form of an extensive game. We then consider a game between prey individuals that try to avoid their predators and predators that aim to capture prey individuals. The last game we consider in some detail is a signaling game of mate quality, which was developed to help explain the presence of costly ornaments, such as the peacock’s tail.

We conclude with a brief section discussing a few other areas where evolutionary game theory has been applied. However, a large variety of models that use EGT have been developed in the literature, and it is virtually impossible to survey all of them.

2 The Hawk–Dove Game: Selection at the Individual Level vs. Selection at the Population Level

One of the first evolutionary games was introduced to understand the evolution of aggressiveness among animals (Maynard Smith and Price 1973). Although many species have strong weapons (e.g., teeth or horns), it is a puzzling observation that in many cases antagonistic encounters do not result in a fight. In fact, such encounters often result in a complicated series of behaviors, but without serious injuries. For example, in contests between two male red deer, the contestants first approach each other, and provided one does not withdraw, the contest escalates to a roaring contest and then to the so-called parallel walk. Only if this does not lead to the withdrawal of one deer does a fight follow. It was observed (Maynard Smith 1982) that out of 50 encounters, only 14 resulted in a fight. The obvious question is why animals do not always fight. As it is good for an individual to get the resource (in the case of the deer, the resources are females for mating), Darwinian selection seems to suggest that individuals should fight whenever possible. One possible answer is that such behavior is for the good of the species, because any species following this aggressive strategy would die out quickly. If so, then we should accept that the unit of selection is not the individual and abandon H. Spencer’s “survival of the fittest” (Spencer 1864).

The Hawk–Dove model explains animal contest behavior from the Darwinian point of view. The model considers interactions between two individuals from the same population that meet in a pairwise contest. Each individual uses one of two strategies, called Hawk and Dove. An individual playing Hawk is ready to fight when meeting an opponent, while an individual playing Dove does not escalate the conflict. The game is characterized by two positive parameters, where V denotes the value of the contested resource and C is the cost of the fight, measured as the damage one individual can cause to its opponent. The payoffs for the row player describe the increase/decrease in the player’s fitness after an encounter with an opponent. The game matrix is
$$\displaystyle \begin{aligned}\begin{array}{cc} & \begin{array}{cc} H \;\;&\; D \end{array}\\ \begin{array}{c} H\\ D \end{array} & \left(\begin{array}{cc} \frac{V-C}{2} & V\\ 0 & \frac{V}{2} \end{array}\right)\end{array}\end{aligned}$$
and the model predicts that when the cost of the fight is lower than the reward obtained from getting the resource, C < V, all individuals should play the Hawk strategy, which is the strict Nash equilibrium (NE) (and thus an evolutionarily stable strategy (ESS)) of the game. When the cost of a fight is larger than the reward obtained from getting the resource, C > V, then p∗ = V∕C (0 < p∗ < 1) is the corresponding monomorphic mixed ESS. In other words, each individual will play Hawk when encountering an opponent with probability p∗ and Dove with probability 1 − p∗. Thus, the model predicts that aggressiveness in the population decreases with the cost of fighting. In other words, species that possess strong weapons (e.g., antlers in deer) should solve conflicts with very little fighting.
Can individuals obtain a higher fitness when using a different strategy? In a monomorphic population where all individuals use a mixed strategy 0 < p < 1, the individual fitness and the average fitness in the population are the same and equal to
$$\displaystyle \begin{aligned}E(p,p)=\frac{V}{2}-\frac{C}{2} p^2.\end{aligned}$$
This fitness is maximized for p = 0; i.e., when the level of aggressiveness in the population is zero, all individuals play the Dove strategy, and individual fitness equals V∕2. Thus, if selection operated at the level of a population or species, all individuals should be phenotypically Doves that never fight. However, the strategy p = 0 cannot be an equilibrium from an evolutionary point of view, because in a Dove-only population, Hawks will always have a higher fitness (V) than Doves (V∕2) and will invade. In other words, the Dove strategy is not resistant to invasion by Hawkish individuals. Thus, ensuring that all individuals play the strategy D, which is beneficial from the population point of view, would require some higher organizational level that promotes cooperation between animals (Dugatkin and Reeve 1998, see also Sect. 3).
On the contrary, at the evolutionarily stable equilibrium p∗ = V∕C, individual fitness
$$\displaystyle \begin{aligned}E(p^{*},p^{*})=\frac{V}{2} \left(1-\frac{V}{C}\right)\end{aligned}$$
is always lower than V∕2. However, the ESS cannot be invaded by any other single mutant strategy.

Darwinism assumes that selection operates at the level of an individual, which is then consistent with noncooperative game theory. However, this is not the only possibility. Some biologists (e.g., Gilpin 1975) postulated that selection operates on a larger unit, a group (e.g., a population, a species etc.), maximizing the benefit of this unit. This approach was termed group selection. Alternatively, Dawkins (1976) suggested that selection operates on a gene level. The Hawk–Dove game allows us to show clearly the difference between the group and Darwinian selections.

The contrast between group selection and individual selection also nicely illustrates the so-called tragedy of the commons (Hardin 1968) (based on an example given by the English economist William Forster Lloyd), which predicts deterioration of the environment, measured by fitness, in an unconstrained situation where each individual maximizes its own profit. For example, when a common resource (e.g., fish) is over-harvested, the whole fishery collapses. To maintain a sustainable yield, regulation is needed that prevents over-exploitation (i.e., that does not allow Hawks, who would over-exploit the resource, to enter). Effectively, such a regulatory body keeps p at zero (or close to it), to maximize the benefits for all fishermen. Without such a regulatory body, Hawks would invade and necessarily decrease the profit for all. In fact, as the cost C increases (due to scarcity of resources), fitness at the ESS decreases, and when C equals V, fitness is zero.

2.1 Replicator Dynamics for the Hawk–Dove Game

In the previous section, we have assumed that all individuals play the same strategy, either pure or mixed. If the strategy is mixed, each individual randomly chooses one of its elementary strategies on any given encounter according to some given probability. In this monomorphic interpretation of the game, the population mean strategy coincides with the individual strategy. Now we will consider a distinct situation where n phenotypes exist in the population. In this polymorphic setting we say that a population is in state \(\mathbf {p}\in \varDelta _n\) (where \(\varDelta _{n}=\{\mathbf {p}\in \mathbb {R}^{n}\;|\;p_{i}\geq 0, p_{1}+\dots +p_{n}=1\}\) is a probability simplex) if pi is the proportion of the population using strategy i. As opposed to the monomorphic case, in this polymorphic interpretation, the individual strategies and the mean population strategy are different, because the mean strategy characterizes the population, not a single individual.

The ESS definition does not provide us with a mechanistic description of phenotype frequency dynamics that would converge to an ESS. One of the frequency dynamics often used in evolutionary game theory is the replicator dynamics (Taylor and Jonker 1978). Replicator dynamics assume that the population growth rate of each phenotype is given by its fitness, and they focus on changes in phenotypic frequencies in the population (see Volume I,  Chap. 6, “Evolutionary Game Theory”). Let us consider the replicator equation for the Hawk–Dove game. Let x be the frequency of Hawks in the population. The fitness of a Hawk is
$$\displaystyle \begin{aligned}E(H,x)=\frac{V-C}{2} x +V (1-x)\end{aligned}$$
and, similarly, the fitness of a Dove is
$$\displaystyle \begin{aligned}E(D,x)=(1-x) \frac{V}{2}.\end{aligned}$$
Then the average fitness in the population is
$$\displaystyle \begin{aligned}{E}(x,x)=x E(H,x)+(1-x) E(D,x)=\frac{V-C x^2}{2},\end{aligned}$$
and the replicator equation is
$$\displaystyle \begin{aligned}\frac{dx}{dt}=x \left(E(H,x)-E(x,x)\right)=\frac{1}{2} x (1-x) (V-C x).\end{aligned}$$
Assuming C > V, we remark that the interior equilibrium of this equation, x∗ = V∕C, corresponds to the mixed ESS of the underlying game. In this example, phenotypes correspond to the elementary strategies of the game; in general, phenotypes may also correspond to mixed strategies.
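The convergence of the Hawk frequency to the mixed ESS can be sketched numerically; the following is a minimal Euler integration of the replicator equation above, assuming illustrative values V = 2 and C = 3 (so C > V and the ESS is at x∗ = V∕C = 2∕3):

```python
# Euler integration of the Hawk-Dove replicator equation
#   dx/dt = (1/2) x (1 - x) (V - C x)
# with illustrative (assumed) values V = 2, C = 3, so C > V and x* = V/C.
V, C = 2.0, 3.0

x = 0.1      # initial frequency of Hawks
dt = 0.01
for _ in range(20_000):   # integrate up to t = 200
    x += dt * 0.5 * x * (1 - x) * (V - C * x)

print(round(x, 4))   # 0.6667, i.e. x converges to V/C = 2/3
```

Starting from any interior frequency, the trajectory approaches V∕C, illustrating the negative frequency dependence discussed above: Hawks do well when rare and poorly when common.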

3 The Prisoner’s Dilemma and the Evolution of Cooperation

The Prisoner’s Dilemma (see Flood 1952; Poundstone 1992) is perhaps the most famous game in all of game theory, with applications from areas including economics, biology, and psychology. Two players play a game where they can Defect or Cooperate, yielding the payoff matrix
$$\displaystyle \begin{aligned}\begin{array}{cc} & \begin{array}{cc} C \;\;&\; D \end{array}\\ \begin{array}{c} C\\ D \end{array} & \left(\begin{array}{cc} R & S\\ T & P \end{array}\right)\end{array}\end{aligned}$$
These abbreviations are derived from Reward (the reward for mutual cooperation), Temptation (the temptation to defect when the other player cooperates), Sucker (paying the cost of cooperation when the other player defects), and Punishment (paying the cost of mutual defection). The payoffs satisfy T > R > P > S. Thus, while mutual cooperation is Pareto efficient (in the sense that it is impossible to make either of the two players better off without making the other worse off), Defect dominates Cooperate and so is the unique ESS, even though mutual cooperation would yield the greater payoff. Real human (and animal) populations, however, involve a lot of cooperation; how is that enforced?
There are many mechanisms for enabling cooperation, see for example Nowak (2006). These can be divided into six types as follows:
  1. Kin selection, which occurs when the donor and recipient of some apparently altruistic act are genetic relatives.

  2. Direct reciprocity, requiring repeated encounters between two individuals.

  3. Indirect reciprocity, based upon reputation. An altruistic individual gains a good reputation, which means in turn that others are more willing to help that individual.

  4. Punishment, as a way to enforce cooperation.

  5. Network reciprocity, where there is not random mixing in the population and cooperators are more likely to interact with other cooperators.

  6. Multi-level selection, alternatively called group selection, where evolution occurs on more than one level.

We discuss some of these concepts below.

3.1 Direct Reciprocity

Direct reciprocity requires repeated interaction and can be modeled by the Iterated Prisoner’s Dilemma (IPD). The IPD involves playing the Prisoner’s Dilemma over a (usually large) number of rounds and thus being able to condition choices in later rounds on what the other player played before. This game was popularized by Axelrod (1981, 1984) who held two tournaments where individuals could submit computer programs to play the IPD. The winner of both tournaments was the simplest program submitted, called Tit for Tat (TFT), which simply cooperates on the first move and then copies its opponent’s previous move.

TFT here has three important properties: it is nice, so it never defects first; it is retaliatory, so it meets defection with defection on the next move; and it is forgiving, so even after previous defections, it meets cooperation with cooperation on the next move. TFT effectively has a memory of one move, and it was shown in Axelrod and Hamilton (1981) that TFT can resist invasion by any strategy that is not nice if it can resist both Always Defect (ALLD) and the alternating strategy ALT, which defects (cooperates) on odd (even) moves. However, this does not mean that TFT is an ESS, because nice strategies can invade by drift, as they receive payoffs identical to TFT in a TFT population (Bendor and Swistak 1995). We note that TFT is not the only strategy that can promote cooperation in the IPD; others include Tit for Two Tats (TF2T, which defects only after two successive defections of its opponent), Grim (which defects on all moves after its opponent’s first defection), and Win Stay/Lose Shift (which changes its choice if and only if its opponent defected on the previous move).

Games of TFT, ALLD, and ALT against TFT give the following sequences of moves:

TFT vs. TFT: CCCC… against CCCC…
ALLD vs. TFT: DDDD… against CDDD…
ALT vs. TFT: DCDC… against CDCD…

We assume that the number of rounds is not fixed and that there is always the possibility of a later round (otherwise the game can be solved by backwards induction, yielding ALLD as the unique NE strategy). After each round there is a further round with probability w (as in the second computer tournament); the payoffs are then
$$\displaystyle \begin{aligned} \begin{array}{rcl} E(TFT,TFT)&\displaystyle =&\displaystyle R+Rw+Rw^{2}+\ldots=\frac{R}{1-w}, \end{array} \end{aligned} $$
$$\displaystyle \begin{aligned} \begin{array}{rcl} E(ALLD,TFT)&\displaystyle =&\displaystyle T+Pw+Pw^{2}+\ldots=T+\frac{Pw}{1-w}, \end{array} \end{aligned} $$
$$\displaystyle \begin{aligned} \begin{array}{rcl} E(ALT,TFT)&\displaystyle =&\displaystyle T+Sw+Tw^{2}+Sw^{3}+\ldots=\frac{T+Sw}{1-w^2}. \end{array} \end{aligned} $$
Thus TFT resists invasion if and only if
$$\displaystyle \begin{aligned} \frac{R}{1-w}>\max \left(T+\frac{Pw}{1-w}, \frac{T+Sw}{1-w^2} \right)\end{aligned} $$
i.e., if and only if
$$\displaystyle \begin{aligned} w > \max \left( \frac{T-R}{T-P}, \frac{T-R}{R-S}\right),\end{aligned} $$
i.e., when the probability of another contest is sufficiently large (Axelrod 1981, 1984). We thus see that for cooperation to evolve here, the extra condition 2R > S + T is required, since otherwise the right-hand side of the last inequality would be at least 1.
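These formulas are easy to check numerically; the following sketch assumes the standard illustrative payoffs R = 3, T = 5, P = 1, S = 0 (which are not specified in the text) and compares TFT against both invaders on either side of the threshold:

```python
# Payoffs against a TFT opponent in the IPD with continuation probability w,
# using the standard illustrative PD values (an assumption): T > R > P > S.
R, T, P, S = 3, 5, 1, 0

def payoff_tft(w):  return R / (1 - w)                 # C forever
def payoff_alld(w): return T + P * w / (1 - w)         # T once, then P forever
def payoff_alt(w):  return (T + S * w) / (1 - w**2)    # alternating T, S

# TFT resists invasion iff w exceeds both terms of the max:
w_threshold = max((T - R) / (T - P), (T - R) / (R - S))   # max(1/2, 1/3*2) = 2/3

w = 0.9   # above the threshold: TFT beats both invaders
assert payoff_tft(w) > payoff_alld(w) and payoff_tft(w) > payoff_alt(w)

w = 0.5   # below the threshold: at least one invader does as well or better
assert payoff_tft(w) <= max(payoff_alld(w), payoff_alt(w))
```

With these values the threshold is 2∕3, coming from the ALT comparison, and 2R = 6 > 5 = S + T, so a suitable w exists.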
While TFT proved successful at promoting cooperation above, what if errors occur, so that an intention to cooperate becomes a defection (or is perceived as such)? After a single mistake, a pair of interacting TFT players becomes locked in an alternating cycle of defection versus cooperation, and after a second mistake (when C was intended), in mutual defection. Under such circumstances, TF2T can maintain cooperation, whereas TFT cannot. In fact a better strategy (in the sense that it maintains cooperation when playing against itself but resists invasion from defecting strategies) is GTFT (Generous Tit for Tat; see Komorita et al. 1968), which combines pure cooperation with TFT by always cooperating after a cooperation, but responding to a defection with cooperation with probability
$$\displaystyle \begin{aligned} \min \left( 1-\frac{T-R}{R-S}, \frac{R-P}{T-P} \right). \end{aligned}$$
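For illustration, with the same assumed standard payoffs R = 3, T = 5, P = 1, S = 0 as above (not values from the text), this probability evaluates to:

```python
# GTFT's probability of cooperating after an opponent's defection,
# min(1 - (T-R)/(R-S), (R-P)/(T-P)), with illustrative (assumed) PD payoffs.
R, T, P, S = 3, 5, 1, 0
q = min(1 - (T - R) / (R - S), (R - P) / (T - P))
print(q)   # min(1/3, 1/2), i.e. about 0.333
```

So a GTFT player forgives roughly one in three defections, enough to escape error-driven cycles without inviting exploitation.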

3.2 Kin Selection and Hamilton’s Rule

In most evolutionary game theoretical models, the aim of each individual is to maximize its own fitness, irrespective of the fitness of other individuals. However, if individuals are related, then the fitnesses of others should be taken into account.

Let us consider two interacting individuals, with coefficient of relatedness r, which is the probability that they share a copy of a given allele. For example, father and son will have r = 1∕2. One individual acts as a potential donor, the other as a recipient, which receives a benefit b from the donor at the donor’s cost c. The donating individual pays the full cost but also indirectly receives the benefit b multiplied by the above factor r. Thus donation is worthwhile provided that
$$\displaystyle \begin{aligned} rb>c \;\;\text{i.e.,} \;\;r > \frac{c}{b} \end{aligned}$$
which is known as Hamilton’s rule (Hamilton 1964).

Note that this condition is analogous to the condition for cooperation to resist invasion in the IPD above. A commonly used special class of the PD matrix represents cooperation as making a donation and defection as not doing so, with payoffs R = b − c, S = −c, T = b, and P = 0. Then TFT resists invasion when w > c∕b.
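This equivalence can be checked directly: under the donation-game payoffs both terms in the TFT stability threshold reduce to c∕b (illustrative values of b and c below are assumptions):

```python
# Donation-game parametrization of the PD: cooperation donates b at cost c.
b, c = 3.0, 1.0            # illustrative benefit and cost, b > c
R, S, T, P = b - c, -c, b, 0.0

# Both terms of max((T-R)/(T-P), (T-R)/(R-S)) collapse to c/b:
t1 = (T - R) / (T - P)     # = c / b
t2 = (T - R) / (R - S)     # = c / b
print(t1, t2)              # both equal c/b
```

Here T − R = c, T − P = b, and R − S = b, so the threshold on the continuation probability w is exactly Hamilton’s ratio c∕b, with w playing the role of relatedness r.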

3.3 Indirect Reciprocity and Punishment

The IPD is an example of direct reciprocity. Suppose now that we have a population of individuals who play many contests, but not in long sequences against a single “opponent” as above. If each individual faces a series of single-shot games, how can cooperation be achieved?

Such situations are often investigated by the use of public goods games involving experiments with groups of real people, as in the work of Fehr and Gachter (2002). In these experiments individuals play a series of games, each game involving a new group. In each game there were four individuals, each of them receiving an initial endowment of 20 dollars, and each had to choose a level of investment into a common pool. Any money that was invested increased by a factor of 1.6 and was then shared between the four individuals, meaning that the return for each dollar invested was 40 cents to each of the players. In particular the individual making the investment of one dollar only receives 40 cents and so makes a loss of 60 cents. Thus, like the Prisoner’s Dilemma, it is clear that the best strategy is to make no investment but simply to share rewards from the investments of other players. In these experiments, investment levels began reasonably high, but slowly declined, as players saw others cheat.

In later experiments, each game was played over two rounds, an investment round and a punishment round, where players were allowed to punish others. In particular every dollar “invested” in punishment levied a fine of three dollars on the target of the punishment. This led to investments which increased from their initial level, as punishment brought cheating individuals into line. It should be noted that in a population of individuals many, but not all of whom, punish, optimal play for individuals in this case should not be to punish, but to be a second-order free rider who invests but does not punish, and therefore saves the punishment fee. Such a population would collapse down to no investment after some number of rounds. Thus it is clear that the people in the experiments were not behaving completely rationally.

Thus we could develop the game to have repeated rounds of punishment. An aggressive punishing strategy would then, in round 1, punish all defectors; in round 2, punish all cooperators who did not punish defectors in round 1; in round 3, punish all cooperators who did not punish in round 2 as above; and so on. Thus such players not only punish cheats, but anyone who does not play exactly as they do. Imagine a group of m individuals with k cooperators (who invest and punish), ℓ defectors, and m − k − ℓ − 1 investors (who invest but do not punish), together with our focal individual. This game, with this available set of strategies, requires two rounds of punishment as described above. The rewards to our focal individual in this case will be
$$\displaystyle \begin{aligned} R= \begin{cases} \frac{(m-\ell)cV}{m}-kP &\text{ if an investor,} \\ \frac{(m-\ell)cV}{m}-(m-k-1) & \text{ if a cooperator,} \\ \frac{(m-\ell-1)cV}{m}+V-kP &\text{ if a defector}, \end{cases}\end{aligned} $$
where V is the initial level of resources of each individual, c < m is the return on investment (every dollar becomes 1 + c dollars), and P is the punishment multiple (every dollar invested in punishment generates a fine of P dollars). The optimal play for our focal individual is
$$\displaystyle \begin{aligned} \begin{array}{ll} \text{ Defect}\ \ & \ \ \text{if}\ \ V\left(1-\frac{c}{m}\right)>kP-(m-k-1), \\ \text{ Cooperate}\ \ & \text{otherwise.} \end{array} \end{aligned}$$
Thus Defect is always stable, and Invest-and-Punish is stable if V(1 − c∕m) < (m − 1)P.
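The three rewards and the resulting optimal play can be sketched as follows; the parameter values are purely illustrative assumptions (a group of 10 with endowment 20, the return factor 1.6 from the experiments above, and punishment multiple 3):

```python
def payoffs(m, k, l, V, c, P):
    """Rewards in the two-round punishment game for a focal individual joining
    m - 1 others: k cooperators (invest and punish), l defectors, and
    m - k - l - 1 plain investors (invest, do not punish)."""
    share = (m - l) * c * V / m               # each one's share if the focal invests
    return {
        "investor":   share - k * P,          # fined by the k punishers for not punishing
        "cooperator": share - (m - k - 1),    # pays $1 to punish each of m-k-1 targets
        "defector":   (m - l - 1) * c * V / m + V - k * P,
    }

# Illustrative (assumed) values: m = 10 group members, l = 2 defectors,
# endowment V = 20, return factor c = 1.6 (< m), punishment multiple P = 3.
m, l, V, c, P = 10, 2, 20, 1.6, 3
for k in (0, 7):
    r = payoffs(m, k, l, V, c, P)
    print(k, max(r, key=r.get))
# With no punishers (k = 0) defecting is best; with k = 7 cooperating is,
# since V (1 - c/m) = 16.8 < k P - (m - k - 1) = 19.
```

This reproduces the condition in the text: defection pays unless enough punishers are present to tip V(1 − c∕m) below kP − (m − k − 1).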

We note that there are still issues on how such punishment can emerge in the first place (Sigmund 2007).

4 The Rock–Paper–Scissors Game

The Rock–Paper–Scissors game is a three-strategy matrix game, which people commonly play recreationally. In human competition, the game dates back at least to seventeenth-century China. There is a lot of potential psychology involved in playing the game, and there are numerous tournaments involving it. The important feature of the game is that Rock beats Scissors, Scissors beats Paper, and Paper beats Rock. Taking the strategies in this cyclic order (Rock, Scissors, Paper), the payoff matrix is
$$\displaystyle \begin{aligned}\begin{array}{cc} & \begin{array}{ccc} R \;\;&\; S \;\;&\; P \end{array}\\ \begin{array}{c} R\\ S\\ P \end{array} & \left(\begin{array}{ccc} 0 & a_{3} & -b_{2}\\ -b_{3} & 0 & a_{1}\\ a_{2} & -b_{1} & 0 \end{array}\right)\end{array}\end{aligned}$$
where all a’s and b’s are positive. For the conventional game played between people, ai = bi = 1 for i = 1, 2, 3.
There is a unique internal NE of the above game given by the vector
$$\displaystyle \begin{aligned} {\mathbf{p}}=\frac{1}{K} (a_{1}a_{3}+b_{1}b_{2}+a_{1}b_{1}, a_{1}a_{2}+b_{2}b_{3}+a_{2}b_{2}, a_{2}a_{3}+b_{1}b_{3}+a_{3}b_{3}), \end{aligned}$$
where the constant K is just the sum of the three terms to ensure that p is a probability vector. In addition, p is a globally asymptotically stable equilibrium of the replicator dynamics if and only if a1a2a3 > b1b2b3. It is an ESS if and only if a1 − b1, a2 − b2, and a3 − b3 are all positive, and the largest of their square roots is smaller than the sum of the other two square roots (Hofbauer and Sigmund 1998). Thus if p is an ESS of the RPS game, then it is globally asymptotically stable under the replicator dynamics. However, since the converse is not true, the RPS game provides an example illustrating that while all internal ESSs are global attractors of the replicator dynamics, not all global attractors are ESSs.

We note that the case when a1a2a3 = b1b2b3 (including the conventional game with ai = bi = 1) leads to closed orbits of the replicator dynamics, and a stable (but not asymptotically stable) internal equilibrium. This is an example of a nongeneric game, where minor perturbations of the parameter values can lead to large changes in the nature of the game solution.
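The distinction between attraction and evolutionary stability can be illustrated numerically. The sketch below assumes a concrete matrix consistent with the equilibrium formula above, with a = (2, 1, 1) and b = (1, 1, 1): then a1a2a3 = 2 > 1 = b1b2b3, so the interior equilibrium attracts, even though the ai − bi are not all positive and p is therefore not an ESS:

```python
import numpy as np

# A concrete generalized RPS matrix (strategies in cyclic order, row i beats
# row i+1 mod 3), assuming a = (2, 1, 1), b = (1, 1, 1).  Here a1*a2*a3 = 2 >
# 1 = b1*b2*b3, so the interior NE attracts, but a_i - b_i = (1, 0, 0) are not
# all positive, so it is not an ESS.
A = np.array([[0., 1., -1.],
              [-1., 0., 2.],
              [1., -1., 0.]])

p = np.array([0.6, 0.3, 0.1])     # initial phenotype frequencies
dt = 0.01
for _ in range(50_000):           # Euler steps of the replicator dynamics, t = 500
    f = A @ p
    p = p + dt * p * (f - p @ f)

p_star = np.array([5., 4., 3.]) / 12   # interior NE from the formula in the text
print(np.round(p, 3))                  # ~ [0.417 0.333 0.25]
```

The trajectory spirals into p, confirming that global attraction under the replicator dynamics does not require p to be an ESS.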

This game is a good representation for a number of real populations. The most well known of these is among the common side-blotched lizard Uta stansburiana. This lizard has three types of distinctive throat coloration, which correspond to very different types of behavior. Males with orange throats are very aggressive and have large territories which they defend against intruders. Males with dark blue throats are less aggressive and hold smaller territories. Males with yellow stripes do not have a territory at all but bear a strong resemblance to females and use a sneaky mating strategy. It was observed in Sinervo and Lively (1996) that if the Blue strategy is the most prevalent, Orange can invade; if Yellow is prevalent, Blue can invade; and if Orange is prevalent, then Yellow can invade.

An alternative real scenario is that of Escherichia coli bacteria, involving three strains of bacteria (Kerr et al. 2002). One strain produces the antibiotic colicin. This strain is immune to it, as is a second strain, but the third is not. When only the first two strains are present, the second strain outcompetes the first, since it forgoes the cost of colicin production. Similarly the third outcompetes the second, as it forgoes costly immunity, which without the first strain is unnecessary. Finally, the first strain outcompetes the third, as the latter has no immunity to the colicin.

5 Non-matrix Games

We have seen that matrix games involve a finite number of strategies with a payoff function that is linear in the strategy of both the focal player and that of the population. This leads to a number of important simplifying results (see Volume I,  Chap. 6, “Evolutionary Game Theory”). All of the ESSs of a matrix can be found in a straightforward way using the procedure of Haigh (1975). Further, adding a constant to all entries in a column of a payoff matrix leaves the collection of ESSs (and the trajectories of the replicator dynamics) of the matrix unchanged. Haigh’s procedure can potentially be shortened, using the important Bishop–Cannings theorem (Bishop and Cannings 1976), a consequence of which is that if p1 is an ESS, no strategy p2 whose support is either a superset or a subset of the support of p1 can be an ESS.

However, there are a number of ways that games can involve nonlinear payoff functions. Firstly, playing-the-field games yield payoffs that are linear in the strategy of the focal player but not in that of the population (e.g., see Sects. 7.1 and 9.1). Secondly, the individual games can be of the matrix type, but with opponents not selected with equal probability from the population, for instance, if there is some spatial element. Thirdly, the payoffs can be nonlinear in both components. Here strategies do not refer to a probabilistic mix of pure strategies, but to a unique trait, such as the height of a tree, as in Kokko (2007), or a volume of sperm; see, e.g., Ball and Parker (2007). This happens in particular in the context of adaptive dynamics (see Volume I,  Chap. 6, “Evolutionary Game Theory”).

Alternatively a non-matrix game can involve linear payoffs, but this time with a continuum of strategies (we note that the cases with nonlinear payoffs above can also involve such a continuum, especially the third type). A classical example of this is the war of attrition (Maynard Smith 1974).

5.1 The War of Attrition

We consider a Hawk–Dove game in which both individuals play Dove, but where, instead of the reward being allocated instantly, they become involved in a potentially long displaying contest; the winner is decided by one player conceding, and there is a cost proportional to the length of the contest. An individual’s strategy is thus the length of time it is prepared to wait. Pure strategies are all values of t on the non-negative part of the real line, and mixed strategies are the corresponding probability distributions. Such contests are observed, for example, in dung flies (Parker and Thompson 1980).

Choosing the cost to be simply the length of time spent, the payoff for a game between two pure strategies St (wait until time t) and Ss (wait until time s) for the player that uses strategy St is
$$\displaystyle \begin{aligned} E(S_t,S_s)=\begin{cases} V-s \ \ \ & t>s, \\ V/2-t & t=s, \\ -t & t<s. \\ \end{cases}\end{aligned} $$
and the corresponding payoff from a game involving two mixed strategists playing the probability distributions f(t) and g(s) to the f(t) player is
$$\displaystyle \begin{aligned} \int_{0}^{\infty} \int_{0}^{\infty} f(t)g(s) E(S_t,S_s) dt ds.\end{aligned} $$
It is clear that no pure strategy can be an ESS, since Ss is invaded by St (i.e., E(St, Ss) > E(Ss, Ss)) for any t > s, or for any positive t < s − V∕2. There is a unique ESS, which is found by first considering (analogously to the Bishop–Cannings theorem; see Volume I,  Chap. 6, “Evolutionary Game Theory”) a probability distribution p(s) that gives equal payoffs to all pure strategies that could be played by an opponent. This is required, since otherwise some potential invading strategies could do better than others, and since p(s) is simply a weighted average of such strategies, it would then be invaded by at least one type of opponent. The payoff of a pure strategy St played against a mixed strategy Sp(s), given by a probability distribution p(s) over the non-negative reals, is
$$\displaystyle \begin{aligned} E(S_{t}, S_{p(s)})=\int_{0}^{t}(V-s)p(s)ds+\int_{t}^{\infty} (-t)p(s)ds.\end{aligned} $$
Setting the derivative of this payoff with respect to t (assuming that such a derivative exists) equal to zero gives
$$\displaystyle \begin{aligned} (V-t)p(t) - \int_t^\infty p(s) ds + t p(t)=0. \end{aligned} $$
If P(t) is the associated distribution function, so that p(t) = P′(t) for all t ≥ 0, then the previous equation becomes
$$\displaystyle \begin{aligned} VP'(t) + P(t)-1=0 \end{aligned}$$
and we obtain
$$\displaystyle \begin{aligned} p(t) = \frac{1}{V} \exp \left(-\frac{t}{V}\right). \end{aligned} $$
It should be noted that we have glossed over certain issues in the above, for example, consideration of strategies without full support or with atoms of probability. This is discussed in more detail in Broom and Rychtar (2013). The above solution was shown to be an ESS in Bishop and Cannings (1976).
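This solution is easy to check numerically: against the mixed strategy p(t) = (1∕V)exp(−t∕V), every pure strategy St should earn the same payoff (in fact, zero). A minimal Python sketch, with the illustrative choice V = 2:

```python
import numpy as np

V = 2.0  # reward value; an illustrative choice

def payoff_vs_ess(t, V):
    """Payoff E(S_t, S_p) of the pure strategy S_t against p(s) = exp(-s/V)/V."""
    s = np.linspace(0.0, t, 200001)
    p = np.exp(-s / V) / V
    h = s[1] - s[0]
    # trapezoidal rule for the "win" part: the opponent concedes at some s < t
    win = np.sum(((V - s[:-1]) * p[:-1] + (V - s[1:]) * p[1:]) * h / 2)
    # "lose" part: the opponent waits beyond t (probability exp(-t/V)), we pay t
    lose = -t * np.exp(-t / V)
    return win + lose

for t in [0.5, 1.0, 3.0, 10.0]:
    print(payoff_vs_ess(t, V))  # all ≈ 0: every pure strategy does equally well
```

Integrating by parts shows that both terms equal t e−t∕V in absolute value, so the payoff of every pure strategy against this mixed strategy is exactly zero, as the equal-payoff requirement above demands.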

5.2 The Sex-Ratio Game

Why is it that the sex ratio in most animals is close to a half? This was the first problem to be considered using evolutionary game theory (Hamilton 1967), and its consideration, including the essential nature of the solution, dates right back to Darwin (1871). To maximize the overall birth rate of the species, most animal populations should contain far more females than males, given that females usually make a much larger investment in bringing up offspring than males do. This, as mentioned before, is the wrong perspective, and we need to consider the problem from the viewpoint of the individual.

Suppose that in a given population, an individual female will have a fixed number of offspring, but that she can allocate the proportion of these that are male. This proportion is thus the strategy of our individual. As each female (irrespective of its strategy) has the same number of offspring, this number does not help us in deciding which strategy is the best. The effect of a given strategy can be measured as the number of grandchildren of the focal female. Assume that the number of individuals in a large population in the next generation is N1 and in the following generation is N2. Further assume that all other females in the population play the strategy m and that our focal individual plays strategy p.

As N1 is large, the total number of males in the next generation is mN1 and so the total number of females is (1 − m)N1. We shall assume that all females (males) are equally likely to be the mother (father) of any particular member of the following generation of N2 individuals. This means that a female offspring will be the mother of N2∕((1 − m)N1) of the following generation of N2 individuals, and a male offspring will be the father of N2∕(mN1) of these individuals. Thus our focal individual will have the following number of grandchildren
$$\displaystyle \begin{aligned} E(p,m) = p \frac{N_{2}}{m N_{1}} +(1-p) \frac{N_{2}}{(1-m)N_{1}} = \frac{N_{2}}{N_{1}} \left( \frac{p}{m}+\frac{1-p}{1-m} \right). \end{aligned} $$
To find the best p, we maximize E(p, m). For m < 0.5 the best response is p = 1, and for m > 0.5 we obtain p = 0. Thus m = 0.5 is the interior NE at which all values of p obtain the same payoff. This NE satisfies the stability condition in that E(0.5, m′) − E(m′, m′) > 0 for all m′≠0.5 (Broom and Rychtar 2013).
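The best-response argument can be sketched in a few lines of Python; the generation sizes N1 and N2 below are arbitrary scaling constants that do not affect the result:

```python
# Grandchildren payoff E(p, m); N1 and N2 only scale the payoff and
# do not affect the best response (the values here are arbitrary).
def grandchildren(p, m, N1=1000.0, N2=1000.0):
    return (N2 / N1) * (p / m + (1 - p) / (1 - m))

def best_response(m):
    # E(p, m) is linear in p, so an endpoint is optimal whenever m != 0.5
    return 1.0 if grandchildren(1.0, m) > grandchildren(0.0, m) else 0.0

print(best_response(0.3))  # 1.0: when males are rare, produce only sons
print(best_response(0.7))  # 0.0: when males are common, produce only daughters
```

At m = 0.5 every p gives the same payoff, which is exactly the interior NE described above.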

Thus from the individual perspective, it is best to have half your offspring as male. In real populations, it is often the case that relatively few males are the parents of many individuals, for instance, in social groups often only the dominant male fathers offspring. Sometimes other males are actually excluded from the group; lion prides generally consist of a number of females, but only one or two males, for example. From a group perspective, these extra males perform no function, but there is a chance that any male will become the father of many.

6 Asymmetric Games

The games we have considered above all involve populations of identical individuals. What if individuals are not identical? Maynard Smith and Parker (1976) considered two main types of difference between individuals. The first type was correlated asymmetries where there were real differences between them, for instance, in strength or need for resources, which would mean their probability of success, cost levels, valuation of rewards, set of available strategies, etc., may be different, i.e., the payoffs “correlate” with the type of the player. Examples of such games are the predator–prey games of Sect. 9.2 and the Battle of the Sexes below in Sect. 6.1.

The second type, uncorrelated asymmetries, occurred when the individuals were physically identical, but nevertheless occupied different roles; for example, one was the owner of a territory and the other was an intruder, which we shall see in Sect. 6.2. For uncorrelated asymmetries, even though individuals do not have different payoff matrices, it is possible to base their strategy upon the role that they occupy. As we shall see, this completely changes the character of the solutions that we obtain.

We note that the allocation of distinct roles can apply to games in general; for example, there has been significant work on the asymmetric war of attrition (see, e.g., Hammerstein and Parker 1982; Maynard Smith and Parker 1976), involving cases with both correlated and uncorrelated asymmetries.

The ESS was defined for a single population only, and the stability condition of the original definition cannot be easily extended to bimatrix games. This is because bimatrix games assume that individuals of one species interact with individuals of the other species only, so there is no frequency-dependent mechanism that could prevent mutants of one population from invading residents of that population at the two-species NE. In fact, it was shown (Selten 1980) that requiring the stability condition of the ESS definition to hold in bimatrix games restricts the ESSs to strict NEs, i.e., to pairs of pure strategies. Two key assumptions behind Selten’s theorem are that the probability that an individual occupies a given role is not affected by the strategies that it employs, and that payoffs within a given role are linear, as in matrix games. If either of these assumptions is violated, then mixed strategy ESSs can result (see, e.g., Broom and Rychtar 2013; Webb et al. 1999).

There are interior NEs in bimatrix games that deserve to be called “stable,” albeit in a weaker sense than was used in the (single-species) ESS definition. For example, some of the NEs are stable with respect to some evolutionary dynamics (e.g., with respect to the replicator dynamics, or the best response dynamics). A static concept that captures such stability and that has proved useful for bimatrix games is the Nash–Pareto equilibrium (Hofbauer and Sigmund 1998). The Nash–Pareto equilibrium is an NE which satisfies an additional condition that says that it is impossible for both players to increase their fitness by deviating from this equilibrium. For two-species games that cannot be described by a bimatrix (e.g., see Sect. 7.4), this concept of two-species evolutionary stability was generalized by Cressman (2003) (see Volume I,  Chap. 6, “Evolutionary Game Theory”) who defined a two-species ESS (p, q) as an NE such that, if the population distributions of the two species are slightly perturbed, then an individual in at least one species does better by playing its ESS strategy than by playing the slightly perturbed strategy of this species. We illustrate these concepts in the next section.

6.1 The Battle of the Sexes

A classical example of an asymmetric game is the Battle of the Sexes (Dawkins 1976), where a population contains females with two strategies, Coy and Fast, and males with two strategies, Faithful and Philanderer. A Coy female needs a period of courtship, whereas a Fast female will mate with a male as soon as they meet. Faithful males are willing to engage in long courtships and after mating will care for the offspring. A Philanderer male will not engage in courtship and so cannot mate with a Coy female and also leaves immediately after mating with a Fast female.

Clearly in this case, any particular individual always occupies a given role (i.e., male or female) and cannot switch roles as is the case in the Owner–Intruder game in Sect. 6.2 below. Thus, males and females each have their own payoff matrix, and the two are often represented together as a bimatrix. The payoff bimatrix for the Battle of the Sexes is
$$\displaystyle \begin{aligned} \begin{array}{c|cc} & \text{Coy} & \text{Fast} \\ \hline \text{Faithful} & \left(B-\frac{C_R}{2}-C_C,\; B-\frac{C_R}{2}-C_C\right) & \left(B-\frac{C_R}{2},\; B-\frac{C_R}{2}\right) \\ \text{Philanderer} & (0,\; 0) & (B,\; B-C_R) \end{array} \end{aligned}$$
Here B is the fitness gained by having an offspring, CR is the (potentially shared) cost of raising the offspring, and CC is the cost of engaging in a courtship. All three of these terms are clearly positive. The above bimatrix is written in the form \((A_{1},A_{2}^{T})\), where matrix A1 is the payoff matrix for males (player 1) and matrix A2 is the payoff matrix for females (player 2).
For such games, to define a two-species NE, we study the position of the two equal payoff lines, one for each sex. The equal payoff line for males (see the horizontal dashed line in Fig. 23.1) is defined to be those \((\mathbf {p},\mathbf {q})\in \varDelta _{2}\times \varDelta _{2}\) for which the payoff when playing Faithful equals the payoff when playing the Philanderer strategy, i.e.,
$$\displaystyle \begin{aligned} (1,0) A_{1}\mathbf {q}^{T}=(0,1) A_{1}\mathbf {q}^{T}\end{aligned} $$
which yields \(q_1= \frac {C_R}{2 (B - C_C)}\). Similarly, along the equal payoff line for females (see the vertical dashed line in Fig. 23.1), the payoff when playing strategy Coy must equal the payoff when playing strategy Fast, i.e.,
$$\displaystyle \begin{aligned}(1,0) A_{2}\mathbf {p}^{T}=(0,1) A_{2}\mathbf {p}^{T}.\end{aligned} $$
If the two equal payoff lines do not intersect in the unit square, no completely mixed strategy (for both males and females) is an NE (Fig. 23.1A, B). In fact, there is a unique ESS (Philanderer, Coy), i.e., with no mating (clearly not appropriate for a real population), for sufficiently small B, namely \(B<\min (C_{R}/2+C_{C}, C_{R})\) (Fig. 23.1A), and a unique ESS (Philanderer, Fast) for sufficiently high B, namely B > CR (Fig. 23.1B). For intermediate B satisfying CR∕2 + CC < B < CR, there is a two-species weak ESS
$$\displaystyle \begin{aligned}\mathbf {p} = \left( \frac{C_{R}-B}{C_{C}+C_{R}-B},\frac{C_C}{C_C + C_R-B}\right),\;\; \mathbf {q}=\left( \frac{C_R}{2 (B - C_C)},1-\frac{C_R}{2 (B - C_C)}\right), \end{aligned}$$
where at least the fitness of one species increases toward this equilibrium, except when \(p_1=\frac {C_R-B}{C_C+C_R-B}\) or \(q_1= \frac {C_R}{2 (B - C_C)}\) (Fig. 23.1C). In all three cases of Fig. 23.1, the NE is a Nash–Pareto pair, because it is impossible for both players to simultaneously deviate from the Nash equilibrium and increase their payoffs. In panels A and B, both arrows point in the direction of the NE. In panel C, at least one arrow points toward the NE \((\mathbf {p},\mathbf {q})\) if both players deviate from that equilibrium. However, this interior NE is not a two-species ESS since, when only one player (e.g., player one) deviates, no arrow points in the direction of \((\mathbf {p},\mathbf {q})\). This happens on the equal payoff lines (dashed lines). For example, let us consider points on the vertical dashed line above the NE. Here vertical arrows are zero vectors and horizontal arrows point away from \( \mathbf {p}\). Excluding points on the vertical and horizontal lines from the definition of a two-species ESS leads to a two-species weak ESS.
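These equilibrium formulas can be verified directly. The sketch below assumes the standard payoff entries of this game (a Faithful–Coy pairing gives each player B − CR∕2 − CC, Faithful–Fast gives each B − CR∕2, Philanderer–Coy gives both 0, and Philanderer–Fast gives the male B and the female B − CR), with illustrative parameters chosen in the intermediate range CR∕2 + CC < B < CR:

```python
# Illustrative parameters in the intermediate range C_R/2 + C_C < B < C_R
B, CR, CC = 5.0, 6.0, 1.0

# Male payoffs (rows Faithful, Philanderer; columns Coy, Fast) and
# female payoffs (rows Coy, Fast; columns Faithful, Philanderer)
A1 = [[B - CR / 2 - CC, B - CR / 2], [0.0, B]]
A2 = [[B - CR / 2 - CC, 0.0], [B - CR / 2, B - CR]]

q1 = CR / (2 * (B - CC))          # equal payoff line for males
p1 = (CR - B) / (CC + CR - B)     # equal payoff line for females

male_faithful = A1[0][0] * q1 + A1[0][1] * (1 - q1)
male_philanderer = A1[1][0] * q1 + A1[1][1] * (1 - q1)
female_coy = A2[0][0] * p1 + A2[0][1] * (1 - p1)
female_fast = A2[1][0] * p1 + A2[1][1] * (1 - p1)

print(p1, q1)                                        # 0.5 0.75
print(abs(male_faithful - male_philanderer) < 1e-12) # True: males indifferent
print(abs(female_coy - female_fast) < 1e-12)         # True: females indifferent
```

At (p, q), each sex is indifferent between its two pure strategies, which is exactly the defining property of the interior NE.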
Fig. 23.1

The ESS for the Battle of the Sexes game. Panel A assumes small B and the only ESS is (p1, q1) = (0, 1) = (Philanderer, Coy). Panel B assumes large B and the only ESS is (p1, q1) = (0, 0) = (Philanderer, Fast). For intermediate values of B (panel C), there is an interior NE. The dashed lines are the two equal payoff lines for males (horizontal line) and females (vertical line). The directions in which the male and female payoffs increase are shown by arrows (e.g., a horizontal arrow to the right means the first strategy (Faithful) has the higher payoff for males, whereas a downward arrow means the second strategy (Fast) has the higher payoff for females). We observe that in panel C these arrows are such that at least the payoff of one player increases toward the Nash–Pareto pair, with the exception of the points that lie on the equal payoff lines. This qualifies the interior NE as a two-species weak ESS

6.2 The Owner–Intruder Game

The Owner–Intruder game is an extension of the Hawk–Dove game, where player 1 (the owner) and player 2 (the intruder) have distinct roles (i.e., they cannot be interchanged as is the case in symmetric games). In particular, an individual can play either Hawk or Dove in either of the two roles. This leads to the bimatrix representation of the Hawk–Dove game below, which cannot be collapsed down to the single 2 × 2 matrix from Sect. 2, because the strategy that an individual plays may be conditional upon the role that it occupies (in Sect. 2 there are no such distinct roles). The bimatrix of the game is
$$\displaystyle \begin{aligned} \begin{array}{c|cc} & \text{Hawk} & \text{Dove} \\ \hline \text{Hawk} & \left(\frac{V-C}{2},\; \frac{V-C}{2}\right) & (V,\; 0) \\ \text{Dove} & (0,\; V) & \left(\frac{V}{2},\; \frac{V}{2}\right) \end{array} \end{aligned}$$
Provided we assume that each individual has the same chance to be an owner or an intruder, the game can be symmetrized, with the payoffs to the symmetrized game given in the following payoff matrix:
$$\displaystyle \begin{aligned} \begin{array}{c|cccc} & \text{Hawk} & \text{Dove} & \text{Bourgeois} & \text{Anti-Bourgeois} \\ \hline \text{Hawk} & \frac{V-C}{2} & V & \frac{3V-C}{4} & \frac{3V-C}{4} \\ \text{Dove} & 0 & \frac{V}{2} & \frac{V}{4} & \frac{V}{4} \\ \text{Bourgeois} & \frac{V-C}{4} & \frac{3V}{4} & \frac{V}{2} & \frac{V}{2}-\frac{C}{4} \\ \text{Anti-Bourgeois} & \frac{V-C}{4} & \frac{3V}{4} & \frac{V}{2}-\frac{C}{4} & \frac{V}{2} \end{array} \end{aligned}$$
where
$$\displaystyle \begin{array}{llll} \text{Hawk}& - & \text{play Hawk when both owner and intruder,}\\ \text{Dove} & - & \text{play Dove when both owner and intruder,}\\ \text{Bourgeois} & - & \text{play Hawk when owner and Dove when intruder,}\\ \text{Anti-Bourgeois}& - & \text{play Dove when owner and Hawk when intruder.}\end{array} $$
It is straightforward to show that if V ≥ C, then Hawk is the unique ESS (Fig. 23.2A), and that if V < C, then Bourgeois and Anti-Bourgeois (alternatively called Marauder) are the only ESSs (Fig. 23.2B). As we see, there are only pure strategy solutions, as opposed to the case of the Hawk–Dove game, which had a mixed ESS (playing Hawk with probability V∕C) for V < C. The interior NE in Fig. 23.2B is not a Nash–Pareto pair, as in the upper-left and lower-right regions both arrows point away from the NE. Thus, if both players simultaneously deviate from the Nash equilibrium, their payoffs increase. Consequently, this NE is not a two-species (weak) ESS.
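The symmetrization and the ESS claim can be checked by computing role-averaged payoffs directly; a sketch with the illustrative values V = 2 < C = 4:

```python
V, C = 2.0, 4.0   # V < C: Bourgeois and Anti-Bourgeois should be the only ESSs

def hd(me, opp):
    """Basic Hawk-Dove payoff to `me` ('H' or 'D') against `opp`."""
    if me == 'H':
        return (V - C) / 2 if opp == 'H' else V
    return 0.0 if opp == 'H' else V / 2

# strategy -> (action when owner, action when intruder)
strategies = {'Hawk': ('H', 'H'), 'Dove': ('D', 'D'),
              'Bourgeois': ('H', 'D'), 'Anti-Bourgeois': ('D', 'H')}

def payoff(s1, s2):
    o1, i1 = strategies[s1]
    o2, i2 = strategies[s2]
    # each role is occupied with probability 1/2
    return 0.5 * hd(o1, i2) + 0.5 * hd(i1, o2)

strict_nash = [s for s in strategies
               if all(payoff(t, s) < payoff(s, s) for t in strategies if t != s)]
print(strict_nash)  # ['Bourgeois', 'Anti-Bourgeois']
```

Since both strategies are strict NEs of the symmetrized game, both are ESSs; rerunning with V ≥ C leaves Hawk as the only strict NE.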
Fig. 23.2

The ESS for the Owner–Intruder game. Panel A assumes V > C and the only ESS is to play Hawk in both roles (i.e., (p1, q1) = (1, 1) = (Hawk, Hawk)). If V < C (Panel B), there are two boundary ESSs (black dots) corresponding to the Bourgeois ((p1, q1) = (1, 0) = (Hawk, Dove)) and Anti-Bourgeois ((p1, q1) = (0, 1) = (Dove, Hawk)) strategies. The directions in which the owner and intruder payoffs increase are shown by arrows (e.g., a horizontal arrow to the right means the Hawk strategy has the higher payoff for the owner, whereas a downward arrow means the Dove strategy has the higher payoff for the intruder). The interior NE (the light gray dot at the intersection of the two equal payoff lines) is not a two-species ESS as there are regions (the upper-left and lower-right corners) where both arrows point away from this point

Important recent work on this model, and its ramifications for the role that respect for ownership plays, has been carried out by Mesterton-Gibbons, Sherratt, and coworkers (see Mesterton-Gibbons and Sherratt 2014; Sherratt and Mesterton-Gibbons 2015). In particular, why is the ownership-respecting Bourgeois strategy so common in real populations and the Anti-Bourgeois strategy so rare? One explanation offered by Maynard Smith (1982) was “infinite regress.” In this argument, immediately after a contest, the winner becomes the owner of the territory, and the loser becomes a potential intruder which could immediately rechallenge the individual that has just displaced it. In an Anti-Bourgeois population, this would result in the new owner conceding and the new intruder (the previous owner) once again being the owner, but then the displaced owner could immediately rechallenge, and the process would continue indefinitely. It is shown in Mesterton-Gibbons and Sherratt (2014) that under certain circumstances, but not always, this allows Bourgeois to be the unique ESS. Sherratt and Mesterton-Gibbons (2015) discuss in detail many issues, such as uncertainty of ownership, asymmetry of resource value, continuous contest investment (as in the war of attrition), and potential signaling of intentions (what they call “secret handshakes,” similar to some of the signals we discuss in Sect. 10). There are many reasons that can make the evolution of Anti-Bourgeois unlikely, and it is probably a combination of these that makes it so rare.

6.3 Bimatrix Replicator Dynamics

The single-species replicator dynamics such as those for the Hawk–Dove game (Sect. 2.1) can be extended to two roles as follows (Hofbauer and Sigmund 1998). Note that here this is interpreted as two completely separate populations, i.e., any individual can only ever occupy one of the roles, and its offspring occupy that same role. If \(A=(a_{ij})\), \(i=1,\ldots,n\), \(j=1,\ldots,m\), and \(B=(b_{ij})\), \(i=1,\ldots,m\), \(j=1,\ldots,n\), are the payoff matrices to an individual in role 1 and role 2, respectively, the corresponding replicator dynamics are
$$\displaystyle \begin{aligned} \begin{array}{lcl} \frac{d}{dt} p_{1i} (t)&=&p_{1i} \bigl((A {{\mathbf{p}}_{\mathbf{2}}}^{T})_{i}-{{\mathbf{p}}_{\mathbf{1}}}A{{\mathbf{p}}_{\mathbf{2}}}^{T}\bigr), \quad i=1,\ldots,n; \\[0.5cm] \frac{d}{dt} p_{2j} (t)&=&p_{2j} \bigl((B {{\mathbf{p}}_{\mathbf{1}}}^{T})_{j}-{{\mathbf{p}}_{\mathbf{2}}}B{{\mathbf{p}}_{\mathbf{1}}}^{T}\bigr), \quad j=1,\ldots,m; \end{array}\end{aligned} $$
where p1 ∈ Δn and p2 ∈ Δm are the population mixtures of individuals in role 1 and 2, respectively. For example, for the two-role, two-strategy game, where without loss of generality we can set a11 = a22 = b11 = b22 = 0 (since as for matrix games, adding a constant to all of the payoffs an individual gets against a given strategy does not affect the NEs/ ESSs), we obtain
$$\displaystyle \begin{aligned} \begin{array}{lcl} \displaystyle{\frac{dx}{dt}} &=&\displaystyle{x(1-x)(a_{12}-(a_{12}+a_{21})y), }\\[0.5cm] \displaystyle{\frac{dy}{dt}} &=&\displaystyle{y(1-y)(b_{12}-(b_{12}+b_{21})x),} \end{array} \end{aligned} $$
where x is the frequency of the first-strategy players in the role 1 population and y is the corresponding frequency for role 2. Hofbauer and Sigmund (1998) show that orbits converge to the boundary in all cases except when a12a21 > 0, b12b21 > 0, and a12b12 < 0, which yields closed periodic orbits around the internal equilibrium. Replicator dynamics for the Battle of the Sexes and the Owner–Intruder game are shown in Fig. 23.3.
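The cycling regime is easy to see by integrating the two differential equations above. The sketch below uses hypothetical coefficients a12 = a21 = 1, b12 = b21 = −1, which satisfy the cycling condition; for this particular choice one checks directly that x(1 − x)y(1 − y) is constant along orbits, so a closed orbit shows up as conservation of that product:

```python
# Reduced two-role, two-strategy replicator dynamics, integrated with RK4,
# for a sign pattern (a12*a21 > 0, b12*b21 > 0, a12*b12 < 0) giving cycles.
a12, a21, b12, b21 = 1.0, 1.0, -1.0, -1.0

def f(x, y):
    return (x * (1 - x) * (a12 - (a12 + a21) * y),
            y * (1 - y) * (b12 - (b12 + b21) * x))

def rk4_step(x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 0.2, 0.5                     # start away from the equilibrium (0.5, 0.5)
H0 = x * (1 - x) * y * (1 - y)      # conserved for these particular coefficients
for _ in range(20000):
    x, y = rk4_step(x, y, 0.005)
print(abs(x * (1 - x) * y * (1 - y) - H0) < 1e-4)  # True: the orbit is closed
```

The interior equilibrium here is (0.5, 0.5); the trajectory neither converges to it nor leaves it, matching the neutrally stable cycles of Fig. 23.3C.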
Fig. 23.3

Bimatrix replicator dynamics (23.11) for the Battle of the Sexes game (A–C) and the Owner–Intruder game (D, E), respectively. Panels A–C correspond to the panels of Fig. 23.1, and panels D and E correspond to those of Fig. 23.2. This figure shows that trajectories of the bimatrix replicator dynamics converge to a two-species ESS as defined in Sects. 6.1 and 6.2. In particular, the interior NE in panel E is not a two-species ESS, and it is an unstable equilibrium for the bimatrix replicator dynamics. In panel C the interior NE is a two-species weak ESS, and it is (neutrally) stable for the bimatrix replicator dynamics

Note there are some problems with the interpretation of the dynamics of two populations in this way, related to the assumption of exponential growth of populations, since the above dynamics effectively assume that the relative size of the two populations remains constant (Argasinski 2006).

7 The Habitat Selection Game

Fretwell and Lucas (1969) introduced the Ideal Free Distribution (IFD) to describe a distribution of animals in a heterogeneous environment consisting of discrete patches i = 1, …, n. The IFD assumes that animals are free to move between several patches, travel is cost-free, each individual knows perfectly the quality of all patches, and all individuals have the same competitive abilities. Assuming that these patches differ in their basic quality Bi (i.e., their quality when unoccupied), the IFD model predicts that the best patch will always be occupied.

Let us assume that patches are arranged in descending order (B1 > ⋯ > Bn > 0) and mi is the animal abundance in patch i. Let pi = mi∕(m1 + ⋯ + mn) be the proportion of animals in patch i, so that \(\mathbf {p}=(p_1,\dots ,p_n)\) describes the spatial distribution of the population. For a monomorphic population, pi also specifies the individual strategy as the proportion of the lifetime an average animal spends in patch i. We assume that the payoff in each patch, Vi(pi), is a decreasing function of animal abundance in that patch, i.e., the patch payoffs are negatively density dependent. Then, fitness of a mutant with strategy \(\tilde {\mathbf {p}}=(\tilde {p}_1,\dots ,\tilde {p}_n)\) in the resident monomorphic population with distribution \(\mathbf {p}=(p_1,\dots ,p_n)\) is
$$\displaystyle \begin{aligned}E(\tilde{\mathbf {p}},\mathbf {p})=\sum_{i=1}^{n} \tilde{p}_{i} V_{i}(p_{i}). \end{aligned}$$
However, we do not need to make the assumption that the population is monomorphic, because what really matters in calculating \(E(\tilde {\mathbf {p}},\mathbf {p})\) above is the animal distribution \(\mathbf {p}.\) If the population is not monomorphic, this distribution can be different from strategies animals use, and we call it the population mean strategy. Thus, in the Habitat Selection game, individuals do not enter pairwise conflicts, but they play against the population mean strategy (referred to as a “playing the field” or “population” game).
Fretwell and Lucas (1969) introduced the concept of the Ideal Free Distribution which is a population distribution \(\mathbf {p}=(p_1,\dots ,p_n)\) that satisfies two conditions:
  1. There exists a number 1 ≤ k ≤ n such that p1 > 0, …, pk > 0 and pk+1 = ⋯ = pn = 0.

  2. V1(p1) = ⋯ = Vk(pk) = V and V ≥ Vi(pi) for i = k + 1, …, n.

They proved that, provided patch payoffs are negatively density dependent (i.e., decreasing functions of the number of individuals in a patch), there exists a unique IFD, which Cressman and Křivan (2006) later showed is an ESS. In the next two sections, we discuss two commonly used types of patch payoff functions.

7.1 Parker’s Matching Principle

Parker (1978) considered the case where resource input rates ri, i = 1, …, n are constant and resources are consumed immediately when they enter the patch and so there is no standing crop. This leads to a particularly simple definition of animal patch payoffs as the ratio of the resource input rate divided by the number of individuals there, i.e.,
$$\displaystyle \begin{aligned} V_i=\frac{r_i}{m_i}=\frac{r_i}{p_i M} \end{aligned} $$
where M = m1 + ⋯ + mn is the overall population abundance. The matching principle then says that animals distribute themselves so that their abundance in each patch is proportional to the rate at which resources arrive into the patch, pi∕pj = ri∕rj. This is nothing other than the IFD for payoff functions (23.12). It is interesting to note that all patches will be occupied independently of the total population abundance. Indeed, as the consumer density in the i-th patch decreases, the payoff ri∕(piM) increases, which attracts some animals, so there cannot be unoccupied patches. There is an important difference between this (nonlinear) payoff function (23.12) and the linear payoff function that we consider in the following Eq. (23.13): as the local population abundance in a patch decreases, (23.12) tends to infinity, but (23.13) tends to ri. This means that in the first case there cannot be unoccupied patches (irrespective of their basic patch quality ri), because the payoffs in occupied patches are finite, while the payoff in an unoccupied patch would be infinite (provided all ri > 0). This argument does not apply in the case of the logistic payoff (23.13). This concept successfully predicts the distribution of dung flies that arrive at a cow pat where they immediately mate (Blanckenhorn et al. 2000; Parker 1978, 1984) or of fish that are fed at two feeders in a tank (Berec et al. 2006; Milinski 1979, 1988).
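For illustration, the matching principle for three patches with assumed input rates r = (6, 3, 1) and M = 100 consumers:

```python
r = [6.0, 3.0, 1.0]   # resource input rates into three patches (illustrative)
M = 100.0             # total number of consumers

p = [ri / sum(r) for ri in r]                    # matching: p_i proportional to r_i
payoffs = [ri / (pi * M) for ri, pi in zip(r, p)]
print(p)        # [0.6, 0.3, 0.1]
print(payoffs)  # every patch yields the same payoff sum(r)/M, here ≈ 0.1
```

Note that the common payoff equals sum(r)∕M regardless of how the input rates are spread over patches, and every patch with ri > 0 is occupied, as argued above.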

In the next section, we consider the situation where resources are not consumed immediately upon entering the system.

7.2 Patch Payoffs are Linear

Here we consider two patches only, and we assume that the payoff in habitat i(= 1, 2) is a linearly decreasing function of population abundance:
$$\displaystyle \begin{aligned} V_{i}={ r_{i}\left( 1-\frac{m_{i}}{K_{i}}\right) } ={r_{i}\left( 1-\frac{p_{i}M}{K_{i}}\right)} \end{aligned} $$
where mi is the population density in habitat i, ri is the intrinsic per capita population growth rate in habitat i, and Ki is its carrying capacity. The total population size in the two-habitat environment is denoted by M(= m1 + m2), and the proportion of the population in habitat i is pi = mi∕M. Payoff (23.13) is often used in population dynamics where it describes the logistic population growth.
Let us consider an individual which spends proportion \(\tilde {p}_{1}\) of its lifetime in habitat 1 and \(\tilde {p}_{2}\) in habitat 2. Provided total population density is fixed at M, then its fitness in the population with mean strategy \(\mathbf {p}=(p_{1},p_{2})\) is
$$\displaystyle \begin{aligned} E(\tilde{\mathbf {p}},\mathbf {p})=\tilde{p}_{1}V_{1}(p_{1})+\tilde{p}_{2} V_{2}(p_{2})= \mathbf{\tilde{p}} \;U \mathbf {p}^{T}, \end{aligned}$$
$$\displaystyle \begin{aligned} U=\left( \begin{array}{cc} r_1 (1-\frac{M}{K_1}) & r_{1} \\ r_2 & r_2 (1-\frac{M}{K_2}) \end{array} \right) \end{aligned}$$
is the payoff matrix with two strategies, where strategy i represents staying in patch i (i = 1, 2). This shows that the Habitat Selection game with a linear payoff can be written for a fixed population size as a matrix game. If the per capita intrinsic population growth rate in habitat 1 is higher than that in habitat 2 (r1 > r2), the IFD is (Křivan and Sirot 2002)
$$\displaystyle \begin{aligned} p_{1}= \left\{ \begin{array}{ll} 1 & {\mathrm{if}} \;\;M<K_{1} \frac{r_{1}-r_{2}}{r_{1}}\\ \displaystyle{\frac{r_{2}K_{1}}{r_{2}K_{1}+r_{1}K_{2}}+\frac{K_{1}K_{2}(r_{1}-r_{2} )}{(r_{2}K_{1}+r_{1}K_{2})M}} & \mathrm{otherwise.} \end{array} \right. \end{aligned} $$
When the total population abundance is low, the payoff in habitat 1 is higher than the payoff in habitat 2 for all possible population distributions because the competition in patch 1 is low due to low population densities. For higher population abundances, neither of the two habitats is always better than the other, and under the IFD payoffs in both habitats must be the same (V1(p1) = V2(p2)). Once again, it is important to emphasize here that the IFD concept is different from maximization of the mean animal fitness
$$\displaystyle \begin{aligned}\overline{W}(\mathbf {p},\mathbf {p})= p_{1} V_{1}(p_{1})+p_{2} V_{2}(p_{2})\end{aligned}$$
which would lead to
$$\displaystyle \begin{aligned} p_{1}= \left\{ \begin{array}{ll} 1 & {\mathrm{if}} \;\;\displaystyle{M<K_{1} \frac{r_{1}-r_{2}}{2 r_{1}}}\\ \displaystyle{\frac{r_2 K_1}{r_1 K_2+r_2 K_1}+\frac{K_1 K_2 (r_1-r_2)}{2(r_1 K_2+r_2 K_1)M}} & \mathrm{otherwise.} \end{array} \right. \end{aligned} $$
The two expressions (23.14) and (23.15) are the same if and only if r1 = r2. Interestingly, by comparing (23.14) and (23.15), we see that maximizing mean fitness leads to fewer animals than the IFD in the patch with higher basic quality ri (i.e., in patch 1).
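The difference between (23.14) and (23.15) is easy to verify numerically; a sketch with the illustrative values r1 = 2, r2 = 1, K1 = K2 = 100, and M = 100:

```python
r1, r2, K1, K2, M = 2.0, 1.0, 100.0, 100.0, 100.0

def V1(p1): return r1 * (1 - p1 * M / K1)
def V2(p1): return r2 * (1 - (1 - p1) * M / K2)

# interior IFD (23.14); M exceeds the threshold K1*(r1 - r2)/r1 = 50
p1_ifd = (r2 * K1 / (r2 * K1 + r1 * K2)
          + K1 * K2 * (r1 - r2) / ((r2 * K1 + r1 * K2) * M))
# mean-fitness maximizer (23.15)
p1_opt = (r2 * K1 / (r1 * K2 + r2 * K1)
          + K1 * K2 * (r1 - r2) / (2 * (r1 * K2 + r2 * K1) * M))

print(round(p1_ifd, 4), round(p1_opt, 4))    # 0.6667 0.5: the IFD puts more
                                             # animals in the better patch 1
print(abs(V1(p1_ifd) - V2(p1_ifd)) < 1e-12)  # True: payoffs equalized at the IFD
```

The IFD equalizes the two patch payoffs, whereas the fitness maximizer deliberately under-fills the better patch to keep its payoff high, which no individual playing against the population mean strategy would tolerate.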

7.3 Some Extensions of the Habitat Selection Game

The Habitat Selection game as described makes several assumptions that were relaxed in the literature. One assumption is that patch payoffs are decreasing functions of population abundance. This assumption is important because it guarantees that a unique IFD exists. However, patch payoffs can also be increasing functions of population abundance. In particular, at low population densities, payoffs can increase as more individuals enter a patch, and competition is initially weak. For example, more individuals in a patch can increase the probability of finding a mate. This is called the Allee effect. The IFD for the Allee effect has been studied in the literature (Cressman and Tran 2015; Fretwell and Lucas 1969; Křivan 2014; Morris 2002). It has been shown that for hump-shaped patch payoffs, up to three IFDs can exist for a given overall population abundance. At very low overall population abundances, only the most profitable patch will be occupied. At intermediate population densities, there are two IFDs corresponding to pure strategies where all individuals occupy patch 1 only, or patch 2 only. As population abundance increases, competition becomes more severe, and an interior IFD appears exactly as in the case of negative density-dependent payoff functions. At high overall population abundances, only the interior IFD exists due to strong competition among individuals. It is interesting to note that as the population numbers change, there can be sudden (discontinuous) changes in the population distribution. Such erratic changes in the distribution of deer mice were observed and analyzed by Morris (2002).

Another complication that leads to multiple IFDs is the cost of dispersal. Let us consider a positive migration cost c between two patches. An individual currently in patch 1 will migrate to patch 2 only if the payoff gain covers the cost, i.e., only if V2(p2) − c ≥ V1(p1). Similarly, an individual currently in patch 2 will migrate to patch 1 only if its payoff does not decrease by doing so, i.e., V1(p1) − c ≥ V2(p2). Thus, all distributions (p1, p2) at which neither individual can gain by switching, i.e., those satisfying both V2(p2) − V1(p1) ≤ c and V1(p1) − V2(p2) ≤ c, form the set of IFDs (Mariani et al. 2016).
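The resulting set of IFDs can be traced out by scanning candidate distributions; a sketch using the logistic payoffs of Sect. 7.2 with assumed parameter values:

```python
# Assumed values: logistic patch payoffs as in Sect. 7.2 plus a migration cost c
r1, r2, K1, K2, M, c = 2.0, 1.0, 100.0, 100.0, 100.0, 0.2

def payoff_gap(p1):
    V1 = r1 * (1 - p1 * M / K1)
    V2 = r2 * (1 - (1 - p1) * M / K2)
    return abs(V1 - V2)

# every distribution where neither patch offers a gain larger than c is an IFD
ifd_set = [i / 1000 for i in range(1001) if payoff_gap(i / 1000) <= c]
print(min(ifd_set), max(ifd_set))  # a whole interval around the cost-free IFD p1 = 2/3
```

With c = 0 this interval collapses to the single cost-free IFD; a positive cost freezes the population anywhere in a band of distributions, since small payoff differences no longer justify moving.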

The Habitat Selection game was also extended to situations where individuals perceive space as a continuum (e.g., Cantrell et al. 2007, 2012; Cosner 2005). The movement by diffusion is then combined, or replaced, by a movement along the gradient of animal fitness.

7.4 Habitat Selection for Two Species

Instead of a single species, we now consider two species with population densities M and N dispersing between two patches. We assume that individuals of these species compete in each patch both intra- and inter-specifically. Following our single-species Habitat Selection game, we assume that individual payoffs are linear functions of species distribution (Křivan and Sirot 2002; Křivan et al. 2008)
$$\displaystyle \begin{aligned} \begin{array}{rcl} V_i(\mathbf {p},\mathbf {q})&= &\displaystyle{r_i \left(1-\frac{p_i M}{K_i}-\frac{\alpha_i q_i N}{K_i}\right)}, \\[0.5cm] W_i(\mathbf {p},\mathbf {q})&= &\displaystyle{s_i \left(1-\frac{q_i N}{L_i}-\frac{\beta_i p_i M}{L_i}\right)}, \end{array} \end{aligned} $$
where \(\mathbf {p}=(p_{1},p_{2})\) denotes the distribution of species one and \(\mathbf {q}=(q_{1},q_{2})\) the distribution of species two. Here, positive parameters αi (respectively βi) are interspecific competition coefficients, ri (respectively si) are the intrinsic per capita population growth rates, and Ki (respectively Li) are the environmental carrying capacities. The two-species Habitat Selection game cannot be represented in a bimatrix form (to represent it in a matrix form, we would need four matrices), because the payoff in patch i for a given species depends not only on the distribution (strategy) of its competitors but also on the distribution of its own conspecifics. The equal payoff line for species one (two) are those \((\mathbf {p},\mathbf {q})\in \varDelta _{2}\times \varDelta _{2}\) for which \(V_1(\mathbf {p},\mathbf {q})=V_2(\mathbf {p},\mathbf {q})\) (\(W_1(\mathbf {p},\mathbf {q})=W_2(\mathbf {p},\mathbf {q})\)). Since payoffs are linear functions, these are lines in the coordinates p1 and q1, but as opposed to the case of bimatrix games in Sects. 6.1 and 6.2, they are neither horizontal nor vertical. If they do not intersect in the unit square, the two species cannot coexist in both patches at an NE. The most interesting case is when the two equal payoff lines intersect inside the unit square. Křivan et al. (2008) showed that the interior intersection is the two-species ESS provided
$$\displaystyle \begin{aligned} r_1s_1K_2L_2(1-\alpha_1\beta_1) +r_1s_2K_2L_1(1-\alpha_1\beta_2)+ r_2s_1K_1L_2(1-\alpha_2\beta_1)+r_2s_2K_1L_1(1-\alpha_2\beta_2)>0. \end{aligned} $$
Geometrically, this condition states that the equal payoff line for species one has a more negative slope than that for species two. This allows us to extend the concept of the single-species Habitat Selection game to two species that compete in two patches. In this case the two-species IFD is defined as a two-species ESS. We remark that the best response dynamics do converge to such a two-species IFD (Křivan et al. 2008).
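As a numerical illustration, the following sketch iterates a discretized best response dynamics for the two-species Habitat Selection game with hypothetical parameter values (all numbers below are invented, chosen only so that the equal payoff lines cross inside the unit square and the ESS condition above holds) and checks that the distributions approach a point where patch payoffs are equalized within each species, i.e., the two-species IFD:

```python
import numpy as np

# Hypothetical parameters (not from the chapter), chosen so an interior ESS exists
r = np.array([1.0, 0.8]);    s = np.array([0.9, 1.0])     # growth rates
K = np.array([100.0, 80.0]); L = np.array([90.0, 100.0])  # carrying capacities
alpha = np.array([0.4, 0.5]); beta = np.array([0.3, 0.6]) # competition coefficients
M, N = 50.0, 40.0            # total densities of species one and two

def V(p, q):  # patch payoffs for species one, p = (p1, p2), q = (q1, q2)
    return r * (1 - p * M / K - alpha * q * N / K)

def W(p, q):  # patch payoffs for species two
    return s * (1 - q * N / L - beta * p * M / L)

# ESS condition from the text: the weighted sum of (1 - alpha_i beta_j) terms
ess = sum(r[i] * s[j] * K[1 - i] * L[1 - j] * (1 - alpha[i] * beta[j])
          for i in (0, 1) for j in (0, 1))
assert ess > 0

# Discretized best response dynamics: each species shifts its distribution
# toward the patch with the currently higher payoff.
p = np.array([0.5, 0.5]); q = np.array([0.5, 0.5])
dt = 0.01
for _ in range(50000):
    bp = np.eye(2)[np.argmax(V(p, q))]  # best reply of species one
    bq = np.eye(2)[np.argmax(W(p, q))]  # best reply of species two
    p = p + dt * (bp - p)
    q = q + dt * (bq - q)

print(p, q)              # approximate two-species IFD
print(V(p, q), W(p, q))  # payoffs nearly equal within each species
```

The run settles near the intersection of the two equal payoff lines, up to a small chatter of order dt caused by the discretization.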

One of the predictions of the Habitat Selection game for two species is that as competition gets stronger, the two species will spatially segregate (e.g., Křivan and Sirot 2002; Morris 1999). Such spatial segregation was observed in experiments with two bacterial strains in a microhabitat system with nutrient-poor and nutrient-rich patches (Lambert et al. 2011).

8 Dispersal and Evolution of Dispersal

Organisms often move from one habitat to another, which is referred to as dispersal. We focus here on dispersal and its relation to the IFD discussed in the previous section.

Here we consider n habitat patches and a population of individuals that disperse between them. In what follows we will assume that the patches are either adjacent (in particular when there are just two patches) or that the travel time between them is negligible when compared to the time individuals spend in these patches. There are two basic questions:
  1. When is dispersal an adaptive strategy, i.e., when does individual fitness increase for dispersing animals compared to those who are sedentary?

  2. Where should individuals disperse?

To describe changes in population densities, we will consider demographic population growth in each patch and dispersal between patches. Dispersal is described by the propensity of individuals to disperse (δ ≥ 0) and by a dispersal matrix D, whose entries Dij give the probability per unit of time that an individual currently in patch j moves to patch i. We remark that Dii is the probability of staying in patch i. The per capita population growth rate in patch i is given by fi (e.g., fi can be the logistic growth rate Vi (23.13) in Sect. 7.2). The changes in population numbers are then described by the population–dispersal dynamics
$$\displaystyle \begin{aligned} \displaystyle{\frac{dm_{i}}{dt}}=m_{i}f_{i}(m_{i})+\delta \sum_{j=1}^{n}\left( D_{ij}(\mathbf{m})m_{j}-D_{ji}(\mathbf{m})m_{i} \right) \;\text{for}\;i=1,\dots ,n \end{aligned} $$
where \(\mathbf {m}=(m_{1},\cdots ,m_{n})\) is the vector of population densities in n patches. Thus, the first term in the above summation describes immigration to patch i from other patches, and the second term describes emigration from patch i to other patches. In addition, we assume that D is irreducible, i.e., there are no isolated patches.
The case corresponding to passive diffusion between patches assumes that the entries of the dispersal matrix are constant and that the matrix is symmetric. It was shown (Takeuchi 1996) that when the functions fi are decreasing with fi(0) > 0 and fi(Ki) = 0 for some Ki > 0, model (23.16) has an interior equilibrium which is globally asymptotically stable. However, this does not answer the question of whether such an equilibrium is evolutionarily stable, i.e., whether it is resistant to invasion by mutants with the same traits (parameters) as the resident population but a different propensity to disperse δ. The answer depends on the entries of the dispersal matrix. An interior population distribution \(\mathbf {m}^{*}=(m_{1}^{*},\dots ,m_{n}^{*})\) will be the IFD provided patch payoffs in all patches are the same, i.e., \(f_{1}(m_{1}^{*})=\dots =f_{n}(m_{n}^{*}).\) This implies that at the population equilibrium, there is no net dispersal, i.e.,
$$\displaystyle \begin{aligned} \delta \sum_{j=1}^{n} \left( D_{ij}m_{j}^{*}-D_{ji}m_{i}^{*} \right)=0. \end{aligned}$$
There are two possibilities. Either
$$\displaystyle \begin{aligned} \sum_{j=1}^{n} \left( D_{ij}m_{j}^{*}-D_{ji}m_{i}^{*} \right) =0, \end{aligned} $$
or δ = 0. The pattern of equalized immigration and emigration satisfying (23.17) is called “balanced dispersal” (Doncaster et al. 1997; Holt and Barfield 2001; McPeek and Holt 1992). Under balanced dispersal, there is an inverse relation between local population size and its dispersal rate. In other words, individuals at good sites are less likely to disperse than those from poor sites. When dispersal is unbalanced, Hastings (1983) showed that mutants with lower propensity to disperse will outcompete the residents and no dispersal (δ = 0) is the only evolutionarily stable strategy.
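A short simulation of the population–dispersal dynamics above illustrates both Takeuchi's stability result and the point about unbalanced dispersal. With a constant symmetric dispersal matrix and hypothetical logistic growth parameters (all values below are invented for illustration), the densities settle at a stable interior equilibrium, but the patch payoffs there are not equalized, so the outcome is not an IFD:

```python
import numpy as np

# Two patches with logistic growth; hypothetical parameter values
r = np.array([1.0, 0.5])        # intrinsic growth rates
Kcap = np.array([100.0, 50.0])  # carrying capacities
delta = 0.2                     # propensity to disperse
D = np.array([[0.0, 1.0],       # constant symmetric dispersal matrix
              [1.0, 0.0]])

def f(m):  # per capita growth rate (patch payoff) in each patch
    return r * (1.0 - m / Kcap)

m = np.array([10.0, 10.0])
dt = 0.01
for _ in range(100000):  # Euler integration of the population-dispersal dynamics
    net = D @ m - D.sum(axis=0) * m  # immigration minus emigration
    m = m + dt * (m * f(m) + delta * net)

print(m)     # stable interior equilibrium
print(f(m))  # patch payoffs differ: dispersal is unbalanced, not an IFD
```

Because the payoffs at equilibrium differ between patches, a mutant with a lower δ would gain, consistent with Hastings' result that δ = 0 is the only ESS under unbalanced dispersal.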
However, dispersal can be favored even when it is not balanced. Hamilton and May (1977) showed that unconditional and costly dispersal among very many patches can be promoted because it reduces competition between relatives. Their model was generalized by Comins et al. (1980) who assumed that because of stochastic effects, a proportion e of patches can become empty at any time step. A proportion p of migrants survives migration and re-distributes at random (assuming the Poisson distribution) among the patches. These authors derived analytically the evolutionarily stable dispersal strategy that is given by a complicated implicit formula (see formula (3) in Comins et al. 1980). As population abundance increases, the evolutionarily stable dispersal rate converges to a simpler formula
$$\displaystyle \begin{aligned} \delta=\frac{e}{1-p (1-e)}. \notag \end{aligned} $$
Here the advantage of dispersal results from the possibility of colonizing an extinct patch.
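Evaluating this limiting formula for a few hypothetical values of the extinction probability e and the migration survival probability p (the numbers below are arbitrary) shows that the ESS dispersal rate increases with both, reaching full dispersal (δ = 1) when migration is cost-free (p = 1):

```python
# ESS dispersal rate in the large-population limit of Comins et al. (1980);
# e = probability a patch becomes empty, p = probability of surviving migration.
def ess_dispersal(e, p):
    return e / (1.0 - p * (1.0 - e))

for e in (0.1, 0.3):
    for p in (0.5, 0.9, 1.0):
        print(f"e={e}, p={p}: delta={ess_dispersal(e, p):.3f}")
```

With p = 1 the formula reduces to δ = e∕e = 1: when dispersal carries no survival cost, the ESS is to disperse every offspring.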

Evolution of mobility in predator–prey systems was also studied by Xu et al. (2014). These authors showed how interaction strength between mobile vs. sessile prey and predators influences the evolution of dispersal.

9 Foraging Games

Foraging games describe interactions between prey, their predators, or both. These games assume that predators, prey, or both behave so as to maximize their fitness. Typically, the prey strategy is to avoid predators, while predators try to track their prey. Several models that focus on various aspects of predator–prey interactions have been developed in the literature (e.g., Brown and Vincent 1987; Brown et al. 1999, 2001; Vincent and Brown 2005).

An important component of predation is the functional response defined as the per predator rate of prey consumption (Holling 1959). It also serves as a basis for models of optimal foraging (Stephens and Krebs 1986) that aim to predict diet selection of predators in environments with multiple prey types. In this section we start with a model of optimal foraging, and we show how it can be derived using extensive form games (see Volume I,  Chap. 6, “Evolutionary Game Theory”). As an example of a predator–prey game, we then discuss predator–prey distribution in a two-patch environment.

9.1 Optimal Foraging as an Agent Normal Form of an Extensive Game

Often it is assumed that a predator’s fitness is proportional to its prey intake rate, and the functional response serves as a proxy of fitness. In the case of two or more prey types, the multi-prey functional response is the basis of the diet choice model (Charnov 1976; Stephens and Krebs 1986) that predicts the predator’s optimal diet as a function of prey densities in the environment. Here we show how functional responses can be derived using decision trees of games given in extensive form (Cressman 2003; Cressman et al. 2014; see also Broom et al. 2004, for an example of where this methodology was used in a model of food stealing). Let us consider a decision tree in Fig. 23.4 describing a single predator feeding on two prey types. This decision tree assumes that a searching predator meets prey type 1 with probability p1 and prey type 2 with probability p2 during the search time τs. For simplicity we will assume that p1 + p2 = 1. Upon an encounter with a prey individual, the predator decides whether to attack the prey (prey type 1 with probability q1 and prey type 2 with probability q2) or not. When a prey individual is captured, the energy that the predator receives is denoted by e1 or e2. The predator’s cost is measured by the time lost. This time consists of the search time τs and the time needed to handle the prey (h1 for prey type 1 and h2 for prey type 2).
Fig. 23.4

The decision tree for two prey types. The first level gives the prey encounter distribution. The second level gives the predator activity distribution. The final row of the diagram gives the probability of each predator activity event and so sums to 1. If prey 1 is the more profitable type, the edge in the decision tree corresponding to not attacking this type of prey is never followed at optimal foraging (indicated by the dashed edge in the tree)

Calculation of functional responses is based on renewal theory which proves that the long-term intake rate of a given prey type can be calculated as the mean energy intake during one renewal cycle divided by the mean duration of the renewal cycle (Houston and McNamara 1999; Stephens and Krebs 1986). A single renewal cycle is given by a predator passing through the decision tree in Fig. 23.4. Since type i prey are only killed when the path denoted by pi and then qi is followed, the functional response to prey i(= 1, 2) is
$$\displaystyle \begin{aligned} \begin{array}{rcl} f_{i}(q_1,q_2)&\displaystyle =&\displaystyle \frac{p_{i}q_{i}}{p_{1}\left(q_{1}(\tau _{s}+h_{1})+(1-q_{1})\tau _{s}\right)+p_{2}\left(q_{2}(\tau _{s}+h_{2})+(1-q_{2})\tau_{s}\right)} \\ &\displaystyle =&\displaystyle \frac{p_i q_{i}}{\tau _{s}+p_1q_{1}h_{1}+p_2q_{2}h_{2}} . \end{array} \end{aligned} $$
When xi denotes the density of prey type i in the environment and the predator meets prey at random, pi = xi∕x, where x = x1 + x2. Setting λ = 1∕(τsx) leads to
$$\displaystyle \begin{aligned} f_{i}(q_1,q_2)=\frac{\lambda x_i q_{i}}{1+\lambda x_1q_{1}h_{1}+\lambda x_2 q_{2}h_{2}} . \end{aligned}$$
These are the functional responses assumed in standard two prey type models. The predator’s rate of energy gain is given by
$$\displaystyle \begin{aligned} E(q_1,q_2)=e_{1}f_{1}(q_1,q_2)+e_{2}f_{2}(q_1,q_2)=\frac{e_{1}p_1q_{1}+e_{2}p_2q_{2}}{\tau_{s}+p_1q_{1}h_1+p_2q_{2} h_{2}}. \end{aligned} $$
This is the proxy of the predator’s fitness which is maximized over the predator’s diet (q1, q2), (0 ≤ qi ≤ 1, i = 1, 2).
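A quick numerical check, with arbitrary made-up densities and handling times, confirms that the renewal-theory form of the functional response and the density-based form obtained via pi = xi∕x and λ = 1∕(τsx) are the same function:

```python
# Arbitrary illustrative values (not from the chapter)
x1, x2 = 12.0, 8.0   # prey densities
h1, h2 = 1.0, 2.0    # handling times
tau_s = 0.5          # expected search time per encounter
q1, q2 = 1.0, 0.4    # an arbitrary diet strategy

x = x1 + x2
p1, p2 = x1 / x, x2 / x       # encounter probabilities
lam = 1.0 / (tau_s * x)

# renewal-theory form of the functional response to prey 1
f1_renewal = p1 * q1 / (tau_s + p1 * q1 * h1 + p2 * q2 * h2)
# density-based form used in standard two-prey-type models
f1_density = lam * x1 * q1 / (1.0 + lam * (x1 * q1 * h1 + x2 * q2 * h2))

print(f1_renewal, f1_density)  # the two forms agree
```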
Here, using the agent normal form of extensive form game theory (Cressman 2003), we show an alternative, game theoretical approach to find the optimal foraging strategy. This method assigns a separate player (called an agent) to each decision node (here 1 or 2). The possible decisions at this node become the agent's strategies, and its payoff is given by the total energy intake rate of the predator it represents. Thus, all of the virtual agents have the same common payoff. The optimal foraging strategy of the single predator is then a solution to this game. In our example, player 1 corresponds to decision node 1 with strategy set Δ1 = {q1∣0 ≤ q1 ≤ 1} and player 2 to node 2 with strategy set Δ2 = {q2∣0 ≤ q2 ≤ 1}. Their common payoff E(q1, q2) is given by (23.18), and we seek the NE of the two-player game. Assuming that prey type 1 is the more profitable for the predator, as its energy content per unit handling time is higher than the profitability of the second prey type (i.e., e1∕h1 > e2∕h2), we get E(1, q2) > E(q1, q2) for all 0 ≤ q1 < 1 and 0 ≤ q2 ≤ 1. Thus, at any NE, player 1 must play q1 = 1. The NE strategy of player 2 is then any best response to q1 = 1 (i.e., any q2 that satisfies \(E(1,q_{2}^{\prime })\leq E(1,q_{2})\) for all \(0\leq q_2^{\prime } \leq 1\)) which yields
$$\displaystyle \begin{aligned} q_{2}=\left\{ \begin{array} [c]{cc} 0 & \text{ if }p_{1}>p_{1}^{\ast}\\ 1 & \text{ if }p_{1}<p_{1}^{\ast}\\ \lbrack0,1] & \text{ if }p_{1}=p_{1}^{\ast}, \end{array} \right. \end{aligned} $$
$$\displaystyle \begin{aligned} p_1^{\ast}=\frac{e_{2}\tau_s}{e_1h_2-e_2h_1}. \end{aligned} $$
This NE coincides with the optimal strategy derived by maximization of (23.18). It makes quite striking predictions. While the more profitable prey type is always included in the predator's diet, inclusion of the less profitable prey type is independent of its own density and depends only on the density of the more profitable prey type. This prediction was experimentally tested with great tits (e.g., Berec et al. 2003; Krebs et al. 1977). That the Nash equilibrium coincides with the optimal foraging strategy (i.e., with the maximum of E) in this model is not a coincidence. Cressman et al. (2014) proved that this is so for all foraging games with a 2-level decision tree. For decision trees with more levels, they showed that the optimal foraging strategy is always an NE of the corresponding agent normal form game but that other, nonoptimal, NE may also appear.
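The zero-one rule and its threshold can be checked numerically. The sketch below uses hypothetical parameter values (invented for illustration, with prey 1 the more profitable type) and brute-force maximizes E over a grid of strategies for an encounter probability p1 below and above the threshold p1*:

```python
import numpy as np

# Hypothetical parameters; prey 1 is the more profitable type (e1/h1 > e2/h2)
e1, e2 = 4.0, 1.0   # energy contents
h1, h2 = 1.0, 2.0   # handling times
tau_s = 0.5         # expected search time per encounter

def E(q1, q2, p1):  # predator's rate of energy gain, cf. (23.18)
    p2 = 1.0 - p1
    return (e1*p1*q1 + e2*p2*q2) / (tau_s + p1*q1*h1 + p2*q2*h2)

p1_star = e2 * tau_s / (e1*h2 - e2*h1)  # threshold encounter probability

grid = np.linspace(0.0, 1.0, 101)
for p1 in (0.5 * p1_star, 2.0 * p1_star):
    # brute-force maximization of E over the strategy square
    val, q1, q2 = max((E(a, b, p1), a, b) for a in grid for b in grid)
    # zero-one rule: q1 = 1 always; q2 jumps from 1 to 0 as p1 crosses p1_star
    print(f"p1={p1:.4f}: optimal q1={q1}, q2={q2}, E={val:.4f}")
```

The optimum always has q1 = 1, while q2 switches from 1 to 0 as p1 crosses p1*, independently of the density of prey type 2, exactly as the NE analysis predicts.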

9.2 A Predator–Prey Foraging Game

As an example we consider here a foraging game between prey and predators in a two-patch environment. If xi denotes the abundance of prey in patch i(= 1, 2), the total abundance of prey is x = x1 + x2 and, similarly, the total abundance of predators is y = y1 + y2. Let \(\mathbf {u}=(u_{1},u_{2})\) be the distribution of prey and \(\mathbf {v}=(v_{1},v_{2})\) be the distribution of predators. We neglect the travel time between patches, so that u1 + u2 = v1 + v2 = 1 (i.e., each animal is either in patch 1 or patch 2). We assume that the prey population grows exponentially in each patch with per capita population growth rate ri and is consumed by predators. The killing rate is given by the functional response. For simplicity we neglect the handling time, so that the functional response in patch i is fi = λixi, i.e., the per prey per predator killing rate is λi. The prey payoff in patch i is given by the per capita prey population growth rate in that patch, i.e., ri − λiviy, as there are viy predators in patch i. The fitness of a prey individual is
$$\displaystyle \begin{aligned} V(\mathbf{u},\mathbf{v})=(r_1-\lambda_1 v_1 y) u_1 +(r_2-\lambda_2 v_2 y) u_2. \end{aligned} $$
The predator payoff in patch i is given by the per capita predator population growth rate eiuix − mi, where ei is a coefficient by which the energy gained by feeding on prey is transformed into new predators and mi is the per capita predator mortality rate in patch i. The fitness of a predator with strategy \(\mathbf {v}=(v_{1},v_{2})\) when the prey use strategy \(\mathbf {u}=(u_{1},u_{2})\) is
$$\displaystyle \begin{aligned} W(\mathbf{v},\mathbf{u})=(e_1 \lambda_1 u_1 x -m_1) v_1 +(e_2 \lambda_2 u_2 x -m_2) v_2. \end{aligned} $$
This predator–prey game can be represented by the following payoff bimatrix:
$$\displaystyle \begin{aligned} \begin{pmatrix} (r_1-\lambda_1 y,\; e_1\lambda_1 x-m_1) & (r_1,\; -m_2)\\ (r_2,\; -m_1) & (r_2-\lambda_2 y,\; e_2\lambda_2 x-m_2) \end{pmatrix} \end{aligned} $$
The rows in this bimatrix correspond to the prey strategy (the first row means the prey are in patch 1; the second row means the prey are in patch 2), and similarly the columns represent the predator strategy. The first of the two expressions in each entry is the payoff for the prey, and the second is the payoff for the predators.
As an example, we assume that for the prey, patch 1 has a higher basic quality than patch 2 (i.e., r1 ≥ r2), while for the predators, patch 1 has the higher mortality rate (m1 > m2). The corresponding NE is (Křivan 1997):
  (a) \((u_1^*,v_1^*)\) if \(\displaystyle {x > \frac {m_1-m_2}{e_1 \lambda _1},\;\; y > \frac {r_1-r_2}{\lambda _1}}\),

  (b) \(\displaystyle {\left (1,1\right ) }\) if \(\displaystyle {x> \frac {m_1-m_2}{e_1 \lambda _1},\;\; y< \frac {r_1-r_2}{\lambda _1}}\),

  (c) \(\displaystyle {\left (1 ,0\right )}\) if \(\displaystyle {x<\frac {m_1-m_2}{e_1 \lambda _1}},\)

where
$$\displaystyle \begin{aligned}(u_1^*,v_1^{*})=\left(\frac{m_1-m_2+e_2 \lambda_2 x}{(e_1 \lambda_1+e_2 \lambda_2)x},\frac{r_1-r_2+\lambda_2 y}{(\lambda_1+\lambda_2)y}\right).\end{aligned}$$
If prey abundance is low (case (c)), all prey will be in patch 1, while predators will stay in patch 2. Because the mortality rate for predators in patch 1 is higher than in patch 2 and prey abundance is low, patch 2 serves as a refuge for predators. If predator abundance is low and prey abundance is high (case (b)), both predators and prey aggregate in patch 1. When the NE is strict (cases (b) and (c) above), it is also the ESS because there is no alternative strategy with the same payoff. However, when the NE is mixed (case (a)), there exist alternative best replies to it. This mixed NE is the two-species weak ESS. It is globally asymptotically stable for the continuous-time best response dynamics (Křivan et al. 2008), which model dispersal behavior whereby individuals move to the patch with the higher payoff. We remark that for some population densities (\(x=\frac {m_1-m_2}{e_1 \lambda _1}\) and \(y=\frac {r_1-r_2}{\lambda _1}\)), the NE is not uniquely defined; the game is nongeneric in this case, and such nonuniqueness is a general property of nongeneric matrix games.
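For parameter values satisfying the conditions of case (a) (the numbers below are invented for illustration), one can verify directly that at the mixed NE each species is indifferent between the two patches:

```python
# Hypothetical parameters satisfying the case (a) conditions
r1, r2 = 2.0, 1.0       # prey growth rates (r1 >= r2)
lam1, lam2 = 0.5, 0.4   # killing rates
e1, e2 = 0.2, 0.3       # conversion efficiencies
m1, m2 = 1.0, 0.4       # predator mortalities (m1 > m2)
x, y = 20.0, 5.0        # total prey and predator abundances

assert x > (m1 - m2) / (e1 * lam1) and y > (r1 - r2) / lam1  # case (a)

u1 = (m1 - m2 + e2 * lam2 * x) / ((e1 * lam1 + e2 * lam2) * x)
v1 = (r1 - r2 + lam2 * y) / ((lam1 + lam2) * y)

prey = (r1 - lam1 * v1 * y, r2 - lam2 * (1 - v1) * y)           # prey patch payoffs
pred = (e1 * lam1 * u1 * x - m1, e2 * lam2 * (1 - u1) * x - m2) # predator patch payoffs
print(u1, v1)       # interior NE: both in (0, 1)
print(prey, pred)   # payoffs equal across patches for each species
```

The equalized payoffs reflect the defining property of the mixed NE: neither species can gain by shifting individuals between patches.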

10 Signaling Games

Signaling between animals occurs in a number of contexts. These include signals, often though not necessarily between conspecifics, warning of approaching predators. The situation can be game theoretic, as the signaler runs a potentially higher risk of being targeted by the predator. There are also cases of false alarm signals, given when no predator is approaching, that force food to be abandoned so that it can be consumed by the signaler (Flower 2011). Alternatively, within a group of animals, each individual may need to decide how to divide its time between vigilance and feeding, where each individual benefits from the vigilance of others as well as its own; this has been modeled game-theoretically (e.g., Brown 1999; McNamara and Houston 1992; Sirot 2012).

Another situation occurs between relatives over items of food, for example, a parent bird feeding its offspring. Young birds beg aggressively for food, and the parent must decide which to feed, if any (it can instead consume the item itself). The best-known model of this situation is the Sir Philip Sidney game (Maynard Smith 1991), a costly signaling model; Bergstrom and Lachmann (1998) later showed that it also admits stable cost-free signaling.

The classic example of a signaling game is between potential mates. Males of differing quality advertise this quality to females, often in a way that is costly, and the females choose whom to mate with based upon the strength of the signal. Examples are the tail of the peacock or the elaborate bowers created by bowerbirds. There is obviously a large incentive to cheat, so how are such signals kept honest? A signal that is not at least partly correlated with quality would be meaningless and so would eventually be ignored. The solution developed by Zahavi (1975, 1977), the handicap principle, is that these costly signals are easier to bear for higher-quality mates and that evolution leads to a completely honest signal, where each quality level has a unique signal.

10.1 Grafen’s Signaling Game

The following signaling model is due to Grafen (1990a,b). Consider a population with a continuum of male quality types q and a single type of female. Assume that a male of quality q gives a signal a = A(q) of this quality, where higher values of a are more costly. It is assumed that there is both a minimum quality level q0 > 0 (there may or may not be a maximum quality level) and a minimum signal level a0 ≥ 0 (which can be thought of as giving no signal). When a female receives a signal a, she assigns to it a quality level P(a). We have a nonlinear asymmetric game with sequential decisions; in particular the nonlinearity makes this game considerably more complicated than asymmetric games such as the Battle of the Sexes of Sect. 6.1. The female pays a cost D(q, p) for misassessing a male of quality q as being of quality p, where D(q, p) is positive for p ≠ q, with D(q, q) = 0. Assuming that the probability density of males of quality q is g(q), the payoff to the female, which is simply minus the expected cost, is
$$\displaystyle \begin{aligned} -\int_{q_{0}}^{\infty} D(q,p)g(q)dq. \end{aligned}$$
An honest signaling system with strategies A and P occurs if and only if P(A(q)) = q, for all q. We note that here the female never misassesses a male and so pays zero cost. Clearly any alternative female assessment strategy would do worse. But how can we obtain stability against alternative (cheating) male strategies?
The fitness of a male of quality q, W(a, p, q), depends upon his true quality, the quality assigned to him by the female and the cost of his signal. W(a, p, q) will be increasing in p and decreasing in a. For stability of the honest signal, we need that the incremental advantage of a higher level of signaling is greater for a high-quality male than for a low-quality one, so that
$$\displaystyle \begin{aligned} -\frac{\frac{\partial}{\partial a}W(a,p,q)}{\frac{\partial}{\partial p}W(a,p,q)} \end{aligned} $$
is strictly decreasing in q (note that the ratio is negative, so minus this ratio is positive), i.e., the higher quality the male, the lower the ratio of the marginal cost to the marginal benefit for an increase in the level of advertising. This ensures that completely honest signaling cannot be invaded by cheating: the cost to a lower-quality male of copying the signal of a higher-quality male exceeds the cost that the higher-quality male pays, so the higher-quality male can always bear a signal cost that lower-quality cheats cannot afford.
The following example male fitness function is given in Grafen (1990a) (the precise fitness function to the female does not affect the solution provided that correct assessment yields 0, and any misassessment yields a negative payoff)
$$\displaystyle \begin{aligned} W(a,p,q)=p^{r}q^{a}, \end{aligned} $$
with qualities in the range q0 ≤ q < 1 and signals of strength a ≥ a0, for some r > 0.
We can see that the function from (23.24) satisfies the above conditions on W(a, p, q). In particular consider the condition from expression (23.23)
$$\displaystyle \begin{aligned} -\frac{\partial}{\partial a}W(a,p,q)=-p^{r}q^{a} \ln q, \;\;\;\; \frac{\partial}{\partial p}W(a,p,q)=rp^{r-1} q^{a} \end{aligned}$$
which are the increase in cost per unit increase in the signal level and the increase in the payoff per unit increase in the female's perception (which in turn is directly caused by increases in signal level), respectively. The ratio from (23.23), which is proportional to the increase in cost per unit of benefit that this would yield, becomes \(-p \ln q/r\), which takes a larger value for lower values of q. Thus there is an honest signaling solution. This is shown in Grafen (1990a) to be given by
$$\displaystyle \begin{aligned} A(q)=a_{0}-r\ln \left( \frac{\ln(q)}{\ln(q_{0})}\right), \;\;\;\;P(a)=q_{0}^{\exp(-(a-a_{0})/r)}. \end{aligned}$$
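With hypothetical parameter choices q0 = 0.2, a0 = 0, and r = 2 (invented for illustration), a direct computation confirms that these strategies form an honest signaling pair, P(A(q)) = q, and that the signal level A(q) increases with quality:

```python
import math

q0, a0, r = 0.2, 0.0, 2.0  # hypothetical parameter values

def A(q):  # male signaling strategy
    return a0 - r * math.log(math.log(q) / math.log(q0))

def P(a):  # female assessment strategy
    return q0 ** math.exp(-(a - a0) / r)

qs = [0.2, 0.5, 0.9]
signals = [A(q) for q in qs]
print(signals)                  # signal levels, increasing in quality
print([P(a) for a in signals])  # approximately recovers qs: honest signaling
```

Note that A(q0) = a0: the lowest-quality male gives the minimum signal, and signal strength diverges as q approaches 1.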

11 Conclusion

In this chapter we have covered some of the important evolutionary game models applied to biological situations. We should note that we have left out a number of important theoretical topics as well as areas of application. We briefly touch on a number of those below.

All of the games that we have considered involved either pairwise games or playing-the-field games, where individuals effectively play against the whole population. In reality contests will sometimes involve groups of individuals. Such models were developed in Broom et al. (1997); for a recent review see Gokhale and Traulsen (2014). In addition, the populations were all both effectively infinite and well mixed, in the sense that for any direct contest, each pair of individuals was equally likely to meet. In reality populations are finite and have (e.g., spatial) structure. The modeling of evolution in finite populations often uses the Moran process (Moran 1958), but more recently games in finite populations have received significant attention (Nowak 2006). These models have been extended to include population structure by considering evolution on graphs (Lieberman et al. 2005), and there has been an explosion of such model applications, especially to consider the evolution of cooperation. Another feature of realistic populations that we have ignored is the state of the individual. A hungry individual may behave differently from one that has recently eaten, and nesting behavior may differ at the start of the breeding season from later on. A theory of state-based models has been developed in Houston and McNamara (1999).

In terms of applications, we have focused on classical biological problems, but game theory has more recently also been applied to medical scenarios. This includes the modeling of epidemics, especially with the intention of developing defense strategies. One important class of models (see, e.g., Nowak and May 1994) considers the evolution of the virulence of a disease as the epidemic spreads. An exciting new line of research considers the development of cancer as an evolutionary game, where the population of cancer cells evolves in the environment of the individual person or animal (Gatenby et al. 2010). A survey of alternative approaches is given in Durrett (2014).

12 Cross-References



This project has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 690817. VK acknowledges support provided by the Institute of Entomology (RVO:60077344).


  1. Argasinski K (2006) Dynamic multipopulation and density dependent evolutionary games related to replicator dynamics. A metasimplex concept. Math Biosci 202:88–114
  2. Axelrod R (1981) The emergence of cooperation among egoists. Am Political Sci Rev 75:306–318
  3. Axelrod R (1984) The evolution of cooperation. Basic Books, New York
  4. Axelrod R, Hamilton WD (1981) The evolution of cooperation. Science 211:1390–1396
  5. Ball MA, Parker GA (2007) Sperm competition games: the risk model can generate higher sperm allocation to virgin females. J Evol Biol 20:767–779
  6. Bendor J, Swistak P (1995) Types of evolutionary stability and the problem of cooperation. Proc Natl Acad Sci USA 92:3596–3600
  7. Berec M, Křivan V, Berec L (2003) Are great tits (Parus major) really optimal foragers? Can J Zool 81:780–788
  8. Berec M, Křivan V, Berec L (2006) Asymmetric competition, body size and foraging tactics: testing an ideal free distribution in two competing fish species. Evol Ecol Res 8:929–942
  9. Bergstrom C, Lachmann M (1998) Signaling among relatives. III. Talk is cheap. Proc Natl Acad Sci USA 95:5100–5105
  10. Bishop DT, Cannings C (1976) Models of animal conflict. Adv Appl Probab 8:616–621
  11. Blanckenhorn WU, Morf C, Reuter M (2000) Are dung flies ideal-free distributed at their oviposition and mating site? Behaviour 137:233–248
  12. Broom M, Rychtář J (2013) Game-theoretical models in biology. CRC Press/Taylor & Francis Group, Boca Raton
  13. Broom M, Cannings C, Vickers G (1997) Multi-player matrix games. Bull Math Biol 59:931–952
  14. Broom M, Luther RM, Ruxton GD (2004) Resistance is useless? – extensions to the game theory of kleptoparasitism. Bull Math Biol 66:1645–1658
  15. Brown JS (1999) Vigilance, patch use and habitat selection: foraging under predation risk. Evol Ecol Res 1:49–71
  16. Brown JS, Vincent TL (1987) Predator-prey coevolution as an evolutionary game. Lect Notes Biomath 73:83–101
  17. Brown JS, Laundré JW, Gurung M (1999) The ecology of fear: optimal foraging, game theory, and trophic interactions. J Mammal 80:385–399
  18. Brown JS, Kotler BP, Bouskila A (2001) Ecology of fear: foraging games between predators and prey with pulsed resources. Ann Zool Fennici 38:71–87
  19. Cantrell RS, Cosner C, DeAngelis DL, Padron V (2007) The ideal free distribution as an evolutionarily stable strategy. J Biol Dyn 1:249–271
  20. Cantrell RS, Cosner C, Lou Y (2012) Evolutionary stability of ideal free dispersal strategies in patchy environments. J Math Biol 65:943–965. doi:10.1007/s00285-011-0486-5
  21. Charnov EL (1976) Optimal foraging: attack strategy of a mantid. Am Nat 110:141–151
  22. Comins H, Hamilton W, May R (1980) Evolutionarily stable dispersal strategies. J Theor Biol 82:205–230
  23. Cosner C (2005) A dynamic model for the ideal-free distribution as a partial differential equation. Theor Popul Biol 67:101–108
  24. Cressman R (2003) Evolutionary dynamics and extensive form games. The MIT Press, Cambridge, MA
  25. Cressman R, Křivan V (2006) Migration dynamics for the ideal free distribution. Am Nat 168:384–397
  26. Cressman R, Tran T (2015) The ideal free distribution and evolutionary stability in habitat selection games with linear fitness and Allee effect. In: Cojocaru MG (ed) Interdisciplinary topics in applied mathematics, modeling and computational science. Springer proceedings in mathematics & statistics, vol 117. Springer, Cham, pp 457–464
  27. Cressman R, Křivan V, Brown JS, Garay J (2014) Game-theoretical methods for functional response and optimal foraging behavior. PLoS ONE 9:e88773
  28. Darwin C (1871) The descent of man and selection in relation to sex. John Murray, London
  29. Dawkins R (1976) The selfish gene. Oxford University Press, Oxford
  30. Doncaster CP, Clobert J, Doligez B, Gustafsson L, Danchin E (1997) Balanced dispersal between spatially varying local populations: an alternative to the source-sink model. Am Nat 150:425–445
  31. Dugatkin LA, Reeve HK (1998) Game theory & animal behavior. Oxford University Press, New York
  32. Durrett R (2014) Spatial evolutionary games with small selection coefficients. Electron J Probab 19:1–64
  33. Fehr E, Gächter S (2002) Altruistic punishment in humans. Nature 415:137–140
  34. Flood MM (1952) Some experimental games. Technical report RM-789-1, The RAND Corporation, Santa Monica
  35. Flower T (2011) Fork-tailed drongos use deceptive mimicked alarm calls to steal food. Proc R Soc B 278:1548–1555
  36. Fretwell SD, Lucas HL (1969) On territorial behavior and other factors influencing habitat distribution in birds. Acta Biotheor 19:16–32
  37. Gatenby RA, Gillies R, Brown J (2010) The evolutionary dynamics of cancer prevention. Nat Rev Cancer 10:526–527
  38. Gilpin ME (1975) Group selection in predator-prey communities. Princeton University Press, Princeton
  39. Gokhale CS, Traulsen A (2014) Evolutionary multiplayer games. Dyn Games Appl 4:468–488
  40. Grafen A (1990a) Biological signals as handicaps. J Theor Biol 144:517–546
  41. Grafen A (1990b) Do animals really recognize kin? Anim Behav 39:42–54
  42. Haigh J (1975) Game theory and evolution. Adv Appl Probab 7:8–11
  43. Hamilton WD (1964) The genetical evolution of social behavior. J Theor Biol 7:1–52
  44. Hamilton WD (1967) Extraordinary sex ratios. Science 156:477–488
  45. Hamilton WD, May RM (1977) Dispersal in stable environments. Nature 269:578–581
  46. Hammerstein P, Parker GA (1982) The asymmetric war of attrition. J Theor Biol 96:647–682
  47. Hardin G (1968) The tragedy of the commons. Science 162:1243–1248
  48. Hastings A (1983) Can spatial variation alone lead to selection for dispersal? Theor Popul Biol 24:244–251
  49. Hofbauer J, Sigmund K (1998) Evolutionary games and population dynamics. Cambridge University Press, Cambridge, UK
  50. Holling CS (1959) Some characteristics of simple types of predation and parasitism. Can Entomol 91:385–398
  51. Holt RD, Barfield M (2001) On the relationship between the ideal-free distribution and the evolution of dispersal. In: Clobert J, Danchin E, Dhondt A, Nichols J (eds) Dispersal. Oxford University Press, Oxford/New York, pp 83–95
  52. Houston AI, McNamara JM (1999) Models of adaptive behaviour. Cambridge University Press, Cambridge, UK
  53. Kerr B, Riley M, Feldman M, Bohannan B (2002) Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors. Nature 418:171–174
  54. Kokko H (2007) Modelling for field biologists and other interesting people. Cambridge University Press, Cambridge, UK
  55. Komorita SS, Sheposh JP, Braver SL (1968) Power, use of power, and cooperative choice in a two-person game. J Personal Soc Psychol 8:134–142
  56. Krebs JR, Erichsen JT, Webber MI, Charnov EL (1977) Optimal prey selection in the great tit (Parus major). Anim Behav 25:30–38
  57. Křivan V (1997) Dynamic ideal free distribution: effects of optimal patch choice on predator-prey dynamics. Am Nat 149:164–178
  58. Křivan V (2014) Competition in di- and tri-trophic food web modules. J Theor Biol 343:127–137
  59. Křivan V, Sirot E (2002) Habitat selection by two competing species in a two-habitat environment. Am Nat 160:214–234
  60. Křivan V, Cressman R, Schneider C (2008) The ideal free distribution: a review and synthesis of the game theoretic perspective. Theor Popul Biol 73:403–425
  61. Lambert G, Liao D, Vyawahare S, Austin RH (2011) Anomalous spatial redistribution of competing bacteria under starvation conditions. J Bacteriol 193:1878–1883
  62. Lieberman E, Hauert C, Nowak MA (2005) Evolutionary dynamics on graphs. Nature 433:312–316
  63. Lorenz K (1963) Das sogenannte Böse: zur Naturgeschichte der Aggression. Verlag Dr. G Borotha-Schoeler, Wien
  64. Mariani P, Křivan V, MacKenzie BR, Mullon C (2016) The migration game in habitat network: the case of tuna. Theor Ecol 9:219–232
  65. Maynard Smith J (1974) The theory of games and the evolution of animal conflicts. J Theor Biol 47:209–221MathSciNetCrossRefGoogle Scholar
  66. Maynard Smith J (1982) Evolution and the theory of games. Cambridge University Press, Cambridge, UKCrossRefGoogle Scholar
  67. Maynard Smith J (1991) Honest signalling: the Philip Sidney Game. Anim Behav 42:1034–1035CrossRefGoogle Scholar
  68. Maynard Smith J, Parker GA (1976) The logic of asymmetric contests. Anim Behav 24:159–175CrossRefGoogle Scholar
  69. Maynard Smith J, Price GR (1973) The logic of animal conflict. Nature 246:15–18CrossRefGoogle Scholar
  70. McNamara JM, Houston AI (1992) Risk-sensitive foraging: a review of the theory. Bull Math Biol 54:355–378CrossRefGoogle Scholar
  71. McPeek MA, Holt RD (1992) The evolution of dispersal in spatially and temporally varying environments. Am Nat 140:1010–1027CrossRefGoogle Scholar
  72. Mesterton-Gibbons M, Sherratt TN (2014) Bourgeois versus anti-Bourgeois: a model of infinite regress. Anim Behav 89:171–183CrossRefGoogle Scholar
  73. Milinski M (1979) An evolutionarily stable feeding strategy in sticklebacks. Zeitschrift für Tierpsychologie 51:36–40CrossRefGoogle Scholar
  74. Milinski M (1988) Games fish play: making decisions as a social forager. Trends Ecol Evol 3:325–330CrossRefGoogle Scholar
  75. Moran P (1958) Random processes in genetics. In: Mathematical proceedings of the Cambridge philosophical society, vol 54. Cambridge University Press, Cambridge, UK, pp 60–71Google Scholar
  76. Morris DW (1999) Has the ghost of competition passed? Evol Ecol Res 1:3–20Google Scholar
  77. Morris DW (2002) Measuring the Allee effect: positive density dependence in small mammals. Ecology 83:14–20CrossRefGoogle Scholar
  78. Nowak MA (2006) Five rules for the evolution of cooperation. Science 314:1560–1563CrossRefGoogle Scholar
  79. Nowak MA, May RM (1994) Superinfection and the evolution of parasite virulence. Proc R Soc Lond B 255:81–89CrossRefGoogle Scholar
  80. Parker GA (1978) Searching for mates. In: Krebs JR, Davies NB (eds) Behavioural ecology: an evolutionary approach. Blackwell, Oxford, pp 214–244Google Scholar
  81. Parker GA (1984) Evolutionarily stable strategies. In: Krebs JR, Davies NB (eds) Behavioural ecology: an evolutionary approach. Blackwell, Oxford, pp 30–61Google Scholar
  82. Parker GA, Thompson EA (1980) Dung fly struggles: a test of the war of attrition. Behav Ecol Sociobiol 7:37–44CrossRefGoogle Scholar
  83. Poundstone W (1992) Prisoner’s Dilemma. Oxford University Press, New YorkGoogle Scholar
  84. Selten R (1980) A note on evolutionarily stable strategies in asymmetrical animal conflicts. J Theor Biol 84:93–101MathSciNetCrossRefGoogle Scholar
  85. Sherratt TN, Mesterton-Gibbons M (2015) The evolution of respect for property. J Evol Biol 28:1185–1202CrossRefGoogle Scholar
  86. Sigmund K (2007) Punish or perish? Retaliation and collaboration among humans. Trends Ecol Evol 22:593–600CrossRefGoogle Scholar
  87. Sinervo B, Lively CM (1996) The rock-paper-scissors game and the evolution of alternative male strategies. Nature 380:240–243CrossRefGoogle Scholar
  88. Sirot E (2012) Negotiation may lead selfish individuals to cooperate: the example of the collective vigilance game. Proc R Soc B 279:2862–2867CrossRefGoogle Scholar
  89. Spencer H (1864) The Principles of biology. Williams and Norgate, LondonGoogle Scholar
  90. Stephens DW, Krebs JR (1986) Foraging theory. Princeton University Press, PrincetonGoogle Scholar
  91. Takeuchi Y (1996) Global dynamical properties of Lotka-Volterra systems. World Scientific Publishing Company, SingaporeCrossRefGoogle Scholar
  92. Taylor PD, Jonker LB (1978) Evolutionary stable strategies and game dynamics. Math Biosci 40:145–156MathSciNetCrossRefGoogle Scholar
  93. Vincent TL, Brown JS (2005) Evolutionary game theory, natural selection, and Darwinian dynamics. Cambridge University Press, Cambridge, UKCrossRefGoogle Scholar
  94. Webb JN, Houston AI, McNamara JM, Székely T (1999) Multiple patterns of parental care. Anim Behav 58:983–993CrossRefGoogle Scholar
  95. Wynne-Edwards VC (1962) Animal dispersion in relation to social behaviour. Oliver & Boyd, EdinburghGoogle Scholar
  96. Xu F, Cressman R, Křivan V (2014) Evolution of mobility in predator-prey systems. Discret Contin Dyn Syst Ser B 19:3397–3432MathSciNetCrossRefGoogle Scholar
  97. Zahavi A (1975) Mate selection–selection for a handicap. J Theor Biol 53:205–214CrossRefGoogle Scholar
  98. Zahavi A (1977) Cost of honesty–further remarks on handicap principle. J Theor Biol 67:603–605CrossRefGoogle Scholar

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Mathematics, City, University of London, London, UK
  2. Biology Center, Czech Academy of Sciences, České Budějovice, Czech Republic
  3. Faculty of Science, University of South Bohemia, České Budějovice, Czech Republic