
When less information is good enough: experiments with global stag hunt games


Abstract

There is mixed evidence on whether subjects coordinate on the efficient equilibrium in experimental stag hunt games under complete information. A design that generates an anomalously high level of coordination, Rankin et al. (Games Econ Behav 32(2):315–337, 2000), varies payoffs each period in repeated play rather than holding them constant. These payoff “perturbations” are eerily similar to those used to motivate the theory of global games, except that the theory operates under incomplete information. Interestingly, that equilibrium selection concept is known to coincide with risk dominance, rather than payoff dominance. Thus, in theory, a small change in experimental design should produce a different equilibrium outcome. We examine this prediction in two treatments. In one, we use public signals to match the design of Rankin et al. (2000); in the other, we use private signals to match the canonical example of global games theory. We find little difference between treatments: in both cases, subject play approaches payoff dominance. Our literature review reveals that this result may have more to do with the idiosyncrasies of our complete information framework than with the superiority of payoff dominance as an equilibrium selection principle.


Notes

  1. It should be noted that our experiment utilizes a linear projection of this game in which payoffs are mapped from [0, 1] to [100, 500], while the experiments of Rankin et al. (2000) use w and \(w+370\) for \(w\in [0,50]\), with the value of w changed each period. Carlsson and van Damme (1993a) use this exact game, while Carlsson and van Damme (1993b) scale it by a factor of 4. The scales are theoretically equivalent and produce the same results. We normalize and use the [0, 1] scale for ease of reading and interpretation.

  2. Another minor but important difference is that increasing noise in the currency attack game changes the odds of success as well as the expected payoff. In global stag hunt games, by contrast, the unique equilibrium, which coincides with the risk-dominant threshold, is the same for all arbitrarily small noise levels, because the expected payoff does not change with the noise level.

  3. Szkup and Trevino (2015) study a similar game but with a continuum of players.

  4. An exception is Heinemann et al. (2009), who compare the results of the currency attack game with known lotteries to measure the level of strategic uncertainty perceived by subjects within the game.

  5. For example, if \(n=8\) and \(k=3\), then \(\pi (3,8)=\frac{2}{7}\).

  6. There are also mixed strategy Nash equilibria but no other pure Nash equilibria.

  7. Selecting A is not always the payoff-dominant equilibrium if Q can be larger than 1; when \(Q>1\), B strictly dominates A. Payoff dominance is therefore a threshold strategy with \(Q^{*}=1\).

  8. Our complete information treatment differs from Rankin et al. (2000) in three important ways. First, subjects are matched against everyone in the cohort each period and receive a payoff equal to the mean of the matches. Second, Q is allowed to be smaller than 0 and larger than 1, as required by global games theory to obtain a unique equilibrium under incomplete information: in order to apply the iterated dominance argument, the initial subclass of games must be large enough to contain games with different equilibrium structures. Lastly, action labels are fixed (a risky choice is always labeled A and a safe choice is always labeled B) and subjects play the games for 100 periods. After each period, each subject receives feedback on the actual value of Q, the number of subjects in the cohort who chose A and B, and his/her payoff.

  9. One may consider comparing behavior under incomplete information between two matching protocols: random matching and mean matching. Under complete information, the results from Rankin et al. (2000) (random matching) and Stahl and Van Huyck (2002) (mean matching) show no difference.

  10. We omit cohort 10 because it has a perfect threshold. In the last 25 periods, all subjects selected A when \(Q\le 486\) and B when \(Q\ge 502\). There was no realized value of Q between 486 and 502; we used the midpoint (494) as its threshold.

  11. In the last 50 periods, all eight subjects in that cohort played A when Q was in the interval [0, 500) and played B when Q was in the interval [500, 600] in every period. Another cohort under the complete information treatment produced an almost perfect step function, but that cohort switched strategies at \(Q=400\).

  12. A similar logistic regression (not shown) using a time trend as in Eq. (3) does not change the qualitative threshold results.

  13. If a subject used an exact threshold, that is, he always played A when \(Q_{i}\le w\) and played B when \(Q_{i}\ge z\), and there was no \(Q_{i}\) between w and z in any period, we use \(\frac{w+z}{2}\) as the threshold. For example, if a subject always played A when \(Q_{i}\le 497\) and played B when \(Q_{i}\ge 502\), that subject’s threshold is 499.5. There are seven and eight exact threshold players under complete and incomplete information, respectively. If there are some errors, that is, a subject did not always play A when \(Q_{i}\le w\) or B when \(Q_{i}\ge z\) for any values of w and z, we select the w and z that yield the fewest errors. There are eight and seven players under complete and incomplete information, respectively, for whom we use this method. Because of their random behavior, we also exclude two players under complete information whose estimated thresholds from the logit model are above 600.

  14. The largest and smallest individual thresholds under incomplete information are 511 and 289, while under complete information they are 504 and 344. Similar to the cohort thresholds, the dispersion of estimated thresholds across individuals is higher in the incomplete information treatment. The standard deviations of thresholds are 51.1 and 37.6 for incomplete and complete information, respectively.

  15. For example, a subject wrote, “I chose B when the odds were that Q was greater than 500. I used the estimate to decide this.”

  16. The most frequent exact threshold strategy was to choose A if Q is less than 500 and B otherwise. It was chosen by nineteen percent of the subjects. Other popular choices were thresholds at 450, 400, and 440 to 445 in order of decreasing popularity. Self-reported and estimated thresholds in our analysis for each subject were similar.

  17. Subjects using a fuzzy threshold seem to be engaged in the fast and slow thinking popularized by Kahneman (2011). Schotter and Trevino (2017) exploit the difference in measured response time to accurately predict observed individual thresholds in a global game. We could also view these subjects as using pure strategies for low and high signals, and mixed strategies for signals between w and z.

  18. Reading the debriefing answers from the cohort that perfectly coordinated on the payoff-dominant threshold of 500, cohort 10, we are now convinced that subjects initially started with a wishful thinking strategy rather than with any equilibrium concept such as payoff dominance. The answers from subjects in the cohorts with significantly lower thresholds, cohorts 1 and 2, revealed that many subjects in these cohorts started by using low thresholds. Once the other subjects observed that they could not coordinate on high thresholds, they had to switch to lower thresholds. In contrast, more subjects in the other cohorts started from high thresholds.

References

  • Al-Ubaydli, O., Jones, G., & Weel, J. (2013). Patience, cognitive skill, and coordination in the repeated stag hunt. Journal of Neuroscience, Psychology, and Economics, 6(2), 71.

  • Battalio, R., Samuelson, L., & Van Huyck, J. (2001). Optimization incentives and coordination failure in laboratory stag hunt games. Econometrica, 69(3), 749–764.

  • Brindisi, F., Celen, B., & Hyndman, K. (2014). The effect of endogenous timing on coordination under asymmetric information: An experimental study. Games and Economic Behavior, 86, 264–281.

  • Büyükboyacı, M. (2014). Risk attitudes and the stag-hunt game. Economics Letters, 124(3), 323–325.

  • Cabrales, A., Nagel, R., & Armenter, R. (2007). Equilibrium selection through incomplete information in coordination games: An experimental study. Experimental Economics, 10, 221–234.

  • Carlsson, H., & van Damme, E. (1993a). Equilibrium selection in stag hunt games. In K. Binmore, A. Kirman, & P. Tani (Eds.), Frontiers of Game Theory (pp. 237–253). Cambridge: The MIT Press.

  • Carlsson, H., & van Damme, E. (1993b). Global games and equilibrium selection. Econometrica, 61(5), 989–1018.

  • Clark, K., Kay, S., & Sefton, M. (2001). When are Nash equilibria self-enforcing? An experimental analysis. International Journal of Game Theory, 29(4), 495–515.

  • Clark, K., & Sefton, M. (2001). Repetition and signalling: Experimental evidence from games with efficient equilibria. Economics Letters, 70(3), 357–362.

  • Cooper, R., DeJong, D., Forsythe, R., & Ross, T. (1990). Selection criteria in coordination games: Some experimental results. The American Economic Review, 80(1), 218–233.

  • Cooper, R., DeJong, D., Forsythe, R., & Ross, T. (1992). Communication in coordination games. The Quarterly Journal of Economics, 107(2), 739–771.

  • Cornand, C. (2006). Speculative attacks and informational structure: An experimental study. Review of International Economics, 14(5), 797–817.

  • Crawford, V., Costa-Gomes, M., & Iriberri, N. (2013). Structural models of nonequilibrium strategic thinking: Theory, evidence, and applications. Journal of Economic Literature, 51(1), 5–62.

  • Devetag, G., & Ortmann, A. (2007). When and why? A critical survey on coordination failure in the laboratory. Experimental Economics, 10(3), 331–344.

  • Dubois, D., Willinger, M., & Van Nguyen, P. (2012). Optimization incentive and relative riskiness in experimental stag-hunt games. International Journal of Game Theory, 41(2), 369–380.

  • Duffy, J., & Feltovich, N. (2002). Do actions speak louder than words? An experimental comparison of observation and cheap talk. Games and Economic Behavior, 39(1), 1–27.

  • Duffy, J., & Feltovich, N. (2006). Words, deeds, and lies: Strategic behaviour in games with multiple signals. The Review of Economic Studies, 73(3), 669–688.

  • Duffy, J., & Ochs, J. (2012). Equilibrium selection in static and dynamic entry games. Games and Economic Behavior, 76, 97–116.

  • Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.

  • Greiner, B. (2015). Subject pool recruitment procedures: Organizing experiments with ORSEE. Journal of the Economic Science Association, 1(1), 114–125.

  • Harsanyi, J. C., & Selten, R. (1988). A general theory of equilibrium selection in games. Cambridge: The MIT Press.

  • Heinemann, F., Nagel, R., & Ockenfels, P. (2004). The theory of global games on test: Experimental analysis of coordination games with public and private information. Econometrica, 72(5), 1583–1599.

  • Heinemann, F., Nagel, R., & Ockenfels, P. (2009). Measuring strategic uncertainty in coordination games. The Review of Economic Studies, 76(1), 181–221.

  • Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

  • Kawagoe, T., & Ui, T. (2010). Global games and ambiguous information: An experimental study. SSRN Paper No. 1601683, Social Science Research Network.

  • Morris, S., & Shin, H. (1998). Unique equilibrium in a model of self-fulfilling currency attacks. American Economic Review, 88, 587–597.

  • Morris, S., & Shin, H. (2003). Global games: Theory and applications. In M. Dewatripont, L. Hansen, & S. Turnovsky (Eds.), Advances in economics and econometrics: Theory and applications. Eighth world congress (pp. 56–114). Cambridge: Cambridge University Press.

  • Rankin, F., Van Huyck, J., & Battalio, R. (2000). Strategic similarity and emergent conventions: Evidence from similar stag hunt games. Games and Economic Behavior, 32(2), 315–337.

  • Schmidt, D., Shupp, R., Walker, J. M., & Ostrom, E. (2003). Playing safe in coordination games: The roles of risk dominance, payoff dominance, and history of play. Games and Economic Behavior, 42(2), 281–299.

  • Schotter, A., & Trevino, I. (2017). Is response time predictive of choice? An experimental study of threshold strategies. Mimeo, UCSD.

  • Shurchkov, O. (2013). Coordination and learning in dynamic global games: Experimental evidence. Experimental Economics, 16(3), 313–334.

  • Shurchkov, O. (2016). Public announcements and coordination in dynamic global games: Experimental evidence. Journal of Behavioral and Experimental Economics, 61, 20–30.

  • Stahl, D., & Van Huyck, J. (2002). Learning conditional behavior in similar stag hunt games. Mimeo, Texas A&M University.

  • Szkup, M., & Trevino, I. (2015). Information acquisition in global games of regime change. Journal of Economic Theory, 160, 387–428.

  • Szkup, M., & Trevino, I. (2017). Sentiments, strategic uncertainty, and information structures in coordination games. Mimeo, UCSD.

  • Van Huyck, J. B., Battalio, R. C., & Beil, R. O. (1990). Tacit coordination games, strategic uncertainty, and coordination failure. The American Economic Review, 80(1), 234–248.

  • Van Huyck, J. B., Battalio, R. C., & Beil, R. O. (1991). Strategic uncertainty, equilibrium selection, and coordination failure in average opinion games. The Quarterly Journal of Economics, 106(3), 885–910.

  • Van Huyck, J., Cook, J., & Battalio, R. (1997). Adaptive behavior and coordination failure. Journal of Economic Behavior and Organization, 32, 483–503.

Acknowledgements

Financial support was provided by the Texas A&M Humanities and Social Science Enhancement of Research Capacity Program. The research was conducted under TAMU IRB approval IRB2012-0664. We thank Ravi Hanumara for his help on z-Tree programming and the Economic Research Laboratory group at Texas A&M for testing the program. We also thank Yan Chen, Catherine C. Eckel, Daniel Fragiadakis, Nikos Nikiforakis, two anonymous referees, and the seminar participants at NYU-Abu Dhabi, American University of Sharjah, Thammasat University, Texas A&M University, the workshop in honor of John Van Huyck, the 2015 North American Economic Science Association meetings, the 2014 Texas Experimental Association Symposium, the 2013 North American Economic Science Association meetings, and the 2013 Southern Economic Association Conference for valuable comments and discussion.

Author information

Corresponding author

Correspondence to Ajalavat Viriyavipart.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 223 KB)

Mathematical appendix

1.1 Proof of Proposition 1

According to Harsanyi and Selten, the players’ prior belief \(S_{i}\) about player i’s strategy should coincide with the prediction of an outside observer who reasons in the following way about the game:

  (i) Player i believes that his opponents will either all choose A or all choose B; he assigns a subjective probability \(z_{i}\) to the first event and \(1-z_{i}\) to the second.

  (ii) Whatever the value of \(z_{i}\), player i will choose a best response to his beliefs.

  (iii) The beliefs (i.e., the \(z_{i}\)) of different players are independent and uniformly distributed on [0, 1].

From (i) and (ii), the outside observer concludes that player i chooses A if \(z_{i}>Q\) and B if \(z_{i}<Q\). Hence, using (iii), the outside observer forecasts player i’s strategy as \(S_{i}=(1-Q)A+QB\), with the different \(S_{i}\) being independent. Harsanyi and Selten assume that the mixed strategy vector \(S=(S_{1},\ldots ,S_{n})\) describes the players’ prior expectations in the game. Since S is not a Nash equilibrium, this expectation is not self-fulfilling and thus has to be adapted. In a stag hunt game the situation is symmetric: either all players have A as the unique best response against S, in which case all-A is the distinguished equilibrium, or they all have B as the unique best response against S, in which case all-B is the distinguished equilibrium.

We can write player i’s expected payoff associated with A when each of the other players chooses A with probability t as:

$$\begin{aligned} A_{n}^{\pi }(t)=\sum _{k=1}^{n}\frac{\left( n-1\right) !}{(k-1)!(n-k)!}t^{k-1}(1-t)^{n-k}\pi \left( k,n\right) , \end{aligned}$$

where \(\pi \left( k,n\right)\) is the payoff of selecting A when k players including player i select A from a total of n players.

If the players’ prior S is \((1-Q)A+QB\), then the expected payoff associated with A is \(A_{n}^{\pi }(1-Q)\). Each player’s best response against S is A if \(A_{n}^{\pi }(1-Q)>Q\) and B if \(A_{n}^{\pi }(1-Q)<Q\). When \(\pi \left( k,n\right) =\frac{k-1}{n-1}\), we have \(A_{n}^{\pi }(1-Q)=1-Q\), so \(Q=A_{n}^{\pi }(1-Q)\) exactly when \(Q=0.5\); we can therefore conclude that A risk dominates B when \(Q<0.5\) and B risk dominates A when \(Q>0.5\).
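As a quick numerical sanity check on this threshold (our own sketch, not part of the original analysis; the names pi_linear and expected_payoff_A are illustrative), the following Python snippet evaluates \(A_{n}^{\pi }(1-Q)\) and compares it with Q for several group sizes:

from math import comb

def pi_linear(k, n):
    # Payoff from choosing A when k of the n players (including player i) choose A.
    return (k - 1) / (n - 1)

def expected_payoff_A(t, n, pi=pi_linear):
    # A_n^pi(t): expected payoff from A when each other player independently
    # chooses A with probability t.
    return sum(comb(n - 1, k - 1) * t ** (k - 1) * (1 - t) ** (n - k) * pi(k, n)
               for k in range(1, n + 1))

for n in (2, 4, 8):
    for Q in (0.3, 0.5, 0.7):
        payoff_A = expected_payoff_A(1 - Q, n)  # equals 1 - Q under pi_linear
        verdict = ("A risk dominates" if payoff_A > Q
                   else "B risk dominates" if payoff_A < Q else "indifferent")
        print(n, Q, round(payoff_A, 6), verdict)

The crossing point at \(Q=0.5\) is independent of n, consistent with the statement above.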

1.2 Proof of Proposition 2

Consider a 2×2 stag hunt game with incomplete information as shown in Table 1, where Q is uniform on some interval [a, b] with \(a<0\) and \(b>1\). Each player receives a signal \(Q_{i}=Q+\epsilon _{i}\) that provides an unbiased estimate of Q. The \(\epsilon _{i}\) is uniformly distributed within \([-E,E]\), where \(E \le -\frac{a}{2}\) and \(E\le \frac{b-1}{2}\). The signals \(Q_{1}\) and \(Q_{2}\) are independent. After observing the signals, the players choose actions simultaneously and receive payoffs corresponding to the game in Table 1 with the actual value of Q. It is understood that the structure of the class of games and the joint distribution of Q, \(Q_{1}\), and \(Q_{2}\) are common knowledge.

It is easily seen that player i’s posterior of Q will be uniform on \([Q_{i}-E,Q_{i}+E]\) if he observes \(Q_{i}\in [a+E,b-E]\), so his conditionally expected payoff from choosing B will simply be \(Q_{i}\). Moreover, for \(Q_{i}\in [a+E,b-E]\), the conditional distribution of the opponent’s observation \(Q_{j}\) will be symmetric around \(Q_{i}\) and have support \([Q_{i}-2E,Q_{i}+2E]\). Hence, the probability that \(Q_{j}<Q_{i}\) is equal to the probability that \(Q_{j}>Q_{i}\), which is 0.5.
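The symmetry claim can be made concrete with a short Monte Carlo sketch (ours; the values \(a=-0.2\), \(b=1.2\), \(E=0.1\), and the interior signal 0.6 are illustrative choices satisfying the restrictions above): conditional on an interior signal \(Q_{i}\), the opponent’s signal falls below \(Q_{i}\) about half the time.

import random

a, b, E = -0.2, 1.2, 0.1      # assumed prior support [a, b] and noise bound E
target, tol = 0.6, 0.005      # condition on signals Q_i near an interior value in [a+E, b-E]

random.seed(0)
below = total = 0
for _ in range(2_000_000):
    Q = random.uniform(a, b)            # state drawn from the uniform prior
    Qi = Q + random.uniform(-E, E)      # player i's noisy signal
    if abs(Qi - target) < tol:
        Qj = Q + random.uniform(-E, E)  # opponent's independent signal of the same state
        total += 1
        below += Qj < Qi
print(below / total)                    # approximately 0.5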

Now suppose player i observes \(Q_{i}<0\); his conditionally expected payoff from choosing B is negative and smaller than the minimum payoff from choosing A, which is 0. Hence A conditionally strictly dominates B for player i when he observes \(Q_{i}<0\). It should be clear that iterated dominance arguments allow us to go further. For instance, if player j is restricted to playing A when observing \(Q_{j}<0\), then player i, observing \(Q_{i}=0\), must assign probability of at least 0.5 to player j selecting A. Consequently, player i’s conditionally expected payoff from choosing A will be at least 0.5, so choosing B (which yields 0) can be excluded by iterated dominance for \(Q_{i}=0\).

Let \(Q_{i}^{*}\) be the smallest observation for which A cannot be established by iterated dominance. By symmetry, obviously, \(Q_{1}^{*}=Q_{2}^{*}=Q^{*}\). Iterated dominance requires player i to play A for any \(Q_{i}<Q^{*}\), so if player j observes \(Q^{*}\) he will assign at least probability 0.5 to player i’s choosing A and, thus, player j’s expected payoff from choosing A will be at least 0.5. Since player j’s expected payoff from choosing B equals \(Q^{*}\), we must have \(Q^{*}\ge 0.5\), for otherwise iterated dominance would require player j to play A when he observes \(Q^{*}\).

We can proceed in the same way for large values of Q. B is dominant for each player if \(Q_{i}>1\) because the expected payoff from choosing B then exceeds the maximum payoff from choosing A. Let \(Q^{**}\) be the largest value for which B cannot be established by iterated dominance. If player j observes \(Q_{j}=Q^{**}\), his expected payoff from choosing A will be at most 0.5 given that player i conforms to iterated dominance. Since \(Q^{**}\) equals player j’s expected payoff from choosing B, we can conclude that \(Q^{**}\le 0.5\). Combining these two findings with the fact that \(Q^{*}\le Q^{**}\), we get \(Q^{*}=Q^{**}=0.5\).
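A compact way to restate the cutoff condition (this summary is ours, in the notation above): at a common threshold c, a player observing \(Q_{i}=c\) believes the opponent’s signal falls below c with probability 0.5, so indifference between A and B requires

$$\begin{aligned} \Pr (Q_{j}<c\mid Q_{i}=c)\cdot 1+\Pr (Q_{j}\ge c\mid Q_{i}=c)\cdot 0=0.5=\mathrm {E}[Q\mid Q_{i}=c]=c, \end{aligned}$$

which pins down \(c=Q^{*}=Q^{**}=0.5\).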

When \(n>2\), player i’s payoff is the average payoff from the matches with the other \(n-1\) players, or \(\pi (k,n)=\frac{k-1}{n-1}\). Since the game is symmetric, each player has the same \(Q^{*}\). For each player \(j\ne i\), the probability that \(Q_{j}<Q_{i}\) is equal to the probability that \(Q_{j}>Q_{i}\), which is 0.5. We can use the same argument as in the case \(n=2\) to conclude that A strictly dominates B at low values of \(Q_{i}\) and B strictly dominates A at high values of \(Q_{i}\). We need to find the critical value \(Q^{*}\) at which the expected payoffs from selecting A and B are the same when player i observes \(Q^{*}\). When subject i observes \(Q^{*}\), his expected payoff from A is \(\sum _{k=1}^{n}\frac{\pi \left( k,n\right) \times \frac{\left( n-1\right) !}{(k-1)!(n-k)!}}{2^{(n-1)}}\). This expression equals 0.5, the same as the expected payoff in the standard 2×2 game. That is, \(Q^{*}\) does not change and equals 0.5. Using the mean matching protocol preserves the expected payoff, which results in the same \(Q^{*}\) of 0.5.
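The closed-form claim in the last step is easy to verify directly; the short check below (ours) evaluates the sum for several group sizes and returns 0.5 in each case.

from math import comb

def expected_payoff_at_cutoff(n):
    # Expected payoff from A at the cutoff signal when each of the other n-1 players
    # independently chooses A with probability 0.5 and pi(k, n) = (k-1)/(n-1).
    return sum((k - 1) / (n - 1) * comb(n - 1, k - 1)
               for k in range(1, n + 1)) / 2 ** (n - 1)

for n in (2, 3, 8, 20):
    print(n, expected_payoff_at_cutoff(n))  # 0.5 for every n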


Cite this article

Van Huyck, J., Viriyavipart, A. & Brown, A.L. When less information is good enough: experiments with global stag hunt games. Exp Econ 21, 527–548 (2018). https://doi.org/10.1007/s10683-018-9577-0

