
Knowledge, Belief and Counterfactual Reasoning in Games

  • Chapter
Readings in Formal Epistemology

Part of the book series: Springer Graduate Texts in Philosophy (SGTP, volume 1)

Abstract

Deliberation about what to do in any context requires reasoning about what will or would happen in various alternative situations, including situations that the agent knows will never in fact be realized. In contexts that involve two or more agents who have to take account of each other’s deliberations, the counterfactual reasoning may become quite complex. When I deliberate, I have to consider not only what the causal effects would be of alternative choices that I might make, but also what other agents might believe about the potential effects of my choices, and how their alternative possible actions might affect my beliefs. Counterfactual possibilities are implicit in the models that game theorists and decision theorists have developed – in the alternative branches in the trees that model extensive form games and the different cells of the matrices of strategic form representations – but much of the reasoning about those possibilities remains in the informal commentary on and motivation for the models developed. Puzzlement is sometimes expressed by game theorists about the relevance of what happens in a game ‘off the equilibrium path’: of what would happen if what is (according to the theory) both true and known by the players to be true were instead false. My aim in this paper is to make some suggestions for clarifying some of the concepts involved in counterfactual reasoning in strategic contexts, both the reasoning of the rational agents being modeled, and the reasoning of the theorist who is doing the modeling, and to bring together some ideas and technical tools developed by philosophers and logicians that I think might be relevant to the analysis of strategic reasoning, and more generally to the conceptual foundations of game theory.

Notes

  1.

    Ernest Adams (1970) first pointed to the contrast illustrated by this pair of conditionals. The particular example is Jonathan Bennett’s.

  2.

    The relation between causal and evidential reasoning is the central concern in the development of causal decision theory. See Gibbard and Harper (1981), Skyrms (1982) and Lewis (1980).

  3.

    That is, for all players i (x)(∃y)xRiy, (x)(y)(z)((xRiy & yRiz) → xRiz), and (x)(y)(z)((xRiy & xRiz) → yRiz).
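
The three conditions are seriality, transitivity, and euclideanness, and on a finite frame each can be checked mechanically. Here is a minimal sketch in Python; the worlds and the relation R below are invented for illustration:

```python
# Check seriality, transitivity, and euclideanness of a player's
# doxastic accessibility relation R over a finite set of worlds.

def is_serial(worlds, R):
    # (x)(∃y) xRy: every world accesses at least one world
    return all(any((x, y) in R for y in worlds) for x in worlds)

def is_transitive(worlds, R):
    # (x)(y)(z)((xRy & yRz) → xRz)
    return all((x, z) in R for (x, y) in R for (u, z) in R if u == y)

def is_euclidean(worlds, R):
    # (x)(y)(z)((xRy & xRz) → yRz)
    return all((y, z) in R for (x, y) in R for (u, z) in R if u == x)

worlds = {"w1", "w2", "w3"}
R = {("w1", "w1"), ("w2", "w1"), ("w3", "w1")}  # every world accesses w1
print(is_serial(worlds, R), is_transitive(worlds, R), is_euclidean(worlds, R))
# → True True True
```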

  4.

    It has been suggested that there is a substantive, and implausible, assumption built into the way that degrees of belief are modeled: namely, that any two worlds in which a player has the same full beliefs he also has the same partial beliefs. But this assumption is a tautological consequence of the introspection assumption, which implies that a player fully believes that he himself has the partial beliefs that he in fact has. It does follow from the introspection assumptions that player j cannot be uncertain about player i’s partial beliefs while being certain about all of i’s full beliefs. But that is just because the totality of i’s full beliefs includes his beliefs about his own partial beliefs, and by the introspection assumption, i’s beliefs about his own partial beliefs are complete and correct. Nothing, however, prevents there being a model in which there are different worlds in which player i has full beliefs about objective facts that are exactly the same, even though the degrees of belief about such facts are different. This situation will be modeled by disjoint but isomorphic sets of possible worlds. In such a case, another player j might be certain about player i’s full beliefs about everything except i’s own partial beliefs, while being uncertain about i’s partial beliefs.

  5.

    More precisely, for any given model M = <W, a, <Si, Ri, Pi>i∈N>, not necessarily meeting the closure condition, define a new model M′ as follows: W′ = W × C; a′ = <a, S(a)>; for all w ∈ W and c ∈ C, S′(<w,c>) = c; for all x, y ∈ W and c, d ∈ C, <x,c> R′i <y,d> if the following three conditions are met: (i) xRiy, (ii) ci = di, and (iii) for all j ≠ i, Sj(y) = dj; and P′i(<x,c>) = Pi(x). This model will be finite if the original one was, and will satisfy the closure condition.

  6.

    In these and other definitions, a variable for a strategy or profile, enclosed in brackets, denotes the proposition that the strategy or profile is realized. So, for example, if e ∈ C−i (if e is a strategy profile for players other than player i) then [e] = {x ∈ W:Sj(x) = ej for all j ≠ i}.
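
As a sketch of this bracket notation, [e] can be computed by filtering worlds on the strategy functions. The two-world example and the strategy function S below are invented for illustration, not taken from the text:

```python
# The proposition [e], for e a strategy profile of the players other
# than i: the set of worlds where each player j ≠ i plays e_j.

def proposition(e, worlds, S, i):
    # [e] = {x ∈ W : S_j(x) = e_j for all j ≠ i}
    return {x for x in worlds if all(S[j](x) == e[j] for j in e if j != i)}

worlds = {"w1", "w2"}
S = {2: lambda x: {"w1": "L", "w2": "R"}[x]}  # player 2's strategy in each world
e = {2: "L"}                                  # profile for players other than 1
print(proposition(e, worlds, S, i=1))  # → {'w1'}
```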

  7.

    This model theoretic definition of rationalizability coincides with the standard concept defined by Bernheim (1984) and Pearce (1984) only in two person games. In the general case, it coincides with the weaker concept, correlated rationalizability. Model theoretic conditions appropriate for the stronger definition would require that players’ beliefs about each other satisfy a constraint that (in games with more than two players) goes beyond coherence: specifically, it is required that no player can believe that any information about another player’s strategy choices would be evidentially relevant to the choices of a different player. I think this constraint could be motivated, in general, only if one confused causal with evidential reasoning. The structure of the game ensures that players’ strategy choices are made independently: if player one had chosen differently, it could not have influenced the choice of player two. But this assumption of causal independence has no consequences about the evidential relevance of information about player one’s choice for the beliefs that a third party might rationally have about player two. (Brian Skyrms (1992, pp. 147–8) makes this point.)

  8.

    This characterization theorem is proved in Stalnaker (1994).

  9.

    The earliest formulation, so far as I know, of what has come to be called the AGM belief revision theory was given by William Harper (1975). For a general survey of belief revision theory, see Gärdenfors (1988). Other important papers include Alchourrón and Makinson (1982), Alchourrón et al. (1985), Grove (1988), Makinson (1985) and Spohn (1987).

  10.

    There is this difference between the conditional belief state Bi,x(ɸ) and the posterior belief state that would actually result if the agent were in fact to learn that ɸ: if he were to learn that ɸ, he would believe that he then believed that ɸ, whereas in our static models, there is no representation of what the agent comes to believe in the different possible worlds at some later time. But the potential posterior belief states and the conditional belief states as defined do not differ with respect to any information represented in the model. In particular, the conditional and posterior belief states do not differ with respect to the agent’s beliefs about his prior beliefs.

  11.

    The work done by Q is to rank the worlds incompatible with prior beliefs; it does not distinguish between worlds compatible with prior beliefs – they are ranked together at the top of the ordering determined by Q. So Q encodes the information about what the prior beliefs are – that is why R becomes redundant. A model with both Q and R relations would specify the prior belief sets in two ways. Condition (q3) is the requirement that the two specifications yield the same results.

    Here is a simple abstract example, just to illustrate the structure: suppose there are just three possible worlds, x, y, and z, that are, in those worlds, subjectively indistinguishable to player i. Suppose {x} is the set of worlds compatible with i's beliefs in x, y, and z, which is to say that the R relation is the following set: {<x,x>,<y,x>,<z,x>}. Suppose further that y has priority over z, which is to say that if i were to learn the proposition {y,z}, his posterior or conditional belief state would be {y}. In other words, the Q relation is the following set: {<x,x>,<y,x>,<z,x>,<y,y>,<z,y>,<z,z>}.
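
This example can be checked mechanically. The sketch below assumes the reading on which wQv holds when v is ranked at least as high as w, so that the conditional belief state for a learned proposition φ is the set of Q-maximal worlds in φ:

```python
# Conditional belief via the plausibility ordering Q: on learning φ,
# the revised belief state is the set of Q-maximal worlds in φ.

def revise(phi, Q):
    # B(φ) = {v ∈ φ : uQv for all u ∈ φ}
    return {v for v in phi if all((u, v) in Q for u in phi)}

Q = {("x", "x"), ("y", "x"), ("z", "x"),
     ("y", "y"), ("z", "y"), ("z", "z")}
print(revise({"x", "y", "z"}, Q))  # prior belief state → {'x'}
print(revise({"y", "z"}, Q))       # after learning {y,z} → {'y'}
```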

  12.

    These extended probability functions are equivalent to lexicographic probability systems. See Blume et al. (1991a, b) for an axiomatic treatment of lexicographic probability in the context of decision theory and game theory. These papers discuss a concept equivalent to the one defined below that I am calling perfect rationality.

    I don’t want to suggest that this is the only way of combining the AGM belief revision structure with probabilities. For a very different kind of theory, see Mongin (1994). In this construction, probabilities are nonadditive, and are used to represent the belief revision structure, rather than to supplement it as in the models I have defined. I don’t think the central result in Mongin (1994) (that the same belief revision structure that I am using is in a sense equivalent to a nonadditive, and so non-Bayesian, probability conception of prior belief) conflicts with, or presents a problem for, the way I have defined extended probability functions: the probability numbers just mean different things in the two constructions.
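
One way to picture an extended probability function, given the stated equivalence with lexicographic probability systems, is as a finite sequence of ordinary measures conditioned lexicographically: to condition on an event, use the first measure in the sequence that gives it positive probability. The sketch below is only an illustration of that idea, with invented numbers, not the construction in the text:

```python
# Lexicographic conditioning: given a list of probability measures
# (each a dict from worlds to weights summing to 1), condition on E
# using the first measure that assigns E positive probability.

def lex_condition(lps, E):
    for p in lps:
        pE = sum(p.get(w, 0.0) for w in E)
        if pE > 0:
            return {w: p[w] / pE for w in E if p.get(w, 0.0) > 0}
    raise ValueError("event has probability zero at every level")

# Prior: all weight on x; the fallback level splits y and z evenly.
lps = [{"x": 1.0}, {"y": 0.5, "z": 0.5}]
print(lex_condition(lps, {"x", "y"}))  # → {'x': 1.0}
print(lex_condition(lps, {"y", "z"}))  # fallback level: y and z at 0.5 each
```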

  13.

    For example, Fudenberg and Tirole (1992) make the following remark about the relation between game theory and decision theory: ‘Games and decisions differ in one key respect: probability-0 events are both exogenous and irrelevant in decision problems, whereas what would happen if a player played differently in a game is both important and endogenously determined’.

    To the extent that this is true, it seems to me an accident of the way the contrasting theories are formulated, and to have no basis in any difference in the phenomena that the theories are about.

  14.

    The proof of this theorem, and others stated without proof in this paper, are available from the author. The argument is a variation of the proof of the characterization theorem for simple (correlated) rationalizability given in Stalnaker (1994). See Dekel and Fudenberg (1990) for justification of the same solution concept in terms of different conditions that involve perturbations of the payoffs.

    I originally thought that the set of strategies picked out by this concept of perfect rationalizability coincided, in the case of two person games, with perfect rationalizability as defined by Bernheim (1984), but Pierpaolo Battigalli pointed out to me that Bernheim’s concept is stronger.

  15.

    Most notably, Robert Aumann’s important and influential result on the impossibility of agreeing to disagree, and subsequent variations on it all depend on the partition structure, which requires the identification of knowledge with belief. See Aumann (1976) and Bacharach (1985). The initial result is striking, but perhaps slightly less striking when one recognizes that the assumption that there is no disagreement is implicitly a premise of the argument.

  16.

    If one were to add to the models we have defined the assumption that the R relation is reflexive, and so (given the other assumptions) is an equivalence relation, the result would be that the three relations, Ri, Qi, and ≈ i, would all collapse into one. There would be no room for belief revision, since it would be assumed that no one had a belief that could be revised. Intuitively, the assumption would be that it is a necessary truth that all players are Cartesian skeptics: they have no probability-one beliefs about anything except necessary truths and facts about their own states of mind. This assumption is not compatible with belief that another player is rational, unless it is assumed that it is a necessary truth that the player is rational.

  17.

    The algorithm, which iteratively eliminates profiles rather than strategies, is given in Stalnaker (1994), and it is also proved there that the set of strategies picked out by this algorithm is characterized by the class of models meeting the model theoretic condition.

  18.

    The modal logic for the knowledge operators in a language that was interpreted relative to this semantic structure would be S4.3. This is the logic characterized by the class of Kripke models in which the accessibility relation is transitive, reflexive, and weakly connected (if xQiy and xQiz, then either yQiz or zQiy). The logic of common knowledge would be S4.
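
Weak connectedness, like the other frame conditions, can be verified directly on a finite relation. A sketch, with an invented linear plausibility order:

```python
# Weak connectedness (the S4.3 condition): if xQy and xQz,
# then yQz or zQy.

def weakly_connected(Q):
    return all((y, z) in Q or (z, y) in Q
               for (x, y) in Q for (u, z) in Q if u == x)

# A linear order x ≤ y ≤ z, as a reflexive-transitive relation:
Q = {("x", "x"), ("x", "y"), ("x", "z"),
     ("y", "y"), ("y", "z"), ("z", "z")}
print(weakly_connected(Q))  # → True
```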

  19.

    Each theorem claims that any strategy that is realized in a model of one kind is also realized in a model that meets more restrictive conditions. In each case the proof is given by showing how to modify a model meeting the weaker conditions so that it also meets the more restrictive conditions.

  20.

    Although in this paper we have considered only static games, it is a straightforward matter to enrich the models by adding a temporal dimension to the possible worlds, assuming that players have belief states and perform actions at different times, actually revising their beliefs in the course of the playing of the game in accordance with a belief revision policy of the kind we have supposed. Questions about the relationship between the normal and extensive forms of games, and about the relations between different extensive-form games with the same normal form can be made precise in the model theory, and answered.

  21.

    I would like to thank Pierpaolo Battigalli, Yannis Delmas, Drew Fudenberg, Philippe Mongin, Hyun Song Shin, Brian Skyrms, and an anonymous referee for helpful comments on several earlier versions of this paper.

References

  • Adams, E. (1970). Subjunctive and indicative conditionals. Foundations of Language, 6, 89–94.

  • Alchourrón, C., & Makinson, D. (1982). The logic of theory change: Contraction functions and their associated revision functions. Theoria, 48, 14–37.

  • Alchourrón, C., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet functions for contraction and revision. Journal of Symbolic Logic, 50, 510–530.

  • Aumann, R. (1976). Agreeing to disagree. Annals of Statistics, 4, 1236–1239.

  • Bacharach, M. (1985). Some extensions of a claim of Aumann in an axiomatic model of knowledge. Journal of Economic Theory, 37, 167–190.

  • Bernheim, B. (1984). Rationalizable strategic behavior. Econometrica, 52, 1007–1028.

  • Blume, L., Brandenburger, A., & Dekel, E. (1991a). Lexicographic probabilities and choice under uncertainty. Econometrica, 59, 61–79.

  • Blume, L., Brandenburger, A., & Dekel, E. (1991b). Lexicographic probabilities and equilibrium refinements. Econometrica, 59, 81–98.

  • Dekel, E., & Fudenberg, D. (1990). Rational behavior with payoff uncertainty. Journal of Economic Theory, 52, 243–267.

  • Fudenberg, D., & Tirole, J. (1992). Game theory. Cambridge, MA: MIT Press.

  • Gärdenfors, P. (1988). Knowledge in flux: Modeling the dynamics of epistemic states. Cambridge, MA: MIT Press.

  • Gibbard, A., & Harper, W. (1981). Counterfactuals and two kinds of expected utility. In C. Hooker et al. (Eds.), Foundations and applications of decision theory. Dordrecht/Boston: Reidel.

  • Grove, A. (1988). Two modelings for theory change. Journal of Philosophical Logic, 17, 157–170.

  • Harper, W. (1975). Rational belief change, Popper functions and counterfactuals. Synthese, 30, 221–262.

  • Lewis, D. (1980). Causal decision theory. Australasian Journal of Philosophy, 59, 5–30.

  • Makinson, D. (1985). How to give it up: A survey of some formal aspects of the logic of theory change. Synthese, 62, 347–363.

  • Mongin, P. (1994). The logic of belief change and nonadditive probability. In D. Prawitz & D. Westerstahl (Eds.), Logic and philosophy of science in Uppsala. Dordrecht: Kluwer.

  • Pappas, G., & Swain, M. (1978). Essays on knowledge and justification. Ithaca: Cornell University Press.

  • Pearce, G. (1984). Rationalizable strategic behavior and the problem of perfection. Econometrica, 52, 1029–1050.

  • Pettit, P., & Sugden, R. (1989). The backward induction paradox. Journal of Philosophy, 86, 169–182.

  • Skyrms, B. (1982). Causal decision theory. Journal of Philosophy, 79, 695–711.

  • Skyrms, B. (1992). The dynamics of rational deliberation. Cambridge, MA: Harvard University Press.

  • Spohn, W. (1987). Ordinal conditional functions: A dynamic theory of epistemic states. In W. Harper & B. Skyrms (Eds.), Causation in decision, belief change and statistics (Vol. 2, pp. 105–134). Dordrecht: Reidel.

  • Stalnaker, R. (1994). On the evaluation of solution concepts. Theory and Decision, 37, 49–73.


Author information

Correspondence to Robert Stalnaker.

Copyright information

© 2016 Springer International Publishing Switzerland

Cite this chapter

Stalnaker, R. (2016). Knowledge, Belief and Counterfactual Reasoning in Games. In: Arló-Costa, H., Hendricks, V., van Benthem, J. (eds) Readings in Formal Epistemology. Springer Graduate Texts in Philosophy, vol 1. Springer, Cham. https://doi.org/10.1007/978-3-319-20451-2_42
