
The Theory of Games

Network Economics and the Allocation of Savings

Part of the book series: Lecture Notes in Economics and Mathematical Systems ((LNE,volume 653))


Abstract

We begin with an exposition of the theory of noncooperative games. The strategic form game representation is introduced first, followed by selected equilibrium concepts based on the Nash equilibrium and some of its refinements, such as its undominated variant, the Coalition-Proof, and the Strong Nash equilibrium. The second part on noncooperative games covers their representation in extensive form, where we introduce another variant, the subgame perfect Nash equilibrium. Next comes the consideration of cooperative games. We first define the notion of a cooperative game as compared to a noncooperative one and introduce the characteristic function, which assigns the gains from cooperation to each possible coalition of players. Depending on whether there even exist overall gains from cooperation and how these gains specifically arise and change within and over different coalitions, we can classify cooperative games and assign certain properties to them. The last part of this chapter is devoted to some of the most prominent solution concepts for cooperative games. These concepts determine how the gains from cooperation are or can be distributed among the players and are generally either set- or point-valued. We begin with the classic von Neumann–Morgenstern solution, to which we relate the concept of the core. There, we show different conditions on the underlying game under which a solution in terms of the core exists. The section on point-valued solutions, also called allocation rules, starts with a general definition and some common properties of these rules, including how they respond to changes to the game they are based on. The allocation rule we cover in detail is the Shapley value, followed by its weighted variant, which again is related to the core. The chapter concludes with a brief treatment of the so-called bargaining problem and the corresponding bargaining solutions of Nash and Kalai–Smorodinsky. With this we include an alternative approach to allocating the gains from cooperation among a number of players.


Notes

  1.

    See Myerson (1991, p. 1) and Aumann (2008, p. 529).

  2.

    In contrast to single-player decisions, taken, e.g., in consumer utility maximization, which depend on the consumer's own preferences and on the parameters for endowment and prices, but not on other consumers or their specific actions.

  3.

    This very problem is ignored in cooperative games, which take a shortcut by assigning a value to each coalition of players. A derivation of this value from noncooperative games is provided on p. 39.

  4.

    See for example Varian (1999, p. 496 f.) or Mas-Colell et al. (1995, p. 238 f.).

  5.

    This definition can be found in any textbook on game theory, see for example Myerson (1991, p. 46), Ritzberger (2003, pp. 143 ff.) or Mas-Colell, Whinston, and Green (1995, p. 230).

  6.

    A given strategy of a player is a mapping that assigns to each of his moves exactly one of the choices available at this move. It specifies an action for every possible eventuality a player might face during the course of the game. The breakdown into moves and actions will become clearer in the section on the extensive form game representation.

  7.

    The somewhat sloppy notation “−i” stands for “all players but player i” and is commonly used in the field of game theory, even though formally unsound. Because the labelling of the players within the Cartesian product of the individual pure strategy spaces is arbitrary, we also often write a strategy profile in the form s = (s_i, s_{−i}) when we want to emphasize player i. If written correctly as i and N ∖ {i}, this notation can naturally be extended to coalitions of players as well.

  8.

    These are not required for our subsequent treatment, but play an important role when deriving the so-called characteristic function, the foundation of cooperative game theory, in Sect. 2.3.2.

  9.

    Based on our assumption of a finite number of pure strategies, Δ_i can conveniently be represented by the unit simplex of dimension |S_i| − 1. Then, each mixed strategy is an element of (or point within) this simplex.
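
    To make this concrete, here is a minimal Python sketch (added for illustration; the probabilities are arbitrary placeholders) that represents a mixed strategy of a player with three pure strategies as a point in the two-dimensional unit simplex and checks its defining properties.

import numpy as np

# A hypothetical player with |S_i| = 3 pure strategies: a mixed strategy
# is a vector of probabilities, i.e. a point in the 2-dimensional unit simplex.
sigma_i = np.array([0.5, 0.3, 0.2])

# Defining properties of a simplex point: nonnegative entries summing to one.
assert np.all(sigma_i >= 0)
assert np.isclose(sigma_i.sum(), 1.0)

# The simplex has dimension |S_i| - 1, since one coordinate is pinned down
# by the others through the summing-to-one constraint.
print("simplex dimension:", len(sigma_i) - 1)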

  10.

    As Θ is the product space of n compact and convex simplices, it is itself a compact and convex subset of the Euclidean space of dimension ∑_{i=1}^{n} |S_i| − n.

  11.

    This so-called market entry game appears in many different variants throughout the literature. Most generally speaking, it describes a situation of strategic interaction between two players, with two choices of action for each. One player is the incumbent of a given market, the other a potential entrant. The latter considers entering the market or staying out of it, while the former can either accommodate the new entrant or try to deter entry into the market, albeit with costs to himself. Depending on the specific circumstances, the payoffs arising from the (four) possible outcomes usually reflect monopoly- or duopoly-style profits when the incumbent accommodates and zero profits or even losses whenever he fights the entrant.
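
    As an illustration of this description, the following sketch writes down one hypothetical payoff matrix for such an entry game (the numbers are placeholders, not values taken from the text) and enumerates its pure-strategy Nash equilibria by brute force.

import itertools

# Hypothetical payoffs (incumbent, entrant); the exact numbers are placeholders.
payoffs = {
    ("accommodate", "enter"):    (1, 1),   # duopoly-style profits
    ("accommodate", "stay out"): (2, 0),   # monopoly profit, entrant gets nothing
    ("fight", "enter"):          (-1, -1), # costly entry deterrence, entrant loses
    ("fight", "stay out"):       (2, 0),   # threat never carried out
}

incumbent_actions = ["accommodate", "fight"]
entrant_actions = ["enter", "stay out"]

def is_nash(a_inc, a_ent):
    u_inc, u_ent = payoffs[(a_inc, a_ent)]
    # No profitable unilateral deviation for either player.
    inc_ok = all(payoffs[(d, a_ent)][0] <= u_inc for d in incumbent_actions)
    ent_ok = all(payoffs[(a_inc, d)][1] <= u_ent for d in entrant_actions)
    return inc_ok and ent_ok

equilibria = [p for p in itertools.product(incumbent_actions, entrant_actions) if is_nash(*p)]
print(equilibria)  # here: [('accommodate', 'enter'), ('fight', 'stay out')]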

  12.

    These games were previously referred to as games with nontransferable utility or NTU games.

  13.

    In the section on noncooperative games we intentionally use R instead of S to denote coalitions as subsets of the player set N. Even though the latter is generally the norm, we want to avoid confusion with the strategies and strategy sets, which are denoted by the letters s and S, respectively.

  14.

    Jehle and Reny (2001, p. 171, pp. 272 ff.) speak of a Pareto improvement whenever someone can be made better off without making anybody else worse off. A weak Pareto improvement is a change in payoffs that makes everybody strictly better off. This is derived from the notions of Pareto efficiency and weak Pareto efficiency. The former describes a state in which no player can be made better off without making somebody else worse off. The latter describes a state in which not all players can be made strictly better off simultaneously. Note also that the former implies the latter, but not vice versa. For a more formal definition we refer the reader to Myerson (1991, p. 417).

  15.

    The equivalence of Definitions 2.8 and 2.9 can be shown by contradiction: Suppose an allocation x^∗ satisfies the former but not the latter definition. Then, some coalitions T ⊆ R ⊆ N must exist that can profitably deviate. For R = N, condition 2b of Definition 2.8 is violated. For any R ⊊ N, condition 2a would be violated for any deviation strategies \(\widetilde{s}_{R}\) and \(\widehat{s}_{T}\). The other direction is analogous: If an s^∗ meeting Definition 2.9 is assumed, no coalition R ⊊ N will want to deviate and no other self-enforcing strategy can exist for R = N.

  16.

    A possible interpretation of this chance player is as a probability distribution over different deterministic games, one of which then ensues and is played by the “real” players.

  17.

    We deviate from the usual convention of numbering players in the order in which they are called upon to move in the game. We find it more intuitive in this case to label the incumbent as player 1 and the potential newcomer as player 2. Also, this way we stay in line with Example 2.1.

  18.

    Please note that if one reversed the assignment of the players such that player 1 were to pick first whether to fight or to accommodate, the payoffs would be identical. Only, in this case player 2 would have to “suffer” from imperfect information, not knowing the decision of player 1. The simultaneous character is unaffected by this change. Such interchanging of simultaneous decisions is called an I-transform. These operations are treated extensively (no pun intended) in Ritzberger (2003, p. 155 ff.) and Thompson (1997, p. 43 f.).

  19.

    See Ritzberger (2003, pp. 143 ff.).

  20.

    The converse does not necessarily hold.

  21.

    This is the case in Example 2.2, where player 1, depending on which penultimate node he is at, should choose either action “a” or “f”.

  22.

    As we noted earlier, we are only concerned with cooperative games where utility is transferable among the players (TU) and ignore even the existence of games with nontransferable utility (NTU).

  23.

    Luce and Raiffa (1957, pp. 180 ff.) provide a very interesting introduction to characteristic functions, especially on p. 189 f., where they draw striking parallels and point out differences between characteristic functions and probability measures.

  24.

    It is the type commonly used in the literature, though there are rarely any remarks concerning its origin. Another concept is the β-characteristic function v β: Here, the order of actions is reversed, i.e. first coalition S maximizes its payoff and then the remaining players in \(\overline{S}\) try to hold it down to a minimum. The functional is given by

    $$v_{\beta}(S) = \min_{\sigma_{\overline{S}} \in \Delta_{\overline{S}}} \, \max_{\sigma_{S} \in \Delta_{S}} U_{S}(\sigma_{S}, \sigma_{\overline{S}}).$$
    (2.10)

    Unlike in the zero-sum case treated in von Neumann and Morgenstern (1947, p. 240), the functions v_α and v_β need not assume identical values in general-sum games. These distinctions are treated in more detail in Friedman (1991, pp. 240 ff.).
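
    To see the role of the order of moves in a way that can be checked by hand, the sketch below computes pure-strategy analogues of v_α and v_β for a hypothetical two-player game, with coalition S = {1} facing its complement {2} (restricting attention to pure strategies is a simplification for illustration; the definitions in the text range over mixed strategies).

import numpy as np

# Hypothetical payoff matrix of player 1: U1[a1, a2].
U1 = np.array([[3.0, 0.0],
               [1.0, 2.0]])

# Coalition S = {1} against complement {2}; U_S is player 1's payoff U1.
# alpha: S commits first and guarantees itself max_{a1} min_{a2} U1.
v_alpha = U1.min(axis=1).max()

# beta: the complement commits first, S best-responds: min_{a2} max_{a1} U1.
v_beta = U1.max(axis=0).min()

print(v_alpha, v_beta)  # 1.0 and 2.0 here: the two values differ in this game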

  25.

    Under equal restrictions on the constants, strategic equivalence is often equivalently defined by the expression

    $$v(S) = a \cdot w(S) - \sum_{i \in S} c_{i},$$
    (2.12)

    where for all i ∈ N we have c_i = a b_i. In the literature, slightly different notions of equivalence have surfaced: von Neumann and Morgenstern (1947, pp. 245 ff.) define strategic equivalence on zero-sum games, consequently dropping the multiplicative constant a. On the other hand, S-equivalence, as introduced by McKinsey (1950, p. 120), exhibits this multiplicative constant but is equal to (2.12) otherwise. In the literature, both terms usually refer to the concept defined by McKinsey (1950), which we also adopted.
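
    A minimal sketch of this transformation (the base game and the constants are hypothetical placeholders), applying v(S) = a · w(S) − ∑_{i ∈ S} c_i to a three-player game stored as a dictionary keyed by coalitions:

from itertools import combinations

players = [1, 2, 3]

# Hypothetical base game w, keyed by coalitions (frozensets).
w = {frozenset(c): float(len(c) ** 2)
     for r in range(1, 4) for c in combinations(players, r)}
w[frozenset()] = 0.0

a = 2.0                       # positive multiplicative constant
c = {1: 1.0, 2: 0.5, 3: 2.0}  # additive constants c_i

# Strategically equivalent game: v(S) = a * w(S) - sum_{i in S} c_i.
v = {S: a * w[S] - sum(c[i] for i in S) for S in w}

print(v[frozenset({1, 2, 3})])  # 2*9 - (1 + 0.5 + 2) = 14.5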

  26.

    There exists a bijective mapping from the space of games’ equivalence classes to the space of (0, 1)-normalized games.

  27.

    There is also the much stronger notion of inessential coalitions: A coalition S is inessential if, as a whole, it cannot achieve more than the sum of any conceivable partition of S. This concept is much stricter than plain inessentiality, as it compares the set not only with the sum of its singletons, but with all possible partitions. With respect to a notion defined below, one could say that on an inessential coalition S, the value function v is decomposable with respect to all possible partitions of S.

  28.

    In general, the marginal value of a player i when joining a coalition S ⊆ N ∖ { i} is given by the left-hand side of (2.19). It is defined in more detail in (2.32) below.

  29.

    It is standard in the literature to use the letter u for characteristic functions of unanimity games, as well as for utility functions in noncooperative games. We hope to separate both domains sufficiently to avoid any ambiguity.

  30.

    For a given set T from the outer summation of (2.25), there are r − t elements from which we can draw. The set S consists of the union of T with these elements. Depending on how many elements we add to T, r − s denotes this number in decreasing order as s increases. This follows from the symmetry of the binomial expression.

  31.

    In a unanimity game u S , not only the value of all coalitions T⊊S but also that of any R ⊆ N ∖ S is zero.

  32.

    Our approach follows closely that of Moulin (1991, pp. 112f.).

  33.

    Let P = {P_1, P_2, …, P_m} be a partition of N, where P_i ∩ P_j = ∅ for all i ≠ j and ⋃_{i=1}^{m} P_i = N.

  34.

    The expression imputation was introduced by von Neumann and Morgenstern (1947, p. 263), albeit for a zero-sum game with slightly different conditions.

  35.

    So far, “dominate” is to be understood as a relation comparing two imputations at a time. But the criterion for “domination” is rather vague, as no formal definition is presented and only a certain number of players are required to prefer the one imputation over the other. The formal approach is presented much later in von Neumann and Morgenstern (1947, pp. 263 ff.).

  36.

    For now, we emphasize the word “solution” whenever it appears not in its general meaning, but in the sense of von Neumann and Morgenstern (1947).

  37.

    Here, “⊁” stands for “not ≻” or “is not dominated by”.

  38.

    The dashed lines in Fig. 2.7 serve exclusively illustrative purposes. No indication of openness or closedness is supposed to be given by these lines. Such properties are solely described by weak or strict inequalities.

  39.

    What “almost arbitrary” means will become clearer during the course of the argument and is also elaborated at the end.

  40.

    The curve must extend all the way to the boundary of I(N, v), because we have shown that such a set is a stable set and, according to Corollary 2.2, no proper subset of a stable set is a stable set itself.

  41.

    Shapley and Shubik (1973, pp. 76 f.) give an extensive and well-structured overview of the literature on stable sets. So does Shubik (1982) in the appendix.

  42.

    More formally defined, this relation can be found in Mas-Colell et al. (1995, p. 653). Also, an account of why the expression “improve upon” should be used rather than the not uncommon “block” is provided by Shapley (1973). We will follow his advice throughout.

  43.

    The contract curve is the locus of all Pareto-efficient allocations, for a given initial endowment, within the so-called Edgeworth–Bowley box.

  44.

    For detailed conditions on this coincidence, see Chang (2000, p. 458) or Rafels and Tijs (1997, p. 492).

  45.

    If the core is empty, these conditions are trivially fulfilled.

  46.

    See Definition 2.41, p. 100.

  47.

    The weights m_i of extreme points of the convex hull co S are not to be confused with the weights γ_j assigned to the sets of a balanced collection.

  48.

    The sets co S are (|S| − 1)-dimensional unit-simplices themselves. They are called the faces of the simplex Δ_N. By definition, the faces are all subsets of the (n − 1)-dimensional unit-simplex and so their barycenters b_S must also be contained in Δ_N. Also, \(b(S) = \frac{1}{\vert S\vert }\sum _{i\in S}{\mathsf{X}}_{S}^{i}\) for all S ⊆ N.

  49.

    For reasons of clarity, we omit these lines and display the barycenter b_N at the point where their intersection would be.

  50.

    Take for example the minimally balanced collection \(\mathcal{C} =\{\{ 1, 2\},\{ 1, 3\},\{ 2, 3\}\}\) from above. Balancing weights would be \({\gamma }_{j} = \gamma = \frac{1} {2}\) for all j = 1, 2, 3. If we add the sets {1}, {2}, and {3} to collection \(\mathcal{C}\), it is still balanced, with coefficients \({\gamma }_{j} = \gamma = \frac{1} {3}\) for all j = 1, …, 6. Minimally balanced collections corresponding to larger player sets N can also be extended in the manner above, but also in less symmetric ways.
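
    Balancedness of these two collections can be verified mechanically; the following sketch (added for illustration) checks, for each player, that the weights of the sets containing him sum to one.

from fractions import Fraction

N = {1, 2, 3}

def is_balanced(collection, weights, players):
    # A collection is balanced if, for every player, the weights of the
    # sets containing that player sum to exactly one.
    return all(
        sum(w for S, w in zip(collection, weights) if i in S) == 1
        for i in players
    )

pairs = [frozenset(s) for s in ({1, 2}, {1, 3}, {2, 3})]
singletons = [frozenset(s) for s in ({1}, {2}, {3})]

print(is_balanced(pairs, [Fraction(1, 2)] * 3, N))               # True
print(is_balanced(pairs + singletons, [Fraction(1, 3)] * 6, N))  # True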

  51.

    Notably, expression (2.86) was deduced from the core conditions!

  52.

    Because set inclusion is only a partial order, 2^N cannot be ordered in a unique way according to the cardinality of its elements. But one could at least index elements with the same cardinality consecutively, starting with the singleton coalitions and finishing with N. A more systematic approach, given the players are themselves indexed, would be the following: {∅, {1}, {2}, {1, 2}, {3}, {1, 3}, {2, 3}, {1, 2, 3}, …, N}. Whenever a new player, say i, is considered, he is added as a singleton first and thereafter in union with all coalitions that already exist. This method allows the collection of sets to be extended without destroying its logic. For increasing N, new elements are simply added at the end.
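
    This ordering corresponds to counting in binary over the player indices; the sketch below (illustrative only) generates it for N = {1, 2, 3}, and enlarging N indeed only appends new elements at the end.

def ordered_power_set(n):
    # Subset k of {1, ..., n} contains player i exactly when bit i-1 of k is set.
    # Counting k upward reproduces the ordering described in the note:
    # {}, {1}, {2}, {1,2}, {3}, {1,3}, {2,3}, {1,2,3}, ...
    return [frozenset(i for i in range(1, n + 1) if k >> (i - 1) & 1)
            for k in range(2 ** n)]

for S in ordered_power_set(3):
    print(sorted(S))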

  53.

    A very concise introduction to linear programming, including the fundamental results and their proofs, can be found in Owen (1995, pp. 35 ff.). Here, we make use of the theorem given ibid. pp. 40 ff. Also, Owen (1999, pp. 77 ff.) focuses on the dual relation of linear programs and shows how one can be derived from the other. At the respective extreme point of the dual programs, i.e.  minimum or maximum, the weak inequality turns into an equality, which allows for expression (2.104).

  54.

    Each γ that solves the underlying polyhedron G_1(A, 1).

  55.

    This system is a proper – and binding – subset of the core inequalities x(S) ≥ v(S) which hold for all S ⊆ N (see (2.67)).

  56.

    Decomposable games were introduced in Definition 2.31, p. 75.

  57.

    In a convex game, there exists no S ⊆ N for which C_S = ∅, hence also \({C}_{{S}^{\ast}}\neq \varnothing \). This is shown in Shapley (1971, p. 18 f., p. 22).

  58.

    This of course only makes sense for games where otherwise C(N, v) = ∅.

  59.

    Note that by definition no sign of ε is fixed. Naturally, for games with an empty core, one would assume it to be positive, but it can also be used with a negative value to strengthen the core inequalities for games where C(N, v) ≠ ∅. In this regard one could think of the ε-core as an instrument to narrow down the multiple choices of allocations a core might provide.

  60.

    This is actually a convenient shortcut, as none of the adapted core inequalities x({i}) ≥ v({i}) − ε are binding in this example. In general, the solution is obtained from a linear program including all 2^n − 1 core constraints.
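
    A minimal sketch of such a program for a hypothetical three-player game (the characteristic function below is a placeholder): ε is minimized subject to x(N) = v(N) and x(S) ≥ v(S) − ε for every proper nonempty coalition S, the grand-coalition constraint being kept as the efficiency equality.

import numpy as np
from itertools import combinations
from scipy.optimize import linprog

players = [1, 2, 3]
# Hypothetical characteristic function (placeholder values).
v = {frozenset(): 0.0, frozenset({1}): 0.0, frozenset({2}): 0.0, frozenset({3}): 0.0,
     frozenset({1, 2}): 4.0, frozenset({1, 3}): 3.0, frozenset({2, 3}): 2.0,
     frozenset({1, 2, 3}): 5.0}

n = len(players)
proper = [frozenset(c) for r in range(1, n) for c in combinations(players, r)]

# Decision variables: (x_1, ..., x_n, eps); objective: minimize eps.
c = np.zeros(n + 1)
c[-1] = 1.0

# x(S) >= v(S) - eps  <=>  -sum_{i in S} x_i - eps <= -v(S), for every proper S.
A_ub = np.zeros((len(proper), n + 1))
b_ub = np.zeros(len(proper))
for row, S in enumerate(proper):
    for j, i in enumerate(players):
        if i in S:
            A_ub[row, j] = -1.0
    A_ub[row, -1] = -1.0
    b_ub[row] = -v[S]

# Efficiency: x(N) = v(N).
A_eq = np.ones((1, n + 1))
A_eq[0, -1] = 0.0
b_eq = [v[frozenset(players)]]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (n + 1))
print(res.x[:-1], res.x[-1])  # least-core allocation and the minimal eps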

  61.

    Lucas (1971, pp. 519 f.) classifies solution concepts according to their coverage.

  62.

    The Weighted Shapley value in particular will reveal a close relation to the core, even though their theoretical foundations differ completely.

  63.

    This axiom must not be confused with Axiom 2 in Shapley (1953b), to which the author misleadingly refers as “efficiency”. He actually applies the “carrier” property, which we treat next.

  64.

    In this form, it is often referred to as marginality, see for example Young (1988, p. 270).

  65.

    Note that expression (2.158) covers all four cases of players i and j with respect to their membership in set S, incorporating both versions of d_i as given in (2.32) simultaneously, when applicable.

  66.

    This property amounts to what is known as homogeneity of degree one for general functions: f(λ ⋅ x) = λ^1 f(x), with \(\lambda \in \mathbb{R}\).

  67.

    Interestingly, in Footnote 35 of Harsanyi (1956, p. 157) the author claims to have discovered independently an allocation rule identical to that of Shapley (1953b), while on leave in Australia.

  68.

    The reader only interested in an intuitive foundation of the Shapley operator is welcome to skip the technicalities of the additive approach. These arise right after the axioms below and continue until (2.202) on p. 214.

  69.

    This has been dropped in the subsequent literature. It most likely originates from the fact that von Neumann and Morgenstern (1947, pp. 241 ff.) define superadditivity as a fundamental property of a characteristic function v. This again stems from the fact that they treat exclusively zero-sum games.

  70.

    The undefined case where |T| = 0 is excluded.

  71.

    Even though the binomial coefficient was changed to accommodate possible combinations where i ∉ R but i ∈ T, this does not apply to the second exponent in (2.186).

  72.

    The binomial theorem is given by

    $$(x + y)^{n} = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^{k},$$

    while our adaptations are the following: x → 1, y → −x, n → n − r, and k → t − r.
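
    Spelled out (a worked step added here for convenience; it follows directly from the substitutions just listed), the identity used therefore reads

    $$(1 - x)^{n-r} = \sum_{t=r}^{n} \binom{n-r}{t-r}\,(-x)^{t-r}.$$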

  73.

    One could also drop the restriction i ∈ S in (2.202), as all marginal contributions are identical to zero for coalitions not including player i.

  74.

    Shapley (1953b) originally defined his operator for superadditive games. To drop this restriction, changes to the axioms are necessary, which we incorporated in Theorem 2.6.

  75.

    Think of an n! × n matrix, where every row represents a different marginal vector. The Shapley value of player i picks exactly one entry of each row, to be summed up and averaged with \(\frac{1} {n!}\). The sum of all players' Shapley values is nothing else but the sum of all n! marginal vectors, each of which sums to v(N), averaged with \(\frac{1} {n!}\).
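
    The averaging described here can be written down directly; the sketch below (player set and characteristic function are hypothetical placeholders) builds all n! marginal vectors and averages them, which reproduces the Shapley value and makes the efficiency argument of this note visible.

from itertools import permutations
from fractions import Fraction

players = (1, 2, 3)
# Hypothetical characteristic function, keyed by frozensets.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 4, frozenset({1, 3}): 3, frozenset({2, 3}): 2,
     frozenset({1, 2, 3}): 5}

def marginal_vector(order):
    # Entry i is player i's marginal contribution when joining in this order.
    contrib, current = {}, frozenset()
    for i in order:
        contrib[i] = v[current | {i}] - v[current]
        current = current | {i}
    return contrib

orders = list(permutations(players))
shapley = {i: sum(Fraction(marginal_vector(o)[i]) for o in orders) / len(orders)
           for i in players}

print(shapley)                # the Shapley value
print(sum(shapley.values()))  # equals v(N): each marginal vector sums to v(N)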

  76.

    Alternatively, to follow Shapley (1953b), the carrier axiom is also fulfilled by (2.202). By the definition of a carrier, no player who is not in every carrier can have an impact on the value function. His marginal values are therefore zero for all coalitions he joins and so the expression in square brackets in (2.202) is identical to zero.

  77.

    There, we used the carrier axiom. But in a unanimity game it is clear that u_T(S) = u_T(N) holds for all sets S with T ⊆ S ⊆ N and that all players i ∈ N ∖ T are null-players.

  78.

    The (0, 1)-normalization has no influence on the outcome of either solution concept; it is merely applied for convenience. The core element (0, 1, 0) is straightforward to explain: The coalitions {1, 2}, {2, 3}, and N are entitled to an allocation of 1 in the core. Single players have a stand-alone value of zero, and since player 2 is the only member of all profitable coalitions, he will get the whole payoff, thereby (and only thereby) not violating any of the core constraints.
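
    A quick mechanical check of this explanation (assuming, as the list of profitable coalitions suggests, that all remaining coalitions, including {1, 3}, have value zero in the normalized game):

from itertools import combinations

players = [1, 2, 3]
# (0,1)-normalized game as described in the note; v({1,3}) = 0 is an assumption
# inferred from the fact that only {1,2}, {2,3} and N are called profitable.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 0, frozenset({2, 3}): 1,
     frozenset({1, 2, 3}): 1}

x = {1: 0, 2: 1, 3: 0}

def in_core(x, v, players):
    # Efficiency plus coalitional rationality x(S) >= v(S) for all S.
    if sum(x.values()) != v[frozenset(players)]:
        return False
    return all(sum(x[i] for i in S) >= v[frozenset(S)]
               for r in range(1, len(players) + 1)
               for S in combinations(players, r))

print(in_core(x, v, players))  # True: (0, 1, 0) satisfies all core constraints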

  79.

    A word of warning is given by Owen (1968), as he points out that for some types of games (a majority game in his example), the weights can skew the allocations given by the Weighted Shapley operator into counterintuitive directions.

  80.

    In order to enable zero-allocations to players who are not null-players, we skip the so-called positively Weighted Shapley value (see also Shapley 1953b).

  81.

    As usual, S_i ∩ S_j = ∅ for all i ≠ j ∈ {1, 2, …, m} and ⋃_{i=1}^{m} S_i = N. As the partition Σ is ordered, its elements appear not in a set, but as entries of an m-dimensional vector.

  82.

    For the trivial partition Σ = (N), the sets S and \(\widehat{S}\) always coincide, and the result is the positively Weighted Shapley value, where no player (unless a null-player) can end up with a zero-allocation. If, in addition, the weights are all equal (and strictly positive), i.e. λ_i = λ_j for all i, j ∈ N, then the outcome is the ordinary Shapley value.

  83.

    Roughly, the idea is the following: First, the stand-alone value is allocated to all singleton coalitions. Then the sum of the two stand-alone values is subtracted from the value of the respective coalition of size two. The same is repeated with whatever might be left when moving on to coalitions of size three, and so on, until the grand coalition is reached.
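
    This recursive bookkeeping amounts to computing, for each coalition, what is left of its value after subtracting everything already credited to its proper subcoalitions; a minimal sketch with a hypothetical three-player game:

from itertools import combinations

players = [1, 2, 3]
# Hypothetical characteristic function.
v = {frozenset(): 0.0, frozenset({1}): 1.0, frozenset({2}): 0.0, frozenset({3}): 0.0,
     frozenset({1, 2}): 3.0, frozenset({1, 3}): 2.0, frozenset({2, 3}): 1.0,
     frozenset({1, 2, 3}): 6.0}

coalitions = [frozenset(c) for r in range(1, 4) for c in combinations(players, r)]
coalitions.sort(key=len)  # singletons first, then pairs, then the grand coalition

dividends = {}
for S in coalitions:
    # What is "left" of v(S) after subtracting everything already credited
    # to the proper subcoalitions of S.
    dividends[S] = v[S] - sum(d for T, d in dividends.items() if T < S)

print(dividends)
# The dividends of all coalitions sum back to v(N):
print(sum(dividends.values()))  # 6.0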

  84.

    Whenever no confusion can arise, we write simply π, dropping the subscript that associates the permutation with the set over which it is defined.

  85.

    As Σ restricts the possibility of permuting the player set N, but does not allow for any additional orderings, it follows that Π_Σ ⊆ Π_N.

  86.

    For an explanation, reconsider expression (2.212): The numerator being identical for any ordering π_S ∈ Π_S, we have to focus on minimizing the denominator. Without loss of generality, suppose that the weights increase with index l, i.e. λ_l ≤ λ_{l+1}, and that the ordering π_S is also such that π_S(l) < π_S(l + 1). Now pick some player k, who is neither first nor last under π_S, and for whom λ_k > λ_{k+1}. (If no such player exists, then \({p}_{\lambda }({\pi }_{S}) \equiv {p}_{\lambda }\) for all π_S ∈ Π_S and we have nothing to show.) Now, replace π_S with π′_S, the two being identical except that players k and k + 1 are reversed. Then, in the denominator of (2.212), the kth element of the product of sums is strictly larger under π′_S than under π_S, with all other elements equal. Therefore, p_λ(π_S) > p_λ(π′_S).

  87.

    More precisely, Shapley (1971, pp. 19 f.) demonstrates that all the marginal worth vectors are the (only!) vertices of the core of a convex game.

  88.

    These results are refined by Rafels and Ybern (1995) and van Velzen et al. (2002), who show that only a certain set or number of marginal worth vectors have to be identified as elements of the core for the game to be convex.

  89.

    The interested reader will find a more formal and sophisticated treatment of the Krein–Milman theorem, along with a proof, in Royden (1988, p. 207). We use its assertion that, in finite-dimensional vector spaces, the convex hull of all extreme points of a compact convex set K is equal to the closed convex hull of the set K itself: co{ex K} = co K.

  90.

    In principle this procedure also works for the “neighboring” players n and 1, but we want to avoid an excursion into so-called modulo n arithmetic, an operation that (informally speaking) closes the ordering of players into a circle with neither beginning nor end.

  91.

    Minkowski’s Theorem, as given in Phelps (2001, p. 1), states: “If X is a compact convex subset of a finite-dimensional vector space E, and if x is an element of X, then x is a finite convex combination of extreme points of X.”

  92.

    Actually, in this context, not all marginal vectors are required, according to Carathéodory’s Theorem: “If X is a compact convex subset of an n-dimensional space E, then each x in X is a convex combination of at most n + 1 extreme points of X.” See Phelps (2001, p. 7).

  93.

    If we do not want to leave the realm of probability spaces, we can directly stick with Choquet's Theorem: It states that any element of a compact convex set (here: the convex hull of extreme points) is the barycenter of some probability measure with support on (only, but not necessarily all) extreme points of the set. Details and a proof can be found in Phelps (2001, p. 14).

  94.

    With the choice of the weight system, especially the ordered partition Σ, some caveats apply: One can include many different combinations of vertices (i.e. orderings) in the support of p_ω, as long as the logical structure of Σ is not violated. For any n-player game with N = {1, 2, …, n}, it is not possible to assign nonzero probability to orderings that swap players π(i) and π(j) with π(i) > π(j), without also assigning nonzero probabilities to permutations of the players in between i and j. All players from π(j) to π(i) have to be included in the same element of Σ. So, if i and j can be swapped under some permutation in the support, this must also be true for the players in between.

  95.

    This problem resurfaces in the application of the Weighted Shapley value, in Sect. 5.3.6.

  96.

    This vector inequality is not to be confused with the previously defined relation of domination, even though it could be reproduced by the relation “domination through” with effective set N (see Definition 2.38, p. 98).

  97.

    As opposed to, say, the Weighted Shapley value Φ_ω, where the weight system ω contains not only individual weights but also an ordered partition of the player set.

  98.

    It should be noted here that we assume U to be comprehensive, not merely convex, but the latter property suffices for the existence of a maximum in this setting. Also, because (U, d) is non-degenerate by assumption, k cannot be equal to 0.

Author information

Correspondence to Philipp Servatius.


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Servatius, P. (2012). The Theory of Games. In: Network Economics and the Allocation of Savings. Lecture Notes in Economics and Mathematical Systems, vol 653. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21096-9_2

  • Print ISBN: 978-3-642-21095-2

  • Online ISBN: 978-3-642-21096-9
