1 Introduction

In studying the computation of solutions of multi-player games, we encounter the well-known problem that a game’s payoff function has description length exponential in the number of players. One approach is to assume that the game comes from a concisely-represented class (for example, graphical games, anonymous games, or congestion games), and another one is to consider algorithms that have query access to the game’s payoff function.

In this paper, we study the computation of approximate Nash equilibria of multi-player games having the feature that if a player changes her behaviour, she only has a small effect on the payoff of any other player. These games, sometimes called large games or Lipschitz games, have recently been studied in the literature, since they model various real-world economic interactions; for example, an individual’s choice of which items to buy may have a small effect on prices, and hence only a small effect on other individuals. Note that these games do not have concisely-represented payoff functions, which makes them a natural class of games to consider from the query-complexity perspective. It is already known how to compute approximate correlated equilibria for unrestricted n-player games. Here we study the more demanding solution concept of approximate Nash equilibrium.

Large games (equivalently, small-influence games) are studied by Kalai [16] and Azrieli and Shmaya [1]. In these papers, the existence of pure ε-Nash equilibria for \(\varepsilon = \gamma \sqrt {8n\log (2kn)}\) is established, where γ is the largeness/Lipschitz parameter of the game and k is the number of pure strategies for each player. In particular, when \(\gamma = \frac {1}{n}\) and k = 2, as we assume for most of this paper, we get \(\varepsilon = O(n^{-1/2})\), so arbitrarily accurate pure Nash equilibria exist in large games as the number of players increases. Kearns et al. [17] study this class of games from the mechanism design perspective of mediators who aim to achieve a good outcome to such a game via recommending actions to players.

Babichenko [2] studies large binary-action anonymous games. Anonymity is exploited to create a randomised dynamic on pure strategy profiles that with high probability converges to a pure approximate equilibrium in O(n log n) steps.

Payoff query complexity has recently been studied as a measure of the difficulty of computing game-theoretic solutions for various classes of games. Upper and lower bounds on query complexity have been obtained for bimatrix games [6, 7], congestion games [7], and anonymous games [11]. For general n-player games (where the payoff function has description length exponential in n), the query complexity is exponential in n for exact Nash equilibria, and also for exact correlated equilibria [15]; the same holds for approximate equilibria computed by deterministic algorithms (see also [4]). For randomised algorithms, the query complexity is exponential for well-supported approximate equilibria [3], which has since been strengthened to any ε-Nash equilibria [5]. With randomised algorithms, the query complexity of approximate correlated equilibrium is Θ(log n) for any positive ε [10].

Our main result applies in the setting of completely uncoupled dynamics in equilibrium computation. These dynamics have been studied extensively: Hart and Mas-Colell [13] show that there exist finite-memory uncoupled strategies that lead to pure Nash equilibria in every game where they exist. Also, there exist finite-memory uncoupled strategies that lead to ε-NE in every game. Young’s interactive trial and error [18] outlines completely uncoupled strategies that lead to pure Nash equilibria with high probability when they exist. Regret testing by Foster and Young [8] and its n-player extension by Germano and Lugosi [9] show that there exist completely uncoupled strategies that lead to an ε-Nash equilibrium with high probability. Randomisation is essential in all of these approaches, as Hart and Mas-Colell [14] show that it is impossible to achieve convergence to Nash equilibria for all games if one is restricted to deterministic uncoupled strategies. This prior work is not concerned with the rate of convergence; by contrast, here we obtain efficient bounds on runtime. Convergence of adaptive dynamics to exact Nash equilibria is also studied by Hart and Mansour [12], who provide exponential lower bounds via communication complexity results. Babichenko [3] also proves an exponential lower bound on the rate of convergence of adaptive dynamics to an approximate Nash equilibrium for general binary-action games. Specifically, he proves that there is no k-query dynamic that converges to an ε-WSNE within \(\frac {2^{{\Omega }(n)}}{k}\) steps with probability at least \(2^{-{\Omega }(n)}\) in all n-player binary-action games. Both of these results motivate the study of specific subclasses of these games, such as the “large” games studied here.

2 Preliminaries

We consider games with n players where each player has k actions \(\mathcal {A} = \{0, 1, ...,k-1\}\). Let \(a = (a_{i}, a_{-i})\) denote an action profile in which player i plays action \(a_{i}\) and the remaining players play action profile \(a_{-i}\). We also consider mixed strategies, which are probability distributions over the action set \(\mathcal {A}\). We write \(p = (p_{i}, p_{-i})\) to denote a mixed-strategy profile, where \(p_{i}\) is a distribution over \(\mathcal {A}\) corresponding to the i-th player’s mixed strategy. To be more precise, \(p_{i}\) is a vector \( (p_{ij})_{j = 1}^{k-1}\) such that \({\sum }_{j = 1}^{k-1}p_{ij} \leq 1\), where \(p_{ij}\) denotes the i-th player’s probability mass on her j-th strategy. Furthermore, we denote by \(p_{i0} = 1 - {\sum }_{j = 1}^{k-1}p_{ij}\) the implicit probability mass the i-th player places on her 0-th pure strategy.

Each player i has a payoff function \(u_{i}\colon \mathcal {A}^{n} \rightarrow [0, 1]\) mapping an action profile to some value in [0, 1]. We will sometimes write \(u_{i}(p) = \mathbb {E}_{a\sim p}\left [u_{i}(a)\right ]\) to denote the expected payoff of player i under mixed strategy p. An action a is player i’s best response to mixed strategy profile p if \(a \in \text {argmax}_{j\in \mathcal {A}} u_{i}(j , p_{-i})\).

We assume our algorithms or the players have no other prior knowledge of the game but can access payoff information through querying a payoff oracle \(\mathcal {Q}\). For each payoff query specified by an action profile \(a\in \mathcal {A}^{n}\), the query oracle will return \((u_{i}(a))_{i = 1}^{n}\), the n-dimensional vector of payoffs to each player. Our goal is to compute an approximate Nash equilibrium with a small number of queries. In the completely uncoupled setting, a query works as follows: each player i chooses her own action ai independently of the other players, and learns her own payoff ui(a) but no other payoffs.

Definition 1 (Regret; (approximate) Nash equilibrium)

Letting p be a mixed strategy profile, the regret for player i at p is

$$reg(p, i) = \max\limits_{j\in\mathcal{A}} \mathbb{E}_{a_{-i}\sim p_{-i}}\left[u_{i}(j, a_{-i})\right] - \mathbb{E}_{a \sim p}\left[u_{i}(a)\right]. $$

A mixed strategy profile p is an ε-approximate Nash equilibrium (ε-NE) if for each player i, the regret satisfies reg(p, i) ≤ ε.
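For concreteness, the following small Python helper (our own illustration; the function name and interface are not from the paper) computes this regret from a player's expected pure-action payoffs and her mixed strategy:

```python
def regret(expected_payoffs, mixed_strategy):
    """Regret of a single player at a mixed profile.

    expected_payoffs[j] is E_{a_{-i} ~ p_{-i}}[u_i(j, a_{-i})], the expected
    payoff of pure action j against the other players' mixed strategies;
    mixed_strategy[j] is the probability the player places on action j.
    """
    expected = sum(q * u for q, u in zip(mixed_strategy, expected_payoffs))
    return max(expected_payoffs) - expected


# Example: a player mixing 50/50 between payoffs 0.2 and 0.6 has regret 0.2.
assert abs(regret([0.2, 0.6], [0.5, 0.5]) - 0.2) < 1e-12
```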

In Section 6.1 we will address the stronger notion of a well-supported approximate Nash equilibrium. In essence, such an equilibrium is a mixed-strategy profile where players only place positive probability on actions that are approximately optimal. In order to precisely define this, we introduce \(supp(p_{i}) = \{j \in \mathcal {A} \ | \ p_{ij} > 0 \}\) to be the set of actions that are played with positive probability in player i’s mixed strategy pi.

Definition 2 (Well-supported approximate Nash equilibrium)

A mixed-strategy profile \(p = (p_{i})_{i = 1}^{n}\) is an ε-well-supported Nash equilibrium (ε-WSNE) if and only if the following holds for all players i ∈ [n]:

$$j \in supp(p_{i}) \Rightarrow \max\limits_{\ell\in\mathcal{A}} \mathbb{E}_{a_{-i}\sim p_{-i}}\left[u_{i}(\ell, a_{-i})\right] - \mathbb{E}_{a_{-i}\sim p_{-i}}\left[u_{i}(j, a_{-i})\right] < \varepsilon. $$

An ε-WSNE is always an ε-NE, but the converse is not necessarily true, as a player may place a small amount of probability on strategies that are more than ε away from optimal while still keeping her overall regret at most ε.

Observation 1

To find an exact Nash (or even correlated) equilibrium of a large game, in the worst case it is necessary to query the game exhaustively, even with randomised algorithms. This follows from a similar negative result for general games due to [15], together with the observation that we can obtain a strategically equivalent γ-large game (Definition 3) by scaling the payoffs down into the interval [0, γ].

We will assume the following largeness condition in our games. Informally, this largeness condition says that no single player has a large influence on any other player’s utility function.

Definition 3 (Large Games)

A game is γ-large if for any two distinct players i ≠ j, any two distinct actions \(a_{j}\) and \(a_{j}^{\prime }\) for player j, and any tuple of actions \(a_{-j}\) for everyone else:

$$|u_{i}(a_{j}, a_{-j}) - u_{i}(a_{j}^{\prime}, a_{-j})| \leq \gamma \in [0, 1]. $$

We will call γ the largeness parameter of the game; in [1] this quantity is called the Lipschitz value of the game. One immediate implication of the largeness assumption is the following Lipschitz property of the utility functions.

Lemma 1

For any player i ∈ [n] and any action \(j\in \mathcal {A}\), the fixed utility function \(u_{i}(j, p_{-i})\colon [0, 1]^{(n-1)\times(k-1)} \rightarrow [0, 1]\) is a γ-Lipschitz function of the second argument \(p_{-i} \in [0, 1]^{(n-1)\times(k-1)}\) w.r.t. the \(\ell_{1}\) norm.

Proof

Without loss of generality consider i = 1 and j = 0. Let \(q = p_{-1}\) and \(q^{\prime } = p^{\prime }_{-1}\) be two mixed strategy profiles for the other players. For i ≥ 2 and \(j \in \mathcal {A} \setminus \{0\}\), let \(\delta _{ij} = q^{\prime }_{ij} - q_{ij}\). Note that \(\|q - q^{\prime }\|_{1} = {\sum }_{ij} |\delta _{ij}|\).

Let \(e_{ij}\) be the unit vector that has a 1 in the (i, j)-th entry and 0 elsewhere. We first show that there exists an ordering of the discrete set \(\{(i, j) \mid 2 \leq i \leq n,\ 1 \leq j \leq k-1\}\), denoted by \(\{\alpha_{1}, \alpha_{2}, ..., \alpha_{(n-1)(k-1)}\}\), such that for all \(\ell = 1, ..., (n-1)(k-1)\), the vector \(q_{\ell } = q + {\sum }_{i = 1}^{\ell } \delta _{\alpha _{i}} e_{\alpha _{i}}\) represents a valid mixed strategy profile for players i ≥ 2.

Suppose that we fix i, and consider \(q_{i}\) and \(q^{\prime }_{i} \) as the mixed strategies of player i arising in q and q′. We recall that these are vectors in \([0, 1]^{k-1}\) whose components sum to at most 1. We consider two cases. In the first, suppose that there exists a j such that \(\delta_{ij} < 0\); then \(q_{ij} + \delta_{ij} = q^{\prime}_{ij} \geq 0\), hence \(q_{i} + \delta_{ij}e_{j}\) is a valid mixed strategy for player i.

In the second case, suppose that \(\delta_{ij} > 0\) for all j. If we had \(\delta _{ij} > q_{i0} = 1 - {\sum }_{j = 1}^{k-1} q_{ij}\) for every j, then \(q^{\prime }_{i}\) could not be a valid mixed strategy for player i; hence it must be the case that for some j, \(\delta_{ij} \leq q_{i0}\), and once again \(q_{i} + \delta_{ij}e_{j}\) is a valid mixed strategy for player i.

Since such a valid update by some \(\delta_{ij}\) can always be found for valid \(q_{i}\) and \(q^{\prime }_{i}\), we can recursively find valid shifts by \(\delta_{ij}\), one coordinate at a time, to reach \(q^{\prime }_{i}\) from \(q_{i}\). If this is applied in turn to all players i ≥ 2, the aforementioned claim holds and indeed \(q_{\ell } = q + {\sum }_{i = 1}^{\ell } \delta _{\alpha _{i}} e_{\alpha _{i}}\) for some ordering \(\{\alpha_{1}, ..., \alpha_{(n-1)(k-1)}\}\).

With this in hand, we can use telescoping sums and the largeness condition to prove the lemma. For simplicity of notation, in what follows we assume that \(q_{0} = q\), and we recall that by definition \(q_{(n-1)(k-1)} = q^{\prime}\).

$$\begin{array}{@{}rcl@{}} |u_{i}(j, q^{\prime}) - u_{i}(j, q)| &=& \left|\sum\limits_{\ell = 1}^{(n-1)(k-1)} u_{i}(j, q_{\ell}) - u_{i}(j, q_{\ell-1}) \right| \\ \text{(Triangle Inequality)}\qquad &\leq& \sum\limits_{\ell = 1}^{(n-1)(k-1)} \left| u_{i}(j, q_{\ell}) - u_{i}(j, q_{\ell-1}) \right|\\ \text{(Definition of Largeness)}\qquad &\leq& \sum\limits_{\ell = 1}^{(n-1)(k-1)} \gamma |\delta_{\alpha_{\ell}}| = \gamma \|q^{\prime} - q\|_{1} \end{array} $$

which proves our claim. □

From now on until Section 6 we will focus on \(\frac {1}{n}\)-large binary action games where \(\mathcal {A} = \{0, 1\}\) and \(\gamma =\frac {1}{n}\). The reason for this is that the techniques we introduce can be more conveniently conveyed in the special case of \(\gamma =\frac {1}{n}\), and subsequently extended to general γ.

Recall that pi denotes a mixed strategy of player i. In the special case of binary-action games, we slightly abuse the notation to let pi denote the probability that player i plays 1 (as opposed to 0), since in the binary-action case, this single probability describes i’s mixed strategy.

The following notion of discrepancy will be useful.

Definition 4 (Discrepancy)

Letting p be a mixed strategy profile, the discrepancy for player i at p is

$$disc(p, i) = \left| \mathbb{E}_{a_{-i}\sim p_{-i}}\left[u_{i}(0, a_{-i})\right] - \mathbb{E}_{a_{-i}\sim p_{-i}}\left[u_{i}(1, a_{-i})\right] \right|. $$

Estimating Payoffs for Mixed Profiles

We can approximate the expected payoffs for any mixed strategy profile by repeated calls to the oracle \(\mathcal {Q}\). In particular, for any target accuracy parameter β and confidence parameter δ, consider the following procedure to implement an oracle \(\mathcal {Q}_{\beta , \delta }\):

  • For any input mixed strategy profile p, compute a new mixed strategy profile \(p^{\prime } = (1 - \frac {\beta }{2})p + \frac {\beta }{4}\mathbf {1}\), so that each player i plays the uniform distribution with probability \(\frac {\beta }{2}\) and plays distribution pi with probability \(1 - \frac {\beta }{2}\).

  • Let \(N = \frac {64}{\beta ^{3}} \log \left (8n /\delta \right )\), sample N pure action profiles independently from \(p^{\prime }\), and call the oracle \(\mathcal {Q}\) with each sampled profile as input to obtain a payoff vector.

  • Let \(\widehat u_{i,j}\) be the average sampled payoff to player i over the queries in which she played action j. Output the payoff vector \((\widehat {u}_{i,j})_{i\in [n], j\in \{0, 1\}}\). (A code sketch of this procedure is given below.)
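The following is a minimal Python sketch of this sampling procedure for binary-action games. It assumes query access to a pure-profile oracle `query_payoffs` standing in for \(\mathcal {Q}\); the function names and the representation of the game are illustrative, not from the paper.

```python
import math
import random


def estimate_mixed_payoffs(p, query_payoffs, beta, delta):
    """Sketch of the approximate oracle Q_{beta,delta} for binary-action games.

    p            : list of length n, p[i] = Pr[player i plays action 1]
    query_payoffs: callable taking a pure profile (tuple of 0/1 values) and
                   returning the n-vector of payoffs (plays the role of Q)
    Returns uhat[i][j], an estimate of player i's expected payoff for playing
    pure action j; by the largeness property the perturbation by the uniform
    distribution only moves expected payoffs by a small amount.
    """
    n = len(p)
    # Mix in beta/2 of the uniform distribution so every action of every
    # player is sampled often enough.
    p_prime = [(1 - beta / 2) * pi + (beta / 2) * 0.5 for pi in p]
    N = int(math.ceil(64 / beta ** 3 * math.log(8 * n / delta)))

    totals = [[0.0, 0.0] for _ in range(n)]   # payoff sums per (player, action)
    counts = [[0, 0] for _ in range(n)]       # sample counts per (player, action)
    for _ in range(N):
        a = tuple(1 if random.random() < q else 0 for q in p_prime)
        u = query_payoffs(a)
        for i in range(n):
            totals[i][a[i]] += u[i]
            counts[i][a[i]] += 1

    return [[totals[i][j] / max(counts[i][j], 1) for j in (0, 1)]
            for i in range(n)]
```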

Lemma 2

For any β, δ ∈ (0, 1) and any mixed strategy profile p, the oracle \(\mathcal {Q}_{\beta , \delta }\) with probability at least 1 − δ outputs a payoff vector \((\widehat u_{i,j})_{i\in [n], j\in \{0, 1\}}\) that has an additive error of at most β; that is, for each player i and each action j ∈ {0, 1},

$$|u_{i}(j, p_{-i}) - \widehat{u}_{i, j}| \leq \beta. $$

The lemma follows from Proposition 1 of [10] and the largeness property.

Extension to Stochastic Utilities

We consider a generalisation where the utility to player i of any pure profile a may consist of a probability distribution \(D_{a,i}\) over [0, 1], and if a is played, i receives a sample from \(D_{a,i}\). The player wants to maximise her expected utility with respect to sampling from a (possibly mixed) profile, together with sampling from any \(D_{a,i}\) that results from a being chosen. If we extend the definition of \(\mathcal {Q}\) to output samples of the \(D_{a,i}\) for any queried profile a, then \(\mathcal {Q}_{\beta ,\delta }\) can be defined in a similar way as before, and simulated as above using samples from \(\mathcal {Q}\). Our algorithmic results extend to this setting.

3 Warm-up: 0⋅25-Approximate Equilibrium

In this section, we exhibit some simple procedures whose general approach is to query a constant number of mixed strategies (for which additive approximations to the payoffs can be obtained by sampling). Observation 2 notes that a \(\frac {1}{2}\)-approximate Nash equilibrium can be found without using any payoff queries:

Observation 2

Consider the following “uniform” mixed strategy profile. Each player puts \(\frac {1}{2}\) probability mass on each action: for all i, \(p_{i} = \frac {1}{2}\) . Such a mixed strategy profile is a \(\frac {1}{2}\) -approximate Nash equilibrium.

We present two algorithms that build on Observation 2 to obtain better approximations than \(\frac {1}{2}\). For simplicity of presentation, we assume that we have access to a mixed strategy query oracle \(\mathcal {Q}_{M}\) that returns exact expected payoff values for any input mixed strategy p. Our results continue to hold if we replace \(\mathcal {Q}_{M}\) by \(\mathcal {Q}_{\beta , \delta }\).

Obtaining ε = 0⋅272

First, we show that having each player make a small adjustment from the “uniform” strategy can improve ε from \(\frac {1}{2}\) to around 0⋅27. We simply let players with large regret shift more probability weight towards their best responses. More formally, consider the following algorithm OneStep with two parameters α, Δ ∈ [0, 1] (a code sketch is given after the description):

  • Let the players play the “uniform” mixed strategy. Call the oracle \(\mathcal {Q}_{M}\) to obtain the payoff values \(u_{i}(0, p_{-i})\) and \(u_{i}(1, p_{-i})\) for each player i.

  • For each player i, if \(u_{i}(0, p_{-i}) - u_{i}(1, p_{-i}) > \alpha\), then set \(p_{i} = \frac {1}{2} - {\Delta }\); if \(u_{i}(1, p_{-i}) - u_{i}(0, p_{-i}) > \alpha\), set \(p_{i} = \frac {1}{2} + {\Delta }\); otherwise keep playing \(p_{i} = \frac {1}{2}\).
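The following is a minimal sketch of OneStep, assuming access to a function `mixed_payoffs` that plays the role of the exact oracle \(\mathcal {Q}_{M}\) (the interface is ours, for illustration only):

```python
import math


def one_step(n, mixed_payoffs, alpha=2 - math.sqrt(11 / 3),
             delta=math.sqrt(11 / 48) - 0.25):
    """OneStep sketch: start from the uniform profile and shift players with
    a large observed discrepancy towards their best responses.

    mixed_payoffs(p) is assumed to behave like Q_M: given a mixed profile p
    (p[i] = Pr[player i plays 1]) it returns a list of pairs
    (u_i(0, p_{-i}), u_i(1, p_{-i})), one pair per player.
    """
    p = [0.5] * n
    payoffs = mixed_payoffs(p)
    for i, (u0, u1) in enumerate(payoffs):
        if u0 - u1 > alpha:
            p[i] = 0.5 - delta      # shift weight towards action 0
        elif u1 - u0 > alpha:
            p[i] = 0.5 + delta      # shift weight towards action 1
        # otherwise keep playing 1/2
    return p
```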

Theorem 1

If we use algorithm OneStep with parameters \(\alpha = 2 - \sqrt {\frac {11}{3}}\) and \({\Delta } = \sqrt {\frac {11}{48}} - \frac {1}{4}\), then the resulting mixed strategy profile is an ε-approximate Nash equilibrium with ε ≤ 0⋅272.

Proof

Let p denote the “uniform” mixed strategy, and p′ denote the output strategy of OneStep. We know that \(\|p - p^{\prime}\|_{1} \leq n{\Delta}\). By Lemma 1, we know that for any player i and action j, \(|u_{i}(j, p_{-i}) - u_{i}(j, p_{-i}^{\prime })|\leq {\Delta }\).

Consider a player i whose discrepancy in p satisfies disc(p, i) ≤ α. Then such a player’s discrepancy in p′ is at most disc(p′, i) ≤ α + 2Δ, so her regret in p′ is bounded by

$$ reg(p^{\prime}, i) = p_{i}^{\prime} \, disc(p^{\prime}, i) = disc(p^{\prime}, i)/2 \leq \alpha/2 + {\Delta}. $$
(1)

Consider a player i such that disc(p, i) > α. Then we consider two different cases. In the first case, the best response of player i remains the same in both profiles p and p′. Since disc(p′, i) ≤ 1, we can bound the regret by

$$ reg(p^{\prime}, i) = \left( \frac{1}{2} - {\Delta} \right) disc(p^{\prime}, i) \leq \frac{1}{2} - {\Delta}. $$
(2)

In the second case, the best response of player i changes when the profile p changes to p′. In this case, the discrepancy is at most 2Δ − α, and so the regret is bounded by

$$ reg(p^{\prime}, i) = \left( \frac{1}{2} + {\Delta} \right) disc(p^{\prime}, i) \leq \left( \frac{1}{2} + {\Delta} \right)(2{\Delta} - \alpha). $$
(3)

By combining all cases from (1) to (3), we know the regret is upper-bounded by

$$ reg(p^{\prime}, i) \leq \max \left( \frac{\alpha}{2}+ {\Delta}, \frac{1}{2}-{\Delta}, \frac{1}{2} (1 + 2{\Delta}) (2{\Delta} - \alpha) \right) $$
(4)

By choosing values

$$(\alpha^{*}, {\Delta}^{*}) = \left( 2 - \sqrt{\frac{11}{3}}, \sqrt{\frac{11}{48}} - \frac{1}{4} \right) \approx (0{\cdot}085, 0{\cdot}229) $$

the right-hand side of (4) is bounded by 0⋅272. Thus, if we use these optimal values of α and Δ in our algorithm, we attain an ε-approximate Nash equilibrium with ε ≤ 0⋅272. □
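As a quick numerical sanity check (not part of the paper's argument), the three terms in (4) can be evaluated at the chosen parameters:

```python
import math

alpha = 2 - math.sqrt(11 / 3)
delta = math.sqrt(11 / 48) - 0.25

bound = max(alpha / 2 + delta,
            0.5 - delta,
            0.5 * (1 + 2 * delta) * (2 * delta - alpha))
print(round(alpha, 3), round(delta, 3), round(bound, 4))
# Expected output: 0.085 0.229 0.2713; the three terms in the max nearly
# coincide at these parameters, and the resulting bound is below 0.272.
```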

Obtaining ε = 0⋅25

We now give a slightly more sophisticated algorithm than the previous one. We will again have the players start with the “uniform” mixed strategy, then let players shift more weight towards their best responses, and finally let some of the players switch back to the uniform strategy if their best responses change in the adjustment. Formally, the algorithm TwoStep proceeds as follows (a code sketch is given after the list):

  • Start with the “uniform” mixed strategy profile, and query the oracle \(\mathcal {Q}_{M}\) for the payoff values. Let bi be player i’s best response.

  • For each player i, set the probability of playing their best response bi to be \(\frac {3}{4}\). Call \(\mathcal {Q}_{M}\) to obtain payoff values for this mixed strategy profile, and let \(b^{\prime }_{i}\) be each player i’s best response in the new profile.

  • For each player i, if \(b_{i}\neq b^{\prime }_{i}\), then resume playing \(p_{i} = \frac {1}{2}\). Otherwise maintain the same mixed strategy from the previous step.
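Below is a minimal sketch of TwoStep under the same assumed `mixed_payoffs` interface as in the OneStep sketch (illustrative only):

```python
def two_step(n, mixed_payoffs):
    """TwoStep sketch: shift 3/4 of the weight to the uniform-profile best
    response, then revert any player whose best response flipped.

    mixed_payoffs(p) plays the role of Q_M as in the OneStep sketch: it
    returns (u_i(0, p_{-i}), u_i(1, p_{-i})) for every player i.
    """
    p = [0.5] * n
    # Best responses at the uniform profile (ties broken towards action 1).
    best = [1 if u1 >= u0 else 0 for (u0, u1) in mixed_payoffs(p)]

    # Step 2: put probability 3/4 on each player's best response b_i.
    p_prime = [0.75 if b == 1 else 0.25 for b in best]
    best_prime = [1 if u1 >= u0 else 0 for (u0, u1) in mixed_payoffs(p_prime)]

    # Step 3: players whose best response changed return to 1/2.
    return [0.5 if b != b2 else q
            for b, b2, q in zip(best, best_prime, p_prime)]
```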

Theorem 2

The mixed strategy profile output by TwoStep is an ε-approximate Nash equilibrium with ε ≤ 0⋅25.

Proof

Let p denote the “uniform” strategy profile, p′ denote the strategy profile after the first adjustment, and p″ denote the output strategy profile of TwoStep.

For any player i, there are three cases regarding the discrepancy disc(p, i).

  1. The discrepancy \(disc(p, i) > \frac {1}{2}\);

  2. The discrepancy \(disc(p, i) \leq \frac {1}{2}\) and player i returns to the uniform mixed strategy at the end;

  3. The discrepancy \(disc(p, i) \leq \frac {1}{2}\) and player i does not return to the uniform mixed strategy at the end.

Before we go through all the cases, the following facts are useful. Observe that \(\|p - p^{\prime}\|_{1}, \|p^{\prime} - p^{\prime\prime}\|_{1}, \|p - p^{\prime\prime}\|_{1} \leq n/4\), so for any action j,

$$\max\{|u_{i}(j, p_{-i}^{\prime}) - u_{i}(j, p_{-i}^{\prime\prime})|, |u_{i}(j, p_{-i}) - u_{i}(j, p_{-i}^{\prime})|, |u_{i}(j, p_{-i}) - u_{i}(j, p_{-i}^{\prime\prime})| \} \leq \frac{1}{4} $$
(5)

It follows that

$$\max\{|disc(p^{\prime},i) - disc(p^{\prime\prime}, i)|, |disc(p, i) - disc(p^{\prime}, i)|, |disc(p,i ) - disc(p^{\prime\prime}, i)| \} \leq \frac{1}{2} $$

We will now bound the regret of player i in the first case. In the mixed strategy profile p, the best response of player i is better than the other action by more than \(\frac {1}{2}\). This means the best response action will remain the same in p′ and p″ for this player, and she will play this action with probability \(\frac {3}{4}\) in the end, so her regret is bounded by \(\frac {1}{4}\).

Let us now focus on the second case, where the discrepancy satisfies \(disc(p, i) \leq \frac {1}{2}\) and player i returns to the uniform strategy of the first step. It is sufficient to show that the discrepancy at the end satisfies \(disc(p^{\prime \prime }, i) \leq \frac {1}{2}\). Without loss of generality, assume that the player’s best response in the “uniform” strategy profile is action \(b_{i} = 1\), and the best response after the first adjustment is action \(b^{\prime }_{i} = 0\). This means

$$u_{i}(1, p_{-i}) - u_{i}(0, p_{-i})\geq 0 \quad \text{and} \quad u_{i}(0, p_{-i}^{\prime}) - u_{i}(1, p_{-i}^{\prime}) \geq 0. $$

By combining with (5), we have

$$\begin{array}{@{}rcl@{}} u_{i}(1, p_{-i}^{\prime\prime}) - u_{i}(0, p_{-i}^{\prime\prime})\leq u_{i}(1, p_{-i}^{\prime}) - u_{i}(0, p_{-i}^{\prime}) + \frac{1}{2} \leq \frac{1}{2}\\ u_{i}(0, p_{-i}^{\prime\prime}) - u_{i}(1, p_{-i}^{\prime\prime}) \leq u_{i}(0, p_{-i}) - u_{i}(1, p_{-i}) + \frac{1}{2} \leq \frac{1}{2}. \end{array} $$

Therefore, we know \(disc(p^{\prime \prime },i) \leq \frac {1}{2}\), and hence the regret \(reg(p^{\prime \prime },i) \leq \frac {1}{4}\).

Finally, we consider the third case where \(disc(p, i) \leq \frac {1}{2}\) and player i does not return to the uniform strategy. Without loss of generality, assume that action 1 is the best response for player i in both p and p′, and so \(u_{i}(1, p_{-i}^{\prime}) \geq u_{i}(0, p_{-i}^{\prime})\). By (5), we also have

$$u_{i}(0, p_{-i}^{\prime\prime}) - u_{i}(1, p_{-i}^{\prime\prime}) \leq \frac{1}{2}. $$

If in the end her best response changes to 0, then the regret is bounded by \(reg(p^{\prime \prime },i) \leq \frac {1}{8}\). Otherwise, if the best response remains 1, then the regret is again bounded by \(reg(p^{\prime \prime }, i) \leq \frac {1}{4}\).

Hence, in all of the cases above we could bound the player’s regret by \(\frac {1}{4}\). □

4 \(\frac {1}{8}\)-Approximate Equilibrium via Uncoupled Dynamics

In this section, we present our main algorithm that achieves approximate equilibria with \(\varepsilon \approx \frac {1}{8}\) in a completely uncoupled setting. In order to arrive at this we first model game dynamics as an uncoupled continuous-time dynamical system where a player’s strategy profile updates depend only on her own mixed strategy and payoffs. Afterwards we present a discrete-time approximation to these continuous dynamics to arrive at a query-based algorithm for computing \((\frac {1}{8} + \alpha )\)-Nash equilibrium with query complexity logarithmic in the number of players. Here, α > 0 is a parameter that can be chosen, and the number of mixed-strategy profiles that need to be tested is inversely proportional to α. Finally, as mentioned in Section 2, we recall that these algorithms carry over to games with stochastic utilities, for which we can show that our algorithm uses an essentially optimal number of queries.

Throughout the section, we will rely on the following notion of a strategy/payoff state, capturing the information available to a player at any moment of time.

Definition 5 (Strategy-payoff state)

For any player i, the strategy/payoff state for player i is defined as the ordered triple \(s_{i} = (v_{i1}, v_{i0}, p_{i}) \in [0, 1]^{3}\), where \(v_{i1}\) and \(v_{i0}\) are the player’s utilities for playing pure actions 1 and 0 respectively, and \(p_{i}\) denotes the player’s probability of playing action 1. Furthermore, we denote the player’s discrepancy by \(D_{i} = |v_{i1} - v_{i0}|\), and we let \(p_{i}^{*}\) denote the probability mass on the best response; that is, if \(v_{i1} \geq v_{i0}\) then \(p_{i}^{*} = p_{i}\), and otherwise \(p_{i}^{*} = 1-p_{i}\).

4.1 Continuous-Time Dynamics

First, we will model game dynamics in continuous time, and assume that a player’s strategy/payoff state (and thus all variables it contains) is a differentiable time-valued function. When we specify these values at a specific time t, we will write si(t) = (vi1(t), vi0(t), pi(t)). Furthermore, for any time-differentiable function g, we denote its time derivative by \(\dot {g} = \frac {d}{dt} g\). We will consider continuous game dynamics formally defined as follows.

Definition 6 (Continuous game dynamic)

A continuous game dynamic consists of an update function f that specifies a player’s strategy update at time t. Furthermore, f depends only on si(t) and \(\dot {s}_{i}(t)\). In other words, \(\dot {p}_{i}(t) = f(s_{i}(t), \dot {s}_{i}(t))\) for all t.

Observation 3

We note that in this framework, a specific player’s updates do not depend on other players’ strategy/payoff states or on their history of play. This will eventually lead us to uncoupled Nash equilibrium computation in Section 4.2.

A central object of interest in our continuous dynamic is a linear sub-space \(\mathcal {P} \subset [0, 1]^{3}\) such that all strategy/payoff states in it incur a bounded regret. Formally, we will define \(\mathcal {P}\) via its normal vector \(\vec {n} = (-\frac {1}{2},\frac {1}{2},1)\) so that \(\mathcal {P} = \{ s_{i} | \ s_{i} \cdot \vec {n} = \frac {1}{2} \}\). Equivalently, we could also write \(\mathcal {P} = \{s_{i} \ | \ p_{i}^{*} = \frac {1}{2}(1+D_{i})\}\). (See Fig. 1 for a visualisation.) With this observation, it is straightforward to see that any player with strategy/payoff state in \(\mathcal {P}\) has regret at most \(\frac {1}{8}\).

Fig. 1: Visualisation of \(\mathcal {P}\); on the red line, \(v_{i0} = v_{i1}\), so the player is indifferent and mixes with equal probabilities; at the red points the player has payoffs of 0 and 1, and makes a pure best response

Lemma 3

If player i’s strategy/payoff state satisfies \(s_{i}\in \mathcal {P}\) , then her regret is at most \(\frac {1}{8}\) .

Proof

This follows from the fact that a player’s regret can be expressed as \(D_{i}(1-p_{i}^{*})\) and the fact that all points on \(\mathcal {P}\) also satisfy \(p_{i}^{*} = \frac {1}{2}(1+D_{i})\). In particular, the maximal regret of \(\frac {1}{8}\) is achieved when \(D_{i} = \frac {1}{2}\) and \(p_{i}^{*} = \frac {3}{4}\). □

Next, we want to show there exists a dynamic that allows all players to eventually reach \(\mathcal {P}\) and remain on it over time. We notice that for a specific player, \(\dot {v}_{i1}\), \(\dot {v}_{i0}\) and subsequently \(\dot {D}_{i}\) measure the cumulative effect of other players shifting their strategies. However, if we limit how much any individual player can change their mixed strategy over time by imposing \(|\dot {p}_{i}| \leq 1\) for all i, Lemma 1 guarantees \(|\dot {v}_{ij}| \leq 1\) for j = 0, 1 and consequently \(|\dot {D}_{i}| \leq 2\). With these quantities bounded, we can consider an adversarial framework where we construct game dynamics by solely assuming that \(|\dot {p}_{i}(t)| \leq 1\), \(|\dot {v}_{ij}(t)| \leq 1\) for j = 0, 1 and \(|\dot {D}_{i}(t)| \leq 2\) for all times t ≥ 0.

Now assume that an adversary controls \(\dot {v}_{i0}\), \(\dot {v}_{i1}\) and hence \(\dot {D}_{i}\); one can show that if a player sets \(\dot {p}_{i}(t) = \frac {1}{2}(\dot {v}_{i1}(t) - \dot {v}_{i0}(t))\), then she stays on \(\mathcal {P}\) whenever she reaches the subspace.

Lemma 4

If \(s_{i}(0) \in \mathcal {P}\) , and \(\dot {p}_{i}(t) = \frac {1}{2}(\dot {v}_{i1}(t) - \dot {v}_{i0}(t))\) , then \(s_{i}(t) \in \mathcal {P} \ \forall \ t \geq 0\) .

Theorem 3

Under the initial conditions \(p_{i}(0) = \frac {1}{2}\) for all i, the following continuous dynamic, Uncoupled Continuous Nash (UCN) , has all players reach \(\mathcal {P}\) in at most \(\frac {1}{2}\) time units. Furthermore, upon reaching \(\mathcal {P}\) a player never leaves.

$$\dot{p}_{i}(t) = f(s_{i}(t), \dot{s}_{i}(t)) = \left\{\begin{array}{cl} ~~~1 & \text{ if } s_{i} \notin \mathcal{P} \text{ and } v_{i1} \geq v_{i0} \\ -1 & \text{ if } s_{i} \notin \mathcal{P} \text{ and } v_{i1} < v_{i0} \\ \hfill \frac{1}{2}(\dot{v}_{i1}(t) - \dot{v}_{i0}(t)) \hfill & \text{ if } s_{i} \in \mathcal{P} \end{array}\right. $$

Proof

From Lemma 4 it is clear that once a player reaches \(\mathcal {P}\) they never leave the plane. It remains to show that it takes at most \(\frac {1}{2}\) time units to reach \(\mathcal {P}\).

Since \(p_{i}(0) = p_{i}^{*}(0) = \frac {1}{2}\), it follows that if \(s_{i}(0) \notin \mathcal {P}\) then \(p_{i}^{*}(0) < \frac {1}{2}(1 + D_{i}(0))\). On the other hand, if we assume that \(\dot {p}_{i}^{*}(t) = 1\) for \(t \in [0,\frac {1}{2}]\), and that player preferences do not change, then it follows that \(p_{i}^{*}(\frac {1}{2}) = 1\) and \(p_{i}^{*}(\frac {1}{2}) \geq \frac {1}{2}(1 + D_{i}(\frac {1}{2}))\), where equality holds only if \(D_{i}(\frac {1}{2}) = 1\). By continuity of \(p_{i}^{*}(t)\) and Di(t) it follows that for some \(k \leq \frac {1}{2}\), \(s_{i}(k) \in \mathcal {P}\). It is simple to see that the same holds in the case where preferences change. □
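To make the update rule concrete, the following is a minimal Euler-step sketch of a single player's UCN update in Python. It is our own illustration: the payoff derivatives are taken as given inputs (in a simulation they would be finite-difference estimates), whereas the query-based algorithm of Section 4.2 below estimates payoffs by sampling.

```python
def ucn_step(v1, v0, p, dv1, dv0, dt):
    """One Euler step of the UCN dynamic for a single player (illustrative).

    (v1, v0, p) is the player's strategy/payoff state s_i, and (dv1, dv0)
    are the current time derivatives of her two pure-action payoffs.
    Returns the updated probability of playing action 1.
    """
    # The plane P is { s_i : s_i . n = 1/2 } with n = (-1/2, 1/2, 1).
    on_plane = abs(-0.5 * v1 + 0.5 * v0 + p - 0.5) < 1e-9
    if not on_plane:
        dp = 1.0 if v1 >= v0 else -1.0     # move towards the best response
    else:
        dp = 0.5 * (dv1 - dv0)             # track the plane P
    return min(1.0, max(0.0, p + dp * dt))
```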

4.2 Discrete Time-Step Approximation

The continuous-time dynamics of the previous section hinge on observing expected payoffs under mixed strategy profiles; we therefore approximate expected payoffs via \(\mathcal {Q}_{\beta ,\delta }\). Our algorithm will have each player adjust her mixed strategy over rounds, querying \(\mathcal {Q}_{\beta , \delta }\) in each round to obtain the payoff values.

Since we are considering discrete approximations to UCN, the dynamics will no longer guarantee that strategy/payoff states stay on the plane \(\mathcal {P}\). For this reason we define the following region around \(\mathcal {P}\):

Definition 7

Let \(\mathcal {P}^{\lambda } = \{ s_{i} \ | \ s_{i} \cdot \vec {n} \in [\frac {1}{2} - \lambda , \frac {1}{2} + \lambda ]\}\), with normal vector \(\vec {n} = (-\frac {1}{2},\frac {1}{2},1)\). Equivalently, \(\mathcal {P}^{\lambda } = \{s_{i} \ | \ p_{i}^{*} = \frac {1}{2}(1+D_{i}) + c, \ c \in [-\lambda ,\lambda ]\}\).

Just as in the proof of Lemma 3, we can use the fact that a player’s regret is \(D_{i}(1-p_{i}^{*})\) to bound regret on \(\mathcal {P}^{\lambda }\).

Lemma 5

The worst case regret of any strategy/payoff state in \(\mathcal {P}^{\lambda }\) is \(\frac {1}{8}(1 + 2\lambda )^{2}\) . This is attained on the boundary: \(\partial \mathcal {P}^{\lambda } = \{ s_{i} \ | \ s_{i} \cdot \vec {n} = \frac {1}{2} \pm \lambda \}\) .

Corollary 1

For a fixed α > 0, if \(\lambda = \frac {\sqrt {1 + 8\alpha } - 1}{2}\), then \(\mathcal {P}^{\lambda }\) attains a maximal regret of \(\frac {1}{8} +\alpha \).

We present an algorithm in the completely uncoupled setting, UN(α, η), that for any parameters α, η ∈ (0, 1] computes a \((\frac {1}{8} + \alpha )\)-Nash equilibrium with probability at least 1 − η.

Since pi(t) ∈ [0, 1] is the mixed strategy of the i-th player at round t we let \(p(t) = (p_{i}(t))_{i = 1}^{n}\) be the resulting mixed strategy profile of all players at round t. Furthermore, we use the mixed strategy oracle \(\mathcal {Q}_{\beta ,\delta }\) from Lemma 2 that for a given mixed strategy profile p returns the vector of expected payoffs for all players with an additive error of β and a correctness probability of 1 − δ.

The following lemma is used to prove the correctness of UN(α, η):

Lemma 6

Suppose that \(w \in \mathbb {R}^{3}\) with \(\|w\|_{\infty} \leq \lambda\), and let \(h(x) = x \cdot \vec {n}\), where \(\vec {n}\) is the normal vector of \(\mathcal {P}\). Then \(h(x + w) - h(x) \in [-2\lambda, 2\lambda]\). Furthermore, if \(w_{3} = 0\), then \(h(x + w) - h(x) \in [-\lambda, \lambda]\).

Proof

The statement follows from the following expression:

$$h(x + w) - h(x) = w \cdot \vec{n} = \frac{1}{2} (w_{2} - w_{1}) + w_{3} $$

[Pseudocode figure for the algorithm UN(α, η)]
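The pseudocode for UN(α, η) appears only as a figure in the original and is not reproduced here. The following Python sketch of one player's per-round update is a speculative reconstruction based on the proof of Theorem 4 below; the step size, the tested region, and the exact update rule are our assumptions and may differ from the paper's actual algorithm.

```python
def un_player_round(p, u1_hat, u0_hat, lam):
    """One player's update in a discretisation of UCN (speculative sketch).

    p              : current probability of playing action 1
    u1_hat, u0_hat : payoff estimates from Q_{beta,delta} (error <= lam/4)
    lam            : width parameter lambda of the target region P^lambda
    Moves the probability mass on the estimated best response by at most
    Delta = lam/4 towards the plane P, as suggested by the proof below.
    """
    delta_step = lam / 4
    d_hat = abs(u1_hat - u0_hat)                    # estimated discrepancy
    p_star = p if u1_hat >= u0_hat else 1.0 - p     # mass on estimated best response
    target = 0.5 * (1.0 + d_hat)                    # the plane P: p* = (1 + D)/2
    if abs(p_star - target) > lam / 4:
        # Outside the estimated region: move p* towards the plane by Delta.
        p_star += delta_step if p_star < target else -delta_step
    # Translate p* back into the probability of playing action 1.
    new_p = p_star if u1_hat >= u0_hat else 1.0 - p_star
    return min(1.0, max(0.0, new_p))
```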

Theorem 4

With probability 1 − η, UN(α, η) correctly returns a \((\frac {1}{8} + \alpha )\)-approximate Nash equilibrium by using \(O(\frac {1}{\alpha ^{4}} \log \left (\frac {n}{\alpha \eta } \right ) ) \) queries.

Proof

By Lemma 2 and a union bound, we can guarantee that with probability at least 1 − η, all sample approximations to mixed payoff queries have an additive error of at most \({\Delta } = \frac {\lambda }{4}\). We will condition on this accuracy guarantee in the remainder of our argument. Now we can show that for each player there will be some round \(k \leq N\) such that at the beginning of that round their strategy/payoff state lies in \(\mathcal {P}^{\lambda /2}\). Furthermore, at the beginning of all subsequent rounds \(t \geq k\), it will also be the case that their strategy/payoff state lies in \(\mathcal {P}^{\lambda /2}\).

The reason any player reaches \(\mathcal {P}^{\lambda /2}\) follows from the fact that in the worst case, after increasing p by Δ for N rounds, p = 1, in which case a player is certainly in \(\mathcal {P}^{\lambda /2}\). Furthermore, Lemma 6 guarantees that each time p is increased by Δ, the value of \(\widehat {s}_{i} \cdot \vec {n}\) changes by at most \(\frac {\lambda }{2}\), which is why the \(\widehat {s}_{i}\) are always steered towards \(\mathcal {P}^{\lambda /4}\). Due to the inherent noise in sampling, players may at times find that \(\widehat {s}_{i}\) slightly exits \(\mathcal {P}^{\lambda /4}\), but since additive errors are at most \(\frac {\lambda }{4}\), we are still guaranteed that the true \(s_{i}\) lies in \(\mathcal {P}^{\lambda /2}\).

The second half of step 4 forces a player to remain in \(\mathcal {P}^{\lambda /2}\) at the beginning of any subsequent round tk. The argumentation for this is identical to that of Lemma 4 in the continuous case.

Finally, the reason that individual probability movements are restricted to \({\Delta } = \frac {\lambda }{4}\) is that at the end of the final round, players will move their probabilities and will not be able to respond to subsequent changes in their strategy/payoff states. From the second part of Lemma 6, we can see that in the worst case this can cause a strategy/payoff state to move from the boundary of \(\mathcal {P}^{\lambda /2}\) to the boundary of \(\mathcal {P}^{\frac {3\lambda }{4}} \subset \mathcal {P}^{\lambda }\). However, λ is chosen in such a way that the worst-case regret within \(\mathcal {P}^{\lambda }\) is at most \(\frac {1}{8}+\alpha \); it therefore follows that UN(α, η) returns a \((\frac {1}{8} + \alpha)\)-approximate Nash equilibrium. Furthermore, the number of queries is

$$(N + 1) \left( \frac{1024}{\lambda^{3}} \log \left( \frac{8nN}{\eta} \right) \right) = \left( \frac{1}{\lambda} + 1 \right) \left( \frac{1024}{\lambda^{3}} \log \left( \frac{8n}{\lambda \eta} \right) \right). $$

It is not difficult to see that \(\frac {1}{\lambda } = O(\frac {1}{\alpha })\) which implies that the number of queries made is \(O \left (\frac {1}{\alpha ^{4}} \log \left (\frac {n}{ \alpha \eta } \right ) \right )\) in the limit. □

4.3 Logarithmic Lower Bound

As mentioned in the preliminaries section, all of our previous results extend to stochastic utilities. In particular, if we assume that G is a game with stochastic utilities whose expected payoffs satisfy the largeness condition with parameter \(\frac {1}{n}\), then we can apply UN(α, η) with O(log(n)) queries to obtain a mixed strategy profile where no player has more than \(\frac {1}{8} + \alpha \) incentive to deviate. Most importantly, for \(\ell > 2\), we can use the same methods as [10] to lower bound the query complexity of computing a mixed strategy profile where no player has more than \((\frac {1}{2} - \frac {1}{\ell })\) incentive to deviate.

Theorem 5

If \(\ell > 2\), the query complexity of computing a mixed strategy profile where no player has more than \((\frac {1}{2} - \frac {1}{\ell })\) incentive to deviate for stochastic utility games is \({\Omega }(\log _{\ell - 1}(n))\). Alongside Theorem 4, this implies that the query complexity of computing mixed strategy profiles where no player has more than \(\frac {1}{8}\) incentive to deviate in stochastic utility games is Θ(log(n)).

Proof

Suppose that we have n players and that \(\ell > 2\). For every b ∈ {0, 1}n we can construct a stochastic utility game Gb as follows: for each player i, the utility of strategy bi is Bernoulli with bias \(\frac {\ell - 1}{\ell }\) and the utility of strategy 1 − bi is Bernoulli with bias \(\frac {1}{\ell }\). Note that this game is trivially \(\left (\frac {1}{n} \right )\)-Lipschitz, as each player’s payoff distributions are completely independent of other players’ strategies.

Suppose that \(\mathcal {G}\) is the uniform distribution on the set of all Gb, then using the same argumentation as Theorem 3 of [10], we get the following:

Theorem 6

Let \(\mathcal {A}\) be a deterministic payoff-query algorithm that uses at most \(\log _{\ell - 1}(n)\) queries and outputs a mixed strategy profile p. If \(\mathcal {A}\) is run on a game drawn from \(\mathcal {G}\), then with probability more than \(\frac {1}{2}\), there will exist a player with a regret greater than \(\frac {1}{2} - \frac {1}{\ell }\) in p.

We can immediately apply Yao’s minimax principle to this result to complete the proof. □

5 Achieving \(\varepsilon < \frac {1}{8}\) with Communication

We return to continuous dynamics to show that we can obtain a worst-case regret of slightly less than \(\frac {1}{8}\) by using limited communication between players, thus breaking the uncoupled setting we have been studying until now.

First of all, let us suppose that initially \(p_{i}(0) = \frac {1}{2}\) for each player i and that UCN is run for \(\frac {1}{2}\) time units so that strategy/payoff states for each player lie on \(\mathcal {P} = \{ s_{i} \ | \ p_{i}^{*} = \frac {1}{2}(1 + D_{i}) \}\). We recall from Lemma 3 that the worst case regret of \(\frac {1}{8}\) on this plane is achieved when \(p_{i}^{*} = \frac {3}{4}\) and \(D_{i} = \frac {1}{2}\). We say a player is bad if they achieve a regret of at least 0⋅12, which on \(\mathcal {P}\) corresponds to having \(p_{i}^{*} \in [0{\cdot }7,0{\cdot }8]\). Similarly, all other players are good. We denote 𝜃 ∈ [0, 1] as the proportion of players that are bad. Furthermore, as the following lemma shows, we can in a certain sense assume that \(\theta \leq \frac {1}{2}\).

Lemma 7

If \(\theta > \frac {1}{2}\), then for a period of 0⋅15 time units, we can allow each bad player to shift to their best response with unit speed, and have all good players update according to UCN to stay on \(\mathcal {P}\). After this movement, at most a 1 − 𝜃 proportion of the players are bad.

Proof

If i is a bad player, in the worst case scenario, \(\dot {D}_{i} = 2\), which keeps their strategy/payoff state, si, on the plane \(\mathcal {P}\). However, at the end of 0⋅15 time units, they will have \(p_{i}^{*} > 0{\cdot }85\), hence they will no longer be bad. On the other hand, since the good players follow the dynamic, they stay on \(\mathcal {P}\), and at worst, all of them become bad. □

Observation 4

After this movement, players who were bad are the only players possibly away from \(\mathcal {P}\), and they have a discrepancy that is greater than 0⋅1. Furthermore, all players who become bad lie on \(\mathcal {P}\).

We can now outline a continuous-time dynamic that utilises Lemma 7 to obtain a \((\frac {1}{8} - \frac {1}{2200})\) maximal regret.

  1. Have all players begin with \(p_{i}(0) = \frac {1}{2}\).

  2. Run UCN for \(\frac {1}{2}\) time units.

  3. Measure 𝜃, the proportion of bad players. If \(\theta > \frac {1}{2}\), apply the dynamics of Lemma 7.

  4. Let all bad players use \(\dot {p}_{i}^{*} = 1\) for \({\Delta } = \frac {1}{220}\) time units.

Theorem 7

If all players follow the aforementioned dynamic, no single player will have a regret greater than \(\frac {1}{8} - \frac {1}{2200}\) .

In essence one shows that if Δ is a small enough time interval (less than 0⋅1 to be exact), then all bad players will unilaterally decrease their regret by at least 0⋅1Δ and good players won’t increase their regret by more than Δ. The time step \({\Delta } = \frac {1}{220}\) is thus chosen optimally.

Proof

We have seen via Lemma 7 that after step 3 the proportion of bad players is at most \(\theta \leq \frac {1}{2}\); we wish to show that step 4 reduces the regret of every bad player by at least \(\frac {1}{2200}\) whilst maintaining a low regret for the good players.

Since after step 3 all bad players remain on \(\mathcal {P}\), we can consider an arbitrary bad player on the plane \(\mathcal {P}\) with regret \(r = D(1-p^{*})\). Let us suppose that we allow all bad players to unilaterally shift their probabilities to their best response for a time period of Δ < 0⋅4 ≤ D units (the bound implies that bad player preferences do not change). This means that the worst case scenario for their regret is when their discrepancy increases to D + 2𝜃Δ. If we let r′ be their new regret after this move, we get the following:

$$r^{\prime} = (D + 2\theta{\Delta})(1-p^{*}-{\Delta}) = D(1-p^{*}) + 2\theta{\Delta}(1-p^{*}) -D{\Delta} - 2\theta{\Delta}^{2} $$
$$= r -2\theta {\Delta}^{2} + \left( 2\theta(1-p^{*}) - D \right){\Delta} $$

However, we can use our initial constraints on D and \(p^{*}\) (from the fact that the players are bad), along with the fact that \(\theta \leq \frac {1}{2}\), to obtain the following:

$$2\theta(1-p^{*}) \leq (1-p^{*}) \leq 0{\cdot}3 < 0{\cdot}4 \leq D $$

Hence, as long as Δ < 0⋅4, we have r′ < r, so we can improve the regret of the bad players without hurting the good players by choosing a suitably small value of Δ.

To see that we don’t hurt the good players too much, suppose that we have a good player with discrepancy D and best-response mass \(p^{*}\). By definition, their initial regret is \(r = D(1 - p^{*}) < 0{\cdot}12\). There are two extreme cases for what can happen to their regret after the bad players shift their strategies in step 4. Either their discrepancy increases by 2𝜃Δ, in which case preferences are maintained, or their discrepancy decreases by 2𝜃Δ and preferences change (which can only occur when 2𝜃Δ > D). For the first case we can calculate the new regret r′ as follows:

$$r^{\prime} = (D + 2\theta {\Delta})(1-p^{*}) = r + 2\theta(1-p^{*}){\Delta} \leq r + (1-p^{*}){\Delta} \leq r+ {\Delta} $$

This means that the total change in regret is at most Δ. Note that if a player was originally bad and then shifted according to Lemma 7 then their discrepancy is at least 0⋅1. For this reason if we limit ourselves to values of Δ < 0⋅1, then all such players will always fall in this case since their preferences cannot change.

Now we analyse the second case where preferences switch. Since we are only considering Δ < 0⋅1, then we can assume that all such profiles must lie on \(\mathcal {P}\). In this case we get the following new regret:

$$r^{\prime} = (2\theta{\Delta} - D)\,p^{*} = r + 2\theta p^{*} {\Delta} - D \leq r + p^{*} {\Delta} - D \leq r + {\Delta} $$

Consequently, in the scenario that preferences change, the change of regret is bounded by Δ as well. This means that for Δ < 0⋅1, the decrease in regret for bad players is at least:

$$2\theta {\Delta}^{2} + (D - 2\theta(1-p^{*})){\Delta} > 0{\cdot}1{\Delta} $$

And for such time-steps Δ, the regret for good players increases by at most Δ. Thus, under these bounds, the optimal value is \({\Delta } = \frac {1}{220}\), which gives rise to a maximal regret of \(0{\cdot}12 + \frac {1}{220} = \frac {137}{1100} = \frac {1}{8} - \frac {1}{2200}\). □

As a final note, we see that this process requires one round of communication in order to perform the operations in Lemma 7: we need to know whether \(\theta > \frac {1}{2}\) so that player profiles can be balanced and there are at most as many bad players as good players. Furthermore, in exactly the same fashion as UN(α, η), we can discretise the above process to obtain a query-based algorithm that achieves a regret of \(\frac {1}{8} - \frac {1}{2200} + \alpha < \frac {1}{8}\) for sufficiently small α.

6 Extensions

In this section we address two extensions to our previous results:

  • (Section 6.1) We extend the algorithm UCN to large games with a more general largeness parameter \(\gamma = \frac {c}{n} \in [0, 1]\), where c is a constant.

  • (Section 6.2) We consider large games with k actions and largeness parameter \(\frac {c}{n}\) (previously we focused on k = 2). Our algorithm uses a new uncoupled approach that is substantially different from the previous ones we have presented.

6.1 Continuous Dynamics for Binary-action Games with Arbitrary γ

We recall that for large games, the largeness parameter γ denotes the extent to which players can affect each others’ utilities. Instead of assuming that \(\gamma = \frac {1}{n}\) we now let \(\gamma = \frac {c}{n} \in [0, 1]\) for some constant c. We show that we can extend UCN and still ensure a better than \(\frac {1}{2}\)-equilibrium. We recall that for the original UCN, players converge to a linear subspace of strategy/payoff states and achieve a bounded regret. For arbitrary \(\gamma = \frac {c}{n}\), we can extend this subspace of strategy/payoff states as follows:

$$\mathcal{P}_{\gamma} = \left\{ (p^{*},D) \ | \ p^{*} = \min \left( \frac{1}{2} + \frac{D}{2c}, 1 \right) \right\} $$

where D and \(p^{*}\) represent respectively a player’s discrepancy and the probability allocated to her best response. For c = 1 we recover the subspace \(\mathcal {P}\) as in UCN. Furthermore, if \(|\dot {{p}^{*}}| \leq 1\) for each player, then \(| \dot {D} | \leq 2c\), which means that we can implement an update as follows:

$$\dot{{p}^{*}} = \frac{\dot{D}}{2c} $$

This leads us to the following natural extension to Theorem 3:

Theorem 8

Under the initial conditions \(p_{i}(0) = \frac {1}{2}\) for all i, the following continuous dynamic, UCN - γ , has all players reach \(\mathcal {P}_{\gamma }\) in at most \(\frac {1}{2}\) time units. Furthermore, upon reaching \(\mathcal {P}_{\gamma }\) a player never leaves.

$$\dot{{p}^{*}_{i}}(t) = f(D_{i}(t), \dot{D}_{i}(t)) = \left\{\begin{array}{cl} 1 & \text{ if } s_{i} \notin \mathcal{P}_{\gamma} \\ 0 & \text{ if } s_{i} \in \mathcal{P}_{\gamma} \text{ and } p^{*}_{i} > \frac{1}{2} + \frac{D_{i}}{2c} \\ \frac{\dot{D_{i}}}{2c} & \text{ otherwise } \end{array}\right. $$

Notice that unlike UCN, this dynamic is no longer necessarily a continuously differentiable function with respect to time when c > 1. However, it is still continuous.

Once again, we note that for all strategy/payoff states, regret can be expressed as

$$R = (1-p^{*})D, $$

from which we can prove the following:

Theorem 9

Suppose that \(\gamma = \frac {c}{n}\) and that a player’s strategy/payoff state lies on \(\mathcal {P}_{\gamma }\). Then her regret is at most \(\frac {c}{8}\) for c ≤ 2 and at most \(\frac {1}{2} - \frac {1}{2c}\) for c > 2. Furthermore, the equilibria obtained are also c-WSNE.

Proof

If c ≤ 2, then regret is maximised when \(D = \frac {c}{2}\) and consequently when \(p^{*} = \frac {3}{4}\). This results in a regret of \(\frac {c}{8}\). On the other hand, if c > 2, then regret is maximised when D = 1 and consequently \(p^{*} = \frac {1}{2} + \frac {1}{2c}\). This results in a regret of \(\frac {1}{2} - \frac {1}{2c}\).

As for the second part of the theorem, from the definition of \(\mathcal {P}_{\gamma }\) and from the definition of ε-WSNE in Section 2 it is straightforward to see that when D ≥ c we have \(p^{*} = 1\), which means that positive weight is only ever placed on a strategy whose utility is within c of that of the best response. □
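A quick numerical sanity check of these two maxima (illustrative only, not part of the paper's argument):

```python
def max_regret_on_plane(c, grid=100000):
    """Maximise (1 - p*) * D over the curve P_gamma, where
    p* = min(1/2 + D/(2c), 1) and the discrepancy D ranges over [0, 1]."""
    best = 0.0
    for i in range(grid + 1):
        d = i / grid
        p_star = min(0.5 + d / (2 * c), 1.0)
        best = max(best, (1 - p_star) * d)
    return best


print(round(max_regret_on_plane(1.0), 4))   # ~0.125  = c/8 for c = 1
print(round(max_regret_on_plane(2.0), 4))   # ~0.25   = c/8 for c = 2
print(round(max_regret_on_plane(4.0), 4))   # ~0.375  = 1/2 - 1/(2c) for c = 4
```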

Thus we obtain a regret that is better than simply randomising between both strategies, although as should be expected, the advantage goes to zero as the largeness parameter increases.

6.1.1 Discretisation and Query Complexity

In the same way that UN(α, η) discretises UCN, the dynamic UCN-γ of Theorem 8 can be discretised to yield the following result.

Theorem 10

For a given accuracy parameter α and correctness probability η, we can implement a query-based discretisation of UCN-γ that with probability 1 − η correctly computes an ε-approximate Nash equilibrium for

$$\varepsilon = \left\{\begin{array}{cl} \frac{c}{8} + \alpha & \text{ if } c \leq 2 \\ \frac{1}{2} - \frac{1}{2c} + \alpha \hfill & \text{ if } c > 2 \end{array}\right. $$

Furthermore, the discretisation uses \(O \left (\frac {1}{\alpha ^{4}} \log \left (\frac {n}{\alpha \eta } \right ) \right )\) queries.

6.2 Equilibrium Computation for k-action Games

When the number of pure strategies per player is k > 2, the initial “strawman” idea corresponding to Observation 2 is to have all n players randomise uniformly over their k strategies. Notice that the resulting regret may in general be as high as \(1-\frac {1}{k}\). In this section we give a new uncoupled-dynamics approach for computing approximate equilibria in k-action games where (for largeness parameter \(\gamma =\frac {1}{n}\)) the worst-case regret approaches \(\frac {3}{4}\) as k increases, hence improving over uniform randomisation over all strategies. Recall that in general we are considering \(\gamma = \frac {c}{n}\) for fixed c ∈ [0, n]. The following is just a simple extension of the payoff oracle \(\mathcal {Q}_{\beta ,\delta }\) to the setting with k actions: for any input mixed strategy profile p, the oracle will with probability at least 1 − δ, output payoff estimates for p with error at most β for all n players.

Estimating Payoffs for Mixed Profiles in k-action Games

Given a payoff oracle \(\mathcal {Q}\) and any target accuracy parameter β and confidence parameter δ, consider the following procedure to implement an oracle \(\mathcal {Q}_{\beta , \delta }\):

  • For any input mixed strategy profile p, compute a new mixed strategy profile \(p^{\prime } = (1 - \frac {\beta }{2})p + (\frac {\beta }{2k})\mathbf {1}\) such that each player i is playing uniform distribution with probability \(\frac {\beta }{2}\) and playing distribution pi with probability \(1 - \frac {\beta }{2}\).

  • Let \(m = \frac {64k^{2}}{\beta ^{3}} \log \left (8n /\delta \right )\), sample m pure action profiles independently from \(p^{\prime }\), and call the oracle \(\mathcal {Q}\) with each sampled profile as input to obtain a payoff vector.

  • Let \(\widehat u_{i,j}\) be the average sampled payoff to player i over the queries in which she played action j. Output the payoff vector \((\widehat {u}_{i,j})_{i\in [n], j\in \mathcal {A}}\).

As in previous sections, we begin by assuming that our algorithm has access to \(\mathcal {Q}_{M}\), the more powerful query oracle that returns exact expected payoffs with regards to mixed strategies. We will eventually show in Section 6.2.1 that this does not result in a loss of generality, as when utilising \(\mathcal {Q}_{\beta , \delta }\) we incur a bounded additive loss with regards to the approximate equilibria we obtain.

The general idea of Algorithm 2 is as follows. For a parameter \(N\in \mathbb {N}\), every player uses a mixed strategy consisting of a discretised distribution in which a player’s probability is divided into N quanta of probability \(\frac {1}{N}\), each of which is allocated to a single pure strategy. We refer to these quanta as “blocks” and label them B1, …, BN. Initially, blocks may be allocated arbitrarily to pure strategies. Then in time step t, for t = 1, …, N, block t is reallocated to the player’s best response to the other players’ current mixed strategies.

[Pseudocode figure for Algorithm 2 (BU)]
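Since the pseudocode figure is not reproduced here, the following is a minimal Python sketch of the block-update idea described above. It assumes an exact mixed-strategy oracle `mixed_payoffs` (an interface we introduce for illustration) that returns each player's expected payoff for every pure action.

```python
def block_update(n, k, N, mixed_payoffs):
    """BU sketch: every player splits her probability into N blocks of mass
    1/N; in round t, block t is reallocated to the current best response.

    mixed_payoffs(p) is assumed to return, for each player i, the list of
    expected payoffs (u_i(0, p_{-i}), ..., u_i(k-1, p_{-i})) under the mixed
    profile p, where p[i][j] is player i's probability on action j.
    """
    # Initially blocks are allocated arbitrarily (here: all on action 0).
    blocks = [[0] * N for _ in range(n)]          # blocks[i][t] = action of block t

    def profile():
        return [[blocks[i].count(j) / N for j in range(k)] for i in range(n)]

    for t in range(N):
        payoffs = mixed_payoffs(profile())
        for i in range(n):
            # Reallocate block t of player i to her current best response.
            blocks[i][t] = max(range(k), key=lambda j: payoffs[i][j])
    return profile()
```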

The general idea of the analysis of Algorithm 2 is the following. In each time step, a player’s utilities change by at most nγ/N = c/N. Hence, at the completion of Algorithm 2, block N is allocated to a nearly-optimal strategy, and more generally, block N − r is allocated to a strategy whose closeness to optimality goes down as r increases, but this enables us to derive the improved overall performance of each player’s mixed strategy.

Theorem 11

BU returns a mixed strategy profile \((\vec {p}_{i})_{i \in [n]}\) that is an ε -NE when:

$$\varepsilon = \left\{\begin{array}{cl} c \left( 1 + \frac{1}{N} \right) & \text{ if } c \leq \frac{1}{2} \\ 1 - \frac{1}{4c} + \frac{1}{2N} & \text{ if } c > \frac{1}{2} \end{array}\right. $$

Notice for example that for \(\gamma =\frac {1}{n}\) (i.e. putting c = 1), each player’s regret is at most \(\frac {3}{4}+\frac {1}{2N}\), so we can make this arbitrarily close to \(\frac {3}{4}\) since N is a parameter of the algorithm.

Proof

For an arbitrary player i ∈ [n], in each step t = 1, ..., N, probability block Bt is re-assigned to i’s current best response.

Since every player is doing the same transfer of probability, by the largeness condition of the game, one can see that every block’s assigned strategy incurs a regret that increases by at most \(\frac {2c}{N}\) at every time step. This means that at the end of the N rounds, the j-th block will at worst be assigned to a strategy that has regret \(\min \{1, \frac {(2c)j}{N} \}\). This means we can bound a player’s total regret as follows:

$$R \leq \sum\limits_{i = 1}^{N} \min \left\{1, \frac{(2c)i}{N} \right\} \cdot \frac{1}{N} $$

There are two important cases for this sum: when 2c ≤ 1 and when 2c > 1. In the first case:

$$R \leq \sum\limits_{i = 1}^{N} \frac{2c i}{N^{2}} = n\gamma \left( 1 + \frac{1}{N} \right) $$

And in the second:

$$R \leq \left( \sum\limits_{i = 1}^{N/2c} \frac{2c i}{N^{2}} \right) + \left( N - \frac{N}{2c} \right) \cdot \frac{1}{N} = 1 - \frac{1}{4c} + \frac{1}{2N} $$

In fact, we can slightly improve the bounds of Theorem 11 by introducing a dependence on k. In order to do so, we need to introduce some definitions first.

Definition 8

We denote by \(\mathcal {A}^{b,h}\) the truncated triangle in the Cartesian plane under the line y = hx for x ∈ [0, b], with height capped at y = 1. Note that if bh ≤ 1 the truncated triangle is the entire triangle, unlike the case where bh > 1. See Fig. 2 for a visualisation.

Fig. 2: Visualisation of \(\mathcal {A}^{b,h}\) when bh ≤ 1 (left) and bh > 1 (right)

Definition 9

For a given truncated triangle \(\mathcal {A}^{b,h}\) and a partition of the base, \(\mathcal {P} = \{x_{1}, ...,x_{r}\}\) where \(0 \leq x_{1} \leq \ldots \leq x_{r} \leq b\), we denote the left sum of \(\mathcal {A}^{b,h}\) under \(\mathcal {P}\) by \(LS(\mathcal {A}^{b,h}, \mathcal {P})\) (for reference see Fig. 3) and, with the convention \(x_{r+1} = b\), define it as follows:

$$LS(\mathcal{A}^{b,h}, \mathcal{P}) = \sum\limits_{i = 1}^{|\mathcal{P}|} (hx_{i})(x_{i + 1} - x_{i}) $$
Fig. 3: Example of the left sum of a five-element partition of the base in the case where bh > 1

With these definitions in hand, we can set up a correspondence between the worst case regret of BU and left sums of \(\mathcal {A}^{(1+\frac {1}{N}), 2c}\). Suppose in the process of BU a player has blocks B1, ..., BN in the queue. Furthermore, without loss of generality, suppose that her k strategies are sorted in descending order of utility as u1, ..., uk, where uj is the expected utility of the j-th strategy at the end of the process. Furthermore, let Rj = u1 − uj (i.e. the regret of strategy j), so that we also have 0 = R1 ≤ R2 ≤ ... ≤ Rk ≤ 1. If N is much larger than k, then by the pigeon-hole principle, many blocks will be assigned to the same strategy, and hence will incur the same regret. However, as in the analysis of the previous bounds, each block has restrictions as to how much regret its assigned strategy can incur due to the largeness condition of the game. In particular, block Bb can only be assigned to a strategy j such that \(R_{j} \leq \min \{1, \frac {(2c)b}{N} \}\). For such an assignment, since the block has probability mass \(\frac {1}{N}\), it contributes a value of \(R_{j} \cdot \frac {1}{N}\) to the overall regret of a player. Hence for fixed regret values (R1, .., Rk), we can pick a valid assignment of these values to blocks and get an expression for total regret that can be visualised geometrically in Fig. 4.

Fig. 4: For N = 9, k = 5, and \(c > \frac {1}{2}\), this shows a visualisation of a feasible allotment of regret values to blocks after BU. Note that this does not exhibit worst-case regret

The next important question is which valid assignment of blocks to regret values results in the maximal total regret for a player. In Fig. 4, block 1 is assigned to strategy 1, blocks 2, 3 and 7 are assigned to strategy 2, blocks 4 and 5 to strategy 3, block 6 to strategy 4, and finally blocks 8 and 9 to strategy 5.

One can see that this assignment does not result in maximal regret. Rather, it is simple to see that a greedy allotment of blocks to regret values results in maximal total regret. Such a greedy allotment can be described as follows: assign as many blocks as possible from the end of the queue to Rk (their regret constraints permitting), then repeat the process one-by-one for the Ri with smaller indices, using the blocks earlier in the queue. This is visualised in Fig. 5, and naturally leads to the following result:

Fig. 5: For N = 9, k = 5 and \(c > \frac {1}{2}\), a visualisation of a feasible allotment of regret values to blocks after BU. Unlike Fig. 4, this allotment does exhibit worst-case regret

Theorem 12

For any fixed R1, ..., Rk, the worst-case assignment of probability blocks Bb to strategies corresponds to a left sum of \(\mathcal {A}^{(1+\frac {1}{N}), 2c}\) for some partition of \([0, 1+\frac {1}{N}]\) with cardinality at most k − 1.
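To illustrate the correspondence, the following Python sketch (with hypothetical helper names, not taken from the paper) implements the greedy allotment described above: blocks from the end of the queue take the largest regret value permitted by their cap \(\min \{1, \frac{2cb}{N}\}\), and each block of mass \(\frac{1}{N}\) contributes \(\frac{R_j}{N}\) to the total.

```python
# Greedy allotment of blocks to regret values (illustrative sketch only).

def greedy_worst_case_regret(regrets, N, c):
    """regrets: [R_1, ..., R_k] sorted in ascending order with R_1 = 0."""
    total = 0.0
    j = len(regrets) - 1                   # try the largest regret value first
    for b in range(N, 0, -1):              # blocks at the end of the queue first
        cap = min(1.0, 2 * c * b / N)      # largeness constraint on block B_b
        while j > 0 and regrets[j] > cap:  # fall back to a smaller regret value
            j -= 1
        total += regrets[j] / N            # block B_b carries probability mass 1/N
    return total

# e.g. greedy_worst_case_regret([0.0, 0.3, 0.6, 1.0], N=9, c=1.0)
```

Maximising this quantity over feasible regret values pushes each Rj towards its cap, which is how the left-sum correspondence of Theorem 12 arises.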

Theorem 12 reduces the problem of computing worst-case regret to that of computing maximal left sums over arbitrary partitions. To that end, we define the precise worst-case quantity we will be interested in.

Definition 10

For a given \(\mathcal {A}^{b,h}\), let us denote the maximal left sum under partitions of cardinality k by \(\mathcal {A}^{b,h}_{k}\). Mathematically, the value is defined as follows:

$$\mathcal{A}^{b,h}_{k} = \sup\limits_{|\mathcal{P}| = k} LS(\mathcal{A}^{b,h}, \mathcal{P}) $$

We can explicitly compute these values, which in turn bound a player’s maximal regret.

Lemma 8

\(\mathcal {A}^{1,1}_{k} = \left (\frac {1}{2} \right ) \left (\frac {k}{k + 1} \right )\), which is obtained on the partition \(\mathcal {P} = \{\frac {1}{k + 1}, \frac {2}{k + 1}, ...,\frac {k}{k + 1}\}\).

Proof

This result follows by induction, using the self-similarity of the original triangle. For k = 1, a partition consists of a single point x ∈ [0, 1]; hence the left sum is \(\mathcal {A}^{1,1}_{1}(x) = x(1-x)\), which as a quadratic function of x is maximised at \(x = \frac {1}{2}\). At this point we get \(\mathcal {A}^{1,1}_{1}(x) = \frac {1}{2} \cdot \frac {1}{2}\), as desired.

Now let us assume that the lemma holds for k = n; we wish to show that it holds for k = n + 1. Any (n + 1)-element partition must have a left-most element x1. We let \(\mathcal {A}^{\prime }(x)\) be the maximal left sum for an (n + 1)-element partition, given that x1 = x. By fixing x we add an area of x(1 − x) under the triangle, and we are left with n points with which to partition [x, 1]. We notice, however, that we are thus maximising the left sum under a triangle similar to the original, scaled by a factor of (1 − x). We can therefore use the inductive hypothesis to obtain the following expression:

$$\mathcal{A}^{\prime}(x) = (1-x)x + (1-x)^{2} \mathcal{A}^{1,1}_{n} = (1-x)x + \frac{1}{2} (1-x)^{2} \left( \frac{n}{n + 1} \right) $$

It is straightforward to see that \(\mathcal {A}^{\prime }(x)\) is maximised when \(x = \frac {1}{n + 2}\), where it attains \(\frac{1}{2} \left( \frac{n+1}{n+2} \right)\). Consequently the maximal left sum arises from the partition with \(x_{i} = \frac {i}{n + 2}\), which in turn proves our claim. □
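As a quick numerical check of Lemma 8 for small k, one can grid-search over partitions. The sketch below is ours and uses the same left-sum convention as before.

```python
# Grid-search check of Lemma 8 for k = 1, 2, 3 (illustrative only).

from itertools import combinations_with_replacement

def left_sum_unit(partition):
    """Left sum of the unit triangle A^{1,1} (no truncation needed)."""
    xs = sorted(partition)
    return sum(x * ((xs[i + 1] if i + 1 < len(xs) else 1.0) - x)
               for i, x in enumerate(xs))

grid = [i / 120 for i in range(121)]
for k in (1, 2, 3):
    best = max(left_sum_unit(p) for p in combinations_with_replacement(grid, k))
    closed = 0.5 * k / (k + 1)
    stated = [i / (k + 1) for i in range(1, k + 1)]
    print(k, round(best, 4), round(closed, 4), round(left_sum_unit(stated), 4))
```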

Via linear scaling, one can extend the above result to arbitrary base and height values b, h.

Corollary 2

For bh ≤ 1, \(\mathcal {A}^{b,h}_{k} = \left (\frac {bh}{2} \right ) \left (\frac {k}{k + 1} \right )\), which is obtained on the partition \(\mathcal {P} = \{\frac {b}{k + 1}, \frac {2b}{k + 1}, ...,\frac {kb}{k + 1}\}\).

Corollary 3

For bh > 1, we obtain the following expressions for \(\mathcal {A}^{b,h}_{k}\):

$$\mathcal{A}^{b,h}_{k} = \left\{\begin{array}{cl} \left( \frac{bh}{2} \right) \left( \frac{k}{k + 1} \right) & \text{ if } \frac{k}{k + 1} \leq \frac{1}{h} \\ b \left( 1 - \frac{1}{2h} - \frac{1}{2hk} \right) & \text{ otherwise } \end{array}\right. $$

Proof

For the first case (when \(\frac {k}{k + 1} \leq \frac {1}{h}\)), let us consider \(\mathcal {B}^{b,h}\), the triangle with base b and height h that, unlike \(\mathcal {A}^{b,h}\), is not truncated at unit height. From scaling our previous result from Corollary 2, the largest k-element left sum for \(\mathcal {B}^{b,h}\) occurs for the partition \(\mathcal {P} = \{\frac {b}{k + 1}, \frac {2b}{k + 1}, ...,\frac {bk}{k + 1} \}\). In this case none of the heights at these points exceeds 1, so, since \(\mathcal {A}^{b,h} \subset \mathcal {B}^{b,h}\), the left sums of \(\mathcal {P}\) for both geometric figures coincide. It follows that this partition also gives a maximal k-element left sum for \(\mathcal {A}^{b,h}\), and thus the claim holds.

On the other hand, let us now consider the case where \(\frac {k}{k + 1} > \frac {1}{h}\). In a similar spirit to previous proofs, let us define \(\mathcal {A}(x): [0,b] \rightarrow \mathbb {R}\) to be the maximal left sum under \(\mathcal {A}^{b,h}\) over partitions \(\mathcal {P}\) whose right-most element is x. From Figs. 4 and 5 it should be clear that we need only consider \(x \in [0,\frac {b}{h}]\): if we ever had \(x \geq \frac {b}{h}\), that would correspond to some block being assigned a regret value of Rj = 1 for some strategy j, and with such a maximal-regret strategy available, the greedy allotment would assign as many blocks as possible to strategy j (or some other maximal-regret strategy), which again corresponds to the final element of our partition being \(\frac {b}{h}\).

Now that we have restricted our focus to \(x \in [0,\frac {b}{h}]\), we wish to consider the triangle \(\mathcal {B}^{\ell ,\frac {k + 1}{k}}\) of base length \(\ell = \frac {(k + 1)b}{kh}\) and height \(\frac {k + 1}{k}\), which is not truncated at height 1. Let us define \(\mathcal {B}(x)\) to be the analogous function that computes the maximal k-element left sum under \(\mathcal {B}^{\ell ,\frac {k + 1}{k}}\) given that the right-most partition element is \(x \in [0, \frac {b}{h}]\). Geometrically, one can see that we get the following identity:

$$\mathcal{A}(x) = \mathcal{B}(x) + \frac{hx}{b} \left( b - \ell \right) $$

By Corollary 2, the optimal k-element partition of \(\mathcal {B}^{\ell ,\frac {k + 1}{k}}\) has right-most element \(\frac {\ell k}{k + 1} = \frac {b}{h}\), so \(\mathcal {B}(x)\) is maximised at \(x = \frac {b}{h}\). Furthermore, since b ≥ ℓ in this case, the second term of the above sum is also maximised at this value; therefore \(\mathcal {A}(x)\) is maximised at \(x = \frac {b}{h}\). Concretely, this means that the maximal k-element partition for \(\mathcal {A}^{b,h}\) is \(\mathcal {P} = \{\frac {b}{hk}, \frac {2b}{hk}, ...,\frac {(k-1)b}{hk}, \frac {b}{h}\}\). This partition results in a maximal left sum of \(\mathcal {A}^{\frac {b}{h},1}_{k-1} + \left (b - \frac {b}{h} \right )\), which after simplification gives us the value \(b \left( 1 - \frac {1}{2h} - \frac {1}{2hk} \right)\), as desired. □
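As a spot check of the second case, the following snippet grid-searches partitions of \(\mathcal {A}^{1,2}\) with k = 3 (so b = 1, the cap y = 1 is reached at \(x = \frac {b}{h} = \frac {1}{2}\), and \(\frac {k}{k+1} = \frac {3}{4} > \frac {1}{h}\)), and compares the result with the closed form and with the partition named in the proof; the code is ours.

```python
# Grid-search spot check of the second case of Corollary 3 (b = 1, h = 2, k = 3).

from itertools import combinations_with_replacement

def left_sum(b, h, partition):
    xs = sorted(partition)
    return sum(min(1.0, h * x) * ((xs[i + 1] if i + 1 < len(xs) else b) - x)
               for i, x in enumerate(xs))

b, h, k = 1.0, 2.0, 3
grid = [i / 120 for i in range(121)]
best = max(left_sum(b, h, p) for p in combinations_with_replacement(grid, k))
closed = b * (1 - 1 / (2 * h) - 1 / (2 * h * k))
stated = [i * b / (h * k) for i in range(1, k)] + [b / h]
print(round(best, 4), round(closed, 4), round(left_sum(b, h, stated), 4))
# All three values agree here (they equal 2/3).
```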

Finally, we can combine everything above to obtain:

Theorem 13

With access to a query oracle that computes exact expected utilities for mixed strategy profiles, BU returns an ε-approximate Nash equilibrium for

$$\varepsilon = \left\{\begin{array}{cl} c \left( \frac{k-1}{k} \right) \left( 1 + \frac{1}{N} \right) & \text{ if } c \leq \frac{1}{2} \\ c \left( \frac{k-1}{k} \right)\left( 1 + \frac{1}{N} \right) & \text{ if } c > \frac{1}{2} \text{ and } \frac{k-1}{k} \leq \frac{1}{2c} \\ \left( 1 - \frac{1}{4c} - \frac{1}{4c (k-1)} \right) \left( 1 + \frac{1}{N} \right) & \text{ if } c > \frac{1}{2} \text{ and } \frac{k-1}{k} > \frac{1}{2c} \end{array}\right. $$

Proof

This is just a straightforward application of Theorem 12 together with Corollaries 2 and 3. □
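For reference, the guarantee of Theorem 13 can be transcribed directly as a small Python function (our transcription; it assumes k ≥ 2):

```python
# Direct transcription of the three cases of Theorem 13 (exact-oracle setting).

def bu_epsilon(c: float, k: int, N: int) -> float:
    """Approximation guarantee of BU with an exact expected-utility oracle."""
    if c <= 0.5 or (k - 1) / k <= 1 / (2 * c):
        return c * ((k - 1) / k) * (1 + 1 / N)
    return (1 - 1 / (4 * c) - 1 / (4 * c * (k - 1))) * (1 + 1 / N)

# e.g. bu_epsilon(2.0, 2, 1000) is roughly 0.75
```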

6.2.1 Query Complexity of Block Method

In the above analysis we assumed access to a mixed-strategy oracle, since we computed exact expected payoffs at each time step for all players. When using \(\mathcal {Q}_{\beta , \delta }\), however, there is an additive error and a failure probability to take into account.

In terms of the additive error, if we assume that there is an additive error of β on each of the N queries in BU, then at any time step the b-th block will be assigned to a strategy that incurs at most \(\min \left\{1, \frac{2cb}{N} \right\} + \beta \) regret. This can be visualised geometrically in Fig. 6, and leads to the following extension of Theorem 12.

Fig. 6: Example of an additive error α in utility sampling. For this 7-element partition, regret bounds are increased by α and we get an augmented truncated triangle

Theorem 14

In BU, if queries incorporate an additive error of β on expected utilities, then for any fixed choice of R1, ..., Rk, the worst-case assignment of probability blocks Bb to strategies corresponds to a left sum of \(\mathcal {A}^{(1+\frac {1}{N} + \frac {\beta }{2c}), 2c}\) for some partition of \([0, 1+\frac {1}{N}]\) with cardinality at most k − 1.

Finally, since our approximate query oracle is only correct with a bounded probability, in order to ensure that the additive error of β holds simultaneously on all N queries of BU, we impose a failure probability of \(\frac {\delta }{N}\) on each individual query, so that a union bound gives an overall failure probability of at most δ. This leads to the following query complexity result for BU.

Theorem 15

For any α, η > 0, if we implement BU using \(\mathcal {Q}_{\beta ,\delta }\) with β = α and \(\delta = \frac {\eta }{N}\), then with probability 1 − η we obtain an ε-approximate Nash equilibrium for

$$\varepsilon = \left\{\begin{array}{cl} c \left( \frac{k-1}{k} \right) \left( 1 + \frac{1}{N} + \frac{\alpha}{2c} \right)& \text{ if } c \leq \frac{1}{2} \\ c \left( \frac{k-1}{k} \right)\left( 1 + \frac{1}{N} + \frac{\alpha}{2c}\right)& \text{ if } c > \frac{1}{2} \text{ and } \frac{k-1}{k} \leq \frac{1}{2c} \\ \left( 1 - \frac{1}{4c} - \frac{1}{4c (k-1)} \right) \left( 1 + \frac{1}{N} + \frac{\alpha}{2c} \right)& \text{ if } c > \frac{1}{2} \text{ and } \frac{k-1}{k} > \frac{1}{2c} \end{array}\right. $$

The total number of queries used is \(\frac {64k^{2}}{\alpha ^{3}} \log \left (\frac {8nN}{\delta } \right )\).
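The statement of Theorem 15 can likewise be transcribed directly (our transcription; the logarithm is read as a natural logarithm here, and δ is the parameter appearing in the theorem):

```python
# Direct transcription of the epsilon and query-count expressions of Theorem 15.

import math

def bu_epsilon_sampled(c: float, k: int, N: int, alpha: float) -> float:
    """Guarantee of BU when implemented with the sampling oracle (beta = alpha)."""
    base = 1 + 1 / N + alpha / (2 * c)
    if c <= 0.5 or (k - 1) / k <= 1 / (2 * c):
        return c * ((k - 1) / k) * base
    return (1 - 1 / (4 * c) - 1 / (4 * c * (k - 1))) * base

def total_queries(n: int, N: int, k: int, alpha: float, delta: float) -> float:
    """Total number of payoff queries stated in Theorem 15."""
    return (64 * k ** 2 / alpha ** 3) * math.log(8 * n * N / delta)
```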

Once again, it is interesting to note that the first regret bounds we derived do not depend on k. It is also important to note that the regret bound carries an extra term of order \(O(\frac {1}{N})\) in the number of probability blocks. Although this term can be made arbitrarily small by increasing N, there is a price to be paid in query complexity, as this involves a larger number of rounds in the computation of approximate equilibria.

6.3 Comparison Between Both Methods

We can compare the guarantees of the methods from Sections 6.1 and 6.2 when the number of strategies is k = 2 and the largeness parameter is \(\gamma = \frac {c}{n} \in [0, 1]\). Furthermore, we compare both methods in the limit of large N, so that the \(O(\frac{1}{N})\) terms vanish.

$$\begin{array}{c|ccc} & c \leq 1 & 1 \leq c \leq 2 & c \geq 2 \\ \hline \text{UNC} & \frac{c}{8} & \frac{c}{8} & \frac{1}{2} - \frac{1}{2c} \\ \text{BU} & \frac{c}{2} & 1 - \frac{1}{2c} & 1 - \frac{1}{2c} \end{array}$$

One can see that UNC’s guarantee is better than BU’s by a multiplicative factor of \(\frac {1}{4}\) in the case of small c, and better by an additive \(\frac {1}{2}\) for large c.
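For concreteness, the tabulated guarantees can be written as two small functions of c for k = 2 in the large-N limit (our transcription of the table above):

```python
# Transcription of the comparison table (k = 2, large N).

def unc_epsilon(c: float) -> float:
    return c / 8 if c <= 2 else 0.5 - 1 / (2 * c)

def bu_epsilon_limit(c: float) -> float:
    return c / 2 if c <= 1 else 1 - 1 / (2 * c)

for c in (0.5, 1.5, 4.0):
    print(f"c={c}: UNC={unc_epsilon(c):.3f}, BU={bu_epsilon_limit(c):.3f}")
```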

7 Conclusion and Further Research

The obvious question raised by our results is the possible improvement in the additive approximation obtainable. Since pure approximate equilibria are known to exist for these games, the search for such equilibria is of interest. A slightly weaker objective (but still stronger than the solutions we obtain here) is the search for well-supported approximate equilibria in cases where c > 1 and for better well-supported approximate equilibria in general.

There is also the question of lower bounds, especially in the completely uncoupled setting. Our algorithms are randomised (estimating the payoffs that result from a mixed strategy profile via random sampling) and one might also ask what can be achieved using deterministic algorithms.