1 Introduction

1.1 Generative Adversarial Networks

Generative adversarial networks (GANs) are a powerful framework for learning generative models. They have seen successful applications in a wide range of fields, including image synthesis [23, 25], image super-resolution [14, 15], and anomaly detection [28]. A GAN maintains two deep neural networks: the discriminator and the generator. The generator aims to produce samples that resemble the data distribution, while the discriminator aims to distinguish the generated samples from the data samples.

Mathematically, the standard GAN training aims to solve the following optimization problem:

$$\begin{aligned} \min _{G} \max _{D} V(G,D) = \mathsf {E}_{{\varvec{x}}\sim p_d({\varvec{x}})} \{\log D({\varvec{x}}) \} + \mathsf {E}_{{\varvec{z}}\sim p_z({\varvec{z}})} \{\log (1- D(G({\varvec{z}})))\}. \end{aligned}$$
(1)

The global optimum is reached when the generated distribution \(p_g\), which is the distribution of \(G({\varvec{z}})\) given \({\varvec{z}}\sim p_z({\varvec{z}})\), is equal to the data distribution. This optimum is attained under the assumption that the discriminator and generator are jointly optimized. Practical training of GANs, however, may not satisfy this assumption. In some training processes, instead of ideal joint optimization, the discriminator and generator seek their best responses in turn; that is, the discriminator (resp. generator) is updated with the generator (resp. discriminator) fixed, and the two updates alternate.
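To make the objective in Eq. (1) concrete, the value \(V(G,D)\) can be estimated by Monte-Carlo averaging over data and noise samples. The following NumPy sketch illustrates this; the one-dimensional data, the logistic discriminator and the shift generator are purely hypothetical stand-ins, not part of the original formulation.

```python
import numpy as np

def estimate_value(D, G, data_samples, noise_samples):
    """Monte-Carlo estimate of V(G, D) in Eq. (1).

    D maps a sample to a probability in (0, 1); G maps noise to a sample.
    Both are stand-in callables used only for illustration.
    """
    real_term = np.mean(np.log(D(data_samples)))
    fake_term = np.mean(np.log(1.0 - D(G(noise_samples))))
    return real_term + fake_term

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=10_000)        # toy data distribution p_d
z = rng.normal(0.0, 1.0, size=10_000)        # noise distribution p_z
D = lambda s: 1.0 / (1.0 + np.exp(-s))       # hypothetical discriminator
G = lambda s: s + 0.5                        # hypothetical generator
print(estimate_value(D, G, x, z))
```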

Other conventional training methods are based on a gradient-descent form of GAN optimization: in each training iteration, they simultaneously take small gradient steps in both the generator and discriminator parameters [9]. Several studies have examined the convergence behavior of gradient-based training. The local convergence behavior is studied in [11, 18]. Gradient-based optimization is proved to converge under the assumption that the discriminator and the generator are convex in the network parameters [20]. An inherent connection between gradient-based training and primal-dual subgradient methods for solving convex optimizations is established in [4].

Despite these promising practical applications, many works still report a lack of convergence in training GANs. Two common failure modes are oscillation and mode collapse, where the generator only produces a small family of samples [3, 9, 16]. One important observation in [17] is that such non-convergence behaviors stem from the fact that each generator update step is a partial collapse towards a delta function, which is the best response to the objective function. This motivates the present study of the dynamics of best-response training and the proposal of a novel training method to address these convergence issues.

1.2 Contributions

In this paper, we view GANs as a two-player zero-sum game and the training process as a repeated game. For the optimal solution to Eq. (1), the corresponding generated distribution and discriminator \((p^*_g, D^*)\) is shown to be the unique Nash equilibrium in the game. Inspired by the well-established fictitious play mechanism in game theory, we propose a novel training algorithm to resolve the convergence issue and find this Nash equilibrium.

The proposed training algorithm is referred to as Fictitious GAN, where the discriminator (resp. generator) is updated based on the mixed outputs from the sequence of historical trained generators (resp. discriminators). The previously trained models actually carry important information and can be utilized for the updates of the new model. We prove that Fictitious GAN achieves the optimal solution to Eq. (1). In particular, the discriminator outputs converge to the optimum discriminator function and the mixed output from the sequence of trained generators converges to the data distribution.

Moreover, Fictitious GAN can be regarded as a meta-algorithm that can be applied on top of existing GAN variants. Both synthetic data and real-world image datasets are used to demonstrate the improved performance due to the fictitious training mechanism.

2 Related Works

The idea of training with multiple GAN models has been considered in other works. In [1, 12], the mixed outputs of multiple generators are used to approximate the data distribution. Multiple generators with a modified loss function have been used to alleviate the mode collapse problem [7]. In [17], the generator is updated based on a sequence of unrolled discriminators. In [19], dual discriminators are used to combine the Kullback-Leibler (KL) divergence and the reverse KL divergence into a unified objective function. Using an ensemble of discriminators or GAN models has shown promising performance [6, 27]. One distinguishing difference between the above-mentioned methods and our proposed method is that in our method only a single deep neural network is trained at each training iteration, while multiple generators (resp. discriminators) only provide inputs to a single discriminator (resp. generator) at each training stage. Moreover, the outputs from the multiple networks are simply averaged uniformly and serve as input to the target training network, whereas other works need to train the optimal weights for averaging the network models. The proposed method thus has a much lower computational complexity.

The use of historical models has been proposed as a heuristic method to increase the diversity of generated samples [24], but a theoretical convergence guarantee is lacking. Game-theoretic approaches have been utilized to achieve a resource-bounded Nash equilibrium in GANs [21]. Another work closely related to this paper is the recent work [10], which applies the Follow-the-Regularized-Leader (FTRL) algorithm to train GANs; there, the historical models are also utilized for online learning. There are at least two distinct features in our work. First, we borrow the idea of fictitious play from game theory to prove convergence to the Nash equilibrium for any GAN architecture, assuming that the networks have enough capacity, while [10] only proves convergence for semi-shallow architectures. Second, we prove that a single discriminator, instead of a mixture of multiple discriminators, asymptotically converges to the optimal discriminator. This provides an important design guideline for training: asymptotically, only a single discriminator needs to be maintained.

3 Toy Examples

In this section, we use two toy examples to show that both the best-response approach and the gradient-based training approach may oscillate for simple minimax optimization problems.

Take the GAN framework for instance. In the best-response training approach, the discriminator and the generator are updated to the optimum point at each iteration. Mathematically, the discriminator and the generator are alternately updated according to the following rules:

$$\begin{aligned}&\max _{D} \mathsf {E}_{{\varvec{x}}\sim p_d({\varvec{x}})} \{\log D({\varvec{x}}) \} + \mathsf {E}_{{\varvec{z}}\sim p_z({\varvec{z}})} \{\log (1- D(G({\varvec{z}}))) \} \end{aligned}$$
(2)
$$\begin{aligned}&\min _{G} \mathsf {E}_{{\varvec{z}}\sim p_z({\varvec{z}})} \{\log (1- D( G({\varvec{z}}) )) \} \end{aligned}$$
(3)

Example 1

  Let the data follow the Bernoulli distribution \(p_d \sim \) Bernoulli (a), where \(0<a <1\). Suppose the initial generated distribution \(p_g \sim \) Bernoulli (b), where \(b \ne a\). We show that in the best-response training process, the generated distribution oscillates between \(p_g \sim \) Bernoulli (1) and \(p_g \sim \) Bernoulli (0).

We now show the oscillation phenomenon under the best-response training approach. Minimizing (3) is equivalent to finding \(p_g\) such that \( \mathsf {E}_{{\varvec{x}}\sim p_g({\varvec{x}})} \{\log (1- D({\varvec{x}})) \}\) is minimized. At each iteration, the output distribution of the updated generator concentrates all the probability mass at \(x=0\) if \(D(0) > D(1)\), or at \(x=1\) if \(D(0) < D(1)\). Suppose \(p_g(x) = 1\{x=0\}\), where \(1\{\cdot \}\) is the indicator function; then by solving (2), the discriminator at the next iteration is updated as

$$\begin{aligned} D(x) = \frac{p_d (x)}{p_d(x) + p_g (x)}, \end{aligned}$$
(4)

which yields \(D(1) =1\) and \(D(0) < D(1)\). Therefore, the generated distribution at the next iteration becomes \(p_g(x) = 1\{x = 1\}\). The oscillation between \(p_g \sim \) Bernoulli (1) and \(p_g \sim \) Bernoulli (0) continues by induction. A similar phenomenon can be observed for Wasserstein GAN.
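The oscillation in Example 1 can be reproduced numerically. The following sketch simulates the best-response updates (2) and (4) on the two-point support \(\{0,1\}\); the initialization is hypothetical and chosen only to illustrate the behavior.

```python
import numpy as np

a = 0.25                     # data distribution: Bernoulli(a), i.e. p_d(1) = a
p_d = np.array([1 - a, a])   # probabilities of x = 0 and x = 1
p_g = np.array([0.9, 0.1])   # initial generated distribution, Bernoulli(0.1)

for n in range(6):
    # Best-response discriminator, Eq. (4): D(x) = p_d(x) / (p_d(x) + p_g(x)).
    D = p_d / (p_d + p_g)
    # Best-response generator: put all mass where log(1 - D(x)) is smallest,
    # i.e. where D(x) is largest.
    p_g = np.zeros(2)
    p_g[np.argmax(D)] = 1.0
    print(f"iter {n}: D(0)={D[0]:.3f}, D(1)={D[1]:.3f}, p_g={p_g}")
```

After the first update, the generated distribution alternates between a point mass at 0 and a point mass at 1, as claimed.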

The first toy example implies that the oscillation behavior is a fundamental problem of iterative best-response training. In practical training of GANs, instead of finding the exact best response, the discriminator and generator are updated based on gradient descent towards the best response to the objective function. However, the next example, adapted from [8], demonstrates the failure of convergence in a simple minimax problem when a gradient-based method is used.

Example 2

Consider the following minimax problem:

$$\begin{aligned} \min _{-10 \le y \le 10} \max _{-10 \le x \le 10} xy. \end{aligned}$$
(5)

Consider the gradient-based training approach with step size \(\triangle \). The update rule for x and y is:

$$\begin{aligned} \begin{bmatrix} x_{n+1} \\ y_{n+1} \end{bmatrix} = \begin{bmatrix} 1&\triangle \\ -\triangle&1 \end{bmatrix} \begin{bmatrix} x_{n} \\ y_{n} \end{bmatrix}. \end{aligned}$$
(6)

By computing the eigenvalues and eigenvectors of the update matrix, we obtain

$$\begin{aligned} \begin{bmatrix} x_{n} \\ y_{n} \end{bmatrix} = \begin{bmatrix} -c_1^nc_2\sin (n\theta +\beta ) \\ c_1^nc_2\cos (n\theta +\beta ) \end{bmatrix}, \end{aligned}$$
(7)

where \(c_1 = \sqrt{1+\triangle ^2} > 1\) and \(c_2,\theta ,\beta \) are constants depending on the initial \((x_0,y_0)\). Since \(c_1>1\), the magnitude of \((x_n,y_n)\) grows with n, and the process does not converge as \(n\rightarrow \infty \).

Fig. 1. Performance of the gradient method with fixed step size for Example 2. (a) shows the values of x and y as the iterations proceed; the red point (0.1, 0.1) is the initial value. (b) shows the value of xy as a function of the iteration number.

Figure 1 shows the performance of the gradient-based approach with initial value \((x_0,y_0) = (0.1,0.1)\) and step size 0.01. It can be seen that neither player’s action converges. This toy example shows that even the gradient-based approach with an arbitrarily small step size may not converge.
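The divergence can also be checked numerically by iterating the update rule (6) directly. The sketch below ignores the box constraint \([-10,10]\), as the analysis in (7) does, and simply tracks the radius \(\sqrt{x_n^2+y_n^2}\).

```python
import numpy as np

delta = 0.01                 # step size
x, y = 0.1, 0.1              # initial point, as in Fig. 1
radii = []
for n in range(100_000):
    # Simultaneous gradient step of Eq. (6): ascent in x, descent in y.
    x, y = x + delta * y, y - delta * x
    radii.append(np.hypot(x, y))

# The radius grows by a factor of sqrt(1 + delta^2) per step, so it diverges
# no matter how small the (fixed) step size is.
print(radii[0], radii[-1])
```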

We will revisit the convergence behavior in the context of game theory. A well-established learning mechanism in game theory naturally leads to a training algorithm that resolves the non-convergence issues of these two toy examples.

4 Nash Equilibrium in Zero-Sum Games

In this section, we introduce the two-player zero-sum game and describe the learning mechanism of fictitious play, which provably achieves a Nash equilibrium of the game. We will show that the minimax optimization of GAN can be formulated as a two-player zero-sum game, where the optimal solution corresponds to the unique Nash equilibrium in the game. In the next section we will propose a training algorithm which simulates the fictitious play mechanism and provably achieves the optimal solution.

4.1 Zero-Sum Games

We start with some definitions in game theory. A game consists of a set of n players, who are rational and take actions to maximize their own utilities. Each player i chooses a pure strategy \(s_i\) from the strategy space \(S_i = \{s_{i,0},\cdots ,s_{i,m-1}\}\); here player i has m strategies in her strategy space. A utility function \(u_i(s_i,s_{-i})\), which is defined over all players’ strategies, indicates the outcome for player i, where the subscript \(-i\) stands for all players excluding player i. There are two kinds of strategies: pure and mixed. A pure strategy provides a specific action that a player will follow in any possible situation in a game, while a mixed strategy \(\mu _i=(p_i(s_{i,0}),\cdots ,p_i(s_{i,m-1}))\) for player i is a probability distribution over the m pure strategies in her strategy space with \(\sum _j p_i(s_{i,j}) =1\). The set of possible mixed strategies available to player i is denoted by \(\Delta S_i\). The expected utility of the mixed strategy profile \((\mu _i,\mu _{-i})\) for player i is

$$\begin{aligned} \mathsf {E} \left\{ u_i(\mu _i,\mu _{-i}) \right\} = \sum _{s_i \in S_i}\sum _{s_{-i} \in S_{-i}} u_i(s_i,s_{-i})p_i(s_i)p_{-i}(s_{-i}). \end{aligned}$$
(8)

For ease of notation, in the following we write \(\mathsf {E} \left\{ u_i(\mu _i,\mu _{-i}) \right\} \) simply as \(u_i(\mu _i,\mu _{-i})\). Note that a pure strategy can be expressed as a mixed strategy that places probability 1 on a single pure strategy and probability 0 on the others. A game is referred to as a finite game or a continuous game if the strategy space is finite or nonempty and compact, respectively. In a continuous game, a mixed strategy indicates a probability density function (pdf) over the strategy space.
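For a finite two-player game, Eq. (8) is simply a bilinear form in the two mixed strategies. A minimal sketch, with a hypothetical \(2\times 2\) utility matrix:

```python
import numpy as np

# Utility matrix for player i: u[j, k] = u_i(s_{i,j}, s_{-i,k}).
u = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])      # hypothetical 2x2 game (matching pennies)
mu_i  = np.array([0.5, 0.5])      # mixed strategy of player i
mu_mi = np.array([0.3, 0.7])      # mixed strategy of the other player

expected_utility = mu_i @ u @ mu_mi   # Eq. (8) written as a bilinear form
print(expected_utility)
```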

Definition 1

For player i, a strategy \(\mu _i^*\) is called a best response to others’ strategy \(\mu _{-i}\) if \(u_i(\mu _i^*,\mu _{-i})\ge u_i(\mu _i,\mu _{-i})\) for any \(\mu _i \in \Delta S_i.\)

Definition 2

A set of mixed strategies \(\mu ^{*} = (\mu _1^{*}, \mu _2^{*}, \cdots , \mu _n^{*})\) is a Nash equilibrium if, for every player i, \(\mu _i^*\) is a best response to the strategies \(\mu _{-i}^*\) played by the other players in this game.

Definition 3

A zero-sum game is one in which each player’s gain or loss is exactly balanced by the others’ loss or gain, and the sum of the players’ payoffs is always zero.

Now we focus on a continuous two-player zero-sum game. In such a game, given the strategy pair \((\mu _1, \mu _2)\), player 1 has a utility of \(u(\mu _1, \mu _2)\), while player 2 has a utility of \(-u(\mu _1, \mu _2)\). In the framework of GAN, the training objective (1) can be regarded as a two-player zero-sum game, where the generator and discriminator are two players with utility functions \(-V(G,D)\) and \(V(G,D)\), respectively. Both of them aim to maximize their utility and the sum of their utilities is zero.

Knowing that the opponent is always seeking to maximize its utility, players 1 and 2 choose strategies according to

$$\begin{aligned} \mu _1^*&= \mathop {\text {argmax}}\limits _{\mu _1 \in \Delta S_1} \min _{\mu _2 \in \Delta S_2} u(\mu _1,\mu _2) \end{aligned}$$
(9)
$$\begin{aligned} \mu _2^*&= \mathop {\text {argmin}}\limits _{ \mu _2 \in \Delta S_2 }\max _{\mu _1 \in \Delta S_1} u(\mu _1,\mu _2). \end{aligned}$$
(10)

Define \(\underline{v}=\max \limits _{\mu _1\in \Delta S_1}\min \limits _{\mu _{2}\in \Delta S_2} u(\mu _1,\mu _{2})\) and \(\bar{v}=\min \limits _{\mu _{2}\in \Delta S_{2}}\max \limits _{\mu _{1}\in \Delta S_{1}} u (\mu _1,\mu _{2})\) as the lower value and upper value of the game, respectively. Generally, \(\underline{v} \le \bar{v}\). Sion [26] showed that these two values coincide under some regularity conditions:

Theorem 1

(Sion’s Minimax Theorem [26]). Let X and Y be convex, compact spaces, and f: \(X \times Y \rightarrow \mathbb R\). If for any \(x \in X\), \(f(x,\cdot )\) is upper semi-continuous and quasi-concave on Y and for any \(y\in Y\), \(f(\cdot ,y)\) is lower semi-continuous and quasi-convex on X, then \(\inf _{x\in X}\sup _{y\in Y}f(x,y) = \sup _{y\in Y}\inf _{x\in X}f(x,y)\).

Hence, in a zero-sum game, if the utility function \(u(\mu _1,\mu _2)\) satisfies the conditions in Theorem 1, then \(\underline{v} = \bar{v}\). We refer to \(v = \underline{v} = \bar{v}\) as the value of the game. We further show that a Nash equilibrium of the zero-sum game achieves the value of the game.

Corollary 1

In a two-player zero-sum game with the utility function satisfying the conditions in Theorem 1, if a strategy \((\mu _1^*,\mu _2^*)\) is a Nash equilibrium, then \(u(\mu _1^*,\mu _2^*) = v\).

Corollary 1 implies that if we have an algorithm that achieves a Nash equilibrium of a zero-sum game, we may utilize this algorithm to optimally train a GAN. We next describe a learning mechanism to achieve a Nash equilibrium.

4.2 Fictitious Play

Suppose the zero-sum game is played repeatedly between two rational players; then each player may try to infer her opponent’s strategy. Let \(s_i^{n} \in S_i\) denote the action taken by player i at time n. At time n, given the previous actions \(\{s_{2}^0,s_{2}^1,\cdots ,s_{2}^{n-1}\}\) chosen by player 2, one natural hypothesis is that player 2 is using stationary mixed strategies and chooses each strategy \(s_{2}^t\), \(0\le t\le n-1\), with probability \(\frac{1}{n}\). Here we use the empirical frequency to approximate the probabilities in the mixed strategy. Under this hypothesis, the best response for player 1 at time n is to choose the strategy \(\mu _1^*\) satisfying:

$$\begin{aligned} \mu _1^* =\mathop {\text {argmax}}\limits _{\mu _1\in \Delta S_1} u (\mu _1,\mu _2^n), \end{aligned}$$
(11)

where \(\mu _2^n\) is the empirical distribution of player 2’s historical actions. Similarly, player 2 can choose the best response assuming player 1 is choosing its strategy according to the empirical distribution of the historical actions.

Notice that the expected utility is a linear combination of the utilities under different pure strategies; hence, for any hypothesis \(\mu _{-i}^n\), player i can find a pure strategy \(s_i^{n}\) as a best response. Therefore, we further assume that each player plays the best pure response at each round. In game theory this learning rule is called fictitious play, proposed by Brown [2].
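The following sketch illustrates fictitious play on a small zero-sum matrix game (matching pennies, whose unique Nash equilibrium is the mixed strategy (1/2, 1/2) for both players); each player best-responds to the empirical distribution of the opponent's past actions, as in (12) below.

```python
import numpy as np

# Zero-sum game: player 1 receives u[a1, a2], player 2 receives -u[a1, a2].
u = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])       # matching pennies

counts1 = np.zeros(2)              # action counts of player 1
counts2 = np.zeros(2)              # action counts of player 2
a1, a2 = 0, 0                      # arbitrary initial pure strategies

for n in range(10_000):
    counts1[a1] += 1
    counts2[a2] += 1
    # Best pure responses to the opponent's empirical mixed strategy.
    a1 = int(np.argmax(u @ (counts2 / counts2.sum())))
    a2 = int(np.argmin((counts1 / counts1.sum()) @ u))

# Empirical mixed strategies approach the Nash equilibrium (1/2, 1/2).
print(counts1 / counts1.sum(), counts2 / counts2.sum())
```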

Danskin [5] showed that for any continuous zero-sum game and any initial strategy profile, fictitious play converges. This important result is summarized in the following theorem.

Theorem 2

Let \(u(s_1,s_2)\) be a continuous function defined on the direct product of two compact sets \(S_1\) and \(S_2\). The pure strategy sequences \(\{s_1^n\}\) and \(\{s_2^n\}\) are defined as follows: \(s_1^0\) and \(s_2^0\) are arbitrary, and

$$\begin{aligned} s_1^n \in \mathop {{\mathop {\mathrm {argmax}}}}\limits _{s_1\in S_1} \frac{1}{n} \sum _{k=0}^{n-1}u (s_1,s_2^k), \ \ \ \ s_2^n \in \mathop {{\mathop {\mathrm {argmin}}}}\limits _{s_2\in S_2}\frac{1}{n} \sum _{k=0}^{n-1} u (s_1^k,s_2), \end{aligned}$$
(12)

then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{k=0}^{n-1} u (s_1^n,s_2^k)=\lim _{n\rightarrow \infty }\frac{1}{n}\sum _{k=0}^{n-1}u (s_1^k,s^n_2)=v, \end{aligned}$$
(13)

where v is the value of the game.

4.3 Effectiveness of Fictitious Play

In this section, we show that fictitious play enables the convergence of learning to the optimal solution for the two counter-examples in Sect. 3.

Example 1: Figure 2 shows the performance of the best-response approach, where the data follow a Bernoulli distribution \(p_d \sim \) Bernoulli (0.25), the discriminator is initialized as \(D(x) = x\) for \(x \in [0,1]\), and the initial generated distribution is \(p_g \sim \) Bernoulli (0.1). It can be seen that the generated distribution based on best responses oscillates between \(p_g(x=0) = 1\) and \(p_g(x=1) = 1\).

Assuming best response at each iteration n, under fictitious play, the discriminator is updated according to \(D_{n} = \arg \max _{D} \frac{1}{n} \sum _{w=0}^{n-1} V(p_{g,w}, D)\) and the generated distribution is updated according to \(p_{g,n} = \arg \min _{p_g}\frac{1}{n} \sum _{w=0}^{n-1} V(p_g, D_w)\). Figure 2 shows the evolution of \(D_n\) and the empirical mean of the generated distributions \(\bar{p}_{g,n} = \frac{1}{n} \sum _{w=0}^{n-1} p_{g,w}\) as training proceeds. Although the best-response generated distribution at each iteration oscillates as in Fig. 2a, the learning mechanism of fictitious play makes the empirical mean \(\bar{p}_{g,n}\) converge to the data distribution.
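These updates can be simulated directly on the two-point support \(\{0,1\}\): since \(V\) is linear in \(p_g\), the best-response discriminator against the mixture of past generators is \(p_d/(p_d+\bar{p}_g)\), and the best-response generator against the mixture of past discriminators puts all its mass where the average of \(\log (1-D_w(x))\) is smallest. A minimal sketch (the small constant inside the logarithm is only for numerical stability):

```python
import numpy as np

a = 0.25
p_d = np.array([1 - a, a])           # data: Bernoulli(0.25) over {0, 1}
D_hist = [np.array([0.0, 1.0])]      # initial discriminator D(x) = x on {0, 1}
pg_hist = [np.array([0.9, 0.1])]     # initial p_g: Bernoulli(0.1)

for n in range(1, 2001):
    # D_n: best response to the uniform mixture of p_{g,0}, ..., p_{g,n-1}.
    pg_bar = np.mean(pg_hist, axis=0)
    D_new = p_d / (p_d + pg_bar)
    # p_{g,n}: best response to the uniform mixture of D_0, ..., D_{n-1}.
    avg_log = np.mean([np.log(1.0 - Dw + 1e-12) for Dw in D_hist], axis=0)
    p_g = np.zeros(2)
    p_g[np.argmin(avg_log)] = 1.0
    D_hist.append(D_new)
    pg_hist.append(p_g)

print("average p_g:", np.mean(pg_hist, axis=0))   # approaches p_d = [0.75, 0.25]
print("last D:", D_hist[-1])                      # approaches [0.5, 0.5]
```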

Fig. 2. Performance of best-response training for Example 1. (a) shows the Bernoulli distribution of \(p_g\) under best-response updates. (b) shows D(x) in Fictitious GAN assuming best response at each training iteration. (c) shows the average of \(p_g(x)\) in Fictitious GAN assuming best response at each training iteration.

Example 2: At each iteration n, player 1 chooses \(x = \arg \max _x \frac{1}{n}\sum _{i=0}^{n-1} x y_i\), which is equal to \( 10 \cdot \text {sign}(\sum _{i=0}^{n-1}y_i)\). Similarly, player 2 chooses y according to \(y = -10 \cdot \text {sign}(\sum _{i=0}^{n-1}x_i)\). Hence, regardless of the initial condition, both players will only choose 10 or \(-10\) at each iteration. Consequently, as the number of iterations goes to infinity, the empirical mixed strategy only places mass on 10 and \(-10\). It is proved in the supplementary material that the mixed strategy \((\sigma _1^*,\sigma _2^*)\) in which both players choose 10 and \(-10\) with probability \(\frac{1}{2}\) each is a Nash equilibrium of this game. Figure 3 shows that under fictitious play, both players’ empirical mixed strategies converge to the Nash equilibrium and the expected utility of each player converges to 0.
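The best responses in this example have the closed form given above, so the fictitious-play dynamics can be simulated in a few lines; the nonzero initial actions are hypothetical and only ensure that the sign function is never evaluated at zero.

```python
import numpy as np

x_hist, y_hist = [0.1], [0.1]        # arbitrary nonzero initial actions
for n in range(1, 10_001):
    # Best pure responses to the opponent's history up to time n-1.
    x_hist.append(10.0 * np.sign(np.sum(y_hist)))
    y_hist.append(-10.0 * np.sign(np.sum(x_hist[:-1])))

x_arr, y_arr = np.array(x_hist[1:]), np.array(y_hist[1:])
print(np.mean(x_arr == 10.0), np.mean(y_arr == 10.0))   # both approach 1/2
# Expected utility under the empirical mixed strategies approaches the value
# of the game, 0 (small relative to the utility scale of +/-100).
print(np.mean(x_arr) * np.mean(y_arr))
```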

Fig. 3. (a) and (b) illustrate the empirical distributions of x and y at 10 and \(-10\), respectively. (c) illustrates the expected utility for player 1 under fictitious play.

One important observation is that fictitious play can find the Nash equilibrium if the equilibrium of the game is unique. However, if there exist multiple Nash equilibria, different initializations may yield different solutions. In the above example, it is easy to check that (0, 0), where both players always choose 0, is also a Nash equilibrium, but fictitious play leads to this solution only when the initialization is exactly (0, 0). Fortunately, as we show in the next section, due to the special structure of GAN (the utility function is linear in the generated distribution), fictitious play can help us find the desired Nash equilibrium.

5 Fictitious GAN

5.1 Algorithm Description

As discussed in the last section, the competition between the generator and discriminator in GAN can be modeled as a two-player zero-sum game. The following theorem, proved in the supplementary material, shows that the optimal solution of (1) is actually the unique Nash equilibrium of this game.

Theorem 3

Consider (1) as a two-player zero-sum game. The optimal solution of (1), with \(p^*_g=p_{d}\) and \(D^*({{\varvec{x}}})=1/2\), is the unique Nash equilibrium of this game. The value of the game is \(-\log 4\).
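As a quick sanity check on the stated value (this step follows the standard GAN analysis and is not a substitute for the proof in the supplementary material), substituting \(p_g = p_d\) and \(D({\varvec{x}}) = 1/2\) into (1) gives

$$\begin{aligned} V(G^*,D^*) = \mathsf {E}_{{\varvec{x}}\sim p_d({\varvec{x}})} \left\{ \log \tfrac{1}{2} \right\} + \mathsf {E}_{{\varvec{x}}\sim p_d({\varvec{x}})} \left\{ \log \left( 1-\tfrac{1}{2}\right) \right\} = -\log 4. \end{aligned}$$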

By relating GAN to the two-player zero-sum game, we can design a training algorithm that simulates fictitious play such that the training outcome converges to the Nash equilibrium.

Fictitious GAN, as described in Algorithm 1, adapts the fictitious play learning mechanism to train GANs. We use two queues \(\mathcal {D}\) and \(\mathcal {G}\) to store the historically trained models of the discriminator and the generator, respectively. At each iteration, the discriminator (resp. generator) is updated according to the best response to \(V(G,D)\), assuming that the generator (resp. discriminator) chooses a historical strategy uniformly at random. Mathematically, the discriminator and generator are updated according to (14) and (15), where the outputs due to the generator and the discriminator are mixed uniformly at random over the previously trained models. Note that back-propagation is still performed on a single neural network at each training step. Different from standard training approaches, we perform \(k_0\) gradient descent updates when training the discriminator and the generator in order to achieve the best response. In practical training, the queues \(\mathcal {D}\) and \(\mathcal {G}\) are maintained with a fixed size; the oldest model is discarded if the queue is full when we update the discriminator or the generator.

Algorithm 1. Fictitious GAN.
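The following Python-style sketch outlines the training loop of Algorithm 1 as just described. All names and signatures are illustrative: `update_D` (resp. `update_G`) is assumed to perform one gradient step on the discriminator (resp. generator) loss evaluated against a model drawn uniformly at random from the opponent's queue, which is how the uniform mixing in (14) and (15) would typically be realized in practice.

```python
import copy
from collections import deque

def fictitious_gan(D_init, G_init, update_D, update_G,
                   k0=5, queue_size=5, num_iters=1000):
    """Sketch of the Fictitious GAN training loop (Algorithm 1)."""
    D_queue = deque([D_init], maxlen=queue_size)   # historical discriminators
    G_queue = deque([G_init], maxlen=queue_size)   # historical generators

    for _ in range(num_iters):
        # Approximate best response of the discriminator to the uniform
        # mixture of historical generators, using k0 gradient steps.
        D = copy.deepcopy(D_queue[-1])
        for _ in range(k0):
            D = update_D(D, list(G_queue))
        D_queue.append(D)              # the oldest model is dropped when full

        # Approximate best response of the generator to the uniform
        # mixture of historical discriminators, using k0 gradient steps.
        G = copy.deepcopy(G_queue[-1])
        for _ in range(k0):
            G = update_G(G, list(D_queue))
        G_queue.append(G)

    return D_queue, G_queue
```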

The following theorem provides the theoretical convergence guarantee for Fictitious GAN. It shows that, assuming best response at each update in Fictitious GAN, the distribution of the mixed outputs from the generators converges to the data distribution. The intuition of the proof is that fictitious play achieves a Nash equilibrium in two-player zero-sum games. Since the optimal solution of GAN is the unique equilibrium of the game, Fictitious GAN achieves the optimal solution.

Theorem 4

Suppose the discriminator and the generator are updated according to the best-response strategy at each iteration in Fictitious GAN, then

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{1}{n} \sum _{w = 0}^{n-1} p_{g,w} ({\varvec{x}}) = p_d ({\varvec{x}}), \end{aligned}$$
(16)
$$\begin{aligned}&\lim _{n \rightarrow \infty } D_{n} ({\varvec{x}}) = \frac{1}{2}, \end{aligned}$$
(17)

where \(D_{w}({\varvec{x}})\) is the output from the w-th trained discriminator model and \(p_{g,w}\) is the generated distribution due to the w-th trained generator.

5.2 Fictitious GAN as a Meta-Algorithm

One advantage of Fictitious GAN is that it can be applied on top of existing GANs. Consider the following minimax problem:

$$\begin{aligned} \min _{G} \max _{D} V(G,D) = \mathsf {E}_{{\varvec{x}}\sim p_d({\varvec{x}})} \{f_0 (D({\varvec{x}})) \} + \mathsf {E}_{{\varvec{z}}\sim p_z({\varvec{z}})} \{f_1(D(G({\varvec{z}})) ) \}, \end{aligned}$$
(18)

where \(f_0(\cdot )\) and \(f_1(\cdot )\) are some quasi-concave functions that depend on the GAN variant. Table 1 lists the corresponding choices for the f-GAN family [4, 20] and Wasserstein GAN.

We can model these GAN variants as two-player zero-sum games, and the training algorithms for these variants follow by simply changing \(f_0(\cdot )\) and \(f_1(\cdot )\) in the update rules of Algorithm 1 accordingly. Following the proof of Theorem 4, we can show that the time average of the generated distributions converges to the data distribution and the discriminator converges to \(D^*\) as shown in Table 1.

Table 1. Variants of GANs under the zero-sum game framework.
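To illustrate how little changes across the variants, the sketch below swaps \(f_0\) and \(f_1\) in the objective (18). The two entries shown are the standard choices for the original GAN and the Wasserstein GAN and serve only as an illustration; the exact parameterization in Table 1 may differ.

```python
import numpy as np

# f0 acts on D(x) for data samples, f1 acts on D(G(z)) for generated samples,
# as in Eq. (18).
GAN_VARIANTS = {
    "original":    (lambda d: np.log(d),  lambda d: np.log(1.0 - d)),
    "wasserstein": (lambda d: d,          lambda d: -d),
}

def variant_value(name, d_real, d_fake):
    """Monte-Carlo estimate of the objective (18) for a chosen variant."""
    f0, f1 = GAN_VARIANTS[name]
    return np.mean(f0(d_real)) + np.mean(f1(d_fake))

# Placeholder discriminator outputs, used only to exercise the function.
rng = np.random.default_rng(0)
d_real = rng.uniform(0.4, 0.9, size=1000)
d_fake = rng.uniform(0.1, 0.6, size=1000)
print(variant_value("original", d_real, d_fake))
print(variant_value("wasserstein", d_real, d_fake))
```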

6 Experiments

Our Fictitious GAN is a meta-algorithm that can be applied on top of existing GANs. To demonstrate the merit of using Fictitious GAN, we apply our meta-algorithm to DCGAN [22] and its extension, conditional DCGAN. Conditional DCGAN allows DCGAN to use external label information to generate images of particular classes. We evaluate the performance on a synthetic dataset and three widely adopted real-world image datasets. Our experimental results show that Fictitious GAN improves the visual quality of both DCGAN and conditional DCGAN models.

Image datasets. (1) MNIST: contains 60,000 labeled images of 28 \(\times \) 28 grayscale digits. (2) CIFAR-10: consists of colored natural scene images of size 32 \(\times \) 32 pixels; there are 50,000 training images and 10,000 test images in 10 classes. (3) CelebA: a large-scale face attribute dataset with more than 200K celebrity images, each with 40 attribute annotations.

Parameter Settings. We used TensorFlow for our implementation. Due to GPU memory limitations, we limit the number of historical models to 5 in the real-world image dataset experiments. More architecture details are included in the supplementary material.

6.1 2D Mixture of Gaussian

Figure 4 shows the performance of Fictitious GAN on a mixture of 8 Gaussians located on a circle in two-dimensional space. We use the network structure in [17] to evaluate the performance of our proposed method. The data are sampled from a mixture of 8 Gaussians uniformly located on a circle of radius 1.0, each with a standard deviation of 0.02. The input noise samples are vectors of 256 independent and identically distributed (i.i.d.) Gaussian variables with zero mean and unit standard deviation.
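A minimal sketch of the synthetic data and noise sampling described above (function names are illustrative):

```python
import numpy as np

def sample_ring_data(batch_size, n_modes=8, radius=1.0, std=0.02, rng=None):
    """Sample from a mixture of n_modes Gaussians placed uniformly on a circle."""
    if rng is None:
        rng = np.random.default_rng()
    angles = 2.0 * np.pi * rng.integers(0, n_modes, size=batch_size) / n_modes
    centers = np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((batch_size, 2))

def sample_noise(batch_size, dim=256, rng=None):
    """Input noise: i.i.d. standard Gaussian vectors."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.standard_normal((batch_size, dim))

x = sample_ring_data(512)
z = sample_noise(512)
print(x.shape, z.shape)   # (512, 2) (512, 256)
```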

While the original GANs experience mode collapse [17, 19], Fictitious GAN is able to generate samples over all 8 modes, even with a single discriminator asymptotically.

Fig. 4. Performance of Fictitious GAN on the 2D mixture of Gaussian data. The data samples are marked in blue and the generated samples are marked in orange. (Color figure online)

6.2 Qualitative Results for Image Generation

We show the visual quality of samples generated by DCGAN and conditional DCGAN trained with the proposed Fictitious GAN. We train DCGAN on the CelebA dataset, and train conditional DCGAN on MNIST and CIFAR-10. In Fig. 5, the first row shows generated samples; each image in the first row corresponds to the image in the same grid position in the second row, which shows its nearest neighbor in the training dataset computed by Euclidean distance. The samples are randomly drawn without cherry picking and are representative of the model output distribution.

On CelebA, we can generate face images with various genders, skin colors and hairstyles. On MNIST, nearly every generated digit has a visually very similar sample in the training set, and the digit images exhibit diverse shapes and fonts. The CIFAR-10 dataset is more challenging, since images of each object class have large variance in visual appearance. We observe some visual and label consistency between the generated images and their nearest neighbors, especially in the categories of airplane, horse and ship. Note that although we theoretically proved that Fictitious GAN improves the robustness of training under the best-response strategy, the visual quality still depends on the baseline GAN architecture and loss design, which in our case is conditional DCGAN.

Fig. 5. Generated images on CelebA, MNIST and CIFAR-10. Top-row samples are generated; bottom-row images are the corresponding nearest neighbors in the training dataset.

6.3 Quantitative Results

In this section, we quantitatively show that DCGAN models trained by our Fictitious GAN gain an improvement over traditional training methods. Fictitious GAN may also yield better performance when applied to other existing GAN models. The results of the comparison methods are copied directly as reported.

Metric. The visual quality of generated images is measured by the widely used Inception score [24]. It measures the objectness of the generated images and correlates well with human scoring of their realism. Following the evaluation scheme of [24], we generate 50,000 images from our model to compute the score.
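For reference, a minimal NumPy sketch of the Inception score computed from class posteriors \(p(y|x)\) of the generated images (obtaining these posteriors from the pretrained Inception network, and the split-and-average protocol of [24], are omitted here):

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """exp(E_x KL(p(y|x) || p(y))) for an (N, K) array of class posteriors."""
    p_y = p_yx.mean(axis=0, keepdims=True)      # marginal label distribution p(y)
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Toy check: confident and diverse predictions give a high score.
rng = np.random.default_rng(0)
p_yx = np.full((50_000, 10), 0.001)
p_yx[np.arange(50_000), rng.integers(0, 10, size=50_000)] = 0.991
print(inception_score(p_yx))   # close to the number of classes for this toy input
```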

Table 2. Inception Score on CIFAR-10.

As shown in Table 2, our method outperforms recent state-of-the-art methods. Specifically, we improve the baseline DCGAN from 6.16 to 6.63, and the conditional DCGAN model from 7.16 to 7.27. This sheds light on the advantage of training with the proposed learning algorithm. Note that, in order to highlight the performance improvement gained from Fictitious GAN, the inception score of the reproduced DCGAN model is 6.72, obtained without using the tricks of [24]. Also, we did not use any regularization terms such as the conditional loss and entropy loss to train DCGAN, as in [13]. We expect a higher inception score when more training tricks are used in addition to Fictitious GAN.

6.4 Ablation Studies

One hyperparameter that affects the performance of Fictitious GAN is the number of historical generator (discriminator) models. We evaluate the performance of Fictitious GAN with different numbers of historical models, and report the inception scores at the 150-th epoch on the CIFAR-10 dataset in Fig. 6. We keep the number of historical discriminators equal to the number of historical generators. We observe a trend of performance improvement with an increasing number of historical models for both baseline GAN models. The mean inception score drops slightly for the Jensen-Shannon divergence metric when the number of historical copies is 4, due to random initialization and random noise generation in training.

Fig. 6. Fictitious GAN, as a meta-algorithm, improves the Inception score as the number of historical models grows. We select two divergence metrics from Table 1: Jensen-Shannon and KL divergence.

7 Conclusion

In this paper, we relate the minimax game of GAN to the two-player zero-sum game. This relation enables us to leverage the mechanism of fictitious play to design a novel training algorithm, referred to as Fictitious GAN. In the training algorithm, the discriminator (resp. generator) is alternately updated as the best response to the mixed outputs of the stale generator models (resp. discriminator models). This novel training algorithm can resolve the oscillation behavior due to the pure best-response strategy and the non-convergence issue of gradient-based training in some cases. Experiments on real-world image datasets show that applying Fictitious GAN on top of existing DCGAN models yields a performance gain of up to 8%.