
1 Introduction

Many problems in computer vision aim at translating images from one domain to another, including super-resolution [1], colorization [2], inpainting [3], attribute transfer [4], and style transfer [5]. This cross-domain image-to-image translation setting has therefore received significant attention [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. When the dataset contains paired examples, this problem can be approached by a conditional generative model [6] or a simple regression model [13]. In this work, we focus on the much more challenging setting when such supervision is unavailable.

In many scenarios, the cross-domain mapping of interest is multimodal. For example, a winter scene could have many possible appearances during summer due to weather, timing, lighting, etc. Unfortunately, existing techniques usually assume a deterministic [8,9,10] or unimodal [15] mapping. As a result, they fail to capture the full distribution of possible outputs. Even if the model is made stochastic by injecting noise, the network usually learns to ignore it [6, 26].

In this paper, we propose a principled framework for the Multimodal UNsupervised Image-to-image Translation (MUNIT) problem. As shown in Fig. 1(a), our framework makes several assumptions. We first assume that the latent space of images can be decomposed into a content space and a style space. We further assume that images in different domains share a common content space but not the style space. To translate an image to the target domain, we recombine its content code with a random style code in the target style space (Fig. 1(b)). The content code encodes the information that should be preserved during translation, while the style code represents remaining variations that are not contained in the input image. By sampling different style codes, our model is able to produce diverse and multimodal outputs. Extensive experiments demonstrate the effectiveness of our method in modeling multimodal output distributions and its superior image quality compared with state-of-the-art approaches. Moreover, the decomposition of content and style spaces allows our framework to perform example-guided image translation, in which the style of the translation outputs is controlled by a user-provided example image in the target domain.

Fig. 1.

An illustration of our method. (a) Images in each domain \(\mathcal {X}_{i}\) are encoded to a shared content space \(\mathcal {C}\) and a domain-specific style space \(\mathcal {S}_{i}\). Each encoder has an inverse decoder omitted from this figure. (b) To translate an image in \(\mathcal {X}_{1}\) (e.g., a leopard) to \(\mathcal {X}_{2}\) (e.g., domestic cats), we recombine the content code of the input with a random style code in the target style space. Different style codes lead to different outputs.

2 Related Works

Generative Adversarial Networks (GANs). The GAN framework [27] has achieved impressive results in image generation. In GAN training, a generator is trained to fool a discriminator which in turn tries to distinguish between generated samples and real samples. Various improvements to GANs have been proposed, such as multi-stage generation [28,29,30,31,32,33], better training objectives [34,35,36,37,38,39], and combination with auto-encoders [40,41,42,43,44]. In this work, we employ GANs to align the distribution of translated images with real images in the target domain.

Image-to-Image Translation. Isola et al.  [6] propose the first unified framework for image-to-image translation based on conditional GANs, which has been extended to generating high-resolution images by Wang et al.  [20]. Recent studies have also attempted to learn image translation without supervision. This problem is inherently ill-posed and requires additional constraints. Some works enforce the translation to preserve certain properties of the source domain data, such as pixel values [21], pixel gradients [22], semantic features [10], class labels [22], or pairwise sample distances [16]. Another popular constraint is the cycle consistency loss [7,8,9]. It enforces that if we translate an image to the target domain and back, we should obtain the original image. In addition, Liu et al.  [15] propose the UNIT framework, which assumes a shared latent space such that corresponding images in two domains are mapped to the same latent code.

A significant limitation of most existing image-to-image translation methods is the lack of diversity in the translated outputs. To tackle this problem, some works propose to simultaneously generate multiple outputs given the same input and encourage them to be different [13, 45, 46]. Still, these methods can only generate a discrete number of outputs. Zhu et al. [11] propose BicycleGAN, which can model continuous and multimodal distributions. However, all the aforementioned methods require paired supervision, while our method does not. A couple of concurrent works also recognize this limitation and propose extensions of CycleGAN/UNIT for multimodal mapping [47]/[48].

Our problem has some connections with multi-domain image-to-image translation [19, 49, 50]. Specifically, when we know how many modes each domain has and the mode each sample belongs to, it is possible to treat each mode as a separate domain and use multi-domain image-to-image translation techniques to learn a mapping between each pair of modes, thus achieving multimodal translation. However, in general we do not assume such information is available. Also, our stochastic model can represent continuous output distributions, while [19, 49, 50] still use a deterministic model for each pair of domains.

Style Transfer. Style transfer aims at modifying the style of an image while preserving its content, which is closely related to image-to-image translation. Here, we make a distinction between example-guided style transfer, in which the target style comes from a single example, and collection style transfer, in which the target style is defined by a collection of images. Classical style transfer approaches [5, 51,52,53,54,55,56] typically tackle the former problem, whereas image-to-image translation methods have been demonstrated to perform well in the latter [8]. We will show that our model is able to address both problems, thanks to its disentangled representation of content and style.

Learning Disentangled Representations. Our work draws inspiration from recent works on disentangled representation learning. For example, InfoGAN [57] and \(\beta \)-VAE [58] have been proposed to learn disentangled representations without supervision. Some other works [59,60,61,62,63,64,65,66] focus on disentangling content from style. Although it is difficult to define content/style and different works use different definitions, we refer to “content” as the underlying spatial structure and “style” as the rendering of the structure. In our setting, we have two domains that share the same content distribution but have different style distributions.

3 Multimodal Unsupervised Image-to-Image Translation

Assumptions. Let \(x_{1}\in \mathcal {X}_{1}\) and \(x_{2}\in \mathcal {X}_{2}\) be images from two different image domains. In the unsupervised image-to-image translation setting, we are given samples drawn from two marginal distributions \(p(x_{1})\) and \(p(x_{2})\), without access to the joint distribution \(p(x_{1},x_{2})\). Our goal is to estimate the two conditionals \(p(x_{2}|x_{1})\) and \(p(x_{1}|x_{2})\) with learned image-to-image translation models \(p(x_{1\rightarrow 2}|x_{1})\) and \(p(x_{2\rightarrow 1}|x_{2})\), where \(x_{1\rightarrow 2}\) is a sample produced by translating \(x_{1}\) to \(\mathcal {X}_{2}\) (similar for \(x_{2\rightarrow 1}\)). In general, \(p(x_{2}|x_{1})\) and \(p(x_{1}|x_{2})\) are complex and multimodal distributions, in which case a deterministic translation model does not work well.

To tackle this problem, we make a partially shared latent space assumption. Specifically, we assume that each image \(x_{i}\in \mathcal {X}_{i}\) is generated from a content latent code \(c\in \mathcal {C}\) that is shared by both domains, and a style latent code \(s_{i}\in \mathcal {S}_{i}\) that is specific to the individual domain. In other words, a pair of corresponding images \((x_{1},x_{2})\) from the joint distribution is generated by \(x_{1} = G^{*}_{1}(c, s_{1})\) and \(x_{2} = G^{*}_{2}(c, s_{2})\), where \(c, s_{1}, s_{2}\) are from some prior distributions and \(G^{*}_{1}\), \(G^{*}_{2}\) are the underlying generators. We further assume that \(G^{*}_{1}\) and \(G^{*}_{2}\) are deterministic functions and have their inverse encoders \(E^{*}_{1}=(G^{*}_{1})^{-1}\) and \(E^{*}_{2}=(G^{*}_{2})^{-1}\). Our goal is to learn the underlying generator and encoder functions with neural networks. Note that although the encoders and decoders are deterministic, \(p(x_{2}|x_{1})\) is a continuous distribution due to its dependency on \(s_{2}\).

Our assumption is closely related to the shared latent space assumption proposed in UNIT [15]. While UNIT assumes a fully shared latent space, we postulate that only part of the latent space (the content) can be shared across domains whereas the other part (the style) is domain specific, which is a more reasonable assumption when the cross-domain mapping is many-to-many.

Model. Figure 2 shows an overview of our model and its learning process. Similar to Liu et al. [15], our translation model consists of an encoder \(E_{i}\) and a decoder \(G_{i}\) for each domain \(\mathcal {X}_{i}\) (\(i=1,2\)). As shown in Fig. 2(a), the latent code of each auto-encoder is factorized into a content code \(c_{i}\) and a style code \(s_{i}\), where \((c_{i}, s_{i}) = (E_{i}^{c}(x_{i}), E_{i}^{s}(x_{i})) = E_{i}(x_{i})\). Image-to-image translation is performed by swapping encoder-decoder pairs, as illustrated in Fig. 2(b). For example, to translate an image \(x_{1}\in \mathcal {X}_{1}\) to \(\mathcal {X}_{2}\), we first extract its content latent code \(c_{1} = E^{c}_{1}(x_{1})\) and randomly draw a style latent code \(s_{2}\) from the prior distribution \(q(s_{2})\sim \mathcal {N}(0, \mathbf {I})\). We then use \(G_{2}\) to produce the final output image \(x_{1\rightarrow 2} = G_{2}(c_{1}, s_{2})\). We note that although the prior distribution is unimodal, the output image distribution can be multimodal thanks to the nonlinearity of the decoder.
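
To make the translation procedure concrete, the following minimal PyTorch sketch mirrors Fig. 2(b). The module handles `E1_c` (content encoder of \(\mathcal {X}_{1}\)) and `G2` (decoder of \(\mathcal {X}_{2}\)) and the 8-dimensional style code are illustrative assumptions, not the released implementation.

```python
import torch

def translate_1to2(x1, E1_c, G2, style_dim=8):
    """Translate a batch x1 of shape (N, 3, H, W) from domain X1 to X2 (Fig. 2(b)).
    Each call draws a fresh style code, so repeated calls give different outputs."""
    c1 = E1_c(x1)                                  # content code in the shared space C
    s2 = torch.randn(x1.size(0), style_dim, 1, 1)  # style code s2 ~ q(s2) = N(0, I)
    return G2(c1, s2)                              # recombine content with a target-domain style
```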

Our loss function comprises a bidirectional reconstruction loss that ensures the encoders and decoders are inverses, and an adversarial loss that matches the distribution of translated images to the image distribution in the target domain.

Fig. 2.

Model overview. Our image-to-image translation model consists of two auto-encoders (denoted by arrows of different colors), one for each domain. The latent code of each auto-encoder is composed of a content code c and a style code s. We train the model with adversarial objectives (dotted lines) that ensure the translated images are indistinguishable from real images in the target domain, as well as bidirectional reconstruction objectives (dashed lines) that reconstruct both images and latent codes. (Color figure online)

Bidirectional Reconstruction Loss. To learn pairs of encoder and decoder that are inverses of each other, we use objective functions that encourage reconstruction in both the image \(\rightarrow \) latent \(\rightarrow \) image and the latent \(\rightarrow \) image \(\rightarrow \) latent directions:

  • Image Reconstruction. Given an image sampled from the data distribution, we should be able to reconstruct it after encoding and decoding.

    $$\begin{aligned} \mathcal {L}^{x_{1}}_{\text {recon}} =\mathbb {E}_{x_{1}\sim p(x_{1})}[||G_{1}(E_{1}^{c}(x_{1}), E_{1}^{s}(x_{1}))-x_{1}||_{1}] \end{aligned}$$
    (1)
  • Latent Reconstruction. Given a latent code (style and content) sampled from the latent distribution at translation time, we should be able to reconstruct it after decoding and encoding.

    $$\begin{aligned} \mathcal {L}^{c_{1}}_{\text {recon}}&= \mathbb {E}_{c_{1}\sim p(c_{1}), s_{2}\sim q(s_{2})}[||E^{c}_{2}(G_{2}(c_{1},s_{2}))-c_{1}||_{1}] \end{aligned}$$
    (2)
    $$\begin{aligned} \mathcal {L}^{s_{2}}_{\text {recon}}&= \mathbb {E}_{c_{1}\sim p(c_{1}), s_{2}\sim q(s_{2})}[||E^{s}_{2}(G_{2}(c_{1},s_{2}))-s_{2}||_{1}] \end{aligned}$$
    (3)

    where \(q(s_{2})\) is the prior \(\mathcal {N}(0, \mathbf {I})\), and \(p(c_{1})\) is given by \(c_{1} = E^{c}_{1}(x_{1})\) with \(x_{1}\sim p(x_{1})\).

We note that the other loss terms \(\mathcal {L}^{x_{2}}_{\text {recon}}\), \(\mathcal {L}^{c_{2}}_{\text {recon}}\), and \(\mathcal {L}^{s_{1}}_{\text {recon}}\) are defined in a similar manner. We use the \(\mathcal {L}_{1}\) reconstruction loss as it encourages sharp output images.

The style reconstruction loss \(\mathcal {L}^{s_{i}}_{\text {recon}}\) is reminiscent of the latent reconstruction loss used in prior works [11, 31, 44, 57]. It has the effect of encouraging diverse outputs given different style codes. The content reconstruction loss \(\mathcal {L}^{c_{i}}_{\text {recon}}\) encourages the translated image to preserve the semantic content of the input image.
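
The bidirectional reconstruction terms of Eqs. (1)–(3) can be sketched as follows for the \(\mathcal {X}_{1}\rightarrow \mathcal {X}_{2}\) direction (the other direction is symmetric). The encoder/decoder handles and the style dimension are placeholders, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def recon_losses_1to2(x1, E1_c, E1_s, E2_c, E2_s, G1, G2, style_dim=8):
    """Image reconstruction (Eq. 1) and latent reconstruction (Eqs. 2-3)."""
    # Within-domain image reconstruction: x1 -> (c1, s1) -> x1
    c1, s1 = E1_c(x1), E1_s(x1)
    loss_x1 = F.l1_loss(G1(c1, s1), x1)            # Eq. (1)

    # Cross-domain latent reconstruction: (c1, s2) -> x_{1->2} -> (c1, s2)
    s2 = torch.randn(x1.size(0), style_dim, 1, 1)  # s2 ~ q(s2) = N(0, I)
    x12 = G2(c1, s2)
    loss_c1 = F.l1_loss(E2_c(x12), c1)             # Eq. (2)
    loss_s2 = F.l1_loss(E2_s(x12), s2)             # Eq. (3)
    return loss_x1, loss_c1, loss_s2
```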

Adversarial Loss. We employ GANs to match the distribution of translated images to the target data distribution. In other words, images generated by our model should be indistinguishable from real images in the target domain.

$$\begin{aligned} \mathcal {L}^{x_{2}}_{\text {GAN}}&=\mathbb {E}_{c_{1}\sim p(c_{1}), s_{2}\sim q(s_{2})}[\log (1-D_{2}(G_{2}(c_{1},s_{2})))] + \mathbb {E}_{x_{2}\sim p(x_{2})}[\log D_{2}(x_{2})] \end{aligned}$$
(4)

where \(D_{2}\) is a discriminator that tries to distinguish between translated images and real images in \(\mathcal {X}_{2}\). The discriminator \(D_{1}\) and loss \(\mathcal {L}^{x_{1}}_{\text {GAN}}\) are defined similarly.
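
A sketch of the adversarial term of Eq. (4), written with the standard cross-entropy objective (Sect. 5.1 notes that the experiments actually use the LSGAN variant [38]) and the common non-saturating generator loss; `D2` is a hypothetical discriminator returning raw logits.

```python
import torch
import torch.nn.functional as F

def gan_losses_x2(x12_fake, x2_real, D2):
    """Eq. (4): D2 separates translated images x_{1->2} from real images x2."""
    real_logits = D2(x2_real)
    fake_logits = D2(x12_fake.detach())            # detach: do not backprop into the generator here
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
             F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))

    gen_logits = D2(x12_fake)                      # generator tries to make D2 output "real"
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss
```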

Total Loss. We jointly train the encoders, decoders, and discriminators to optimize the final objective, which is a weighted sum of the adversarial loss and the bidirectional reconstruction loss terms.

$$\begin{aligned}&\underset{E_{1}, E_{2}, G_{1}, G_{2}}{\min }\ \underset{D_{1}, D_{2}}{\max }\ \mathcal {L}(E_{1}, E_{2}, G_{1}, G_{2}, D_{1}, D_{2}) = \mathcal {L}^{x_{1}}_{\text {GAN}} + \mathcal {L}^{x_{2}}_{\text {GAN}}\ \nonumber \\&\quad +\lambda _{x}(\mathcal {L}^{x_{1}}_{\text {recon}}+\mathcal {L}^{x_{2}}_{\text {recon}})+\lambda _{c}(\mathcal {L}^{c_{1}}_{\text {recon}}+\mathcal {L}^{c_{2}}_{\text {recon}})+\lambda _{s}(\mathcal {L}^{s_{1}}_{\text {recon}}+\mathcal {L}^{s_{2}}_{\text {recon}}) \end{aligned}$$
(5)

where \(\lambda _{x}\), \(\lambda _{c}\), \(\lambda _{s}\) are weights that control the importance of reconstruction terms.
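
A sketch of how the individual terms combine into Eq. (5); the weight values shown are illustrative placeholders rather than the exact settings used in the experiments. In practice the discriminators and the encoders/decoders are updated with alternating gradient steps on the two returned losses.

```python
def total_losses(gan, recon, lambda_x=10.0, lambda_c=1.0, lambda_s=1.0):
    """Weighted objective of Eq. (5). `gan` and `recon` are dicts holding the
    per-direction terms computed as sketched above; weights are illustrative."""
    gen_loss = (gan['g_x1'] + gan['g_x2']
                + lambda_x * (recon['x1'] + recon['x2'])
                + lambda_c * (recon['c1'] + recon['c2'])
                + lambda_s * (recon['s1'] + recon['s2']))
    dis_loss = gan['d_x1'] + gan['d_x2']
    return gen_loss, dis_loss
```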

4 Theoretical Analysis

We now establish some theoretical properties of our framework. Specifically, we show that minimizing the proposed loss function leads to (1) matching of latent distributions during encoding and generation, (2) matching of two joint image distributions induced by our framework, and (3) enforcing a weak form of cycle consistency constraint. All the proofs are given in the supplementary material.

First, we note that the total loss in Eq. (5) is minimized when the translated distribution matches the data distribution and the encoder-decoder are inverses.

Proposition 1

Suppose there exist \(E^{*}_{1}\), \(E^{*}_{2}\), \(G^{*}_{1}\), \(G^{*}_{2}\) such that: (1) \(E^{*}_{1} = (G^{*}_{1})^{-1}\) and \(E^{*}_{2} = (G^{*}_{2})^{-1}\), and (2) \(p(x_{1\rightarrow 2}) = p(x_{2})\) and \(p(x_{2\rightarrow 1}) = p(x_{1})\). Then \(E^{*}_{1}\), \(E^{*}_{2}\), \(G^{*}_{1}\), \(G^{*}_{2}\) minimize \(\mathcal {L}(E_{1}, E_{2}, G_{1}, G_{2})=\underset{D_{1}, D_{2}}{\max }\ \mathcal {L}(E_{1}, E_{2}, G_{1}, G_{2}, D_{1}, D_{2})\) (Eq. (5)).

Latent Distribution Matching. For image generation, existing works on combining auto-encoders and GANs need to match the encoded latent distribution with the latent distribution the decoder receives at generation time, using either KLD loss [15, 40] or adversarial loss [17, 42] in the latent space. The auto-encoder training would not help GAN training if the decoder received a very different latent distribution during generation. Although our loss function does not contain terms that explicitly encourage the match of latent distributions, it has the effect of matching them implicitly.

Proposition 2

When optimality is reached, we have:

$$\begin{aligned} p(c_{1})=p(c_{2}),\ p(s_{1})=q(s_{1}),\ p(s_{2})=q(s_{2}) \end{aligned}$$

The above proposition shows that at optimality, the encoded style distributions match their Gaussian priors. Also, the encoded content distribution matches the distribution at generation time, which is just the encoded distribution from the other domain. This suggests that the content space becomes domain-invariant.

Joint Distribution Matching. Our model learns two conditional distributions \(p(x_{1\rightarrow 2}| x_{1})\) and \(p(x_{2\rightarrow 1}| x_{2})\), which, together with the data distributions, define two joint distributions \(p(x_{1}, x_{1\rightarrow 2})\) and \(p(x_{2\rightarrow 1}, x_{2})\). Since both of them are designed to approximate the same underlying joint distribution \(p(x_{1}, x_{2})\), it is desirable that they are consistent with each other, i.e., \(p(x_{1}, x_{1\rightarrow 2}) = p(x_{2\rightarrow 1}, x_{2})\).

Joint distribution matching provides an important constraint for unsupervised image-to-image translation and is behind the success of many recent methods. Here, we show our model matches the joint distributions at optimality.

Proposition 3

When optimality is reached, we have \(p(x_{1}, x_{1\rightarrow 2}) = p(x_{2\rightarrow 1}, x_{2})\).

Style-Augmented Cycle Consistency. Joint distribution matching can be realized via a cycle consistency constraint [8], assuming deterministic translation models and matched marginals [43, 67, 68]. However, we note that this constraint is too strong for multimodal image translation. In fact, we prove in the supplementary material that the translation model will degenerate to a deterministic function if cycle consistency is enforced. In the following proposition, we show that our framework admits a weaker form of cycle consistency, termed style-augmented cycle consistency, between the image–style joint spaces, which is more suited for multimodal image translation.

Proposition 4

Denote \(h_{1}=(x_{1}, s_{2})\in \mathcal {H}_{1}\) and \(h_{2}=(x_{2}, s_{1})\in \mathcal {H}_{2}\). \(h_{1}, h_{2}\) are points in the joint spaces of image and style. Our model defines a deterministic mapping \(F_{1\rightarrow 2}\) from \(\mathcal {H}_{1}\) to \(\mathcal {H}_{2}\) (and vice versa) by \(F_{1\rightarrow 2}(h_{1}) = F_{1\rightarrow 2}(x_{1}, s_{2})\triangleq (G_{2}(E^{c}_{1}(x_{1}), s_{2}), E^{s}_{1}(x_{1}))\). When optimality is achieved, we have \(F_{1\rightarrow 2} = F_{2\rightarrow 1}^{-1}\).

Intuitively, style-augmented cycle consistency implies that if we translate an image to the target domain and translate it back using the original style, we should obtain the original image. Note that we do not use any explicit loss terms to enforce style-augmented cycle consistency, but it is implied by the proposed bidirectional reconstruction loss.
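
The mapping of Proposition 4 and the induced round trip can be written out explicitly; this is a property that should hold at optimality, not an additional training loss, and the module handles are again placeholders.

```python
def F_1to2(x1, s2, E1_c, E1_s, G2):
    """F_{1->2}(x1, s2) = (G2(E1_c(x1), s2), E1_s(x1))  (Proposition 4)."""
    return G2(E1_c(x1), s2), E1_s(x1)

def F_2to1(x2, s1, E2_c, E2_s, G1):
    """The inverse-direction mapping, defined symmetrically."""
    return G1(E2_c(x2), s1), E2_s(x2)

# Style-augmented cycle: translate with a sampled s2, then translate back while
# carrying the original style s1. At optimality, x1_back == x1 and s2_back == s2:
#   x12, s1 = F_1to2(x1, s2, E1_c, E1_s, G2)
#   x1_back, s2_back = F_2to1(x12, s1, E2_c, E2_s, G1)
```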

5 Experiments

5.1 Implementation Details

Figure 3 shows the architecture of our auto-encoder. It consists of a content encoder, a style encoder, and a joint decoder. More detailed information and hyperparameters are given in the supplementary material. We will provide an open-source implementation in PyTorch [69].

Content Encoder. Our content encoder consists of several strided convolutional layers to downsample the input and several residual blocks [70] to further process it. All the convolutional layers are followed by Instance Normalization (IN) [71].

Style Encoder. The style encoder includes several strided convolutional layers, followed by a global average pooling layer and a fully connected (FC) layer. We do not use IN layers in the style encoder, since IN removes the original feature mean and variance that represent important style information [54].
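
A compact sketch of the two encoders described above. The layer counts, channel widths, and the 8-dimensional style code are illustrative choices under the stated design (IN throughout the content encoder, no IN in the style encoder), not a transcription of the released architecture.

```python
import torch.nn as nn

class ResBlockIN(nn.Module):
    """Residual block with Instance Normalization, used in the content encoder."""
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim, 3, 1, 1), nn.InstanceNorm2d(dim), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, 1, 1), nn.InstanceNorm2d(dim))
    def forward(self, x):
        return x + self.block(x)

class ContentEncoder(nn.Module):
    """Strided convolutions to downsample, then residual blocks; all layers use IN."""
    def __init__(self, in_ch=3, dim=64, n_down=2, n_res=4):
        super().__init__()
        layers = [nn.Conv2d(in_ch, dim, 7, 1, 3), nn.InstanceNorm2d(dim), nn.ReLU(inplace=True)]
        for _ in range(n_down):
            layers += [nn.Conv2d(dim, 2 * dim, 4, 2, 1), nn.InstanceNorm2d(2 * dim), nn.ReLU(inplace=True)]
            dim *= 2
        layers += [ResBlockIN(dim) for _ in range(n_res)]
        self.model = nn.Sequential(*layers)
    def forward(self, x):
        return self.model(x)

class StyleEncoder(nn.Module):
    """Strided convolutions, global average pooling, and an FC layer; no IN,
    since IN would remove the feature statistics that encode style."""
    def __init__(self, in_ch=3, dim=64, n_down=4, style_dim=8):
        super().__init__()
        layers = [nn.Conv2d(in_ch, dim, 7, 1, 3), nn.ReLU(inplace=True)]
        for _ in range(n_down):
            layers += [nn.Conv2d(dim, 2 * dim, 4, 2, 1), nn.ReLU(inplace=True)]
            dim *= 2
        layers += [nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, style_dim, 1)]  # GAP + 1x1 conv as the FC layer
        self.model = nn.Sequential(*layers)
    def forward(self, x):
        return self.model(x)  # shape (N, style_dim, 1, 1)
```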

Fig. 3.

Our auto-encoder architecture. The content encoder consists of several strided convolutional layers followed by residual blocks. The style encoder contains several strided convolutional layers followed by a global average pooling layer and a fully connected layer. The decoder uses a MLP to produce a set of AdaIN [54] parameters from the style code. The content code is then processed by residual blocks with AdaIN layers, and finally decoded to the image space by upsampling and convolutional layers.

Decoder. Our decoder reconstructs the input image from its content and style code. It processes the content code by a set of residual blocks and finally produces the reconstructed image by several upsampling and convolutional layers. Inspired by recent works that use affine transformation parameters in normalization layers to represent styles [54, 72,73,74], we equip the residual blocks with Adaptive Instance Normalization (AdaIN) [54] layers whose parameters are dynamically generated by a multilayer perceptron (MLP) from the style code.

$$\begin{aligned} {\text {AdaIN}}(z, \gamma , \beta )= \gamma \left( \frac{z-\mu (z)}{\sigma (z)}\right) +\beta \end{aligned}$$
(6)

where z is the activation of the previous convolutional layer, \(\mu \) and \(\sigma \) are the channel-wise mean and standard deviation, and \(\gamma \) and \(\beta \) are parameters generated by the MLP. Note that the affine parameters are produced by a learned network, instead of being computed from statistics of a pretrained network as in Huang et al. [54].
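
Equation (6) translates directly into code. The sketch below (an illustrative re-implementation, not the released code) computes channel-wise statistics over spatial positions and applies affine parameters produced by a small MLP from the style code; the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

def adain(z, gamma, beta, eps=1e-5):
    """Eq. (6): normalize each channel of z over its spatial positions,
    then scale by gamma and shift by beta.
    z: (N, C, H, W); gamma, beta: (N, C) generated from the style code."""
    mu = z.mean(dim=(2, 3), keepdim=True)
    sigma = z.std(dim=(2, 3), keepdim=True) + eps
    z_norm = (z - mu) / sigma
    return gamma[..., None, None] * z_norm + beta[..., None, None]

# Usage: an MLP maps the style code to one (gamma, beta) pair per AdaIN layer.
C, style_dim = 256, 8
mlp = nn.Sequential(nn.Linear(style_dim, 256), nn.ReLU(inplace=True), nn.Linear(256, 2 * C))

s = torch.randn(4, style_dim)            # style codes for a batch of 4
gamma, beta = mlp(s).chunk(2, dim=1)     # split the MLP output into scale and shift
z = torch.randn(4, C, 64, 64)            # activations of the previous convolutional layer
out = adain(z, gamma, beta)
```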

Discriminator. We use the LSGAN objective proposed by Mao et al.  [38]. We employ multi-scale discriminators proposed by Wang et al.  [20] to guide the generators to produce both realistic details and correct global structure.

Domain-Invariant Perceptual Loss. The perceptual loss, often computed as a distance in the VGG [75] feature space between the output and the reference image, has been shown to benefit image-to-image translation when paired supervision is available [13, 20]. In the unsupervised setting, however, we do not have a reference image in the target domain. We propose a modified version of the perceptual loss that is more domain-invariant, so that we can use the input image as the reference. Specifically, before computing the distance, we perform Instance Normalization [71] (without affine transformations) on the VGG features in order to remove the original feature mean and variance, which contain much domain-specific information [54, 76]. We find that it accelerates training on high-resolution (\(\ge 512\times 512\)) datasets and thus employ it on those datasets.
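
A sketch of the domain-invariant perceptual loss: instance-normalize the VGG features of the translated output and of the input (used as the reference) before taking a distance. The choice of torchvision's VGG16 truncated at an intermediate layer, and of a squared-error distance, are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumption: features from an intermediate layer of a pretrained VGG16.
vgg_features = models.vgg16(pretrained=True).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def domain_invariant_perceptual_loss(output, reference):
    """Instance-normalize the VGG features (removing per-channel mean/variance,
    which carry domain-specific information) before computing the distance."""
    f_out = F.instance_norm(vgg_features(output))
    f_ref = F.instance_norm(vgg_features(reference))
    return F.mse_loss(f_out, f_ref)
```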

5.2 Evaluation Metrics

Human Preference. To compare the realism and faithfulness of translation outputs generated by different methods, we perform a human perceptual study on Amazon Mechanical Turk (AMT). Similar to Wang et al. [20], the workers are given an input image and two translation outputs from different methods. They are then given unlimited time to select which translation output looks more accurate. For each comparison, we randomly generate 500 questions and each question is answered by 5 different workers.

LPIPS Distance. To measure translation diversity, we compute the average LPIPS distance [77] between pairs of randomly-sampled translation outputs from the same input as in Zhu et al. [11]. LPIPS is given by a weighted \(\mathcal {L}_{2}\) distance between deep features of images. It has been demonstrated to correlate well with human perceptual similarity [77]. Following Zhu et al. [11], we use 100 input images and sample 19 output pairs per input, which amounts to 1900 pairs in total. We use the ImageNet-pretrained AlexNet [78] as the deep feature extractor.
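
The diversity protocol can be reproduced with the reference lpips package; the package name, its `LPIPS(net='alex')` interface, and the `sample_outputs` helper are tooling assumptions rather than part of the paper.

```python
import torch
import lpips  # pip install lpips -- reference implementation of [77]

loss_fn = lpips.LPIPS(net='alex')  # AlexNet-based LPIPS, as described above

def diversity_score(sample_outputs, n_inputs=100, n_pairs=19):
    """Average LPIPS distance over random pairs of translations of the same input.
    `sample_outputs(i)` is assumed to return one random translation of input i
    as a (1, 3, H, W) tensor scaled to [-1, 1]."""
    dists = []
    for i in range(n_inputs):
        for _ in range(n_pairs):
            a, b = sample_outputs(i), sample_outputs(i)  # two independent style samples
            with torch.no_grad():
                dists.append(loss_fn(a, b).item())
    return sum(dists) / len(dists)
```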

(Conditional) Inception Score. The Inception Score (IS) [34] is a popular metric for image generation tasks. We propose a modified version called Conditional Inception Score (CIS), which is more suited for evaluating multimodal image translation. When we know the number of modes in \(\mathcal {X}_{2}\) as well as the ground truth mode each sample belongs to, we can train a classifier \(p(y_{2}|x_{2})\) to classify an image \(x_{2}\) into its mode \(y_{2}\). Conditioned on a single input image \(x_{1}\), the translation samples \(x_{1\rightarrow 2}\) should be mode-covering (thus \(p(y_2|x_{1})= \int p(y_{2}|x_{1\rightarrow 2})p(x_{1\rightarrow 2}|x_{1}) \,dx_{1\rightarrow 2}\) should have high entropy) and each individual sample should belong to a specific mode (thus \(p(y_{2}|x_{1\rightarrow 2})\) should have low entropy). Combining these two requirements we get:

$$\begin{aligned} {\text {CIS}} = \mathbb {E}_{x_{1}\sim p(x_{1})}[ \mathbb {E}_{x_{1\rightarrow 2}\sim p(x_{1\rightarrow 2}|x_{1})} [{\text {KL}}(p(y_{2}|x_{1\rightarrow 2})||p(y_{2}|x_{1}))]] \end{aligned}$$
(7)

To compute the (unconditional) IS, \(p(y_{2}|x_{1})\) is replaced with the unconditional class probability \(p(y_{2})= \iint p(y_{2}|x_{1\rightarrow 2})p(x_{1\rightarrow 2}|x_{1})p(x_{1}) \,dx_{1}\,dx_{1\rightarrow 2}\).

$$\begin{aligned} {\text {IS}} = \mathbb {E}_{x_{1}\sim p(x_{1})}[ \mathbb {E}_{x_{1\rightarrow 2}\sim p(x_{1\rightarrow 2}|x_{1})} [{\text {KL}}(p(y_{2}|x_{1\rightarrow 2})||p(y_{2}))]] \end{aligned}$$
(8)

To obtain a high CIS/IS score, a model needs to generate samples that are both high-quality and diverse. While IS measures diversity of all output images, CIS measures diversity of outputs conditioned on a single input image. A model that deterministically generates a single output given an input image will receive a zero CIS score, though it might still get a high score under IS. We use the Inception-v3 [79] fine-tuned on our specific datasets as the classifier and estimate Eqs. (7) and (8) using 100 input images and 100 samples per input.
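
Given the classifier posteriors, Eqs. (7) and (8) reduce to averaging KL divergences against the conditional and unconditional class marginals. The sketch below assumes `probs[i, j]` stores \(p(y_{2}|x_{1\rightarrow 2})\) for the j-th random translation of the i-th input; note that the original IS [34] reports the exponential of this expected KL.

```python
import numpy as np

def cis_and_is(probs, eps=1e-12):
    """probs: array of shape (N_inputs, N_samples, N_classes), where each row is a
    classifier posterior p(y2 | x_{1->2}). Returns the expected-KL forms of
    Eqs. (7) and (8)."""
    p = np.clip(probs, eps, 1.0)
    p_y_given_x1 = p.mean(axis=1, keepdims=True)  # p(y2 | x1): marginalize over style samples
    p_y = p.mean(axis=(0, 1), keepdims=True)      # p(y2): marginalize over inputs as well

    kl_cond = (p * (np.log(p) - np.log(p_y_given_x1))).sum(axis=2)  # KL(p(y2|x_{1->2}) || p(y2|x1))
    kl_marg = (p * (np.log(p) - np.log(p_y))).sum(axis=2)           # KL(p(y2|x_{1->2}) || p(y2))

    return kl_cond.mean(), kl_marg.mean()  # (CIS, IS) of Eqs. (7) and (8)
```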

5.3 Baselines

UNIT [15]. The UNIT model consists of two VAE-GANs with a fully shared latent space. The stochasticity of the translation comes from the Gaussian encoders as well as the dropout layers in the VAEs.

CycleGAN [8]. CycleGAN consists of two residual translation networks trained with adversarial loss and cycle reconstruction loss. We use Dropout during both training and testing to encourage diversity, as suggested in Isola et al. [6].

CycleGAN* [8] with Noise. To test whether we can generate multimodal outputs within the CycleGAN framework, we additionally inject noise vectors into both translation networks. We use the U-net architecture [11] with noise added to the input, since we find the noise vectors are ignored by the residual architecture in CycleGAN [8]. Dropout is also utilized during both training and testing.

BicycleGAN [11]. BicycleGAN is the only existing image-to-image translation model we are aware of that can generate continuous and multimodal output distributions. However, it requires paired training data. We compare our model with BicycleGAN when the dataset contains pair information.

5.4 Datasets

Edges \(\leftrightarrow \) Shoes/Handbags. We use the datasets provided by Isola et al.  [6], Yu et al.  [80], and Zhu et al. [81], which contain images of shoes and handbags with edge maps generated by HED [82]. We train one model for edges \(\leftrightarrow \) shoes and another for edges \(\leftrightarrow \) handbags without using paired information.

Animal Image Translation. We collect images from 3 categories/domains, including house cats, big cats, and dogs. Each domain contains 4 modes, which are fine-grained categories belonging to the same parent category. Note that the modes of the images are not known when learning the translation model. We learn a separate model for each pair of domains.

Street Scene Images. We experiment with two street scene translation tasks:

  • Synthetic \(\leftrightarrow \) Real. We perform translation between synthetic images in the SYNTHIA dataset [83] and real-world images in the Cityscapes dataset [84]. For the SYNTHIA dataset, we use the SYNTHIA-Seqs subset, which contains images in different seasons, weather, and illumination conditions.

  • Summer \(\leftrightarrow \) Winter. We use the dataset from Liu et al. [15], which contains summer and winter street images extracted from real-world driving videos.

Yosemite Summer \(\leftrightarrow \) Winter (HD). We collect a new high-resolution dataset containing 3253 summer photos and 2385 winter photos of Yosemite. The images are downsampled such that the shortest side of each image is 1024 pixels.

Fig. 4.

Qualitative comparison on edges \(\rightarrow \) shoes. The first column shows the input and ground truth output. Each following column shows 3 random outputs from a method.

Table 1. Quantitative evaluation on edges \(\rightarrow \) shoes/handbags. The diversity score is the average LPIPS distance [77]. The quality score is the human preference score, the percentage a method is preferred over MUNIT. For both metrics, the higher the better.

5.5 Results

First, we qualitatively compare MUNIT with the four baselines above, and three variants of MUNIT that ablate \(\mathcal {L}^{x}_{\text {recon}}\), \(\mathcal {L}^{c}_{\text {recon}}\), \(\mathcal {L}^{s}_{\text {recon}}\), respectively. Figure 4 shows example results on edges \(\rightarrow \) shoes. Both UNIT and CycleGAN (with or without noise) fail to generate diverse outputs, despite the injected randomness. Without \(\mathcal {L}^{x}_{\text {recon}}\) or \(\mathcal {L}^{c}_{\text {recon}}\), the image quality of MUNIT is unsatisfactory. Without \(\mathcal {L}^{s}_{\text {recon}}\), the model suffers from partial mode collapse, with many outputs being almost identical (e.g., the first two rows). Our full model produces images that are both diverse and realistic, similar to BicycleGAN but without needing supervision.

The qualitative observations above are confirmed by quantitative evaluations. We use human preference to measure quality and LPIPS distance to evaluate diversity, as described in Sect. 5.2. We conduct this experiment on the task of edges \(\rightarrow \) shoes/handbags. As shown in Table 1, UNIT and CycleGAN produce very little diversity according to LPIPS distance. Removing \(\mathcal {L}^{x}_{\text {recon}}\) or \(\mathcal {L}^{c}_{\text {recon}}\) from MUNIT leads to significantly worse quality. Without \(\mathcal {L}^{s}_{\text {recon}}\), both quality and diversity deteriorate. The full model obtains quality and diversity comparable to the fully supervised BicycleGAN, and significantly better than all unsupervised baselines. In Fig. 5, we show more example results on edges \(\leftrightarrow \) shoes/handbags.

Fig. 5.

Example results of (a) edges \(\leftrightarrow \) shoes and (b) edges \(\leftrightarrow \) handbags.

Fig. 6.

Example results of animal image translation.

Fig. 7.

Example results on street scene translations.

Fig. 8.

Example results on Yosemite summer \(\leftrightarrow \) winter (HD resolution).

Fig. 9.

Image translation. Each row has the same content while each column has the same style. The color of the generated shoes and the appearance of the generated cats can be specified by providing example style images.

We proceed to perform experiments on the animal image translation dataset. As shown in Fig. 6, our model successfully translates one kind of animal to another. Given an input image, the translation outputs cover multiple modes, i.e., multiple fine-grained animal categories in the target domain. The shape of an animal undergoes significant transformations, but the pose is overall preserved. As shown in Table 2, our model obtains the highest scores according to both CIS and IS. In particular, the baselines all obtain a very low CIS, indicating their failure to generate multimodal outputs from a given input. As the IS has been shown to correlate well with image quality [34], the higher IS of our method suggests that it also generates images of higher quality than the baseline approaches.

Table 2. Quantitative evaluation on animal image translation. This dataset contains 3 domains. We perform bidirectional translation for each domain pair, resulting in 6 translation tasks. We use CIS and IS to measure the performance on each task. To obtain a high CIS/IS score, a model needs to generate samples that are both high-quality and diverse. While IS measures diversity of all output images, CIS measures diversity of outputs conditioned on a single input image.

Figure 7 shows results on the street scene datasets. Our model is able to generate SYNTHIA images with diverse renderings (e.g., rainy, snowy, sunset) from a given Cityscapes image, and generate Cityscapes images with different lighting, shadow, and road textures from a given SYNTHIA image. Similarly, it generates winter images with different amounts of snow from a given summer image, and summer images with different amounts of foliage from a given winter image. Figure 8 shows example results of summer \(\leftrightarrow \) winter transfer on the high-resolution Yosemite dataset. Our algorithm generates output images with different lighting.

Example-Guided Image Translation. Instead of sampling the style code from the prior, it is also possible to extract the style code from a reference image. Specifically, given a content image \(x_{1}\in \mathcal {X}_{1}\) and a style image \(x_{2}\in \mathcal {X}_{2}\), our model produces an image \(x_{1\rightarrow 2}\) that recombines the content of the former and the style of the latter by \(x_{1\rightarrow 2} = G_{2}(E^{c}_{1}(x_{1}), E^{s}_{2}(x_{2}))\). Examples are shown in Fig. 9.

6 Conclusions

We presented a framework for multimodal unsupervised image-to-image translation. Our model achieves quality and diversity superior to existing unsupervised methods and comparable to the state-of-the-art supervised approach.