
1 Introduction

Image-to-Image (I2I) translation aims to learn the mapping between different visual domains. Many vision and graphics problems can be formulated as I2I translation problems, such as colorization [21, 43] (grayscale \(\rightarrow \) color), super-resolution [20, 23, 24] (low-resolution \(\rightarrow \) high-resolution), and photorealistic image synthesis [6, 39] (label \(\rightarrow \) image). Furthermore, I2I translation has recently shown promising results in facilitating domain adaptation [3, 15, 30, 33].

Learning the mapping between two visual domains is challenging for two main reasons. First, aligned training image pairs are either difficult to collect (e.g., day scene \(\leftrightarrow \) night scene) or do not exist (e.g., artwork \(\leftrightarrow \) real photo). Second, many such mappings are inherently multimodal: a single input may correspond to multiple possible outputs. To handle multimodal translation, one possible approach is to inject a random noise vector into the generator to model the data distribution in the target domain. However, mode collapse may still occur easily since the generator often ignores the additional noise vectors.

Several recent efforts have been made to address these issues. Pix2pix [17] applies a conditional generative adversarial network to I2I translation problems. Nevertheless, the training process requires paired data. A number of recent works [9, 25, 35, 41, 45] relax the dependency on paired training data for learning I2I translation. These methods, however, produce a single output conditioned on the given input image. As shown in [17, 46], simply incorporating noise vectors as additional inputs to the generator does not lead to increased variation in the generated outputs due to the mode collapse issue: the generators in these methods are inclined to overlook the added noise vectors. Very recently, BicycleGAN [46] tackles the problem of generating diverse outputs in I2I problems by encouraging a one-to-one relationship between the output and the latent vector. Nevertheless, the training process of BicycleGAN requires paired images.

Fig. 1. Unpaired diverse image-to-image translation. (Left) Our model performs diverse translation between two collections of images without aligned training pairs. (Right) Example-guided translation.

Fig. 2. Comparisons of unsupervised I2I translation methods. Denote x and y as images in domains \(\mathcal {X}\) and \(\mathcal {Y}\): (a) CycleGAN [45] and DiscoGAN [18] map x and y onto separate latent spaces. (b) UNIT [25] assumes x and y can be mapped onto a shared latent space. (c) Our approach disentangles the latent spaces of x and y into a shared content space \(\mathcal {C}\) and an attribute space \(\mathcal {A}\) of each domain.

In this paper, we propose a disentangled representation framework for learning to generate diverse outputs with unpaired training data. Specifically, we propose to embed images onto two spaces: (1) a domain-invariant content space and (2) a domain-specific attribute space as shown in Fig. 2. Our generator learns to perform I2I translation conditioned on content features and a latent attribute vector. The domain-specific attribute space aims to model the variations within a domain given the same content, while the domain-invariant content space captures information across domains. We achieve this representation disentanglement by applying a content adversarial loss to encourage the content features not to carry domain-specific cues, and a latent regression loss to encourage the invertible mapping between the latent attribute vectors and the corresponding outputs. To handle unpaired datasets, we propose a cross-cycle consistency loss using the disentangled representations. Given a pair of unaligned images, we first perform a cross-domain mapping to obtain intermediate results by swapping the attribute vectors from both images. We can then reconstruct the original input image pair by applying the cross-domain mapping one more time and use the proposed cross-cycle consistency loss to enforce the consistency between the original and the reconstructed images. At test time, we can use either (1) randomly sampled vectors from the attribute space to generate diverse outputs or (2) the transferred attribute vectors extracted from existing images for example-guided translation. Figure 1 shows examples of the two testing modes.

We evaluate the proposed model through extensive qualitative and quantitative evaluation. In a wide variety of I2I tasks, we show diverse translation results with randomly sampled attribute vectors and example-guided translation with transferred attribute vectors from existing images. We evaluate the realism of our results with a user study and the diversity using perceptual distance metrics [44]. Furthermore, we demonstrate the potential application of unsupervised domain adaptation. On the tasks of adapting domains from MNIST [22] to MNIST-M [12] and Synthetic Cropped LineMod to Cropped LineMod [14, 40], we show competitive performance against state-of-the-art domain adaptation methods.

Table 1. Feature-by-feature comparison of image-to-image translation networks. Our model achieves multimodal translation without using aligned training image pairs.

We make the following contributions:

(1) We introduce a disentangled representation framework for image-to-image translation. We apply a content discriminator to facilitate the factorization into a domain-invariant content space and a domain-specific attribute space, and propose a cross-cycle consistency loss that allows us to train the model with unpaired data.

(2) Extensive qualitative and quantitative experiments show that our model compares favorably against existing I2I models. Images generated by our model are both diverse and realistic.

(3) We demonstrate the application of our model on unsupervised domain adaptation. We achieve competitive results on both the MNIST-M and the Cropped LineMod datasets.

Our code, data and more results are available at https://github.com/HsinYingLee/DRIT/.

2 Related Work

Generative Adversarial Networks. Recent years have witnessed rapid progress on generative adversarial networks (GANs) [2, 13, 31] for image generation. The core idea of GANs lies in the adversarial loss that enforces the distribution of generated images to match that of the target domain. The generators in GANs can map from noise vectors to realistic images. Several recent efforts explore conditional GANs in various contexts, including generation conditioned on text [32], low-resolution images [23], video frames [38], and images [17]. Our work focuses on GANs conditioned on an input image. In contrast to several existing conditional GAN frameworks that require paired training data, our model produces diverse outputs without paired data. This suggests that our method has wider applicability to problems where paired training data are scarce or unavailable.

Fig. 3. Method overview. (a) With the proposed content adversarial loss \(L_\mathrm {adv}^\mathrm {content}\) (Sect. 3.1) and the cross-cycle consistency loss \(L_1^\mathrm {cc}\) (Sect. 3.2), we are able to learn the multimodal mapping between the domains \(\mathcal {X}\) and \(\mathcal {Y}\) with unpaired data. Thanks to the proposed disentangled representation, we can generate output images conditioned on either (b) random attributes or (c) a given attribute at test time.

Image-to-Image Translation. I2I translation aims to learn the mapping from a source image domain to a target image domain. Pix2pix [17] applies a conditional GAN to model the mapping function. Although high-quality results have been shown, the model training requires paired training data. To train with unpaired data, CycleGAN [45], DiscoGAN [18], and UNIT [25] leverage cycle consistency to regularize the training. However, these methods perform generation conditioned solely on an input image and thus produce a single output. Simply injecting a noise vector into a generator is usually not an effective solution for multimodal generation due to the lack of regularization between the noise vectors and the target domain. BicycleGAN [46], on the other hand, enforces a bijective mapping between the latent and target spaces to tackle the mode collapse problem. Nevertheless, the method is only applicable to problems with paired training data. Table 1 shows a feature-by-feature comparison among various I2I models. Unlike existing work, our method enables I2I translation with diverse outputs in the absence of paired training data.

Very recently, several concurrent works [1, 5, 16, 27] (all independently developed) also adopt a disentangled representation similar to our work for learning diverse I2I translation from unpaired training data. We encourage the readers to review these works for a complete picture.

Disentangled Representations. The task of learning disentangled representation aims at modeling the factors of data variations. Previous work makes use of labeled data to factorize representations into class-related and class-independent components [8, 19, 28, 29]. Recently, the unsupervised setting has been explored [7, 10]. InfoGAN [7] achieves disentanglement by maximizing the mutual information between latent variables and data variation. Similar to DrNet [10] that separates time-independent and time-varying components with an adversarial loss, we apply a content adversarial loss to disentangle an image into domain-invariant and domain-specific representations to facilitate learning diverse cross-domain mappings.

Domain Adaptation. Domain adaptation techniques focus on addressing the domain-shift problem between a source and a target domain. Domain Adversarial Neural Network (DANN) [11, 12] and its variants [4, 36, 37] tackle domain adaptation by learning domain-invariant features. Sun et al. [34] aim to map features in the source domain to those in the target domain. I2I translation has recently been applied to produce simulated images in the target domain by translating images from the source domain [11, 15]. Different from the aforementioned I2I-based domain adaptation algorithms, our method does not utilize source domain annotations for I2I translation.

3 Disentangled Representation for I2I Translation

Our goal is to learn a multimodal mapping between two visual domains \(\mathcal {X} \subset \mathbb {R}^{H\times W \times 3}\) and \(\mathcal {Y} \subset \mathbb {R}^{H\times W \times 3}\) without paired training data. As illustrated in Fig. 3, our framework consists of content encoders \(\{E^c_\mathcal {X}, E^c_\mathcal {Y}\}\), attribute encoders \(\{E^a_\mathcal {X}, E^a_\mathcal {Y}\}\), generators \(\{G_\mathcal {X}, G_\mathcal {Y}\}\), and domain discriminators \(\{D_\mathcal {X}, D_\mathcal {Y}\}\) for both domains, as well as a content discriminator \(D^c\). Taking domain \(\mathcal {X}\) as an example, the content encoder \(E^c_\mathcal {X}\) maps images onto a shared, domain-invariant content space (\(E^c_\mathcal {X}:\mathcal {X}\rightarrow \mathcal {C}\)) and the attribute encoder \(E^a_\mathcal {X}\) maps images onto a domain-specific attribute space (\(E^a_\mathcal {X}:\mathcal {X}\rightarrow \mathcal {A}_\mathcal {X}\)). The generator \(G_\mathcal {X}\) synthesizes images conditioned on both content and attribute vectors (\(G_\mathcal {X}:\{\mathcal {C}, \mathcal {A}_\mathcal {X}\} \rightarrow \mathcal {X} \)). The discriminator \(D_\mathcal {X}\) aims to discriminate between real and translated images in domain \(\mathcal {X}\). The content discriminator \(D^c\) is trained to distinguish the extracted content representations of the two domains. To enable multimodal generation at test time, we regularize the attribute vectors so that they can be drawn from a prior Gaussian distribution \(N(0,1)\).
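To make the notation concrete, the following PyTorch-style sketch outlines the interfaces of the three component types for one domain. The layer configurations, feature dimensions, and the 8-dimensional attribute vector are illustrative placeholders, not the authors' exact architecture.

```python
# Minimal interface sketch of the framework's components (placeholder architectures).
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):          # E^c: image -> content feature map
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class AttributeEncoder(nn.Module):        # E^a: image -> attribute vector
    def __init__(self, dim_a=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(128, dim_a)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Generator(nn.Module):               # G: (content, attribute) -> image
    def __init__(self, dim_a=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256 + dim_a, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, 1, 3), nn.Tanh())
    def forward(self, z_c, z_a):
        # Tile the attribute vector spatially and concatenate it with the content map.
        z_a_map = z_a[:, :, None, None].expand(-1, -1, *z_c.shape[2:])
        return self.net(torch.cat([z_c, z_a_map], dim=1))
```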

In this section, we first discuss the strategies used to disentangle the content and attribute representations in Sect. 3.1 and then introduce the proposed cross-cycle consistency loss that enables the training on unpaired data in Sect. 3.2. Finally, we detail the loss functions in Sect. 3.3.

3.1 Disentangle Content and Attribute Representations

Our approach embeds input images onto a shared content space \(\mathcal {C}\), and domain-specific attribute spaces, \(\mathcal {A}_\mathcal {X}\) and \(\mathcal {A}_\mathcal {Y}\). Intuitively, the content encoders should encode the common information that is shared between domains onto \(\mathcal {C}\), while the attribute encoders should map the remaining domain-specific information onto \(\mathcal {A}_\mathcal {X}\) and \(\mathcal {A}_\mathcal {Y}\).

$$\begin{aligned} \begin{aligned}&\{z_x^{c},z_x^{a}\} = \{{E^c_\mathcal {X}}(x), {E^a_\mathcal {X}}(x)\}\quad&z_x^{c}\in \mathcal {C}, z_x^{a}\in \mathcal {A}_\mathcal {X}\\&\{z_y^{c},z_y^{a}\} = \{{E^c_\mathcal {Y}}(y), {E^a_\mathcal {Y}}(y)\}\quad&z_y^{c}\in \mathcal {C}, z_y^{a}\in \mathcal {A}_\mathcal {Y} \end{aligned} \end{aligned}$$
(1)

To achieve representation disentanglement, we apply two strategies: weight sharing and a content discriminator. First, similar to [25], based on the assumption that the two domains share a common latent space, we share the weights of the last layer of \(E^c_\mathcal {X}\) and \(E^c_\mathcal {Y}\) and of the first layer of \(G_\mathcal {X}\) and \(G_\mathcal {Y}\). Through weight sharing, we force the content representations to be mapped onto the same space. However, sharing the same high-level mapping functions cannot guarantee that the content representations encode the same information for both domains. Therefore, we propose a content discriminator \(D^c\) that aims to distinguish the domain membership of the encoded content features \(z_x^{c}\) and \(z_y^{c}\). The content encoders, in turn, learn to produce content representations whose domain membership cannot be distinguished by the content discriminator \(D^c\). We express this content adversarial loss as:

$$\begin{aligned} \begin{aligned} L_{\mathrm {adv}}^{\mathrm {content}}(E^c_\mathcal {X},E^c_\mathcal {Y}, D^c)&= \mathbb {E}_{x}\left[ \frac{1}{2}\log {D^c(E^c_\mathcal {X}(x))}+\frac{1}{2}\log {(1-D^c(E^c_\mathcal {X}(x)))}\right] \\&+ \mathbb {E}_{y}\left[ \frac{1}{2}\log {D^c(E^c_\mathcal {Y}(y))}+\frac{1}{2}\log {(1-D^c(E^c_\mathcal {Y}(y)))}\right] \end{aligned} \end{aligned}$$
(2)
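In practice, Eq. (2) can be realized with binary cross-entropy: the content discriminator is trained to classify the domain of a content code, while the encoders are trained to drive its prediction toward 1/2. The sketch below is one such implementation under these assumptions; the function and variable names are hypothetical, not the authors' code.

```python
import torch
import torch.nn.functional as F

def content_adv_loss_D(D_c, z_x_c, z_y_c):
    # Content discriminator update: classify which domain a content code came from
    # (domain X -> 1, domain Y -> 0). Codes are detached so only D^c is updated.
    pred_x, pred_y = D_c(z_x_c.detach()), D_c(z_y_c.detach())
    return (F.binary_cross_entropy_with_logits(pred_x, torch.ones_like(pred_x)) +
            F.binary_cross_entropy_with_logits(pred_y, torch.zeros_like(pred_y)))

def content_adv_loss_E(D_c, z_x_c, z_y_c):
    # Encoder update: make the content codes domain-indistinguishable by pushing the
    # discriminator's prediction toward 1/2, matching the half-and-half form of Eq. (2).
    pred_x, pred_y = D_c(z_x_c), D_c(z_y_c)
    return (F.binary_cross_entropy_with_logits(pred_x, 0.5 * torch.ones_like(pred_x)) +
            F.binary_cross_entropy_with_logits(pred_y, 0.5 * torch.ones_like(pred_y)))
```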

3.2 Cross-Cycle Consistency Loss

With the disentangled representation, where the content space is shared across domains and the attribute space encodes intra-domain variations, we can perform I2I translation by combining a content representation from an arbitrary image with an attribute representation from an image of the target domain. We leverage this property and propose a cross-cycle consistency constraint. In contrast to the cycle consistency constraint in [45] (i.e., \(\mathcal {X} \rightarrow \mathcal {Y} \rightarrow \mathcal {X}\)), which assumes a one-to-one mapping between the two domains, the proposed cross-cycle constraint exploits the disentangled content and attribute representations for cyclic reconstruction.

Our cross-cycle constraint consists of two stages of I2I translation.

Forward Translation. Given a non-corresponding pair of images x and y, we encode them into \(\{z_x^{c}, z_x^{a}\}\) and \(\{z_y^{c}, z_y^{a}\}\). We then perform the first translation by swapping the attribute representation (i.e., \(z_x^{a}\) and \(z_y^{a}\)) to generate \(\{u,v\}\), where \(u\in \mathcal {X}, v \in \mathcal {Y}\).

$$\begin{aligned} \begin{aligned} u = G_\mathcal {X}(z_y^c, z_x^a)\quad v = G_\mathcal {Y}(z_x^c, z_y^a) \end{aligned} \end{aligned}$$
(3)

Backward Translation. After encoding u and v into \(\{z_u^c,z_u^a\}\) and \(\{z_v^c,z_v^a\}\), we perform the second translation by once again swapping the attribute representation (i.e., \(z_u^a\) and \(z_v^a\)).

$$\begin{aligned} \begin{aligned} \hat{x} = G_\mathcal {X}(z_v^c, z_u^a)\quad \hat{y} = G_\mathcal {Y}(z_u^c, z_v^a) \end{aligned} \end{aligned}$$
(4)

Here, after two I2I translation stages, the translation should reconstruct the original images x and y (as illustrated in Fig. 3). To enforce this constraint, we formulate the cross-cycle consistency loss as:

$$\begin{aligned} \begin{aligned} L_1^{\mathrm {cc}}(G_\mathcal {X},G_\mathcal {Y},E_\mathcal {X}^c,E_\mathcal {Y}^c,E_\mathcal {X}^a,E_\mathcal {Y}^a) = \mathbb {E}_{x,y}[&||G_\mathcal {X}(E_\mathcal {Y}^c(v),E_\mathcal {X}^a(u) )-x ||_{1} \\ +&||G_\mathcal {Y}(E_\mathcal {X}^c(u),E_\mathcal {Y}^a(v) )-y ||_{1}],\\ \end{aligned} \end{aligned}$$
(5)

where \(u=G_\mathcal {X}(E_\mathcal {Y}^c(y),E_\mathcal {X}^a(x))\) and \(v=G_\mathcal {Y}(E_\mathcal {X}^c(x),E_\mathcal {Y}^a(y))\).
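The two-stage procedure and the loss in Eq. (5) can be written compactly as in the following sketch, which takes the encoders, generators, and a pair of unaligned images as arguments (names are illustrative):

```python
import torch.nn.functional as F

def cross_cycle_loss(x, y, E_c_X, E_c_Y, E_a_X, E_a_Y, G_X, G_Y):
    # Encode both unpaired images into content and attribute codes (Eq. 1).
    z_x_c, z_x_a = E_c_X(x), E_a_X(x)
    z_y_c, z_y_a = E_c_Y(y), E_a_Y(y)
    # Forward translation: swap the attribute codes across domains (Eq. 3).
    u = G_X(z_y_c, z_x_a)
    v = G_Y(z_x_c, z_y_a)
    # Backward translation: swap the attribute codes back (Eq. 4).
    x_hat = G_X(E_c_Y(v), E_a_X(u))
    y_hat = G_Y(E_c_X(u), E_a_Y(v))
    # Cross-cycle consistency: the second swap should recover the inputs (Eq. 5).
    return F.l1_loss(x_hat, x) + F.l1_loss(y_hat, y)
```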

Fig. 4. Loss functions. In addition to the cross-cycle reconstruction loss \(L_1^{\mathrm {cc}}\) and the content adversarial loss \(L_{\mathrm {adv}}^\mathrm {content}\) described in Fig. 3, we apply several additional loss functions in our training process. The self-reconstruction loss \(L_1^{\mathrm {recon}}\) facilitates training with self-reconstruction; the KL loss \(L_{\mathrm {KL}}\) aims to align the attribute representation with a prior Gaussian distribution; the adversarial loss \(L_{\mathrm {adv}}^{\mathrm {domain}}\) encourages G to generate realistic images in each domain; and the latent regression loss \(L_1^{\mathrm {latent}}\) enforces the reconstruction of the latent attribute vector. More details can be found in Sect. 3.3.

3.3 Other Loss Functions

In addition to the proposed content adversarial loss and cross-cycle consistency loss, we use several other loss functions to facilitate network training. We illustrate these additional losses in Fig. 4 and describe them below, starting from the top right of the figure and proceeding counter-clockwise:

Domain Adversarial Loss. We impose adversarial loss \(L_{\mathrm {adv}}^{\mathrm {domain}}\) where \(D_\mathcal {X}\) and \(D_\mathcal {Y}\) attempt to discriminate between real images and generated images in each domain, while \(G_\mathcal {X}\) and \(G_\mathcal {Y}\) attempt to generate realistic images.
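A minimal sketch of this standard adversarial term for domain \(\mathcal {X}\), using the non-saturating binary cross-entropy formulation (the authors may use a different GAN loss variant; the names are illustrative):

```python
import torch
import torch.nn.functional as F

def domain_adv_loss_D(D_X, real, fake):
    # Discriminator update: real images -> 1, translated images -> 0.
    pred_real, pred_fake = D_X(real), D_X(fake.detach())
    return (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real)) +
            F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))

def domain_adv_loss_G(D_X, fake):
    # Generator update: try to make translated images be classified as real.
    pred_fake = D_X(fake)
    return F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
```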

Self-Reconstruction Loss. In addition to the cross-cycle reconstruction, we apply a self-reconstruction loss \(L_1^{\mathrm {recon}}\) to facilitate the training. With encoded content/attribute features \(\{z_x^c, z_x^a\}\) and \(\{z_y^c, z_y^a\}\), the decoders \(G_\mathcal {X}\) and \(G_\mathcal {Y}\) should decode them back to the original inputs x and y. That is, \(\hat{x} = G_\mathcal {X}(E_\mathcal {X}^c(x),E_\mathcal {X}^a(x) )\) and \(\hat{y} = G_\mathcal {Y}(E_\mathcal {Y}^c(y),E_\mathcal {Y}^a(y) )\).
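A minimal sketch of the self-reconstruction term for domain \(\mathcal {X}\) (illustrative names):

```python
import torch.nn.functional as F

def self_reconstruction_loss(x, E_c_X, E_a_X, G_X):
    # Re-encode and decode the same image; an L1 penalty keeps the round trip faithful.
    return F.l1_loss(G_X(E_c_X(x), E_a_X(x)), x)
```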

KL Loss. In order to perform stochastic sampling at test time, we encourage the attribute representation to be as close as possible to a prior Gaussian distribution. We thus apply the loss \(L_{\mathrm {KL}}= \mathbb {E}[D_{\mathrm {KL}}(z^{a}\Vert N(0,1))]\), where \(D_{\mathrm {KL}}(p\Vert q)=\int {p(z)\log {\frac{p(z)}{q(z)}}\,\mathrm {d}z}\).
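A minimal sketch of \(L_{\mathrm {KL}}\), assuming (as is common for this kind of loss) that the attribute encoder outputs a mean and log-variance and that \(z^a\) is drawn with the reparameterization trick; the authors' encoder may instead output the attribute vector directly.

```python
import torch

def kl_loss(mu, logvar):
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, 1), averaged over the batch.
    return torch.mean(-0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))

def reparameterize(mu, logvar):
    # Sample z^a = mu + sigma * eps so the sampling step stays differentiable.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)
```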

Latent Regression Loss. To encourage an invertible mapping between the image and the latent space, we apply a latent regression loss \(L_1^{\mathrm {latent}}\) similar to [46]. We draw a latent vector z from the prior Gaussian distribution as the attribute representation and attempt to reconstruct it with \(\hat{z}=E_\mathcal {X}^a(G_\mathcal {X}(E_\mathcal {X}^c(x),z))\) and \(\hat{z}=E_\mathcal {Y}^a(G_\mathcal {Y}(E_\mathcal {Y}^c(y),z))\).
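A minimal sketch of the latent regression term for domain \(\mathcal {X}\), assuming an 8-dimensional attribute vector (illustrative names):

```python
import torch
import torch.nn.functional as F

def latent_regression_loss(x, E_c_X, E_a_X, G_X, dim_a=8):
    # Sample an attribute vector from the prior, generate an image with it,
    # then check that the attribute encoder can recover the sampled vector.
    z = torch.randn(x.size(0), dim_a, device=x.device)
    z_hat = E_a_X(G_X(E_c_X(x), z))
    return F.l1_loss(z_hat, z)
```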

The full objective function of our network is:

$$\begin{aligned} \begin{aligned} \min _{G,E^c,E^a}\max _{D,D^c}\quad&\lambda _{\mathrm {adv}}^{\mathrm {content}}L_{\mathrm {adv}}^{\mathrm {content}}+\lambda _1^{\mathrm {cc}}L_1^{\mathrm {cc}} + \lambda _{\mathrm {adv}}^{\mathrm {domain}}L_{\mathrm {adv}}^{\mathrm {domain}}+ \lambda _1^{\mathrm {recon}} L_1^{\mathrm {recon}}\\ +&\lambda _1^{\mathrm {latent}}L_1^{\mathrm {latent}}+ \lambda _{\mathrm {KL}}L_{\mathrm {KL}} \end{aligned} \end{aligned}$$
(6)

where the hyper-parameters \(\lambda \)s control the importance of each term.
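One way to assemble the generator/encoder side of Eq. (6) in code is sketched below; the weights are illustrative placeholders, not the authors' hyper-parameter settings.

```python
# Hypothetical loss weights; tune per task.
lambdas = {"content": 1.0, "cc": 10.0, "domain": 1.0, "recon": 10.0, "latent": 10.0, "kl": 0.01}

def total_generator_loss(losses):
    # `losses` maps each term name ("content", "cc", "domain", ...) to its scalar loss tensor.
    return sum(lambdas[name] * losses[name] for name in lambdas)
```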

4 Experimental Results

Datasets. We evaluate our model on several datasets, including Yosemite [45] (summer and winter scenes), artworks [45] (Monet and van Gogh), edge-to-shoes [42], and photo-to-portrait (cropped from subsets of the WikiArt dataset and the CelebA dataset [26]). We also perform domain adaptation on the classification task from MNIST [22] to MNIST-M [12], and on the classification and pose estimation tasks from Synthetic Cropped LineMod to Cropped LineMod [14, 40].

Compared Methods. We perform the evaluation on the following algorithms:

Fig. 5. Sample results. We show example results produced by our model. The left column shows the input images in the source domain. The other five columns show output images generated by sampling random vectors in the attribute space. The mappings from top to bottom are: Monet \(\rightarrow \) photo, van Gogh \(\rightarrow \) Monet, winter \(\rightarrow \) summer, and photograph \(\rightarrow \) portrait.

Fig. 6. Diversity comparison. On the winter \(\rightarrow \) summer translation task, our model produces more diverse and realistic samples than the baselines.

Fig. 7. Linear interpolation between attribute vectors. Translation results with linearly interpolated attribute vectors between two given attributes (highlighted in red). (Color figure online)

  • DRIT: We refer to our proposed model, Disentangled Representation for Image-to-Image Translation, as DRIT.

  • DRIT w/o \({\varvec{D}}^{\varvec{c}}\): Our proposed model without the content discriminator.

  • CycleGAN [45], UNIT [25], BicycleGAN [46]

  • Cycle/Bicycle: As there is no previous work addressing the problem of multimodal generation from unpaired training data, we construct a baseline using a combination of CycleGAN and BicycleGAN. We first train CycleGAN on unpaired data to generate corresponding images as pseudo image pairs. We then use this pseudo paired data to train BicycleGAN.

Fig. 8. Attribute transfer. At test time, in addition to random sampling from the attribute space, we can also perform translation using query images with the desired attributes. Since the content space is shared across the two domains, we can achieve not only (a) inter-domain but also (b) intra-domain attribute transfer. Note that we do not explicitly involve intra-domain attribute transfer during training.

4.1 Qualitative Evaluation

Diversity. We first demonstrate the diversity of the generated images on several different tasks in Fig. 5. In Fig. 6, we compare the proposed model with other methods. Both our model without \(D^c\) and Cycle/Bicycle can generate diverse results, but the results contain clearly visible artifacts. Without the content discriminator, our model fails to capture domain-related details (e.g., the color of trees and sky), so the variations are limited to global color differences. Cycle/Bicycle is trained on pseudo paired data generated by CycleGAN; since the quality of these pseudo pairs is uneven, the generated images suffer from noticeable artifacts.

To better understand the learned domain-specific attribute space, we perform linear interpolation between two given attributes and generate the corresponding images, as shown in Fig. 7. The interpolation results verify the continuity of the attribute space and show that our model generalizes within the distribution rather than memorizing trivial visual information.

Attribute Transfer. We demonstrate the results of attribute transfer in Fig. 8. Thanks to the disentanglement of content and attribute representations, we are able to perform attribute transfer using images with the desired attributes, as illustrated in Fig. 3(c). Moreover, since the content space is shared between the two domains, we can generate images conditioned on content features encoded from either domain. Thus, our model achieves not only inter-domain but also intra-domain attribute transfer, as sketched below. Note that intra-domain attribute transfer is not explicitly involved in the training process.
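At test time, example-guided transfer reduces to recombining codes: content is taken from the query image and the attribute from a reference image with the desired appearance. The following sketch uses illustrative names, not the authors' implementation.

```python
def inter_domain_transfer(x_query, y_reference, E_c_X, E_a_Y, G_Y):
    # Content from a domain-X query, attribute from a domain-Y reference.
    return G_Y(E_c_X(x_query), E_a_Y(y_reference))

def intra_domain_transfer(y_query, y_reference, E_c_Y, E_a_Y, G_Y):
    # Content and attribute both from domain Y, but taken from different images.
    return G_Y(E_c_Y(y_query), E_a_Y(y_reference))
```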

Fig. 9. Realism preference results. We conduct a user study in which subjects select the more realistic result in pairwise comparisons. The number indicates the percentage of preference for that comparison pair. We use the winter \(\rightarrow \) summer translation on the Yosemite dataset for this experiment.

Table 2. Diversity. We use the LPIPS metric [44] to measure the diversity of generated images on the Yosemite dataset.
Table 3. Reconstruction error. We use the edge-to-shoes dataset to measure the quality of our attribute encoding. The reconstruction error is \(||y-G_\mathcal {Y}(E_\mathcal {X}^c(x), E_\mathcal {Y}^a(y))||_{1}\). * BicycleGAN uses paired data for training.

4.2 Quantitative Evaluation

Realism vs. Diversity. We quantitatively evaluate the realism and diversity of the generated images, using the winter \(\rightarrow \) summer translation on the Yosemite dataset. For realism, we conduct a user study with pairwise comparisons: given a pair of images, one sampled from the real images and one translated by one of the methods, users answer the question "Which image is more realistic?" For diversity, similar to [46], we use the LPIPS metric [44] to measure the similarity among images. We compute the distance between 1000 pairs of randomly sampled images translated from 100 real images.
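A sketch of the diversity measurement, assuming the publicly available lpips package; the pair-sampling details here are illustrative rather than the exact evaluation script.

```python
import random
import torch
import lpips

metric = lpips.LPIPS(net='alex')  # learned perceptual image patch similarity

def average_lpips(samples, num_pairs=1000):
    # `samples` is a list of translated images (tensors in [-1, 1], shape 1x3xHxW).
    dists = []
    for _ in range(num_pairs):
        a, b = random.sample(samples, 2)
        with torch.no_grad():
            dists.append(metric(a, b).item())
    return sum(dists) / len(dists)  # higher means more diverse outputs
```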

Figure 9 and Table 2 show the results on realism and diversity, respectively. UNIT obtains a low realism score, suggesting that its shared latent space assumption might not be generally applicable. CycleGAN achieves the highest realism score, yet its diversity is limited. The diversity and visual quality of Cycle/Bicycle are constrained by the data CycleGAN can generate. Our results also demonstrate the need for the content discriminator.

Reconstruction Ability. In addition to the diversity evaluation, we conduct an experiment on the edge-to-shoes dataset to measure the quality of the disentangled encoding. Our model is trained using unpaired data. At test time, given a ground-truth pair \(\{x,y\}\), we evaluate the quality of content-attribute disentanglement by measuring the reconstruction error of y with \(\hat{y} = G_\mathcal {Y}(E_\mathcal {X}^c(x), E_\mathcal {Y}^a(y))\).
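The measurement itself is a single L1 distance, as in this sketch (illustrative names):

```python
import torch

def reconstruction_error(x, y, E_c_X, E_a_Y, G_Y):
    # Content from the edge image x, attribute from the ground-truth shoe image y.
    with torch.no_grad():
        y_hat = G_Y(E_c_X(x), E_a_Y(y))
    return torch.mean(torch.abs(y_hat - y)).item()
```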

We compare our model with BicycleGAN, which requires paired data during training. Table 3 shows our model performs comparably with BicycleGAN despite training without paired data. Moreover, the result suggests that the content discriminator contributes greatly to the quality of disentangled representation.

4.3 Domain Adaptation

We demonstrate that the proposed image-to-image translation scheme can benefit unsupervised domain adaptation. Following PixelDA [3], we conduct experiments on the classification and pose estimation tasks using MNIST [22] to MNIST-M [12], and Synthetic Cropped LineMod to Cropped LineMod [14, 40]. Several example images from these datasets are shown in Fig. 10(a) and (b). To evaluate our method, we first translate the labeled source images to the target domain. We then treat the generated labeled images as training data and train the task classifier in the target domain. For a fair comparison, we use classifiers with the same architecture as PixelDA. We compare the proposed method with CycleGAN, which generates the most realistic images in the target domain according to our previous experiment, and three state-of-the-art domain adaptation algorithms: PixelDA, DANN [12], and DSN [4].
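The adaptation pipeline described above amounts to translating labeled source images and training a standard task classifier on them, roughly as sketched below; `translate_to_target`, `source_loader`, and the classifier are hypothetical placeholders.

```python
import torch

def train_adapted_classifier(source_loader, translate_to_target, classifier, optimizer):
    # Translate each labeled source image to the target domain, keep its label,
    # and train the target-domain classifier on the translated images.
    criterion = torch.nn.CrossEntropyLoss()
    for images, labels in source_loader:
        with torch.no_grad():
            fake_target = translate_to_target(images)
        logits = classifier(fake_target)
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```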

Fig. 10. Domain adaptation experiments. We conduct experiments on (a) MNIST to MNIST-M and (b) Synthetic to Realistic Cropped LineMod. (c), (d) Our method generates diverse images that benefit domain adaptation.

Table 4. Domain adaptation results. We report the classification accuracy and the pose estimation error on MNIST to MNIST-M and Synthetic Cropped LineMod to Cropped LineMod. The entries "Source-only" and "Target-only" indicate training with images from only the source domain or only the target domain, respectively. Numbers in parentheses are those reported by PixelDA, which differ slightly from what we obtain.

We present the quantitative comparisons in Table 4 and visual results of our method in Fig. 10(c) and (d). Since our model can generate diverse outputs, we generate one, three, and five target-domain images per source image (denoted as \(\times 1, \times 3, \times 5\)). Our results validate that the proposed method can simulate diverse images in the target domain and improve performance on the target tasks. While our method does not outperform PixelDA, we note that, unlike PixelDA, we do not leverage label information during training. Compared to CycleGAN, our method performs favorably even with the same number of generated images (i.e., \(\times 1\)). We observe that CycleGAN suffers from the mode collapse problem and generates images with similar appearances, which degrades the performance of the adapted classifiers.

5 Conclusions

In this paper, we present a novel disentangled representation framework for diverse image-to-image translation with unpaired data. We propose to disentangle the latent space into a shared content space that encodes information common to both domains, and a domain-specific attribute space that models the diverse variations given the same content. We apply a content discriminator to facilitate the representation disentanglement, and propose a cross-cycle consistency loss for cyclic reconstruction that enables training in the absence of paired data. Qualitative and quantitative results show that the proposed model produces realistic and diverse images. We also apply the proposed method to domain adaptation and achieve competitive performance compared to state-of-the-art methods.