
1 Introduction

Unsupervised image-to-image translation is the process of learning an arbitrary mapping between image domains without labels or pairings. This can be accomplished via deep learning with generative adversarial networks (GANs), using a discriminator network to provide instance-specific generator training and a cyclic loss to overcome the lack of supervised pairing. Prior works such as DiscoGAN [19] and CycleGAN [43] are able to transfer sophisticated local texture appearance between image domains, such as translating between paintings and photographs. However, these methods often have difficulty with translations that require both appearance and shape changes, for instance, translating between cats and dogs.

Coping with shape deformation in image translation tasks requires the ability to use spatial information from across the image. For instance, we cannot expect to transform a cat into a dog by simply changing the animals’ local texture. From our experiments, networks with fully connected discriminators, such as DiscoGAN, are able to represent larger shape changes given sufficient network capacity, but train much slower [17] and have trouble resolving smaller details. Patch-based discriminators, as used in CycleGAN, work well at resolving high frequency information and train relatively quickly [17], but have a limited ‘receptive field’ for each patch that only allows the network to consider spatially local content. Both designs limit the information that flows from the discriminator to the generator. Further, the functions used to enforce the cyclic loss prior in both networks retain high frequency information in the cyclic reconstruction, which is often detrimental to shape change tasks.

We propose an image-to-image translation system, designated GANimorph, to address shortcomings present in current techniques. To allow for patch-based discriminators to use more image context, we use dilated convolutions in our discriminator architecture [39]. This allows us to treat discrimination as a semantic segmentation problem: the discriminator outputs per-pixel real-vs.-fake decisions, each informed by global context. This per-pixel discriminator output facilitates more fine-grained information flow from the discriminator to the generator. We also use a multi-scale structure similarity perceptual reconstruction loss to help represent error over image areas rather than just over pixels. We demonstrate that our approach is more successful on a challenging shape deformation toy dataset than previous approaches. We also demonstrate example translations involving both appearance and shape variation by mapping human faces to dolls and anime characters, and mapping cats to dogs (Fig. 1).

The source code to our GANimorph system and all datasets are online: https://github.com/brownvc/ganimorph/.

Fig. 1.

Our approach translates texture appearance and complex head and body shape changes between the cat and dog domains (left: input; right: translation).

2 Related Work

Image-to-Image Translation. Image analogies provides one of the earliest examples of image-to-image translation [14]. The approach relies on non-parametric texture synthesis and can handle transformations such as seasonal scene shifts [20], color and texture transformation, and painterly style transfer. However, while the model can learn texture transfer, it cannot alter the shape of objects. Recent research has extended the model to perform visual attribute transfer using neural networks [13, 23]. Despite these improvements, deep image analogies are still unable to achieve shape deformation.

Neural Style Transfer. These techniques transfer more complex artistic styles than image analogies [10]. They combine the style of one image with the content of another by matching the Gram matrix statistics of early-layer feature maps from neural networks trained on general supervised image recognition tasks. Dumoulin et al. [8] extended Gatys et al.’s technique to allow interpolation between pre-trained styles, and Huang et al. [15] enabled real-time transfer. Despite this promise, these techniques have difficulty adapting to shape deformation, and empirical results have shown that these networks only capture low-level texture information [2]. Reference images can affect brush strokes, color palette, and local geometry, but larger changes, such as anime-style combined appearance and shape transformations, do not propagate.

Generative Adversarial Networks. Generative adversarial networks (GANs) have produced promising results in image editing [22], image translation [17], and image synthesis [11]. These networks learn an adversarial loss function to distinguish between real and generated samples. Isola et al. [17] demonstrated with Pix2Pix that GANs are capable of learning texture mappings between complex domains. However, this technique requires a large number of explicitly-paired samples. Some such datasets are naturally available, e.g., registered map and satellite photos, or image colorization tasks. We show in our supplemental material that our approach is also able to solve these limited-shape-change problems.

Unsupervised Image Translation GANs. Pix2Pix-like architectures have been extended to work with unsupervised pairs [19, 43]. Given image domains X and Y, these approaches work by learning a cyclic mapping from \(\mathrm{X}\rightarrow \mathrm{Y} \rightarrow \mathrm{X}\) and \(\mathrm{Y}\rightarrow \mathrm{X} \rightarrow \mathrm{Y}\). This creates a bijective mapping that prevents mode collapse in the unsupervised case. We build upon the DiscoGAN [19] and CycleGAN [43] architectures, which themselves extend Coupled GANs for style transfer [25]. We seek to overcome their shape change limitations through more efficient learning and expanded discriminator context via dilated convolutions, and by using a cyclic loss function that considers multi-scale frequency information (Table 1).

Table 1. Translating a human to a doll, and a cat to a dog. Dilated convolutions in the discriminator outperform both patch-based and dense convolution methods for image translations that require larger shape changes and small detail preservation.

Other works tackle complementary problems. Yi et al. [38] focus on improving high frequency features over CycleGAN in image translation tasks, such as texture transfer and segmentation. Ma et al. [27] examine adapting CycleGAN to wider variety in the domains—so-called instance-level translation. Liu et al. [24] use two autoencoders to create a cyclic loss through a shared latent space with additional constraints. Several layers are shared between the two generators and an identity loss ensures that both domains resolve to the same latent vector. This produces some shape transformation in faces; however, the network does not improve the discriminator architecture to provide greater context awareness.

One qualitatively different approach is to introduce object-level segmentation maps into the training set. Liang et al.’s ContrastGAN [22] has demonstrated shape change by learning segmentation maps and combining multiple conditional cyclic generative adversarial networks. However, this additional input is often unavailable and time consuming to declare.

3 Our Approach

Crucial to the success of translation under shape deformation is the ability to maintain consistency over global shapes as well as local texture. Our algorithm adopts the cyclic image translation framework [19, 43] and achieves the required consistency by incorporating a new dilated discriminator, a generator with residual blocks and skip connections, and a multi-scale perceptual cyclic loss.

3.1 Dilated Discriminator

Initial approaches used a global discriminator with a fully connected layer [19]. Such a discriminator collapses an image to a single scalar value for determining image veracity. Later approaches [22, 43] used a patch-based DCGAN [32] discriminator, initially developed for style transfer and texture synthesis [21]. In this type of discriminator, each image patch is evaluated to determine a fake or real score. The patch-based approach allows for fast generator convergence by operating on each local patch independently. This approach has proven effective for texture transfer, segmentation, and similar tasks. However, this patch-based view limits the networks’ awareness of global spatial information, which limits the generator’s ability to perform coherent global shape change.

Reframing Discrimination as Semantic Segmentation. To solve this issue, we reframe the discrimination problem from determining real/fake images or subimages into the more general problem of finding real or fake regions of the image, i.e., a semantic segmentation task. Since the discriminator outputs a higher-resolution segmentation map, the information flow between the generator and discriminator increases. This allows for faster convergence than using a fully connected discriminator, such as in DiscoGAN.

Current state-of-the-art networks for segmentation use dilated convolutions, and have been shown to require far fewer parameters than conventional convolutional networks to achieve similar levels of accuracy [39]. Dilated convolutions provide advantages over both global and patch-based discriminator architectures. For the same parameter budget, they allow the prediction to incorporate data from a larger surrounding region. This increases the information flow between the generator and discriminator: by knowing which regions of the image contribute to making the image unrealistic, the generator can focus on that region of the image. An alternative way to think about dilated convolutions is that they allow the discriminator to implicitly learn context. While multi-scale discriminators have been shown to improve results and stability for high resolution image synthesis tasks [35], we will show that incorporating information from farther away in the image is useful in translation tasks as the discriminator can determine where a region should fit into an image based on surrounding data. For example, this increased spatial context helps localize the face of a dog relative to its body, which is difficult to learn from small patches or patches learned in isolation from their neighbors. Figure 2 (right) illustrates our discriminator architecture.
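For concreteness, the sketch below shows a fully-convolutional, dilated discriminator that outputs a per-pixel real-vs.-fake map. It is a minimal PyTorch-style illustration: the channel counts and dilation rates are placeholders, and the local-context skip connection of Fig. 2 (right) is omitted, so this is not the exact published architecture.

```python
import torch
import torch.nn as nn

class DilatedDiscriminator(nn.Module):
    """Fully-convolutional discriminator that outputs a per-pixel
    real-vs.-fake segmentation map. Channel counts and dilation rates
    here are illustrative placeholders, not the published architecture."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),      # downsample
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            # Dilated blocks grow the receptive field without further
            # downsampling, giving each output pixel more global context.
            nn.Conv2d(base * 2, base * 4, 3, padding=2, dilation=2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, base * 4, 3, padding=4, dilation=4),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.classifier = nn.Conv2d(base * 4, 1, 1)  # per-pixel logit

    def forward(self, x):
        return self.classifier(self.features(x))     # (N, 1, H/4, W/4) map

# Example: a 128x128 image yields a 32x32 grid of real/fake decisions.
logits = DilatedDiscriminator()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 1, 32, 32])
```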

Fig. 2.

(Left) Generators from different unsupervised image translation models. The skip connections and residual blocks are combined via concatenation as opposed to addition. (Right) Our discriminator network architecture is a fully-convolutional segmentation network. Each colored block represents a convolution layer; block labels indicate filter size. In addition to global context from the dilations, the skip connection bypassing the dilated convolution blocks preserves the network’s view of local context. (Color figure online)

3.2 Generator

Our generator architecture builds on those of DiscoGAN and CycleGAN. DiscoGAN uses a standard encoder-decoder architecture (Fig. 2, top left). However, its narrow bottleneck layer can lead to output images that do not preserve all the important visual details from the input image. Furthermore, due to the low capacity of the network, the approach remains limited to low resolution images of size \(64\times 64\). The CycleGAN architecture seeks to increase capacity over DiscoGAN by using a residual block to learn the image translation function [12]. Residual blocks have been shown to work in extremely deep networks, and they are able to represent low frequency information [2, 40].

However, using residual blocks at a single scale limits the information that can pass through the bottleneck and thus the functions that the network can learn. Our generator includes residual blocks at multiple layers of both the decoder and encoder, allowing the network to learn multi-scale transformations that work on both higher and lower spatial resolution features (Fig. 2, bottom left).
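As a minimal sketch of this idea, the PyTorch-style decoder stage below merges an encoder skip feature with the upsampled path by concatenation (rather than addition) before applying a residual block. Channel counts are illustrative placeholders; the full encoder-decoder of Fig. 2 (bottom left) is omitted.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Standard residual block: two convolutions plus an additive identity path."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class DecoderStage(nn.Module):
    """One decoder stage: upsample, then merge the encoder skip feature by
    concatenation before further residual processing. Channel sizes are
    illustrative only."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        self.fuse = nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1)
        self.res = ResidualBlock(out_ch)

    def forward(self, x, skip):
        x = self.up(x)
        x = self.fuse(torch.cat([x, skip], dim=1))  # concatenation, not addition
        return self.res(x)

# Example: fuse a 16x16 bottleneck feature with a 32x32 encoder feature.
stage = DecoderStage(in_ch=256, skip_ch=128, out_ch=128)
out = stage(torch.randn(1, 256, 16, 16), torch.randn(1, 128, 32, 32))
print(out.shape)  # torch.Size([1, 128, 32, 32])
```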

3.3 Objective Function

Perceptual Cyclic Loss. As per prior unsupervised image-to-image translation work [19, 22, 24, 38, 43], we use a cyclic loss to learn a bijective mapping between two image domains. However, not all image translation functions can be perfectly bijective, e.g., when one domain has smaller appearance variation, like human face photos vs. anime drawings. When all information in the input image cannot be preserved in the translation, the cyclic loss term should aim to preserve the most important information. Since the network should focus on image attributes of importance to human viewers, we should choose a perceptual loss that emphasizes shape and appearance similarity between the generated and target images.

Defining an explicit shape loss is difficult, as any explicit term requires known image correspondences between domains. These do not exist for our examples and our unsupervised setting. Further, including a more-complex perceptual neural network into the loss calculation imparts a significant computational and memory overhead. While using pretrained image classification networks as a perceptual loss can speed up style transfer [18], these do not work on shape changes as the pretrained networks tend only to capture low-level texture information [2].

Instead, we use a multi-scale structural similarity (MS-SSIM) loss [36]. This loss better preserves features visible to humans instead of noisy high frequency information. MS-SSIM can also better cope with shape change since it can recognize geometric differences through area statistics. However, MS-SSIM alone can ignore smaller details, and does not capture color similarity well. Recent work has shown that mixing MS-SSIM with L1 or L2 losses is effective for super-resolution and segmentation tasks [41]. Thus, we also add a lightly-weighted L1 loss term, which helps increase the clarity of generated images.
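The sketch below shows one way to implement this mixed reconstruction loss, assuming an ms_ssim(x, y, data_range=...) helper such as the one provided by the third-party pytorch_msssim package; the weights mirror the cyclic-loss weights given in Sect. 3.3.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # third-party MS-SSIM implementation (assumed)

def cyclic_reconstruction_loss(x_rec, x, w_ssim=0.7, w_l1=0.3):
    """Mixed cyclic loss: MS-SSIM for area/structure similarity plus a
    lightly-weighted L1 term for color fidelity and sharpness.
    Images are expected in [0, 1]; the weights mirror lambda_SS / lambda_L1."""
    ssim_term = 1.0 - ms_ssim(x_rec, x, data_range=1.0)
    l1_term = F.l1_loss(x_rec, x)
    return w_ssim * ssim_term + w_l1 * l1_term

# Example usage with a cyclically-reconstructed batch x_rec = F(G(x)):
x = torch.rand(4, 3, 256, 256)
x_rec = torch.rand(4, 3, 256, 256)
loss = cyclic_reconstruction_loss(x_rec, x)
```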

Feature Matching Loss. To increase the stability of the model, our objective function uses a feature matching loss [33]:

$$\begin{aligned} \mathcal {L}_{\text {FM}}(G, D) = \frac{1}{n-1}\sum _{i=1}^{n-1} ||\mathbb {E}_{x \sim p_{\text {data}}}f_i(x) - \mathbb {E}_{z \sim p_{z}}f_i(G(z))||_{2}^{2}. \end{aligned}$$
(1)

where \(f_i(x)\) denotes the raw activation potentials of the \(i^{th}\) layer of the discriminator D, and n is the number of discriminator layers. This term encourages fake and real samples to produce similar activations in the discriminator, and thus encourages the generator to create images that look more similar to the target domain. We have found this loss term to prevent generator mode collapse, to which GANs are often susceptible [19, 33, 35].
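A minimal sketch of Eq. 1 follows; how the per-layer activations are extracted from the discriminator is implementation-specific, so the lists of feature tensors here are assumed inputs.

```python
import torch

def feature_matching_loss(real_feats, fake_feats):
    """Eq. 1: squared L2 distance between batch-averaged discriminator
    activations for real and generated samples, averaged over layers.
    real_feats / fake_feats are lists of activation tensors, one per
    discriminator layer (extraction from D is implementation-specific)."""
    terms = []
    for f_real, f_fake in zip(real_feats, fake_feats):
        # Expectation over the batch, then squared L2 norm of the difference.
        diff = f_real.mean(dim=0) - f_fake.mean(dim=0)
        terms.append((diff ** 2).sum())
    return torch.stack(terms).mean()

# Example with two dummy "layers" of activations:
real = [torch.randn(8, 64, 32, 32), torch.randn(8, 128, 16, 16)]
fake = [torch.randn(8, 64, 32, 32), torch.randn(8, 128, 16, 16)]
loss = feature_matching_loss(real, fake)
```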

Scheduled Loss Normalization (SLN). In a multi-part loss function, linear weights are often used to normalize the terms with respect to one another, with previous works typically optimizing a single set of weights. However, finding appropriately-balanced weights can prove difficult without ground truth. Further, a single fixed set of weights is often inappropriate because the magnitudes of the loss terms change over the course of training. Instead, we create a procedure that periodically renormalizes each loss term and so controls their relative values. This lets the user intuitively provide weights that sum to 1 to balance the loss terms in the model, without knowing how their magnitudes will change over training.

Let \(\mathcal {L}\) be a loss function, and let \(\mathcal {X}_n = \{x_t\}_{t=1}^{bn}\) be a sequence of n batches of training inputs, each b images large, such that \(\mathcal {L}(x_t)\) is the training loss at iteration t. We compute an exponentially-weighted moving average of the loss:

$$\begin{aligned} \mathcal {L}_{\text {moavg}}(\mathcal {L}, \mathcal {X}_n) = (1 - \beta )\sum _{x_t\in \mathcal {X}_n} \beta ^{bn - t} \mathcal {L}(x_t)^2 \end{aligned}$$
(2)

where \(\beta \) is the decay rate. We can renormalize the loss function by dividing it by this moving average. If we do this on every training iteration, however, the loss stays at its normalized average and no training progress is made. Instead, we schedule the loss normalization:

$$ \text {SLN}(\mathcal {L}, \mathcal {X}_n, s) = {\left\{ \begin{array}{ll} \mathcal {L}(\mathcal {X}_n)/(\mathcal {L}_{\text {moavg}}(\mathcal {L}, \mathcal {X}_n) + \epsilon ) &{}\text {if } n\ (\text {mod}\ s) = 1\\ \mathcal {L}(\mathcal {X}_n)&{}\text {otherwise} \end{array}\right. } $$

Here, s is the scheduling parameter such that we apply normalization every s training iterations. For all experiments, we use \(\beta = 0.99\), \(\epsilon = 10^{-10}\), and \(s=200\).
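A minimal sketch of SLN under the settings above; note that, as in Eq. 2, the moving average tracks the squared loss.

```python
class ScheduledLossNormalizer:
    """Scheduled loss normalization: keep an exponentially-weighted moving
    average of the squared loss (Eq. 2) and, every `schedule` iterations,
    divide the loss by that average so user-specified weights stay meaningful."""
    def __init__(self, beta=0.99, eps=1e-10, schedule=200):
        self.beta = beta
        self.eps = eps
        self.schedule = schedule
        self.moving_avg = 0.0
        self.step = 0

    def __call__(self, loss):
        self.step += 1
        value = float(loss)  # scalarize for the running statistic
        self.moving_avg = self.beta * self.moving_avg + (1 - self.beta) * value ** 2
        if self.step % self.schedule == 1:   # renormalize every s iterations
            return loss / (self.moving_avg + self.eps)
        return loss

# Example: one normalizer per loss term, applied before the weighted sum.
sln_gan, sln_fm, sln_cyc = (ScheduledLossNormalizer() for _ in range(3))
```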

One other normalization difference concerns instance normalization [15] and batch normalization [16]: CycleGAN uses the former and DiscoGAN the latter. We found that batch normalization caused excessive over-fitting to the training data, and so we use instance normalization.

Final Objective. Our final objective comprises three normalized loss terms: a standard GAN loss, a feature matching loss, and a combined cyclic reconstruction loss. Given image domains X and Y, let \(G: X\rightarrow Y\) map from X to Y and \(F:Y\rightarrow X\) map from Y to X. \(D_{X}\) and \(D_{Y}\) denote the discriminators on domains X and Y, i.e., for the outputs of F and G, respectively.

For the adversarial loss, we combine the standard GAN loss terms of Goodfellow et al. [11]:

$$\begin{aligned} \mathcal {L}_{\text {GAN}}&= \mathcal {L}_{\text {GAN}_X}(F,D_{X},Y,X) + \mathcal {L}_{\text {GAN}_Y}(G, D_{Y},X, Y) \end{aligned}$$
(3)

For feature matching loss, we use Eq. 1 for each domain:

$$\begin{aligned} \mathcal {L}_{\text {FM}} = \mathcal {L}_{\text {FM}_X}(G, D_X) + \mathcal {L}_{\text {FM}_Y}(F, D_Y) \end{aligned}$$
(4)

For the two cyclic reconstruction losses, we consider structural similarity [36] and an \(\mathbb {L}_1\) loss. Let \(X'= F(G(X))\) and \(Y'= G(F(Y))\) be the cyclically-reconstructed input images. Then:

$$\begin{aligned} \mathcal {L}_{\text {SS}}=&(1-\text {MS-SSIM}(X', X)) + (1-\text {MS-SSIM}(Y', Y)) \end{aligned}$$
(5)
$$\begin{aligned} \mathcal {L}_{\text {L1}}=&||X'-X||_1 + ||Y'-Y||_1 \end{aligned}$$
(6)

where we compute MS-SSIM without discorrelation.

Our total objective function with scheduled loss normalization (SLN) is:

$$\begin{aligned} \mathcal {L}_{\text {total}} =&\lambda _{\text {GAN}} \text {SLN}(\mathcal {L}_{\text {GAN}}) + \lambda _{\text {FM}} \text {SLN}(\mathcal {L}_{\text {FM}}) + \nonumber \\ {}&\lambda _{\text {CYC}} \text {SLN}(\lambda _{\text {SS}}\mathcal {L}_{\text {SS}} + \lambda _{\text {L1}} \mathcal {L}_{\text {L1}}) \end{aligned}$$
(7)

with \(\lambda _{\text {GAN}} + \lambda _{\text {FM}} + \lambda _{\text {CYC}} = 1\), \(\lambda _{\text {SS}} + \lambda _{\text {L1}} = 1\), and all coefficients \(\ge 0\). We set \(\lambda _{\text {GAN}}=0.49\), \(\lambda _{\text {FM}}=0.21\), \(\lambda _{\text {CYC}}=0.3\), \(\lambda _{\text {SS}}=0.7\), and \(\lambda _{\text {L1}}=0.3\). Empirically, these values helped reduce mode collapse and worked across all datasets. For all training details, we refer the reader to our supplemental material.
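A sketch of Eq. 7 that combines the terms with the weights above, reusing the ScheduledLossNormalizer sketch from earlier; the individual loss values are placeholders computed elsewhere in the training loop.

```python
# Weighted total objective (Eq. 7); each term is scheduled-loss-normalized first.
LAMBDA_GAN, LAMBDA_FM, LAMBDA_CYC = 0.49, 0.21, 0.30
LAMBDA_SS, LAMBDA_L1 = 0.7, 0.3

def total_loss(loss_gan, loss_fm, loss_ss, loss_l1, sln_gan, sln_fm, sln_cyc):
    """Combine GAN, feature-matching, and cyclic terms with SLN per Eq. 7."""
    cyc = LAMBDA_SS * loss_ss + LAMBDA_L1 * loss_l1
    return (LAMBDA_GAN * sln_gan(loss_gan)
            + LAMBDA_FM * sln_fm(loss_fm)
            + LAMBDA_CYC * sln_cyc(cyc))
```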

Fig. 3.

Toy dataset (\(128\times 128\)). Left: \(\mathcal {X}\) instance; a regular polygon with a deformed dot matrix overlay. Right: \(\mathcal {Y}\) instance; a deformed polygon and a regular dot lattice. The dot lattice provides cues from across the image about the true deformation.

4 Experiments

4.1 Toy Problem: Learning 2D Dot and Polygon Deformations

We created a challenging toy problem to evaluate the ability of our network design to learn shape- and texture-consistent deformation. We define two domains: the regular polygon domain X and its deformed equivalent Y (Fig. 3). Each example \(X_{s, h, d}\in X\) contains a centered regular polygon with \(s\in \{3\ldots 7\}\) sides, plus a deformed matrix of dots overlaid. The dot matrix is computed by taking a unit dot grid and transforming it via h, a random \(2\times 2\) matrix with Gaussian normal entries, and a displacement vector d, drawn from a Gaussian normal distribution in \(\mathbb {R}^2\). The corresponding domain equivalent in Y is \(Y_{s, h, d}\), with instead the polygon transformed by h and the dot matrix remaining regular. This construction forms a bijection from X to Y, and so the translation problem is well-posed.
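A sketch of the sampling procedure for one corresponding (X, Y) pair is given below; rasterization to 128×128 images is omitted, and whether the displacement d also applies to the deformed polygon in Y is an assumption of this sketch.

```python
import numpy as np

def sample_toy_pair(rng):
    """Sample the geometry of one corresponding (X, Y) pair.
    X: regular centered polygon + dot lattice warped by (h, d).
    Y: polygon warped by (h, d) + regular dot lattice.
    (Applying d to the Y polygon is an assumption of this sketch.)"""
    s = rng.integers(3, 8)                         # number of polygon sides, 3..7
    h = rng.standard_normal((2, 2))                # random Gaussian 2x2 transform
    d = rng.standard_normal(2)                     # random displacement vector

    angles = 2 * np.pi * np.arange(s) / s
    polygon = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # regular, centered

    grid = np.stack(np.meshgrid(np.linspace(-1, 1, 9),
                                np.linspace(-1, 1, 9)), axis=-1).reshape(-1, 2)

    x_polygon, x_dots = polygon, grid @ h.T + d    # domain X geometry
    y_polygon, y_dots = polygon @ h.T + d, grid    # domain Y geometry
    return (x_polygon, x_dots), (y_polygon, y_dots)

# Example:
pair_x, pair_y = sample_toy_pair(np.random.default_rng(0))
```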

Learning a mapping from X to Y requires the network to use the large-scale cues present in the dot matrix to successfully deform the polygon, as local patches with a fixed image location cannot overcome the added displacement d. Table 2 shows that DiscoGAN is unable to learn the mapping in either direction, and produces an output that is close to the mean of the dataset (off-white). CycleGAN is able to learn only local deformation, which produces hue shifts towards the blue of the polygon when mapping from regular to deformed spaces, and which in most cases produces an undeformed dot matrix when mapping from deformed to regular spaces. In contrast, our approach is significantly more successful at learning the deformation, as the dilated discriminator is able to incorporate information from across the image.

Table 2. Toy dataset. When estimating complex deformation, DiscoGAN collapses to the mean dataset value (near white). CycleGAN approximates the deformation of the polygon but not the dot lattice (right-hand side). Our approach learns both.

Quantitative Comparison. As our output is a highly-deformed image, we estimate the learned transform parameters by sampling. We compute a Hausdorff distance between 500 point samples on the ground truth polygon and on the generated polygon after translation: for finite sets of points X and Y, \(d(X, Y) = \max _{y\in Y}\min _{x\in X} ||x - y||\). We hand-annotate 220 generated polygon boundaries for our network, sampled uniformly at random along the boundary. Samples exist in a unit square with bottom-left corner at (0, 0).
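This directed Hausdorff distance can be computed directly over the sampled point sets (scipy.spatial.distance.directed_hausdorff offers an equivalent routine):

```python
import numpy as np

def directed_hausdorff(X, Y):
    """d(X, Y) = max over y in Y of the distance from y to its nearest x in X.
    X and Y are (n, 2) arrays of boundary samples in the unit square."""
    # Pairwise distances: entry [i, j] is ||X[i] - Y[j]||.
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return dists.min(axis=0).max()

# Example with 500 samples per boundary:
rng = np.random.default_rng(0)
X, Y = rng.random((500, 2)), rng.random((500, 2))
print(directed_hausdorff(X, Y))
```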

First, DiscoGAN fails to generate polygons at all, despite being able to reconstruct the original image. Second, for ‘regular to deformed’, CycleGAN fails to produce a polygon, whereas our approach achieves an average Hausdorff distance of \(0.20\pm 0.01\). Third, for ‘deformed to regular’, CycleGAN produces a polygon with a distance of \(0.21\pm 0.04\), whereas our approach has a distance of \(0.10\pm 0.03\). Note that in the true dataset regular polygons are centered, yet CycleGAN only constructs polygons at the position of the original distorted polygon. Our network constructs a regular polygon at the center of the image, as desired.

4.2 Real-World Datasets

We evaluate GANimorph on several image datasets. For human faces, we use the aligned version of the CelebFaces Attribute dataset [26], with 202,599 images.

Table 3. GANimorph is able to translate shape and style changes while retaining input attributes such as hair color, pose, glasses, headgear, and background.

Anime Faces. Previous works have noted that anime images are challenging for style transfer methods, since translating between photoreal and anime faces involves both shape and appearance changes. We create a large anime dataset of 966,777 images crowdsourced from Danbooru [1]. The Danbooru dataset has a wide variety of styles, from super-deformed chibi-style faces, to realistically-proportioned faces, to rough sketches. Since traditional face detectors yield poor results on drawn images, we ran the Animeface filter [29] on both datasets.

When translating humans to anime, our approach improves on head pose and accessories such as glasses (Table 3, 3rd row, right), and produces a larger degree of shape deformation, such as reduced vertical face height. The final line of each group shows a particularly challenging example.

Doll Faces. Translating human faces to dolls provides an informative test case: both domains have similar photorealistic appearance, so the translation task focuses on shape more than texture. Similar to Morishita et al. [28], we extracted 13,336 images from the Flickr100m dataset [30] using specific doll manufacturers as keywords. We then extracted local binary patterns [31] using OpenCV [4], and used the Animeface filter for facial alignment [29].

Table 3, bottom, shows that our architecture handles local deformation and global shape change better than CycleGAN and DiscoGAN, while preserving local texture similarity. With the baselines, either the shape is malformed (DiscoGAN), or it shows artifacts from the original image or unnatural skin texture (CycleGAN). Our method matches skin tones from the CelebA dataset, while capturing the overall facial structure and hair color of the doll. For the more difficult doll-to-human example in the bottom right-hand corner, our transformation is not realistic, but it still produces more shape change than existing networks.

Table 4. Pets in the Wild: Between dogs and cats, our approach is able to generate shape transforms across pose and appearance variation.

Pets in the Wild. To demonstrate our network on unaligned data, we evaluate on the Kaggle cat and dog dataset [9]. This contains 12,500 images of each species, across many animal breeds at varying scales, lighting conditions, poses, backgrounds, and occlusion factors.

When translating between cats and dogs (Table 4), the network is able to change both the local features such as the addition and removal of fur and whiskers, plus the larger shape deformation required to fool the discriminator, such as growing a snout. Most errors in this domain come from the generator failing to identify an animal from the background, such as forgetting the rear or tail of the animal. Sometimes the generator may fail to identify the animal at all.

We also translate between humans and cats. Table 5 demonstrates how our architecture handles large-scale translation between these two variable data distributions. Our failure cases are approximately the same as those of the cat-to-dog translation, with some promising results. Overall, we translate a surprising degree of shape deformation, even in cases where we might not expect this to be possible.

Table 5. Human and Pet Faces (dataset details in supplemental): As a challenge, we map humans to cats and cats to humans. Pose is reliably translated; semantic appearance like hair color is sometimes translated; some inputs still fail (bottom left).
Table 6. Percentage of pixels classified in translated images via CycleGAN, DiscoGAN, and our algorithm (with design choices). Target classes are highlighted.
Table 7. Example segmentation masks from DeepLabV3 for Table 6 for Cat → Dog. One mask color denotes the cat class, and the other denotes the intended dog class.

4.3 Quantitative Study

To quantify GANimorph’s translation ability, we consider classification-based metrics to detect class change, e.g., whether a cat was successfully translated into a dog. Since there is no per-pixel ground truth in this task for any real-world dataset, we cannot use the fully-convolutional network (FCN) score. Using the Inception Score [33] is uninformative, since simply outputting the original image would score highly.

Further, similar to adversarial examples, CycleGAN is able to convince many classification networks that the image is translated even though to a human it appears untranslated: all CycleGAN results from supplemental Table 3 convince both ResNet50 [12] and the traditional segmentation network of Zheng et al. [42], even though the images are not successfully translated.

However, semantic segmentation networks that use dilated convolutions can distinguish CycleGAN’s ‘adversarial examples’ from true translations, such as DeepLabV3 [5]. As such, we run each test image through the DeepLabV3 network to generate a segmentation mask. Then, we compute the percent of non-background-labeled pixels per class, and average across the test set (Table 6). Our approach is able to more fully translate the image in the eyes of the classification network, with images also appearing translated to a human (Table 7).
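A sketch of this evaluation, assuming a recent torchvision with a pretrained DeepLabV3 model; the class indices follow the Pascal VOC ordering used by that model (8 = cat, 12 = dog), and the exact evaluation pipeline may differ from ours.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet101
from torchvision import transforms
from PIL import Image

# Pretrained DeepLabV3; class indices below assume the Pascal VOC ordering
# used by this torchvision model (8 = cat, 12 = dog).
model = deeplabv3_resnet101(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def class_pixel_fractions(image_path, classes=(8, 12)):
    """Fraction of non-background pixels assigned to each class of interest."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(img)["out"].argmax(dim=1)          # (1, H, W) label map
    foreground = (pred != 0).sum().clamp(min=1).item()  # avoid divide-by-zero
    return {c: (pred == c).sum().item() / foreground for c in classes}

# Example: fractions = class_pixel_fractions("translated_cat_to_dog.png")
```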

4.4 Ablation Study

We use this quantitative setting for an ablation study (Table 6). First, we removed MS-SSIM (\(\mathcal {L}_{\text {SS}}\) in Eq. 7) to leave only L1, which caused our network to mode collapse. Next, we removed the feature matching loss, which decreased both segmentation consistency and the stability of the network. Then, we replaced our dilated discriminator with a patch discriminator; however, the patch discriminator cannot use global context, and so the network confuses facial layouts. Finally, we replaced our dilated discriminator with a fully connected discriminator. We see that our generator architecture and loss function allow our network to outperform DiscoGAN even with the same type of discriminator (fully connected).

Qualitative ablation study results are shown in Table 8. The patch-based discriminator translates texture well, but fails to create globally-coherent images. Decreasing the information flow, by using a fully-connected discriminator or by removing the feature matching loss, yields better results than the patch discriminator, while maximizing the information flow ultimately leads to the best results (last column). Using L1 instead of a perceptual cyclic loss term leads to mode collapse.

Table 8. In qualitative comparisons, GANimorph outperforms all of its ablated versions. For instance, our approach better resolves fine details (e.g., second row, cat eyes) while also better translating the overall shape (e.g., last row, cat nose and ears).

5 Discussion

There exists a trade-off in the relative weighting of the cyclic loss. Setting the cyclic weight \(\lambda _{\text {CYC}}\) too high prevents significant shape change and weakens the generator’s ability to adapt to the discriminator, while setting it too low causes the network to collapse and prevents any meaningful mapping between domains; for instance, the network can easily hallucinate objects in the other domain if the reconstruction loss is too low. As such, an architecture that allowed modifying this weighting at test time would prove valuable, giving the user control over how much deformation to allow.

One counter-intuitive result we discovered is that in domains with little variety, the mappings can lose semantic meaning (see supplemental material). One example of a failed mapping was from CelebA to bitmoji faces [34, 37]. Many attributes were lost, including pose, and the mapping fell back to a pseudo-steganographic encoding of the faces [7]. For example, background information would be encoded in the color gradients of hair styles, and minor variations in the width of the eyes were used similarly. As such, the cyclic loss limits the ability of the network to abstract relevant details. Mapping the variance within each dataset, similar to Benaim et al. [3], may prove an effective means of ensuring that the variance in either domain is maintained. We found that this term over-constrained the amount of shape change in the target domain; however, it may be worth further investigation.

Finally, trying to learn each domain simultaneously may also prove an effective way to increase the accuracy of image translation. Doing so would allow the discriminators and generators to better determine and transform regions of interest. Better results might also be obtained by mapping between multiple domains using parameter-efficient networks (e.g., StarGAN [6]).
