1 Introduction

Image-to-image translation attempts to convert the image appearance from one domain to another while preserving the intrinsic image content. Many computer vision problems can be formalized as image-to-image translation, such as super-resolution [14, 20], image colorization [6, 30, 31], image segmentation [4, 17], and image synthesis [1, 13, 21, 26, 33]. However, conventional image-to-image translation methods are all task-specific. A common framework for universal image-to-image translation remains an emerging research subject that has gained considerable attention in recent studies [7, 10, 16, 27, 34].

Fig. 1. Given unpaired images from two domains, our proposed SCAN learns the image-to-image translation by a stacked structure in a coarse-to-fine manner. For the Cityscapes Labels \(\rightarrow \) Photo task in \(512 \times 512\) resolution, the result of SCAN (left) appears more realistic and includes finer details compared with the result of CycleGAN [34] (right).

Isola et al. [7] leveraged the power of generative adversarial networks (GANs) [5, 18, 28, 32], which encourage the translation results to be indistinguishable from the real images in the target domain, to learn supervised image-to-image translation from image pairs. However, obtaining pairwise training data is time-consuming and heavily relies on human labor. Recent works [10, 16, 27, 34] explore tackling the image-to-image translation problem without using pairwise data. In the unsupervised setting, besides the traditional adversarial loss used in supervised image-to-image translation, a cycle-consistent loss is introduced to constrain the two cross-domain transformations G and F to be inverses of each other (i.e., \(F(G(x))\approx x\) and \(G(F(y))\approx y\)). By combining the adversarial loss and the cycle-consistent loss, the networks learn how to accomplish cross-domain transformations without using pairwise training data.

Despite the progress mentioned above, existing unsupervised image-to-image translation methods may generate inferior results when the two image domains have significant appearance differences or the image resolution is high. As shown in Fig. 1, the result of CycleGAN [34] in translating a Cityscapes semantic layout to a realistic photo lacks details and remains visually unsatisfactory. The reason lies in the significant visual gap between the two distinct image domains, which makes the cross-domain transformation too complicated to be learned by a single-stage unsupervised approach.

Jumping out of the scope of unsupervised image-to-image translation, many methods have leveraged the power of multi-stage refinement to tackle image generation from latent vectors [3, 9], caption-to-image synthesis [29], and supervised image-to-image translation [1, 4, 23]. By generating an image in a coarse-to-fine manner, a complicated transformation is broken down into easy-to-solve pieces. Wang et al. [23] successfully tackled the high-resolution image-to-image translation problem in such a coarse-to-fine manner with multi-scale discriminators. However, their method relies on pairwise training images and therefore cannot be directly applied to our unsupervised image-to-image translation task. To the best of our knowledge, there exists no attempt to exploit stacked networks to overcome the difficulties of learning unsupervised image-to-image translation.

In this paper, we propose the stacked cycle-consistent adversarial networks (SCANs) for the unsupervised learning of image-to-image translation. We decompose a complex image translation into multi-stage transformations: a coarse translation followed by multiple refinement processes. The coarse translation learns to sketch a primary result in low resolution. The refinement processes improve the translation by adding details to the previous results to produce higher-resolution outputs. We use a combination of an adversarial loss and a cycle-consistent loss in all stages to learn translations from unpaired image data. To benefit more from multi-stage learning, we also introduce an adaptive fusion block in the refinement process to learn a dynamic integration of the current stage’s output and the previous stage’s output. Extensive experiments demonstrate that our proposed model can not only generate results with realistic details, but also enable learning unsupervised image-to-image translation at higher resolutions.

In summary, our contributions are mainly two-fold. Firstly, we propose SCANs to model the unsupervised image-to-image translation problem in a coarse-to-fine manner for generating results with finer details in higher resolution. Secondly, we introduce a novel adaptive fusion block to dynamically integrate the current stage’s output and the previous stage’s output, which outperforms directly stacking multiple stages.

2 Related Work

Image-to-Image Translation. GANs [5] have shown impressive results in a wide range of image-to-image translation tasks including super-resolution [14, 20], image colorization [7], and image style transfer [34]. The essential part of GANs is the idea of using an adversarial loss that encourages the translated results to be indistinguishable from real target images. Among the existing image-to-image translation works using GANs, perhaps the most well-known one is Pix2Pix [7], in which Isola et al. applied GANs with a regression loss to learn pairwise image-to-image translation. Since pairwise image data is difficult to obtain, image-to-image translation using unpaired data has drawn rising attention in recent studies. Recent works by Zhu et al. [34], Yi et al. [27], and Kim et al. [10] have tackled the image translation problem using a combination of adversarial and cycle-consistent losses. Taigman et al. [22] applied cycle-consistency at the feature level together with the adversarial loss to learn a one-sided translation from unpaired images. Liu et al. [16] used a GAN combined with a Variational Auto-Encoder (VAE) to learn a shared latent space of two given image domains. Liang et al. [15] combined the ideas of adversarial and contrastive losses, using a contrastive GAN with cycle-consistency to learn the semantic transform of two given image domains with labels. Instead of trying to translate one image to another domain directly, our proposed approach focuses on multi-step refinement processes to generate more realistic outputs with finer details using unpaired image data.

Multi-stage Learning. Extensive works have proposed using multiple stages to tackle complex generation or transformation problems. Eigen et al. [4] proposed a multi-scale network to predict depth, surface normals, and segmentation, which learns to refine the prediction result from coarse to fine. S2GAN, introduced by Wang et al. [24], utilizes two networks arranged sequentially to first generate a structure image and then transform it into a natural scene. Zhang et al. [29] proposed StackGAN to generate high-resolution images from texts, which consists of two stages: the Stage-I network generates a coarse, low-resolution result, while the Stage-II network refines it into a high-resolution, realistic image. Chen et al. [1] applied a stacked refinement network to generate scenes from segmentation layouts. To generate high-resolution images from latent vectors, Karras et al. [9] started from generating a \(4\,\times \,4\) resolution output, then progressively stacked up both the generator and the discriminator to generate a \(1024\,\times \,1024\) realistic image. Wang et al. [23] applied a coarse-to-fine generator with a multi-scale discriminator to tackle the supervised image-to-image translation problem. Different from the existing works, this work exploits stacked image-to-image translation networks combined with a novel adaptive fusion block to tackle the unsupervised image-to-image translation problem.

3 Proposed Approach

3.1 Formulation

Given two image domains X and Y, the mutual translations between them can be denoted as two mappings \(G:X \rightarrow Y\) and \(F:Y \rightarrow X\), each of which takes an image from one domain and translates it to the corresponding representation in the other domain. Existing unsupervised image-to-image translation approaches [10, 16, 22, 27, 34] learn G and F in a single stage, which often produces results lacking details and cannot handle complex translations.

In this paper, we decompose the translations G and F into multi-stage mappings. For simplicity, we describe our method in a two-stage setting. Specifically, we decompose \(G = G_{2} \circ G_{1}\) and \(F = F_{2} \circ F_{1}\). \(G_{1}\) and \(F_{1}\) (Stage-1) perform the cross-domain translation at a coarse scale, while \(G_{2}\) and \(F_{2}\) (Stage-2) serve as refinements on top of the outputs of the previous stage. We first finish the training of Stage-1 in low resolution and then train Stage-2 to learn refinement in higher resolution based on a fixed Stage-1.

Training the two stages at the same resolution would make it difficult for Stage-2 to bring further improvement, as Stage-1 has already been optimized with the same objective function (see Sect. 4.5). On the other hand, we find that learning in a lower resolution allows the model to generate visually more natural results, since the manifold underlying the low-resolution images is easier to model. Therefore, first, we constrain Stage-1 to train on 2x down-sampled image samples, denoted by \(X_{\downarrow } \) and \(Y_{\downarrow }\), to learn a base transformation. Second, based on the outputs of Stage-1, we train Stage-2 with image samples X and Y at the original resolution. Such a formulation exploits the preliminary low-resolution results of Stage-1 and guides Stage-2 to focus on up-sampling and adding finer details, which helps improve the overall translation quality.

In summary, to learn the cross-domain translations \(G: X \rightarrow Y\) and \(F: Y \rightarrow X\) on given domains X and Y, we first learn preliminary translations \(G_1: X_{\downarrow } \rightarrow Y_{\downarrow }\) and \(F_1: Y_{\downarrow } \rightarrow X_{\downarrow }\) at the 2x down-sampled scale. Then we use \(G_{2}: Y_{\downarrow } \rightarrow Y\) and \(F_2: X_{\downarrow } \rightarrow X\) to obtain the final outputs with finer details at the original resolution. Notice that we can iteratively decompose \(G_2\) and \(F_2\) into more stages.
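To make the two-stage formulation concrete, the following PyTorch-style sketch shows how an input is translated at inference time. The function and variable names are ours for illustration and do not refer to the authors' released code; the bilinear down-sampling choice is also an assumption.

```python
import torch.nn.functional as nnf

def translate_two_stage(x, G1, G2):
    """Sketch of the coarse-to-fine translation G = G2 o G1.

    x:  input tensor of shape (N, C, H, W) at the original resolution.
    G1: Stage-1 network operating at the 2x down-sampled scale.
    G2: Stage-2 network refining the coarse output back to the original resolution.
    """
    # Stage-1: coarse translation in the low-resolution domains X_down -> Y_down.
    x_down = nnf.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
    y1_hat = G1(x_down)

    # Stage-2: refinement conditioned on both the coarse result and the full-resolution input.
    y2_hat = G2(y1_hat, x)
    return y2_hat
```

The same structure applies to the reverse translation \(F = F_2 \circ F_1\), and additional stages can be appended in the same way.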

Fig. 2. Illustration of an overview of Stage-1 for learning coarse translations in low resolution under an unsupervised setting. Solid arrows denote inputs and outputs; dashed arrows denote losses.

3.2 Stage-1: Basic Translation

In general, our Stage-1 module adopts an architecture similar to that of CycleGAN [34], consisting of two image translation networks \(G_{1}\) and \(F_{1}\) and two discriminators \(D_{X_1}\) and \(D_{Y_1}\). Note that Stage-1 is trained on the low-resolution image domains \(X_{\downarrow }\) and \(Y_{\downarrow }\). Figure 2 shows an overview of the Stage-1 architecture.

Given a sample \(\mathbf {x}_1 \in X_{\downarrow }\), \(G_1\) translates it to a sample \(\mathbf {\hat{y}}_1 = G_1(\mathbf {x}_1)\) in the other domain \(Y_{\downarrow }\). On the one hand, the discriminator \(D_{Y_1}\) learns to classify the generated sample \(\mathbf {\hat{y}}_1\) to class 0 and the real image \(\mathbf {y}\) to class 1. On the other hand, \(G_1\) learns to deceive \(D_{Y_1}\) by generating more and more realistic samples. This can be formulated as an adversarial loss:

$$\begin{aligned} \begin{aligned} \mathcal {L}_{adv}(G_1, D_{Y_1}, X_\downarrow , Y_\downarrow )&= \mathbb {E}_{\mathbf {y} \sim Y_\downarrow }[\log (D_{Y_1}(\mathbf {y}))] \\&+ \mathbb {E}_{\mathbf {x} \sim X_\downarrow }[\log (1-D_{Y_1}(G_1(\mathbf {x})))]. \end{aligned} \end{aligned}$$
(1)

While \(D_{Y_1}\) tries to maximize \(\mathcal {L}_{adv}\), \(G_1\) tries to minimize it. Afterward, we use \(F_1\) to translate \(\mathbf {\hat{y}}_1\) back to domain \(X_{\downarrow }\), and constrain \(F_1(G_1(\mathbf {x}))\) to be close to the input \(\mathbf {x}\). This can be formulated as a cycle-consistent loss:

$$\begin{aligned} \mathcal {L}_{cycle}(G_1, F_1, X_\downarrow ) = \mathbb {E}_{\mathbf {x}\sim X_\downarrow }\big \Vert \mathbf {x} - F_1(G_1(\mathbf {x}))\big \Vert _1. \end{aligned}$$
(2)

Similarly, for a sample \(\mathbf {y}_1 \in Y_{\downarrow }\), we use \(F_1\) to perform translation, use \(D_{X_1}\) to calculate the adversarial loss, and then use \(G_1\) to translate backward to calculate the cycle-consistent loss. Our full objective function for Stage-1 is a combination of the adversarial loss and the cycle-consistent loss:

$$\begin{aligned} \mathcal {L}_{Stage1}= & {} \mathcal {L}_{adv}(G_1, D_{Y_1}, X_{\downarrow }, Y_{\downarrow }) + \mathcal {L}_{adv}(F_1, D_{X_1}, Y_{\downarrow }, X_{\downarrow })\nonumber \\&\quad + \lambda [\mathcal {L}_{cycle}(G_1, F_1, X_{\downarrow }) + \mathcal {L}_{cycle}(F_1, G_1, Y_{\downarrow })], \end{aligned}$$
(3)

where \(\lambda \) denotes the weight of the cycle-consistent loss. We obtain the translations \(G_1\) and \(F_1\) by optimizing the following objective function:

$$\begin{aligned} G_1, F_1 = \mathop {{{\mathrm{arg\,min}}}}\limits _{G_1,F_1} \max _{D_{X_1}, D_{Y_1}} \mathcal {L}_{Stage1}, \end{aligned}$$
(4)

which encourages these translations to transform the results to another domain while preserving the intrinsic image content. As a result, the optimized translations \(G_1\) and \(F_1\) can perform a basic cross-domain translation in low resolution.
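For reference, the Stage-1 objective in Eqs. (1)-(3) can be written as the following PyTorch-style sketch. The function and variable names are ours, the discriminators are assumed to output probabilities in (0, 1), and in practice the generators and discriminators are updated alternately (with the gradient penalty of Sect. 4.3 added to the discriminator loss); this is a sketch, not the authors' implementation.

```python
import torch

def stage1_objective(x_down, y_down, G1, F1, D_X1, D_Y1, lam=10.0, eps=1e-8):
    """Sketch of Eq. (3): adversarial (Eq. 1) plus cycle-consistent (Eq. 2) losses."""
    # X -> Y direction.
    y_fake = G1(x_down)
    adv_G = torch.log(D_Y1(y_down) + eps).mean() \
          + torch.log(1.0 - D_Y1(y_fake) + eps).mean()      # Eq. (1)
    cyc_G = (x_down - F1(y_fake)).abs().mean()               # Eq. (2)

    # Y -> X direction (symmetric).
    x_fake = F1(y_down)
    adv_F = torch.log(D_X1(x_down) + eps).mean() \
          + torch.log(1.0 - D_X1(x_fake) + eps).mean()
    cyc_F = (y_down - G1(x_fake)).abs().mean()

    # Generators minimize this objective while discriminators maximize it (Eq. 4).
    return adv_G + adv_F + lam * (cyc_G + cyc_F)
```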

Fig. 3. Illustration of an overview of our Stage-2 for learning refinement on top of the Stage-1 outputs. \(G_{1}\) and \(F_{1}\) are the translation networks learned in Stage-1; during training, their weights are kept fixed. Solid arrows denote inputs and outputs; dashed arrows denote losses.

3.3 Stage-2: Refinement

Since it is difficult to learn a complicated translation with the limited capacity of a single stage, the translated output of Stage-1 may seem plausible but still leaves much room for improvement. To refine the output of Stage-1, we employ Stage-2 with a stacked structure built on top of the trained Stage-1 to complete the full translation and generate higher-resolution results with finer details.

Stage-2 consists of two translation networks \(G_2\), \(F_2\) and two discriminator networks \(D_{X_2}\), \(D_{Y_2}\), as shown in Fig. 3. We only describe the architecture of \(G_{2}\), since \(F_{2}\) shares the same design (see Fig. 3).

\(G_2\) consists of two parts: a newly initialized image translation network \(G_2^T\) and an adaptive fusion block \(G_2^F\). Given the output of Stage-1 (\(\mathbf {\hat{y}}_1 = G_1(\mathbf {x}_1)\)), we use nearest up-sampling to resize it to the original resolution. Different from the image translation network in Stage-1, which only takes a single image \(\mathbf {x}_1 \in X_{\downarrow }\) as input, in Stage-2 we use both the current stage’s input \(\mathbf {x}\) and the previous stage’s output \(\mathbf {\hat{y}}_1\). Specifically, we concatenate \(\mathbf {\hat{y}}_1\) and \(\mathbf {x}\) along the channel dimension, and utilize \(G_2^T\) to obtain the refined result \(\mathbf {\hat{y}}_2 = G_{2}^T(\mathbf {\hat{y}}_1, \mathbf {x})\).

Fig. 4. Illustration of the linear combination in an adaptive fusion block. The fusion block applies the fusion weight map \(\varvec{\alpha }\) to find defects in the previous result \(\mathbf {\hat{y}}_1\) and correct them precisely using \(\mathbf {\hat{y}}_2\), producing a refined output \(\mathbf {y}_2\).

Besides simply using \(\mathbf {\hat{y}}_2\) as the final output, we introduce an adaptive fusion block \(G_2^F\) to learn a dynamic combination of \(\mathbf {\hat{y}}_2\) and \(\mathbf {\hat{y}}_1\) to fully utilize the entire two-stage structure. Specifically, the adaptive fusion block learns a pixel-wise linear combination of the previous results:

$$\begin{aligned} G_2^F(\mathbf {\hat{y}}_1, \mathbf {\hat{y}}_2) = \mathbf {\hat{y}}_1 \odot (1-\varvec{\alpha }_{x}) + \mathbf {\hat{y}}_2 \odot \varvec{\alpha }_{x}, \end{aligned}$$
(5)

where \(\odot \) denotes the element-wise product and \(\varvec{\alpha }_{x} \in (0,1)^{H \times W}\) represents the fusion weight map, which is predicted by a convolutional network \(h_{x}\):

$$\begin{aligned} \varvec{\alpha }_{x} = h_x(\mathbf {x}, \mathbf {\hat{y}}_1, \mathbf {\hat{y}}_2). \end{aligned}$$
(6)

Figure 4 shows an example of adaptively combining the outputs from two stages.
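A minimal sketch of the Stage-2 generator with the adaptive fusion block of Eqs. (5)-(6) is given below in PyTorch. Only the concatenation and fusion logic follow the text; the hidden width of \(h_x\) and the body of the translation network \(G_2^T\) (which must accept a 6-channel input) are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

class AdaptiveFusionBlock(nn.Module):
    """Predicts the pixel-wise weight map alpha (Eq. 6) and fuses y1_hat with y2_hat (Eq. 5)."""
    def __init__(self, in_channels=9, hidden=64):   # x, y1_hat, y2_hat: 3 channels each
        super().__init__()
        # Sect. 4.1: two Conv-InstanceNorm-ReLU blocks followed by a Conv-Sigmoid block.
        self.h = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1),
            nn.InstanceNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.InstanceNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x, y1_hat, y2_hat):
        alpha = self.h(torch.cat([x, y1_hat, y2_hat], dim=1))   # Eq. (6)
        return y1_hat * (1 - alpha) + y2_hat * alpha            # Eq. (5)

class Stage2Generator(nn.Module):
    """G2 = G2^F o G2^T: refines the Stage-1 output given the full-resolution input x."""
    def __init__(self, translation_net):
        super().__init__()
        self.G2T = translation_net        # newly initialized translation network G2^T
        self.G2F = AdaptiveFusionBlock()  # adaptive fusion block G2^F

    def forward(self, y1_hat_low, x):
        # Nearest up-sampling of the Stage-1 output to the original resolution.
        y1_hat = nnf.interpolate(y1_hat_low, size=x.shape[-2:], mode='nearest')
        # Concatenate along the channel dimension and refine.
        y2_hat = self.G2T(torch.cat([y1_hat, x], dim=1))
        return self.G2F(x, y1_hat, y2_hat)
```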

Similar to Stage-1, we use a combination of adversarial and cycle-consistent losses to formulate our objective function of Stage-2:

$$\begin{aligned} \mathcal {L}_{Stage2}= & {} \mathcal {L}_{adv}(G_2 \circ G_1, D_{Y_2}, X, Y) + \mathcal {L}_{adv}(F_2 \circ F_1, D_{X_2}, Y, X) \nonumber \\&\quad +\, \lambda [\mathcal {L}_{cycle}(G_2 \circ G_1, F_2 \circ F_1, X) + \mathcal {L}_{cycle}(F_2 \circ F_1, G_2 \circ G_1, Y)]. \end{aligned}$$
(7)

Optimizing this objective is similar to solving Eq. 4. The translation networks \(G_2\) and \(F_2\) learn to refine the previous results by correcting defects and adding details to them.
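In practice, keeping Stage-1 fixed while optimizing Eq. (7) amounts to disabling gradients for the Stage-1 networks and only updating the Stage-2 modules, as in the sketch below (function and variable names are ours, and the Adam momentum terms are assumptions; the learning rate follows Sect. 4.3).

```python
import torch

def build_stage2_optimizers(G1, F1, G2, F2, D_X2, D_Y2, lr=2e-4):
    """Freeze the trained Stage-1 networks and set up optimizers for Stage-2 (Eq. 7)."""
    for net in (G1, F1):
        net.eval()
        for p in net.parameters():
            p.requires_grad_(False)   # Stage-1 weights stay fixed

    # Only the Stage-2 generators and discriminators receive gradient updates.
    g_opt = torch.optim.Adam(
        list(G2.parameters()) + list(F2.parameters()), lr=lr, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(
        list(D_X2.parameters()) + list(D_Y2.parameters()), lr=lr, betas=(0.5, 0.999))
    return g_opt, d_opt
```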

Finally, we complete our desired translations G and F by combining the transformations in Stage-1 and Stage-2, which are capable of tackling a complex image-to-image translation problem under the unsupervised setting.

4 Experiments

In the following experiments, the proposed approach is named SCAN, or SCAN Stage-N if it has N stages. We explore several variants of our model to evaluate the effectiveness of our design in Sect. 4.7. In all experiments, we decompose the target translation into two stages, except when exploring the ability of the three-stage architecture in the high-resolution task in Sect. 4.5.

We used the officially released models of CycleGAN [34] and Pix2Pix [7] for the \(256\times 256\) image translation comparisons. For the \(512\times 512\) task, we trained CycleGAN with the official code since no pre-trained model is available.

4.1 Network Architecture

For the image translation network, we follow the settings of [15, 34], adopting the encoder-decoder architecture from Johnson et al. [8]. The network consists of two down-sample layers implemented by stride-2 convolutions, six residual blocks, and two up-sample layers implemented by sub-pixel convolutions [20]. Note that, different from [34], which uses fractionally strided convolutions as up-sample blocks, we use sub-pixel convolutions [20] to avoid checkerboard artifacts [19]. The adaptive fusion block is a simple 3-layer convolutional network, which calculates the fusion weight map \(\varvec{\alpha }\) using two Convolution-InstanceNorm-ReLU blocks followed by a Convolution-Sigmoid block. For the discriminator, we use the PatchGAN structure introduced in [7].
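The translation network described above could be sketched as follows. The channel widths, kernel sizes, and output activation are our assumptions based on the architectures in [8, 20]; the sub-pixel up-sampling is realized with PixelShuffle.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel=3, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride, padding=kernel // 2),
        nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(ch, ch),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class TranslationNet(nn.Module):
    """Encoder-decoder: 2 stride-2 down-sample layers, 6 residual blocks,
    and 2 sub-pixel (PixelShuffle) up-sample layers to avoid checkerboard artifacts."""
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, base, kernel=7),
            conv_block(base, base * 2, stride=2),                            # down-sample
            conv_block(base * 2, base * 4, stride=2),                        # down-sample
            *[ResidualBlock(base * 4) for _ in range(6)],
            nn.Conv2d(base * 4, base * 2 * 4, 3, 1, 1), nn.PixelShuffle(2),  # up-sample
            nn.Conv2d(base * 2, base * 4, 3, 1, 1), nn.PixelShuffle(2),      # up-sample
            nn.Conv2d(base, out_ch, 7, 1, 3), nn.Tanh())
    def forward(self, x):
        return self.net(x)
```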

4.2 Datasets

To demonstrate the capability of our proposed method for tackling the complex image-to-image translation problem under unsupervised settings, we first conduct experiments on the Cityscapes dataset [2]. We compare with the state-of-the-art approaches in the Labels \(\leftrightarrow \) Photo task in \(256\,\times \,256\) resolution. To further show the effectiveness of our method to learn complex translations, we also extended the input size to a challenging \(512\,\times \,512\) resolution, namely the high-resolution Cityscapes Labels \(\rightarrow \) Photo task.

Besides the Labels \(\leftrightarrow \) Photo task, we also select eight image-to-image translation tasks from [34], including Map \(\leftrightarrow \) Aerial, Facades \(\leftrightarrow \) Labels and Horse \(\leftrightarrow \) Zebra. We compare our method with the CycleGAN [34] in these tasks in \(256\,\times \,256\) resolution.

4.3 Training Details

Networks in Stage-1 are trained from scratch, while networks in Stage-N are trained with the {Stage-1, \(\cdots \), Stage-(N − 1)} networks fixed. For the GAN loss, different from previous works [7, 34], we add a gradient penalty term \(\lambda _{gp}(||\nabla D(x)||_2 - 1)^2\) to the discriminator loss to achieve a more stable training process [12]. For all datasets, the Stage-1 networks are trained in \(128\,\times \,128\) resolution and the Stage-2 networks are trained in \(256\,\times \,256\) resolution. For the three-stage architecture in Sect. 4.5, the Stage-3 networks are trained in \(512\,\times \,512\) resolution. We set the batch size to 1, \(\lambda = 10\), and \(\lambda _{\text {gp}} =10\) in all experiments. All stages are trained for 100 epochs on all datasets. We use Adam [11] to optimize our networks with an initial learning rate of 0.0002, decreased linearly to zero over the last 50 epochs.
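The gradient penalty term above can be computed as in the following sketch. We follow the common recipe of [12] and evaluate the penalty on random interpolations between real and generated samples; this interpolation choice and the function names are assumptions, not a statement of the exact implementation used in the paper.

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """lambda_gp * (||grad_x D(x)||_2 - 1)^2, averaged over a batch of interpolated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(
        outputs=d_out, inputs=x_hat,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```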

4.4 Evaluation Metrics

FCN Score and Segmentation Score. For the Cityscapes dataset, we adopt the FCN Score and the Segmentation Score from [7] as evaluation metrics for the Labels \(\rightarrow \) Photo task and the Photo \(\rightarrow \) Labels task, respectively. The FCN Score employs an off-the-shelf FCN segmentation network [17] to estimate the realism of the translated images. The Segmentation Score includes three standard segmentation metrics: the per-pixel accuracy, the per-class accuracy, and the mean class IoU, as defined in [17].

PSNR and SSIM. Besides using the FCN Score and the Segmentation Score, we also calculate the PSNR and the SSIM [25] for a quantitative evaluation. We apply the above metrics on the Map \(\leftrightarrow \) Aerial task and the Facades \(\leftrightarrow \) Labels task to measure both the color similarity and the structural similarity between the translated outputs and the ground truth images.
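For reference, PSNR and SSIM between a translated output and its ground-truth counterpart can be computed with off-the-shelf implementations such as scikit-image, as sketched below. The data range and color handling are assumptions; the paper does not fix a particular evaluation script here.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(output_img, gt_img):
    """output_img, gt_img: uint8 RGB arrays of shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt_img, output_img, data_range=255)
    # Older scikit-image versions use multichannel=True instead of channel_axis.
    ssim = structural_similarity(gt_img, output_img, channel_axis=-1, data_range=255)
    return psnr, ssim
```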

User Preference. We run user preference tests on the high-resolution Cityscapes Labels \(\rightarrow \) Photo task and the Horse \(\rightarrow \) Zebra task to evaluate the realism of our generated photos. In each trial, a user is presented with a pair of results from our proposed SCAN and CycleGAN [34], both translated from the same image, and asked which one is more realistic. Images are shown in randomized order. In total, 30 images from the Cityscapes test set and 10 images from the Horse2Zebra test set are used in the user preference tests. As a result, 20 participants make a total of 600 and 200 preference choices, respectively.

Fig. 5. Comparisons on the Cityscapes dataset in \(256\,\times \,256\) resolution. The left subfigure shows Labels \(\rightarrow \) Photo results and the right shows Photo \(\rightarrow \) Labels results. In the Labels \(\rightarrow \) Photo task, our proposed SCAN generates more natural photographs than CycleGAN; in the Photo \(\rightarrow \) Labels task, SCAN produces accurate segmentation maps while CycleGAN’s results are blurry and suffer from deformation. SCAN also generates results that are visually closer to those of the supervised approach Pix2Pix than CycleGAN’s results are. Zoom in for a better view.

Table 1. FCN Scores in the Labels \(\rightarrow \) Photo task and Segmentation Scores in the Photo \(\rightarrow \) Labels task on the Cityscapes dataset. The proposed methods are named SCAN (Stage-1 resolution)-(Stage-2 resolution). FT means we also fine-tune the Stage-1 model instead of fixing its weights. FS means we train Stage-2 from scratch without a trained Stage-1 model.

4.5 Comparisons

Cityscapes Labels \(\leftrightarrow \) Photo. Table 1 shows the comparison of our proposed method SCAN and its variants with state-of-the-art methods in the Cityscapes Labels \(\leftrightarrow \) Photo tasks. The same unsupervised settings are adopted by all methods except Pix2Pix, which is trained under a supervised setting.

On the FCN Scores, our proposed SCAN Stage-2 128-256 outperforms the state-of-the-art approaches in terms of pixel accuracy, while being competitive in terms of class accuracy and class IoU. On the Segmentation Scores, SCAN Stage-2 128-256 outperforms the state-of-the-art approaches on all metrics. Comparing SCAN Stage-1 256 with CycleGAN, our modified network yields improved results, which, however, still perform inferiorly to SCAN Stage-2 128-256. We can also see that SCAN Stage-2 128-256 achieves performance much closer to the supervised approach Pix2Pix [7] than the other methods.

We also compare our SCAN Stage-2 128-256 with different variants of SCAN. Comparing SCAN Stage-2 128-256 with the SCAN Stage-1 approaches, we find a substantial improvement on the FCN Scores, which indicates that adding the Stage-2 refinement helps improve the realism of the output images. On the Segmentation Scores, comparing SCAN Stage-1 128 with SCAN Stage-1 256 shows that learning in low resolution yields better performance, and comparing SCAN Stage-2 128-256 with SCAN Stage-1 128 shows that adding Stage-2 further improves on the Stage-1 results. To experimentally verify that the performance gain does not come merely from added model capacity, we conducted a SCAN Stage-2 256-256 experiment, which performs inferiorly to SCAN Stage-2 128-256.

To further analyze various experimental settings, we also ran SCAN Stage-2 128-256 in two additional settings: learning both stages from scratch and fine-tuning Stage-1. We add supervision signals to both stages in these two settings. Learning both stages from scratch shows poor performance in both tasks, which indicates that jointly training the two stages does not guarantee a performance gain. The reason may be that directly training a high-capacity generator is difficult. Fine-tuning Stage-1 does not resolve this problem either and yields a smaller improvement than fixing the weights of Stage-1.

To examine the effectiveness of the proposed fusion block, we compare it with several variants: (1) Learned Pixel Weight (LPW), which is our proposed fusion block; (2) Uniform Weight (UW), in which the two stages are fused with the same weight at all pixel locations, \(\mathbf {\hat{y}}_1 (1-w) + \mathbf {\hat{y}}_2 w\), where w gradually increases from 0 to 1 during training; (3) Learned Uniform Weight (LUW), which is similar to UW, but w is a learnable parameter; (4) Residual Fusion (RF), which uses a simple residual fusion \(\mathbf {\hat{y}}_1 + \mathbf {\hat{y}}_2\). The variants are summarized in the sketch below, and the results are reported in Table 2. Our proposed LPW fusion yields the best performance among all alternatives, which indicates that the LPW approach learns a better fusion of the two stages' outputs than approaches with uniform weights.
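The following short sketch spells out the four fusion variants compared in Table 2 (tensor and function names are ours for illustration):

```python
def fuse_lpw(y1_hat, y2_hat, alpha):
    """LPW (ours): alpha is the per-pixel weight map predicted by h_x, Eqs. (5)-(6)."""
    return y1_hat * (1 - alpha) + y2_hat * alpha

def fuse_uniform(y1_hat, y2_hat, w):
    """UW / LUW: a single scalar weight w shared by all pixels.
    UW schedules w from 0 to 1 during training; LUW makes w a learnable parameter."""
    return y1_hat * (1 - w) + y2_hat * w

def fuse_residual(y1_hat, y2_hat):
    """RF: simple residual fusion."""
    return y1_hat + y2_hat
```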

Table 2. FCN Scores and Segmentation Scores of several variants of the fusion block on the Cityscapes dataset.

In Fig. 5, we visually compare our results with those of CycleGAN and Pix2Pix. In the Labels \(\rightarrow \) Photo task, SCAN generates more realistic and vivid photos than CycleGAN, and the details in our results appear closer to those of the supervised approach Pix2Pix. In the Photo \(\rightarrow \) Labels task, SCAN generates more accurate semantic layouts that are closer to the ground truth, while the results of CycleGAN suffer from distortion and blur.

Fig. 6. Translation results in the Labels \(\rightarrow \) Photo task on the Cityscapes dataset in \(512\,\times \,512\) resolution. Our proposed SCAN produces realistic images that at a glance even look like the ground truth. Zoom in for the best view.

High-Resolution Cityscapes Labels \(\rightarrow \) Photo. The CycleGAN only considers images in \(256\,\times \,256\) resolution, and results of training CycleGAN directly in \(512\,\times \,512\) resolution are not satisfactory, as shown in Figs. 1 and 6.

By iteratively decomposing Stage-2 into a Stage-2 and a Stage-3, we obtain a three-stage SCAN. During the translation process, the resolution of the output grows from \(128\,\times \,128\) to \(256\,\times \,256\) and finally to \(512\,\times \,512\), as shown in Fig. 1. Figure 6 shows the comparison between our SCAN and CycleGAN in the high-resolution Cityscapes Labels \(\rightarrow \) Photo task. We can clearly see that our proposed SCAN generates more realistic photos than CycleGAN, and SCAN’s outputs are visually closer to the ground truth images. The first row shows that our results contain realistic trees with plenty of details, while CycleGAN only generates repeated patterns. In the second row, we can observe that CycleGAN tends to simply ignore the cars by filling them with a plain grey color, while the cars in our results have more details.

We also run a user preference study comparing SCAN with CycleGAN using the setting described in Sect. 4.4. As a result, 74.9% of the queries prefer our SCAN’s results, 10.9% prefer CycleGAN’s results, and 14.9% suggest that the two methods are equal. This shows that SCAN generates overall more realistic translation results than CycleGAN in the high-resolution translation task.

Table 3. PSNR and SSIM values in Map \(\leftrightarrow \) Aerial and Facades \(\leftrightarrow \) Labels tasks.
Fig. 7. Translation results in the Labels \(\rightarrow \) Facades task and the Aerial \(\rightarrow \) Map task. Results of our proposed SCAN show finer details in both tasks compared with CycleGAN’s results.

Fig. 8. Translation results in the Horse \(\leftrightarrow \) Zebra task. CycleGAN changes both the desired objects and the backgrounds. Adding the identity loss can fix this problem, but the results tend to be blurry compared with those of SCAN, which does not use the identity loss.

Map \(\leftrightarrow \) Aerial and Facades \(\leftrightarrow \) Labels. Table 3 reports the performance in terms of the PSNR and SSIM metrics. Our methods outperform CycleGAN on both metrics, which indicates that our translation results are more similar to the ground truth in terms of colors and structures.

Figure 7 shows some of the sample results in the Aerial \(\rightarrow \) Map task and the Labels \(\rightarrow \) Facades task. We can observe that our results contain finer details while the CycleGAN results tend to be blurry.

Horse \(\leftrightarrow \) Zebra. Figure 8 compares the results of SCAN against those of CycleGAN in the Horse \(\leftrightarrow \) Zebra task. We can observe that both SCAN and CycleGAN successfully translate the input images to the other domain. However, as Fig. 8 shows, CycleGAN changes not only the desired objects in the input images but also their backgrounds. Adding the identity loss [34] can fix this problem, but the results still tend to be blurry compared with those of our proposed SCAN. A user preference study on the Horse \(\rightarrow \) Zebra translation is performed with the setting described in Sect. 4.4. As a result, 76.3% of the subjects prefer our SCAN’s results over CycleGAN’s, while 68.9% prefer SCAN’s results over CycleGAN+idt’s.

4.6 Visualization of Fusion Weight Distributions

To illustrate the role of the adaptive fusion block, we visualize the average distributions of the fusion weights (\(\varvec{\alpha }_{x}\) in Eq. 5) over 1000 samples from the Cityscapes dataset at epochs 1, 10, and 100, as shown in Fig. 9. We observe that the distribution of the fusion weights gradually shifts from left to right, indicating a consistent increase of the weight values in the fusion maps, which implies that more and more details from the second stage are brought into the final output.
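Such distributions can be produced by collecting the predicted fusion maps over a set of samples at a given epoch and plotting their histogram, as in the sketch below (function and variable names are ours):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_fusion_weight_distribution(alpha_maps, label):
    """alpha_maps: list of H x W numpy arrays of fusion weights collected at one epoch."""
    weights = np.concatenate([a.ravel() for a in alpha_maps])
    plt.hist(weights, bins=50, range=(0.0, 1.0), density=True, alpha=0.5, label=label)
    plt.axvline(weights.mean(), linestyle='--')   # average weight of the fusion maps
    plt.xlabel('fusion weight')
    plt.ylabel('density')
    plt.legend()
```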

Fig. 9. Distributions of fusion weights over all pixels at different epochs. Each distribution is an average over 1000 sample images from the Cityscapes dataset. Dashed arrows indicate the average weights of the fusion maps.

4.7 Ablation Study

In Sect. 4.5, we reported the evaluation results of SCAN and its variants; here we further explore SCAN by removing modules from it:

  • SCAN w/o Skip Connection: remove the skip connection from the input to the translation network in the Stage-2 model, denoted by SCAN w/o Skip.

  • SCAN w/o Adaptive Fusion Block: remove the final adaptive fusion block in the Stage-2 model, denoted by SCAN w/o Fusion.

  • SCAN w/o Skip Connection and Adaptive Fusion Block: remove both the skip connection from the input to the translation network and the adaptive fusion block in the Stage-2 model, denoted by SCAN w/o Skip, Fusion.

Table 4. FCN Scores in the Cityscapes dataset for ablation study, evaluated on the Labels \(\rightarrow \) Photo task with different variants of the proposed SCAN.

Table 4 shows the results of the ablation study, in which we observe that removing the adaptive fusion block or the skip connection downgrades the performance. With both components removed, the stacked networks obtain only a marginal performance gain over Stage-1. Note that the fusion block consists of only three convolution layers, which is small relative to the whole network. Referring to Table 1, in the SCAN Stage-2 256-256 experiment we double the network parameters compared with SCAN Stage-1 256, yet observe no improvement in the Labels \(\rightarrow \) Photo task. Thus, the improvement from the fusion block does not simply come from the added capacity.

Therefore, we can conclude that using our proposed SCAN structure, which consists of the skip connection and the adaptive fusion block, is critical for improving the overall translation performance.

5 Conclusions

In this paper, we proposed a novel approach, SCAN, to tackle the unsupervised image-to-image translation problem using a stacked network structure with cycle-consistency. The proposed SCAN decomposes a complex image translation into a coarse translation step and multiple refinement steps, and applies cycle-consistency to learn the target translation from unpaired image data. Extensive experiments on multiple datasets demonstrate that SCAN outperforms existing methods on quantitative metrics and generates visually more pleasant translation results with finer details.