1 Introduction

Rain is a common weather phenomenon in real life, but it degrades visibility. In heavy rain especially, rain streaks from various directions accumulate and make the background scene misty, which seriously harms the accuracy of many computer vision systems, including video surveillance and object detection and tracking in autonomous driving. Removing rain streaks and recovering the background from rainy images is therefore an important task.

Image deraining has attracted much attention in the past decade, and many methods have been proposed to solve this problem. Existing methods can be divided into two categories: video based approaches and single image based approaches. As video based methods can utilize the relationship between frames, it is relatively easy to remove rain from videos [1,2,3,4]. Single image deraining is more challenging, and it is the task we focus on in this paper. For single image deraining, traditional methods such as discriminative sparse coding [5], low rank representation [6], and the Gaussian mixture model [7] have been applied and work quite well. Recently, deep learning based deraining methods [8, 9] have received extensive attention due to their powerful feature representation ability. All these approaches achieve good performance, but there is still considerable room for improvement.

There are two main limitations of existing approaches. On the one hand, according to [10,11,12], spatial contextual information is very useful for deraining. However, many current methods remove rain streaks based on image patches, which neglects the contextual information in large regions. On the other hand, as rain streaks in heavy rain have various directions and shapes, they blur the scene in different ways. A common strategy [9, 13] is to decompose the overall rain removal problem into multiple stages, so that rain streaks can be removed iteratively. Since these stages work together to remove rain streaks, the deraining information from previous stages is useful for guiding and benefiting the rain removal in later stages. However, existing methods treat these rain streak removal stages independently and do not consider their correlations.

To address the above two issues, we propose a novel deep network for single image deraining. The pipeline of the proposed network is shown in Fig. 2. We remove rain streaks stage by stage. At each stage, we use a context aggregation network with multiple fully convolutional layers to remove rain streaks. As rain streaks have various directions and shapes, each channel in our network corresponds to one kind of rain streak. Squeeze-and-Excitation (SE) blocks are used to assign different alpha-values to various channels according to their interdependencies in each convolution layer. Benefiting from exponentially increasing convolution dilations, our network has a large receptive field with low depth, which helps us acquire more contextual information. To better utilize the useful information for rain removal in previous stages, we further incorporate a Recurrent Neural Network (RNN) architecture with three kinds of recurrent units to guide the deraining in later stages. We name the proposed deep network the REcurrent SE Context Aggregation Net (RESCAN).

Main contributions of this paper are listed as follows:

  1. We propose a novel unified deep network for single image deraining, by which we remove the rain stage by stage. Specifically, at each stage, we use the contextual dilated network to remove the rain. SE blocks are used to assign different alpha-values to various rain streak layers according to their properties.

  2. To the best of our knowledge, this is the first paper to consider the correlations between different stages of rain removal. By incorporating the RNN architecture with three kinds of recurrent units, the useful information for rain removal in previous stages can be incorporated to guide the deraining in later stages. Our network is suitable for recovering rain images with complex rain streaks, especially in heavy rain.

  3. Our deep network achieves superior performance compared with the state-of-the-art methods on various datasets.

2 Related Works

During the past decade, many methods have been proposed to separate rain streaks and the background scene in rain images. We briefly review these related methods as follows.

Video Based Methods. As video based methods can leverage temporal information by analyzing the difference between adjacent frames, it is relatively easy to remove rain from videos [3, 14]. Garg and Nayar [2, 15, 16] propose an appearance model to describe rain streaks based on photometric properties and temporal dynamics. Meanwhile, Zhang et al. [1] exploit temporal and chromatic properties of rain in videos. Bossu et al. [17] detect rain based on the histogram of orientations of rain streaks. In [4], Tripathi et al. provide a review of video-based deraining methods proposed in recent years.

Single Image Based Methods. Compared with video deraining, single image deraining is much more challenging, since no temporal information is available. For this task, traditional methods, including dictionary learning [18], Gaussian mixture models (GMMs) [19], and low-rank representation [20], have been widely applied. Based on dictionary learning, Kang et al. [21] decompose the high frequency parts of rain images into rain and non-rain components. Wang et al. [22] define a 3-layer hierarchical scheme. Luo et al. [5] propose a discriminative sparse coding framework based on image patches. Gu et al. [23] integrate analysis sparse representation (ASR) and synthesis sparse representation (SSR) to solve a variety of image decomposition problems. In [7], a GMM works as a prior to decompose a rain image into a background layer and a rain streak layer. Chang et al. [6] leverage the low-rank property of rain streaks to separate the two layers. Zhu et al. [24] combine three different kinds of image priors.

Recently, several deep learning based deraining methods have achieved promising performance. Fu et al. [8, 25] first introduce deep learning methods to the deraining problem. Similar to [21], they also decompose rain images into low- and high-frequency parts, and then map the high-frequency part to the rain streak layer using a deep residual network. Yang et al. [9, 26] design a deep recurrent dilated network to jointly detect and remove rain streaks. Zhang et al. [27] use a generative adversarial network (GAN) to prevent the degeneration of the background image when it is extracted from the rain image, and utilize a perceptual loss to further ensure better visual quality. Li et al. [13] design a novel multi-stage convolutional neural network that consists of several parallel sub-networks, each of which is made aware of rain streaks of a different scale.

3 Rain Models

A commonly used rain model decomposes the observed rain image \(\mathbf {O}\) into a linear combination of the rain-free background scene \(\mathbf {B}\) and the rain streak layer \(\mathbf {R}\):

$$\begin{aligned} \mathbf {O}=\mathbf {B}+\mathbf {R}. \end{aligned}$$
(1)

By removing the rain streak layer \(\mathbf {R}\) from the observed image \(\mathbf {O}\), we can obtain the rain-free scene \(\mathbf {B}\).

Based on the rain model in Eq. (1), many rain removal algorithms assume that rain streaks are sparse and share similar characteristics in falling direction and shape. However, in reality, raindrops in the air have various appearances and occur at different distances from the camera, which leads to an irregular distribution of rain streaks. In this case, a single rain streak layer \(\mathbf {R}\) is not sufficient to model this complex situation well.

To reduce the complexity, we regard rain streaks with similar shape or depth as one layer. Then we can divide the captured rainy scene into the combination of several rain streak layers and an unpolluted background. Based on this, the rain model can be reformulated as follows:

$$\begin{aligned} \mathbf {O}=\mathbf {B}+\sum _{i=1}^{n}\mathbf {R}^i, \end{aligned}$$
(2)

where \(\mathbf {R}^i\) represents the i-th rain streak layer that consists of one kind of rain streaks and n is the total number of different rain streak layers.

According to [28], the situation in reality might be even worse, especially in heavy rain. The accumulation of multiple rain streaks in the air may cause attenuation and scattering, which further increases the diversity of rain streaks' brightness. For a camera or the eye, the scattering causes haze or fog effects, which further pollute the observed image \(\mathbf {O}\). For camera imaging, due to the limited number of pixels, faraway rain streaks cannot occupy whole pixels; when they are mixed with other objects, the image becomes blurry. To handle the issues above, we further take the global atmospheric light into consideration and assign different alpha-values to various rain streak layers according to their intensities and transparencies. We generalize the rain model to:

$$\begin{aligned} \mathbf {O}=\left( 1-\sum _{i=0}^{n}\mathbf {\alpha }_i\right) \mathbf {B}+\alpha _{0}\mathbf {A}+\sum _{i=1}^{n}\mathbf {\alpha }_i \mathbf {R}^i, \ s.t. \ \alpha _{i} \ge 0, \ \sum _{i=0}^{n} \alpha _{i} \le 1, \end{aligned}$$
(3)

where \(\mathbf {A}\) is the global atmospheric light, \(\alpha _0\) is the scene transmission, \(\mathbf {\alpha }_i\) \((i=1,\cdots ,n)\) indicates the brightness of a rain streak layer or a haze layer.
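To make Eq. (3) concrete, the following sketch composites a rainy observation from a background, atmospheric light and several streak layers. It is only illustrative: tensor shapes, the number of layers and the alpha-values are placeholders, not the settings used to synthesize any dataset in this paper.

```python
import torch

def compose_rainy_image(B, A, R_layers, alpha):
    """Composite a rainy observation O following Eq. (3).

    B:        (C, H, W) rain-free background scene.
    A:        (C, H, W) global atmospheric light (often a constant map).
    R_layers: list of n rain streak layers, each (C, H, W).
    alpha:    tensor of n + 1 non-negative weights; alpha[0] is the scene
              transmission, alpha[1:] weight the streak layers, and the
              whole vector must sum to at most 1.
    """
    assert (alpha >= 0).all() and alpha.sum() <= 1
    O = (1 - alpha.sum()) * B + alpha[0] * A
    for a_i, R_i in zip(alpha[1:], R_layers):
        O = O + a_i * R_i
    return O
```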

Fig. 1. The architecture of SE Context Aggregation Network (SCAN).

4 Deraining Method

Instead of using decomposition methods with artificial priors to solve the problem in Eq. (3), we intend to learn a function f that directly maps observed rain image \(\mathbf {O}\) to rain streak layer \(\mathbf {R}\), since \(\mathbf {R}\) is sparser than \(\mathbf {B}\) and has simpler texture. Then we can subtract \(\mathbf {R}\) from \(\mathbf {O}\) to get the rain-free scene \(\mathbf {B}\). The function f above can be represented as a deep neural network and be learned by optimizing the loss function \(\left\| f\left( \mathbf {O}\right) -\mathbf {R} \right\| _F^2\).

Based on the motivation above, we propose the REcurrent SE Context Aggregation Net (RESCAN) for image deraining. The framework of our network is presented in Fig. 2. We remove rain streaks stage by stage. At each stage, we use the context aggregation network with SE blocks to remove rain streaks. Our network can deal with rain streaks of various directions and shapes, and each feature map in the network corresponds to one kind of rain streak. The dilated convolutions used in our network give it a large receptive field and help it acquire more contextual information. By using SE blocks, we can assign different alpha-values to various feature maps according to their interdependencies in each convolution layer. As we remove the rain in multiple stages, useful information for rain removal in previous stages can guide the learning in later stages. We therefore incorporate the RNN architecture with memory units to make full use of the useful information from previous stages.

In the following, we first describe the baseline model, and then define the recurrent structure, which lifts the model’s capacity by iteratively decomposing rain streaks with different characteristics.

4.1 SE Context Aggregation Net

The base model of RESCAN is a forward network without recurrence. We implement it by extending Context Aggregation Net (CAN) [11, 12] with Squeeze-and-Excitation (SE) blocks [29], and name it as SE Context Aggregation Net (SCAN).

Table 1. The detailed architecture of SCAN. d is the depth of the network.

Here we provide an illustration and a further specialization of SCAN. We schematically illustrate SCAN in Fig. 1, which is a fully convolutional network; in Fig. 1, we set the depth \(d=6\). Since a large receptive field is very helpful for acquiring contextual information, dilation is adopted in our network. For layers \(L_{1}\) to \(L_{3}\), the dilation increases exponentially from 1 to 4, which leads to exponential growth in the receptive field of every element. As we treat the first layer as an encoder that transforms the image into feature maps, and the last two layers as a decoder that maps feature maps back to the image space, we do not apply dilation in layers \(L_{0}\), \(L_{4}\) and \(L_{5}\). Moreover, we use \(3\times 3\) convolutions for all layers before \(L_{5}\). To recover the RGB channels of a color image, or the gray channel of a grayscale image, we adopt a \(1\times 1\) convolution for the last layer \(L_{5}\). Every convolution operation except the last one is followed by a nonlinear operation. The detailed architecture of SCAN is summarized in Table 1. Generally, for a SCAN with depth d, the receptive field of elements in the output image is \(\left( 2^{d-2}+3\right) ^{2}\).
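A minimal PyTorch sketch of this dilation schedule is given below (without the SE blocks attached to each layer and without the recurrent connections described later). The channel width and the LeakyReLU slope follow Sect. 5, but are otherwise assumptions; the layer ordering follows Table 1.

```python
import torch.nn as nn

def build_scan_backbone(depth=6, channels=24, in_ch=3, out_ch=3):
    """Dilated convolution stack of SCAN (SE blocks omitted for brevity)."""
    layers = []
    # L0: encoder, plain 3x3 convolution from image space to feature maps.
    layers += [nn.Conv2d(in_ch, channels, 3, padding=1), nn.LeakyReLU(0.2)]
    # L1 .. L_{depth-3}: 3x3 convolutions with dilation 1, 2, 4, ...
    for j in range(1, depth - 2):
        dilation = 2 ** (j - 1)
        layers += [nn.Conv2d(channels, channels, 3,
                             padding=dilation, dilation=dilation),
                   nn.LeakyReLU(0.2)]
    # L_{depth-2}: plain 3x3 convolution (first decoder layer).
    layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2)]
    # L_{depth-1}: 1x1 convolution back to image channels, no nonlinearity.
    layers += [nn.Conv2d(channels, out_ch, 1)]
    return nn.Sequential(*layers)
```

With depth = 7, the receptive field of this stack is \(35\times 35\), matching the training setting in Sect. 5.1.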

For the feature maps, we regard each channel as the embedding of a rain streak layer \(\mathbf {R}^i\). Recall that in Eq. (3), we assign different alpha-values \(\mathbf {\alpha }_i\) to different rain streak layers \(\mathbf {R}^i\). Instead of setting a fixed alpha-value \(\mathbf {\alpha }_i\) for each rain layer, we update the alpha-values for the embeddings of rain streak layers in each network layer. Although the convolution operation implicitly introduces weights for every channel, these implicit weights are not specialized for every image. To explicitly introduce image-specific weights in each network layer, we extend each basic convolution layer with a Squeeze-and-Excitation (SE) block [29], which computes a normalized alpha-value for every channel of each input. By multiplying with the alpha-values learned by the SE block, the feature maps computed by convolution are re-weighted explicitly.
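The SE block itself follows the generic design of [29]; a sketch is shown below, where the reduction ratio of the bottleneck is an assumption rather than a value specified in this paper.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Per-image, per-channel re-weighting of feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))               # squeeze: global average pooling
        alpha = self.fc(s).view(b, c, 1, 1)  # excitation: alpha-values in (0, 1)
        return x * alpha                     # re-weight each rain streak embedding
```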

An obvious difference between SCAN and former models [8, 27] is that SCAN has no batch normalization (BN) [30] layers. BN is widely used in training deep neural networks, as it can reduce the internal covariate shift of feature maps. By applying BN, each scalar feature is normalized to zero mean and unit variance, and the features become independent of each other and share the same distribution. However, in Eq. (2) rain streaks in different layers have different distributions in direction, color and shape, which is also true for each scalar feature of different rain streak layers. Therefore, BN contradicts the characteristics of our proposed rain model, and we remove it from our model. Experimental results in Sect. 5 show that this simple modification substantially improves the performance. Furthermore, since BN layers keep a normalized copy of feature maps in GPU memory, removing them greatly reduces the GPU memory demand. For SCAN, we save approximately \(40\%\) of memory usage during training without BN. Consequently, we can build a larger model with larger capacity, or increase the mini-batch size to stabilize the training process.

4.2 Recurrent SE Context Aggregation Net

As there are many different rain streak layers and they overlap with each other, it is not easy to remove all rain streaks in one stage. So we incorporate the recurrent structure to decompose the rain removal into multiple stages. This process can be formulated as:

$$\begin{aligned} \mathbf {O}_1&= \mathbf {O}, \end{aligned}$$
(4)
$$\begin{aligned} \mathbf {{R}}_s&= f_{CNN}\left( \mathbf {O}_s\right) , 1 \le s \le S, \end{aligned}$$
(5)
$$\begin{aligned} \mathbf {O}_{s+1}&= \mathbf {O}_{s} - \mathbf {{R}}_s, 1 \le s \le S ,\end{aligned}$$
(6)
$$\begin{aligned} \mathbf {R}&= \sum _{s=1}^{S}\mathbf {{R}}_s, \end{aligned}$$
(7)

where S is the number of stages, \(\mathbf {{R}}_s\) is the output of the s-th stage, and \(\mathbf {O}_{s+1}\) is the intermediate rain-free image after the s-th stage.
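The stage-wise decomposition in Eqs. (4)-(7) amounts to the loop sketched below, where `f` stands for any single-stage predictor (the recurrent hidden state is omitted here and is introduced in Eq. (8)):

```python
import torch

def derain_multistage(f, O, num_stages):
    """Stage-wise rain removal following Eqs. (4)-(7)."""
    O_s = O                           # Eq. (4): the first input is the observation
    R_total = torch.zeros_like(O)
    for _ in range(num_stages):
        R_s = f(O_s)                  # Eq. (5): predict this stage's rain streaks
        O_s = O_s - R_s               # Eq. (6): intermediate rain-free estimate
        R_total = R_total + R_s       # Eq. (7): accumulate the full streak layer
    return O - R_total, R_total       # rain-free scene B and total streaks R
```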

The above model for deraining has been used in [9, 13]. However, the recurrent structure in these methods [9, 13] can only be regarded as a simple cascade of the same network: they use the output image of the last stage as the input of the current stage and do not consider feature connections among stages. As these stages work together to remove the rain, the input images of different stages \(\{\mathbf {O}_1,\mathbf {O}_2,\cdots ,\mathbf {O}_S \} \) can be regarded as a temporal sequence of rain images with decreasing levels of rain streak interference. It is more meaningful to investigate the recurrent connections between features of different stages rather than only using the recurrent structure. We therefore incorporate a recurrent neural network (RNN) [31] with memory units to better make use of the information from previous stages and guide feature learning in later stages.

In our framework, Eq. (5) can be further reformulated as:

$$\begin{aligned} \mathbf {{R}}_s = \sum _{i=1}^{n} \alpha _{i} \mathbf {\bar{R}}^{i}_{s} = f_{CNN+RNN}\left( \mathbf {O}_s,x_{s-1}\right) , 1 \le s \le S, \end{aligned}$$
(8)

where \(\mathbf {\bar{R}}^{i}_{s}\) is the decomposed i-th rain streak layer of the s-th stage and \(x_{s-1}\) is the hidden state of the (\(s-1\))-th stage. Consistent with the rain model in Eq. (3), \(\mathbf {{R}}_s\) is computed by summing \(\mathbf {\bar{R}}^{i}_{s}\) with different alpha-values.

For the deraining task, we further explore three recurrent unit variants: ConvRNN, ConvGRU [32], and ConvLSTM [33]. Due to space limitations, we only present the details of ConvGRU [32] in the following. For the other two kinds of recurrent units, please refer to the supplementary materials on our webpage.

Fig. 2. The unfolded architecture of RESCAN. K is the convolution kernel size, DF indicates the dilation factor, and S represents the number of stages.

ConvGRU. The Gated Recurrent Unit (GRU) [32] is a very popular recurrent unit in sequential models. We adopt its convolutional version, ConvGRU, in our model. Denote by \(x^{j}_{s}\) the feature map of the j-th layer in the s-th stage; it is computed from \(x^{j}_{s-1}\) (the feature map in the same layer of the previous stage) and \(x^{j-1}_{s}\) (the feature map in the previous layer of the same stage):

$$\begin{aligned} z^{j}_s&= \sigma \left( W^{j}_z\circledast x^{j-1}_s + U^{j}_z\circledast x^{j}_{s-1} + b^{j}_z \right) , \end{aligned}$$
(9)
$$\begin{aligned} r^{j}_s&= \sigma \left( W^{j}_r\circledast x^{j-1}_s + U^{j}_r\circledast x^{j}_{s-1} + b^{j}_r \right) , \end{aligned}$$
(10)
$$\begin{aligned} n^{j}_s&= \tanh \left( W^{j}_n\circledast x^{j-1}_s + U^{j}_n\circledast \left( r^{j}_s\odot x^{j}_{s-1}\right) + b^{j}_n \right) , \end{aligned}$$
(11)
$$\begin{aligned} x^{j}_{s}&= \left( 1 - z^{j}_s\right) \odot x^{j}_{s-1} + z^{j}_s \odot n^{j}_s, \end{aligned}$$
(12)

where \(\sigma \) is the sigmoid function \(\sigma \left( x\right) =1/\left( 1+\exp \left( -x\right) \right) \), \(\circledast \) denotes convolution, and \(\odot \) denotes element-wise multiplication. W is a dilated convolutional kernel, and U is an ordinary kernel of size \(3\times 3\) or \(1\times 1\). Compared with a plain convolutional unit, the ConvRNN, ConvGRU and ConvLSTM units have twice, three times and four times as many parameters, respectively.
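For reference, a sketch of one ConvGRU layer implementing Eqs. (9)-(12) is given below. Treating both W and U as \(3\times 3\) convolutions (W dilated, U ordinary) is an assumption within the options stated above, and the channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU unit of Eqs. (9)-(12)."""
    def __init__(self, in_ch, hid_ch, dilation=1):
        super().__init__()
        # W_z, W_r, W_n: dilated convolutions on the current input x^{j-1}_s.
        self.w_zr = nn.Conv2d(in_ch, 2 * hid_ch, 3, padding=dilation, dilation=dilation)
        self.w_n = nn.Conv2d(in_ch, hid_ch, 3, padding=dilation, dilation=dilation)
        # U_z, U_r, U_n: ordinary convolutions on the previous stage's state x^j_{s-1}.
        self.u_zr = nn.Conv2d(hid_ch, 2 * hid_ch, 3, padding=1)
        self.u_n = nn.Conv2d(hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h_prev):
        wz, wr = self.w_zr(x).chunk(2, dim=1)
        uz, ur = self.u_zr(h_prev).chunk(2, dim=1)
        z = torch.sigmoid(wz + uz)                           # Eq. (9): update gate
        r = torch.sigmoid(wr + ur)                           # Eq. (10): reset gate
        n = torch.tanh(self.w_n(x) + self.u_n(r * h_prev))   # Eq. (11): candidate state
        return (1 - z) * h_prev + z * n                      # Eq. (12): new state x^j_s
```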

4.3 Recurrent Frameworks

We further examine two frameworks to infer the final output. Both of them use \(\mathbf {O}_{s}\) and previous states as input for the s-th stage, but they output different images. In the following, we describe the additive prediction and the full prediction in detail, together with their corresponding loss functions.

Additive Prediction. The additive prediction is widely used in image processing. In each stage, the network only predicts the residual between previous predictions and the ground truth. It incorporates previous feature maps and predictions as inputs, which can be formulated as:

$$\begin{aligned} \mathbf {R}_s&= f\left( \mathbf {O}_{s}, x_{s-1}\right) , \end{aligned}$$
(13)
$$\begin{aligned} \mathbf {O}_{s+1}&= \mathbf {O} - \sum _{j=1}^{s}\mathbf {R}_{j} = \mathbf {O}_s - \mathbf {R}_s, \end{aligned}$$
(14)

where \(x_{s-1}\) represents the previous states as in Eq. (12). For this framework, the loss function is chosen as follows:

$$\begin{aligned} L\left( \mathbf {\Theta }\right) = \sum _{s=1}^{S}\left\| \sum _{j=1}^{s}\mathbf {R}_j - \mathbf {R}\right\| ^2_F, \end{aligned}$$
(15)

where \(\mathbf {\Theta }\) represents the network’s parameters.

Full Prediction. The full prediction means that in each stage, we predict the whole rain streak layer \(\mathbf {R}\). This approach can be formulated as:

$$\begin{aligned} \hat{\mathbf {R}}_s = f\left( \hat{\mathbf {O}}_s, x_{s-1}\right) , \end{aligned}$$
(16)
$$\begin{aligned} \hat{\mathbf {O}}_{s+1} = \mathbf {O} - \hat{\mathbf {R}}_{s}, \end{aligned}$$
(17)

where \(\hat{\mathbf {R}}_s\) represents the full rain streaks predicted in the s-th stage, and \(\hat{\mathbf {O}}_{s+1}\) equals \(\mathbf {B}\) plus the remaining rain streaks. The corresponding loss function is:

$$\begin{aligned} L\left( \mathbf {\Theta }\right) = \sum _{s=1}^{S}\left\| \hat{\mathbf {R}}_s - \mathbf {R}\right\| ^2_F. \end{aligned}$$
(18)
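Both supervision schemes can be written compactly as below. The squared Frobenius norm is implemented as a sum of squared entries; how the loss is averaged over a mini-batch is an implementation detail not specified here.

```python
import torch

def additive_loss(stage_residuals, R_gt):
    """Eq. (15): at stage s, the running sum of residual predictions
    R_1 + ... + R_s should match the ground-truth streak layer R."""
    loss = 0.0
    running = torch.zeros_like(R_gt)
    for R_s in stage_residuals:
        running = running + R_s
        loss = loss + ((running - R_gt) ** 2).sum()
    return loss

def full_loss(stage_predictions, R_gt):
    """Eq. (18): every stage predicts the whole streak layer directly."""
    return sum(((R_s - R_gt) ** 2).sum() for R_s in stage_predictions)
```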

5 Experiments

In this section, we present details of experimental settings and quality measures used to evaluate the proposed SCAN and RESCAN models. We compare the performance of our proposed methods with the state-of-the-art methods on both synthetic and real-world datasets.

5.1 Experiment Settings

Synthetic Dataset. Since it is hard to obtain a large dataset of clean/rainy image pairs from real-world data, we first use synthesized rainy datasets to train the network. Zhang et al. [27] synthesize 800 rain images (Rain800) from randomly selected outdoor images, and split them into a testing set of 100 images and a training set of 700 images. Yang et al. [9] collect and synthesize 3 datasets: Rain12, Rain100L and Rain100H. We select the most difficult one, Rain100H, to test our model. It is synthesized with a combination of five streak directions, which makes it hard to remove all rain streaks effectively. There are 1,800 synthetic image pairs in Rain100H, of which 100 pairs are selected as the testing set.

Real-World Dataset. Zhang et al. [27] and Yang et al. [9] also provide many real-world rain images. These images are diverse in content as well as in the intensity and orientation of rain streaks. We use these datasets for qualitative evaluation.

Training Settings. In the training process, we randomly generate 100 patch pairs of size \(64\times 64 \) from every training image pair. The entire network is trained on an Nvidia 1080Ti GPU with PyTorch. We use a batch size of 64 and set the depth of SCAN to \(d=7\), giving a receptive field of size \(35\times 35\). For the nonlinear operation, we use leaky ReLU [34] with \(\alpha =0.2\). For optimization, the ADAM algorithm [35] is adopted with an initial learning rate of \(5\times 10^{-3}\). During training, the learning rate is divided by 10 at 15,000 and 17,500 iterations.
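The optimization settings above translate into roughly the following snippet. It is a sketch only: the stand-in model and random batches replace RESCAN and the real 64×64 patch pairs, and stepping the scheduler once per iteration is an assumed detail, while the learning rate, milestones and decay factor follow the text.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

# Stand-in model and data so the snippet runs; in practice these are
# RESCAN (Sect. 4) and the 64x64 training patch pairs described above.
model = nn.Conv2d(3, 3, 3, padding=1)
dummy_batches = [(torch.randn(64, 3, 64, 64), torch.randn(64, 3, 64, 64))
                 for _ in range(3)]

optimizer = Adam(model.parameters(), lr=5e-3)
scheduler = MultiStepLR(optimizer, milestones=[15_000, 17_500], gamma=0.1)

for rainy, streaks in dummy_batches:
    optimizer.zero_grad()
    loss = ((model(rainy) - streaks) ** 2).sum()  # Frobenius loss from Sect. 4
    loss.backward()
    optimizer.step()
    scheduler.step()  # milestones are iteration counts, so step every batch
```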

Quality Measures. To evaluate the performance on synthetic image pairs, we adopt two commonly used metrics: peak signal-to-noise ratio (PSNR) [36] and the structural similarity index (SSIM) [37]. Since there are no ground truth rain-free images for real-world data, the performance on the real-world dataset can only be evaluated visually. We compare our proposed approach with five state-of-the-art methods: image decomposition (ID) [21], discriminative sparse coding (DSC) [5], layer priors (LP) [7], DetailsNet [8], and joint rain detection and removal (JORDER) [9].
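For reference, PSNR and SSIM can be computed with scikit-image as in the sketch below; whether the metrics are evaluated on RGB or on a luminance channel is a protocol detail not fixed here, and the `channel_axis` argument assumes scikit-image 0.19 or later.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(derained, ground_truth):
    """PSNR/SSIM for one pair of uint8 color images of shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(ground_truth, derained, data_range=255)
    ssim = structural_similarity(ground_truth, derained,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```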

Fig. 3. Results of various methods on synthetic images. Best viewed on screen.

5.2 Results on Synthetic Data

Table 2 shows results of different methods on the Rain800 and the Rain100H datasets. We can see that our RESCAN considerably outperforms other methods in terms of both PSNR and SSIM on these two datasets. It is also worth noting that our non-recurrent network SCAN can even outperform JORDER and DetailsNet, and is slightly superior to JORDER-R, the recurrent version of JORDER. This shows the high capacity behind SCAN’s shallow structure. Moreover, by using RNN to gradually recover the full rain streak layer \(\mathbf {R}\), RESCAN further improves the performance.

Table 2. Quantitative experiments evaluated on two synthetic datasets. Best results are marked in bold and the second best results are underlined.
Table 3. Quantitative comparison between SCAN and base model candidates.

To visually demonstrate the improvements obtained by the proposed methods, we present results on several difficult sample images in Fig. 3. Please note that we select difficult samples to show that our method outperforms others especially in difficult conditions, as it is designed to deal with complex conditions. According to Fig. 3, the state-of-the-art methods cannot remove all rain streaks and may blur the image, while our method removes the majority of rain streaks and maintains the details of the background scene.

5.3 Results on Real-World Dataset

To test the practicability of deraining methods, we also evaluate the performance on real-world rainy test images. The predictions of all related methods on four real-world sample rain images are shown in Fig. 4. As observed, LP [7] cannot remove rain streaks effectively, and DetailsNet [8] tends to add artifacts to derained outputs. The proposed method removes most rain streaks while maintaining much of the background texture. To further validate the performance on the real-world dataset, we also conduct a user study; for details, please refer to the supplementary material.

5.4 Analysis on SCAN

To show that SCAN is the best choice as a base model for the deraining task, we conduct experiments on two datasets to compare the performance of SCAN with related network architectures, including Plain (dilation = 1 for all convolutions), the ResNet used in DetailsNet [8], and the Encoder-Decoder used in ID-CGAN [27]. For all networks, we set the same depth \(d=7\) and width (24 channels), so their numbers of parameters and computations remain of the same order of magnitude. More specifically, we keep layers \(L_0\), \(L_{d-2}\) and \(L_{d-1}\) identical across models, so they only differ in layers \(L_1\) to \(L_4\). The results are shown in Table 3. SCAN and CAN achieve the best performance on both datasets, which can be attributed to the fact that the receptive fields of these two methods grow exponentially with depth, while the receptive fields of the other methods grow only linearly with depth.

Fig. 4. Results of various methods on real-world images. Best viewed on screen.

Moreover, the comparison between the results of SCAN and CAN indicates that the SE block contributes a lot to the base model, as it can explicitly learn an alpha-value for every independent rain streak layer. We also examine the SCAN model with BN. The results clearly verify that removing BN is a good choice for the deraining task, as rain streak layers are independent of each other.

5.5 Analysis on RESCAN

As we list three recurrent units and two recurrent frameworks in Sect. 4.2, we conduct experiments to compare these settings, consisting of \(\{\)ConvRNN, ConvLSTM, ConvGRU\(\}\) \(\times \) \(\{\)Additive Prediction (Add), Full Prediction (Full)\(\}\). To compare with the recurrent framework in [9, 13], we also implement the setting in Eq. (5) (Iter), in which previous states are not preserved. The results are reported in Table 4.

Table 4. Quantitative comparison between different settings of RESCAN. Iter represents the framework that leaves out the states of previous stages, as in [9, 13]. Add and Full represent the additive prediction framework and the full prediction framework, respectively.

It is obvious that Iter cannot compete with any of the RNN structures and is not better than SCAN, as it leaves out the information of previous stages. Moreover, ConvGRU and ConvLSTM outperform ConvRNN, as they maintain more parameters and require more computation. However, it is difficult to pick the better unit between ConvLSTM and ConvGRU, as they perform similarly in our experiments. For the recurrent frameworks, the results indicate that Full Prediction is better.

6 Conclusions

In this paper, we propose RESCAN for single image deraining. We divide rain removal into multiple stages. In each stage, a context aggregation network is adopted to remove rain streaks. We also modify CAN to better match the rain removal task, including exponentially increasing dilations and the removal of BN layers. To better characterize the intensities of different rain streak layers, we adopt the squeeze-and-excitation block to assign different alpha-values to them according to their properties. Moreover, an RNN is incorporated to better utilize the useful information for rain removal in previous stages and guide the learning in later stages. We also test the performance of different network architectures and recurrent units. Experiments on both synthetic and real-world datasets show that RESCAN outperforms the state-of-the-art methods under all evaluation metrics.