1 Introduction

As an important criterion for evaluating image quality, the visibility of an image reflects its detail, clarity and contrast. Some natural scenes, such as rainy days or night scenes, lack light or favorable external lighting conditions, so the quality and illumination of images taken in these scenes are reduced. Improving image quality plays a critical role in image fusion [1], feature extraction [2] and machine recognition [3].

For low-light images, the key to improving quality is improving contrast and resolution. A number of methods have been developed to enhance the visibility of low-light scenes from a single image. The simplest is to increase the brightness of the image directly, but bright areas then overflow. The dynamic range of the image gray scale can be balanced with histograms, but details are lost [4,5,6,7]. The multi-scale retinex algorithm with color restoration reduces image contrast and preserves edge detail unsatisfactorily [8, 9].

Fig. 1. Algorithm general block diagram.

A recovery function can be constructed by building the geometric relationship between the original image and the reconstructed image. Fu et al. establish a color estimation model for transforming an image from night to day [10]. In [11], the observed image is decomposed into the product of the desired scene and an illumination image. Dong et al. apply an enhancement method based on the dark channel prior [12, 13].

Based on the dark channel prior, Zhang et al. assign a weight to each illumination component on the dark channel to optimize the illumination map [14]. In [15], the illumination map is estimated using the illumination component in HSI space, but this brings color distortion. In recent years, scholars have applied convolutional neural networks to dehazing, estimating the illumination map from training samples [16]. Although the results are improved, this also brings high time costs.

This paper proposes a low-light image enhancement algorithm based on the atmospheric physical model with constrained norm estimation, so as to adapt to images of different illumination. The initial illumination map is estimated by taking the maximum of the R, G and B channels and is refined by norm estimation. For robustness, we introduce the parameter \(\gamma \) into the power exponent \({t^\gamma }\) to constrain the brightness map t. We use a quadtree algorithm to estimate the ambient light A, which is faster than the dark channel method proposed by He. We also propose a new evaluation criterion to verify the effectiveness of our approach.

To this end, we present the whole algorithm flow in Fig. 1, including image inversion, illumination map estimation and constraint, ambient light estimation and image reconstruction.

First of all, we invert the picture. Secondly, we extract the maximum of the RGB channels as the illumination map and use norm estimation to optimize the propagation map. Then the atmospheric light is estimated by the quadtree algorithm. Finally, the image is reconstructed using the atmospheric physical model to obtain the enhanced image. In the following sections, we discuss the algorithm details and performance.
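The four steps above can be sketched as follows. This is a minimal illustration, not the released implementation: `enhance_low_light`, `estimate_t` and `estimate_A` are hypothetical names standing in for Sects. 2.1-2.4, and the reconstruction line follows from substituting Eq. (1) into Eq. (2) and inverting the result back with \(J = 1 - R\):

```python
import numpy as np

def enhance_low_light(L, estimate_t, estimate_A, gamma=0.8, eps0=1e-3):
    """Sketch of the overall pipeline (hypothetical helper names).

    L          : input low-light image, float array in [0, 1], shape (H, W, 3)
    estimate_t : callable returning the refined illumination map t, shape (H, W)
    estimate_A : callable returning the scalar ambient light A
    """
    I = 1.0 - L                       # Eq. (1): invert so the image resembles haze
    t = estimate_t(L)                 # Sects. 2.1-2.2: illumination map
    T = t ** gamma                    # Eq. (10): exponential-power constraint
    A = estimate_A(I)                 # Sect. 2.4: ambient light via quadtree
    # reconstruct: J = (L + A - 1) / (T + eps0) + (1 - A)
    J = (L + A - 1.0) / (T[..., None] + eps0) + (1.0 - A)
    return np.clip(J, 0.0, 1.0)
```

With `gamma=1.0` and `eps0=0.0` the reconstruction reduces exactly to the unconstrained model, which makes the role of each parameter easy to isolate when experimenting.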

2 Our Approach

Supported by the statistics in [12], the inverted low-light image on the RGB channels is similar to a haze image:

$$\begin{aligned} I(x) = 1 - L(x) \end{aligned}$$
(1)

where I indicates the inverted image, which looks like a haze image, and L represents the input low-light image. The following model is widely used in haze removal [13, 20]:

$$\begin{aligned} R(x) = \frac{{I(x) - A(1 - t(x))}}{{t(x)}} \end{aligned}$$
(2)

where R represents the recovered haze-free image, I is the haze image, A is the global atmospheric light of the haze image (also called ambient light in the low-light image), and t represents the medium transmission. Treating the inverted low-illumination image as a haze image, we substitute Eq. (1) into Eq. (2):

$$\begin{aligned} J(x) = \frac{{L(x) + A - 1}}{{t(x)}} + 1 - A \end{aligned}$$
(3)

where \(J = 1 - R\) is the enhanced image, L represents the input low-light image, A is the ambient light and t indicates the illumination map.

To prevent oversaturation, Eq. (4) is introduced:

$$\begin{aligned} J(x) = \frac{{L(x) + A - 1}}{{t(x) + {\varepsilon _0}}} + 1 - A \end{aligned}$$
(4)

where \({\varepsilon _0}\) is a positive constant approaching zero. Figure 2 provides an example showing an input image, the corresponding inverted image and the enhanced image.

Fig. 2. The enhanced results of our approach. (a) Input low-light image. (b) Inverted image. (c) Image enhanced by our approach.

2.1 Initial Illumination Map Estimation

The medium transmission map t* describes the portion of light that attenuates as the depth of field increases. t* is defined as:

$$\begin{aligned} t^\mathrm{{*}}(x) = {e^{ - \beta d(x)}} \end{aligned}$$
(5)

where \(\beta \) is the scattering coefficient of the atmosphere and d(x) is the scene depth. In a low-light image, the illumination map does not attenuate as the depth of field increases, so the defogging way of estimating the illumination map may not be applicable. This paper estimates the initial illumination map by taking the maximum of the R, G and B channels [11]:

$$\begin{aligned} \hat{t}(x) = \mathop {max}\limits _{c \in \{ R,G,B\} } {L^c}(x) \end{aligned}$$
(6)

The initial illumination map \(\hat{t}\) contains the luminance information of the image but not its texture information.
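Eq. (6) is a single per-pixel reduction in practice. A sketch, assuming a float RGB image with channels in the last axis (`initial_illumination_map` is a hypothetical name):

```python
import numpy as np

def initial_illumination_map(L):
    """Eq. (6): per-pixel maximum over the R, G and B channels.

    L is a float image of shape (H, W, 3); returns t_hat of shape (H, W).
    """
    return L.max(axis=2)
```

Taking the channel-wise maximum keeps the brightest evidence of illumination at each pixel, which is why \(\hat{t}\) captures luminance but no texture.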

2.2 Illumination Map Optimization by Norm Estimation

The initial estimate promotes the global illumination, but the enhancement also needs to preserve structure. To solve this problem, we refine the propagation map as follows [19]:

$$\begin{aligned} \mathop {min}\limits _t \left\| {\hat{t} - t} \right\| _F^2 + \eta {\left\| {W \cdot \left( {\left| {{\partial _h}t} \right| + \left| {{\partial _v}t} \right| } \right)} \right\| _1}\ \end{aligned}$$
(7)

where t is the optimized illumination map, \(\left\| \cdot \right\| _{F}\) is the Frobenius norm and \(\left\| \cdot \right\| _{1}\) is the \(l_{1}\)-norm. The first term measures the fidelity and smoothness between the initial map \(\hat{t}\) and the refined map t, while the second preserves the structure and texture information. \(\eta \) (the empirical value is 0.2) is a coefficient that balances the weights of the two terms. x indexes 2D pixels. \(\partial _h\) and \(\partial _v\) are the partial derivatives in the horizontal and vertical directions. Further, W is the weight matrix, defined as:

$$\begin{aligned} W = \sum \limits _{y \in \varOmega (x)} {\frac{{{G_\sigma }(x,y)}}{{\left| {\sum \nolimits _{y \in \varOmega (x)} {{G_\sigma }(x,y)} * {{({\partial _h}\hat{t})}_y}} \right| + {\varepsilon _0}}}} \ \end{aligned}$$
(8)

where \(G_{\sigma }\left( x,y \right) \) is a Gaussian filter with standard deviation \(\sigma \) and \(*\) is the convolution operator. W can be estimated from the known \(\hat{t}\) rather than through iterations over t, thus reducing the calculation time. We transform the nonlinear part of Eq. (7) into a linear equation for calculation:

$$\begin{aligned} \mathop {min}\limits _t \left\| {\hat{t} - t} \right\| _F^2 + \eta \sum \limits _x \left( {\frac{{{W_h}(x) \cdot {{({\partial _h}t(x))}^2}}}{{\left| {{\partial _h}\hat{t}(x)} \right| + {\varepsilon _0}}} + \frac{{{W_v}(x) \cdot {{({\partial _v}t(x))}^2}}}{{\left| {{\partial _v}\hat{t}(x)} \right| + {\varepsilon _0}}}} \right) \ \end{aligned}$$
(9)

After simplifying the equation and solving the resulting quadratic problem, we obtain the optimized illumination map t. Different illumination maps are shown in the top row of Fig. 3, and the bottom row shows the images reconstructed with them. We can see the advantage of our method: the reconstructed image has more details and more suitable brightness.
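Because Eq. (9) is quadratic in t, minimising it reduces to one linear solve. The sketch below shows the structure of that solver under simplifying assumptions: it uses dense matrices for clarity (a practical implementation would use sparse ones), and it keeps only the data-driven weights \(1/(|\partial \hat{t}| + \varepsilon_0)\), omitting the Gaussian-weighted matrix W of Eq. (8); `grad_ops` and `refine_illumination` are hypothetical names:

```python
import numpy as np

def grad_ops(h, w):
    """Dense forward-difference operators on a row-major flattened (h, w) grid."""
    n = h * w
    Dh = np.zeros((n, n))
    Dv = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            k = i * w + j
            if j < w - 1:                 # horizontal neighbour exists
                Dh[k, k], Dh[k, k + 1] = -1.0, 1.0
            if i < h - 1:                 # vertical neighbour exists
                Dv[k, k], Dv[k, k + w] = -1.0, 1.0
    return Dh, Dv

def refine_illumination(t_hat, eta=0.2, eps0=1e-3):
    """Single weighted least-squares solve corresponding to Eq. (9)."""
    h, w = t_hat.shape
    Dh, Dv = grad_ops(h, w)
    x = t_hat.ravel()
    # data-driven weights: small where t_hat has strong edges (edge-preserving)
    Wh = np.diag(1.0 / (np.abs(Dh @ x) + eps0))
    Wv = np.diag(1.0 / (np.abs(Dv @ x) + eps0))
    # normal equations: (I + eta (Dh^T Wh Dh + Dv^T Wv Dv)) t = t_hat
    M = np.eye(h * w) + eta * (Dh.T @ Wh @ Dh + Dv.T @ Wv @ Dv)
    return np.linalg.solve(M, x).reshape(h, w)
```

Since M is the identity plus a weighted graph Laplacian, it is symmetric positive definite, so the solve is well posed and the refined map stays within the range of the input.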

Fig. 3. Comparison of different illumination maps and corresponding enhanced results. (a) Initial. (b) Xu [18]. (c) RTA [19]. (d) LIME [11]. (e) With norm estimation.

2.3 Constraint Illumination Map by Exponential Power

We have discussed the norm-based estimation of the propagation map in Sect. 2.2. However, just as the fog concentration differs between thick-fog and light-fog images, the illumination intensity of low-light images is not stable.

We propose a new method called LieCNE to accommodate a wider range of illumination conditions. The estimated propagation map t needs to be constrained, so we introduce the parameter \(\gamma \):

$$\begin{aligned} T = {t^\gamma } \end{aligned}$$
(10)

\(\gamma \) is defined as:

$$\begin{aligned} \gamma = min\left( \sqrt{\frac{K}{\varDelta }} ,\mu \right) \end{aligned}$$
(11)

where K is a threshold (empirical value 80): when the image illumination is low, K may take a smaller value, and when the illumination is high, a larger value. \(\mu \in (0,1]\) is a given constant (0.8 by experience). \(\varDelta \) is given by the following equation:

$$\begin{aligned} \varDelta = \left| {\mathop {min}\limits _c {I^c}(x) - \mathop {min}\limits _c {A^c}} \right| \ \end{aligned}$$
(12)

where A is the ambient light, which we discuss in Sect. 2.4. With the more adaptable illumination map T, Eq. (4) can be written as:

$$\begin{aligned} J(x) = \frac{{L(x) + A - 1}}{{T(x) + {\varepsilon _0}}} + 1 - A \end{aligned}$$
(13)
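Eqs. (10)-(12) can be sketched as below. The paper gives \(\varDelta \) per pixel but uses a single \(\gamma \), so the aggregation over pixels is an assumption here (we take the mean); `constrain_illumination` is a hypothetical name, and K = 80 assumes an intensity scale of 0-255:

```python
import numpy as np

def constrain_illumination(t, I, A, K=80.0, mu=0.8):
    """Eqs. (10)-(12): exponential-power constraint T = t ** gamma.

    t : illumination map in [0, 1], shape (H, W)
    I : image on a 0-255 scale, shape (H, W, 3)
    A : ambient light per channel, shape (3,)
    """
    # Eq. (12), aggregated over pixels by the mean (assumption)
    delta = np.abs(I.min(axis=2) - np.min(A)).mean()
    # Eq. (11): gamma = min(sqrt(K / Delta), mu)
    gamma = min(np.sqrt(K / (delta + 1e-6)), mu)
    return t ** gamma, gamma          # Eq. (10)
```

Since \(\gamma \le \mu \le 1\) and \(t \in [0, 1]\), the constraint only ever raises the illumination map (\(t^\gamma \ge t\)), with a stronger lift when \(\varDelta \) is large.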

2.4 Quadtree Algorithm for Estimating Ambient Light

In this paper, the quadtree algorithm is used to estimate the value of the ambient light. The quadtree algorithm and the algorithm of [13] have almost the same accuracy in atmospheric light estimation, but the quadtree algorithm is faster. The quadtree estimation process is shown in Fig. 4.

Fig. 4. Schematic figure of the quadtree algorithm.

The channel-minimum map of the image is computed first, which avoids erroneous ambient light estimates caused by a locally maximal value in a single channel:

$$\begin{aligned} D(x) = \mathop {\min }\limits _{c \in \{ r,g,b\} } {I^c}(x) \end{aligned}$$
(14)

Secondly, the image is divided into four blocks. We calculate the average of each block and divide the block with the largest average into four equal parts again. These iterations are repeated until the resulting block is smaller than a given threshold. The maximum pixel value in the resulting block is taken as the ambient light:

$$\begin{aligned} A(x) = \mathop {\max }\limits _{y \in \varOmega (x)} (D_1^c(x),D_2^c(x),D_3^c(x),D_4^c(x)) \end{aligned}$$
(15)
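The recursive selection above can be sketched as follows (a minimal illustration with a hypothetical name `quadtree_ambient_light` and an assumed size threshold of 16 pixels; image sides are assumed even at each split):

```python
import numpy as np

def quadtree_ambient_light(I, min_size=16):
    """Quadtree estimate of the ambient light (Sect. 2.4).

    I: float image of shape (H, W, 3). The channel-minimum map of
    Eq. (14) is split into quadrants; the quadrant with the largest
    mean is kept until it is smaller than min_size, then the maximum
    pixel in the final block is returned (Eq. (15)).
    """
    D = I.min(axis=2)                               # Eq. (14)
    while min(D.shape) > min_size:
        h2, w2 = D.shape[0] // 2, D.shape[1] // 2
        blocks = [D[:h2, :w2], D[:h2, w2:],
                  D[h2:, :w2], D[h2:, w2:]]
        D = max(blocks, key=lambda b: b.mean())     # keep brightest quadrant
    return float(D.max())                           # Eq. (15)
```

Each iteration discards three quarters of the remaining pixels, which is why this estimate runs much faster than scanning dark-channel candidates over the whole image.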

3 Experimental Results

3.1 Algorithm Performance Comparison

The proposed method was tested on several 816 \(\times \) 612 pixel images with various degrees of darkness and noise levels. To further verify the enhancement effect of the proposed algorithm, we compare it with LIME [11], Dong et al. [12], Hu et al. [15] and Ren et al. [17] in Fig. 5. All methods are run on the MATLAB2014a platform to ensure fairness, and timing results are obtained on Windows 10 with 8 GB of RAM and a 3.2 GHz CPU.

Fig. 5. Comparison of different methods. (a) Input. (b) Hu [15]. (c) Dong [12]. (d) LIME [11]. (e) Ren [17]. (f) LieCNE.

The methods of [12, 15] reduce the fidelity of the image, whereas our method recovers the structures without sacrificing the fidelity of the colors. To further assess performance, we compare the proposed algorithm with the state of the art using the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), entropy, Absolute Mean Brightness Error (AMBE) and Lightness Order Error (LOE) metrics (image size 816 \(\times \) 612).
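For reference, the first two metrics have standard definitions (these are textbook formulas, not code from the paper):

```python
import numpy as np

def mse(a, b):
    """Mean Squared Error between two images (lower is better)."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (higher is better)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

AMBE is simply the absolute difference of the two images' mean intensities, and entropy is computed from the gray-level histogram of the enhanced image.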

Table 1. The average MSE, PSNR, entropy, AMBE, LOE and runtime on the reconstructed images.

In Table 1, LieCNE performs well on several objective indicators. Although the entropy of LieCNE differs from Ren's by 0.32, the remaining indicators are better than those of the other algorithms. It is worth mentioning that, although Ren's method and LieCNE are close on most indicators, our method takes just 1.64 s, running 3.48 times faster than Ren's.

3.2 A New Evaluation Criteria

When dealing with low-light images, it is necessary not only to enhance the brightness of the image as much as possible, but also to avoid overflow of pixel values, especially in low-light images containing a strong light source. The enhancement effect thus needs to be considered from two aspects: the enhancement intensity, i.e. the degree of improvement in brightness contrast, and the statistics of the spillover intensity near bright spots or local strong light sources. The principle of enhancement is therefore to avoid excessive overflow while enhancing the darker parts of the image.

Fig. 6. Comparison of intensity enhancement among different algorithms. (a) RGB weight contrast between different algorithms. (b) Color histogram distribution curve. (c) The image under normal light (filmed at 14:00) (Color figure online).

Fig. 7. Comparison of the enhancement effects of several algorithms at different brightness levels of the same scene. (a) Input. (b) Hu [15]. (c) Dong [12]. (d) LIME [11]. (e) Ren [17]. (f) LieCNE.

Figure 6(a) shows the RGB weight contrast between different algorithms, which reflects the overall enhancement strength. In Fig. 6(b), the degree of spillover after reconstruction by several algorithms can be seen more intuitively. The highest gray level (255) is considered overflow, and the more pixels gather at this level, the more overflow appears in the image. LieCNE has the best overall enhancement effect and also has an advantage in suppressing spillover. Due to the lack of ground-truth values, we collected images of the same scene under different lighting conditions as samples. We fixed the camera on a tripod to ensure the reliability of the experimental data and captured the scene at 14:00, 16:00, 17:00, 17:30 and 18:00 respectively. We take the image captured at 14:00 as the ground truth for comparison, as shown in Fig. 6(c).

Figure 7 shows the enhancement effects of several algorithms for the same scene at different light intensities. Rows (1)-(4) are the original images filmed at 16:00, 17:00, 17:30 and 18:00 and their corresponding enhanced images. Furthermore, we propose a new evaluation method, which contrasts the enhancement effect of an algorithm under different illumination intensities. The enhancement effect is evaluated by computing the structural similarity (SSIM) between the reconstructed low-light image and the normal-illumination image:

$$\begin{aligned} SSIM(x,y) = \frac{{(2{\mu _x}{\mu _y} + {C_1})(2{\sigma _{xy}} + {C_2})}}{{(\mu _x^2 + \mu _y^2 + {C_1})(\sigma _x^2 + \sigma _y^2 + {C_2})}} \end{aligned}$$
(16)
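A single-window sketch of Eq. (16) is shown below. Practical SSIM averages the formula over local windows, and the constants follow the common choice \(C_1 = (0.01 \cdot 255)^2\), \(C_2 = (0.03 \cdot 255)^2\), which the paper does not specify; `ssim_global` is a hypothetical name:

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Global (single-window) SSIM of Eq. (16) between images x and y."""
    C1 = (0.01 * peak) ** 2          # stabilising constants (assumed values)
    C2 = (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

SSIM equals 1 only when the two images match in mean, variance and covariance, which is why it is a stricter comparison against the normal-light reference than PSNR alone.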
Table 2. The SSIM with the normal-light image under several illumination conditions.

In Table 2, the SSIM of images enhanced by LieCNE under different lighting conditions is closer to the normal-light image than that of the other methods. Figure 8 shows the corresponding line graph.

Fig. 8. The comparison of SSIM between enhanced images and the normal-light reference under different brightness levels.

Figure 8 reflects the comparison results more directly. First, as image brightness decreases, the SSIM with the normal-light image also decreases. Second, LieCNE exhibits a better SSIM with the normal-light image at each illumination level. Finally, the enhancement of the image photographed at 17:00 has a higher SSIM with the reference image. As described in Sect. 2.3, it is necessary to adapt the estimate of the illumination map to different light intensities.

4 Discussions and Conclusions

This paper has presented an effective approach to enhancing low-light images to boost visual quality. Inspired by the atmospheric haze imaging model and existing low-light enhancement algorithms, the illumination map is estimated from the image color channels and optimized using a norm constraint, while the ambient light is determined by the quadtree algorithm. In addition, this paper proposes a method of constraining the illumination map for different illumination levels, as well as a new evaluation method: determining the enhancement effect of an algorithm by comparing the quality of images taken under different light intensities. The experimental results reveal that LieCNE achieves higher efficiency and better enhancement under a variety of light intensities. Some problems still remain: low-light images contain considerable noise, which is amplified during enhancement. Suppressing this noise without compromising the enhancement effect is the next issue of concern.