Abstract
Low-light images often suffer from poor quality and low visibility, and improving them is a highly desired goal in both computational photography and computer vision. This paper proposes an effective method, called LieCNE, that constrains the illumination map t by norm estimation and constructed constraint coefficients. More specifically, we estimate the initial illumination map by taking the per-pixel maximum of the R, G and B channels and optimize it by norm estimation. We propose the function \({t^\gamma }\) with exponential power \(\gamma \) to tune the enhancement under different illumination conditions. Finally, a new evaluation criterion is proposed: the similarity between the enhanced result and a ground-truth image determines the enhancement quality. Experimental results show that LieCNE performs better under a variety of lighting conditions, both in enhancement quality and in preventing pixel overflow.
This work was supported by the National Natural Science Foundation of China (Nos. 61303104, 61373090, 61203238), the Beijing Natural Science Foundation of China (4132014), and the Youth Innovative Research Team of Capital Normal University.
1 Introduction
The visibility of an image, an important criterion for evaluating image quality, reflects its detail, clarity and contrast. Natural scenes such as rainy days or night scenes often lack light, and images taken in them suffer reduced quality and illumination. Improving image quality plays a critical role in image fusion [1], feature extraction [2] and machine recognition [3].
For low-light images, the key to improving quality is improving contrast and resolution. A number of methods have been developed to enhance the visibility of low-light scenes from a single image. The simplest is to increase brightness directly, but brighter areas then overflow. Histogram equalization can balance the gray-scale dynamic range, but details are lost [4,5,6,7]. The multi-scale retinex algorithm with color restoration reduces image contrast and preserves edge detail poorly [8, 9].
A recovery function can be constructed by building a geometric relationship between the original and reconstructed images. Fu et al. establish a color estimation model for transforming an image from night to day [10]. In [11], the observed image is decomposed into the product of the desired scene and the illumination image. Dong et al. apply an enhancement method based on the dark channel prior [12, 13].
Also based on the dark channel prior, Zhang et al. weight each illumination component on the dark channel to optimize the illumination map [14]. In [15], the illumination map is estimated from the illumination component in HSI space, but this brings color distortion. In recent years, convolutional neural networks have been applied to dehazing, estimating the illumination map from training samples [16]. Although the results improve, the time cost is high.
This paper proposes a low-light image enhancement algorithm based on the atmospheric physical model with constrained norm estimation, so as to adapt to images of different illumination. The initial illumination map is estimated by taking the maximum of the R, G and B channels and refined by norm estimation. For robustness, we introduce the parameter \(\gamma \) into the power exponent \({t^\gamma }\) to constrain the illumination map t. We use a quadtree algorithm to estimate the ambient light A, which is faster than the dark channel method proposed by He. We also propose a new evaluation criterion to verify the effectiveness of our approach.
To this end, we present the whole algorithm flow in Fig. 1: image inversion, illumination map estimation and constraint, ambient light estimation, and image reconstruction.
First, we invert the image. Second, we extract the per-pixel maximum of the RGB channels as the illumination map and optimize this propagation map by norm estimation. The atmospheric light is then estimated with the quadtree algorithm. Finally, the image is reconstructed with the atmospheric physical model to obtain the enhanced result. The following sections discuss the details and performance of the algorithm.
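The four steps above can be sketched end to end in a few lines of numpy. This is our own minimal illustration, not the paper's code: it uses the simplest choice at each stage (no norm refinement, no \(\gamma \) constraint, and a crude stand-in for the quadtree ambient-light search), and assumes float images in [0, 1].

```python
import numpy as np

def enhance(L, eps0=1e-3):
    """Minimal sketch of the pipeline: invert, estimate the illumination
    map and ambient light, then reconstruct the enhanced image."""
    I = 1.0 - L                              # step 1: inversion (Eq. (1))
    t = np.maximum(I.max(axis=2), eps0)      # step 2: max-RGB illumination map
    A = I.min(axis=2).max()                  # step 3: crude ambient light
    # step 4: reconstruction via the atmospheric model (Eq. (4))
    J = 1.0 - (1.0 - L - A * (1.0 - t[..., None])) / t[..., None]
    return np.clip(J, 0.0, 1.0)
```

Later sections replace steps 2 and 3 with the norm-refined illumination map and the quadtree search.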
2 Our Approach
As the statistics in [12] support, the inverted low-light image resembles a haze image on the RGB channels:

\(I(x) = 1 - L(x) \qquad (1)\)
where I indicates the inverted image, which looks like a haze image, and L represents the input low-light image. The following model is widely used in haze removal [13, 20]:

\(I(x) = R(x)t(x) + A\left( 1 - t(x) \right) \qquad (2)\)
where R represents the reconstructed (haze-free) image, I is the haze image, A is the global atmospheric light (also called ambient light for low-light images), and t represents the medium transmission. Treating the inverted low-illumination image as a haze image, we substitute Eq. (1) into Eq. (2):

\(J(x) = 1 - \frac{1 - L(x) - A\left( 1 - t(x) \right) }{t(x)} \qquad (3)\)
where \(J = 1 - R\) is the enhanced image, L represents the input low-light image, A is the ambient light and t indicates the illumination map.
To prevent oversaturation, Eq. (4) is introduced:

\(J(x) = 1 - \frac{1 - L(x) - A\left( 1 - t(x) \right) }{\max \left( t(x), \varepsilon _0 \right) } \qquad (4)\)
where \({\varepsilon _0}\) is a small positive constant that prevents division by values near zero. Figure 2 provides an example showing an input image, the corresponding inverted image and the enhanced image.
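The reconstruction of Eq. (4) is straightforward to express in numpy. A minimal sketch, assuming L and t are float arrays in [0, 1] and A is a scalar (the function name and clipping are ours):

```python
import numpy as np

def reconstruct(L, t, A, eps0=1e-3):
    """Recover the enhanced image J from the low-light input L,
    illumination map t and ambient light A via Eq. (4):
    J = 1 - (1 - L - A*(1 - t)) / max(t, eps0)."""
    t = np.maximum(t, eps0)        # clamp t to avoid division blow-up
    J = 1.0 - (1.0 - L - A * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)    # keep intensities in [0, 1]
```

For example, a uniform patch with L = 0.2, t = 0.5 and A = 0.9 maps to J = 0.3.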
2.1 Initial Illumination Map Estimation
The medium transmission map t* describes the light portion that attenuates as the depth of field increases. t* is defined as:

\(t^{*}(x) = e^{-\beta d(x)} \qquad (5)\)
where \(\beta \) is the scattering coefficient of the atmosphere and d(x) is the scene depth. In a low-light image, the illumination map does not attenuate with the depth of field, so the defogging approach to estimating the illumination map may not apply. This paper estimates the initial illumination map from the maximum of the R, G and B channels [11]:

\(\hat{t}(x) = \mathop {\max }\limits _{c \in \{ R,G,B\} } I^{c}(x) \qquad (6)\)
The initial illumination map \(\hat{t}\) contains the luminance information of the image but not its texture information.
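The estimate of Eq. (6) is a single channel-wise maximum. A short numpy sketch (our own helper, assuming the low-light input L is an H x W x 3 float array in [0, 1]):

```python
import numpy as np

def initial_illumination(L):
    """Initial illumination map t_hat: invert the low-light image
    (Eq. (1)), then take the per-pixel maximum over R, G, B (Eq. (6))."""
    I = 1.0 - L            # inverted image, resembles a hazy image
    return I.max(axis=2)   # channel-wise maximum, shape H x W
```

The result carries the global brightness of each pixel but, as noted above, no texture detail.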
2.2 Illumination Map Optimization by Norm Estimation
The initial estimate boosts global illumination, but the enhancement should also preserve structure. To this end, we refine the propagation map as follows [19]:

\(\mathop {\min }\limits _{t} \left\| t - \hat{t} \right\| _{F}^{2} + \eta \left\| W \circ \nabla t \right\| _{1} \qquad (7)\)
where t is the optimized illumination map, \(\left\| \cdot \right\| _{F}\) is the Frobenius norm and \(\left\| \cdot \right\| _{1}\) is the \(l_{1}\)-norm. The first term keeps the refined map t faithful and smooth with respect to the initial map \(\hat{t}\), while the second preserves structure and texture information. \(\eta \) (the empirical value is 0.2) is a coefficient balancing the two terms. x indexes 2D pixels, and \(\partial _{x}\) and \(\partial _{y}\) are the partial derivatives in the two directions. Further, W is the weight matrix, defined as:
\(W_{d}(x) = \frac{1}{\left| G_{\sigma } * \partial _{d} \hat{t} \right| (x) + \varepsilon }, \quad d \in \{ x, y\} \qquad (8)\)

where \(G_{\sigma }\) is a Gaussian filter with standard deviation \(\sigma \) and \(*\) is the convolution operator. W can be estimated from the known \(\hat{t}\) rather than through iterations on t, which reduces computation time. We transform the nonlinear part of Eq. (7) into a linear form for calculation:

\(\left\| W \circ \nabla t \right\| _{1} \approx \sum \limits _{x} \left( \frac{W_{x}(x)\left( \partial _{x} t(x) \right) ^{2}}{\left| \partial _{x} \hat{t}(x) \right| + \varepsilon } + \frac{W_{y}(x)\left( \partial _{y} t(x) \right) ^{2}}{\left| \partial _{y} \hat{t}(x) \right| + \varepsilon } \right) \qquad (9)\)
After simplifying the equation and solving the resulting quadratic problem, we obtain the optimized illumination map t. Different illumination maps are shown in the top row of Fig. 3; the bottom row shows images reconstructed with them. The advantage of our method is visible: the reconstructed image has more details and more suitable brightness.
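Because the linearized objective is quadratic in t, it reduces to one sparse linear system. The sketch below is our own re-implementation of this idea with scipy, not the paper's code: it assumes forward differences with zero-padded borders, and folds the weights of Eq. (8) and the linearization denominator of Eq. (9) into a single diagonal coefficient per direction.

```python
import numpy as np
from scipy import sparse
from scipy.ndimage import gaussian_filter
from scipy.sparse.linalg import spsolve

def _diff_op(n, step, invalid_rows):
    """Sparse forward-difference operator; border rows are zeroed."""
    D = sparse.diags([-np.ones(n), np.ones(n - step)], [0, step]).tolil()
    for r in invalid_rows:
        D[r, :] = 0
    return D.tocsr()

def refine_illumination(t_hat, eta=0.2, sigma=2.0, eps=1e-3):
    """Refine t_hat (H x W) by solving (I + eta*Lap) t = t_hat, the
    normal equations of the linearized problem of Eqs. (7)-(9)."""
    h, w = t_hat.shape
    n = h * w
    dx = np.zeros_like(t_hat); dx[:, :-1] = np.diff(t_hat, axis=1)
    dy = np.zeros_like(t_hat); dy[:-1, :] = np.diff(t_hat, axis=0)
    # Eq. (8): weights from Gaussian-smoothed gradients of t_hat
    wx = 1.0 / (np.abs(gaussian_filter(dx, sigma)) + eps)
    wy = 1.0 / (np.abs(gaussian_filter(dy, sigma)) + eps)
    # Eq. (9): combined per-pixel coefficients of the quadratic term
    cx = (wx / (np.abs(dx) + eps)).ravel()
    cy = (wy / (np.abs(dy) + eps)).ravel()
    Dx = _diff_op(n, 1, [i * w + w - 1 for i in range(h)])
    Dy = _diff_op(n, w, [(h - 1) * w + j for j in range(w)])
    Lap = Dx.T @ sparse.diags(cx) @ Dx + Dy.T @ sparse.diags(cy) @ Dy
    A = (sparse.eye(n) + eta * Lap).tocsc()
    return spsolve(A, t_hat.ravel()).reshape(h, w)
```

Since the system matrix is an M-matrix with unit row sums, the solution is a weighted average of \(\hat{t}\): a constant map is returned unchanged, and the refined values stay within the range of the input.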
2.3 Constraint Illumination Map by Exponential Power
We have discussed estimating the propagation map with the norm in Sect. 2.2. However, fog concentration differs between thick and light fog; similarly, the illumination intensity of low-light images is not uniform.
We propose a new method, called LieCNE, to accommodate a wider range of illumination conditions by constraining the estimated propagation map t. We introduce the parameter \(\gamma \) to constrain t by:

\(T = t^{\gamma } \qquad (10)\)
\(\gamma \) is defined as:
where K is a threshold (empirical value 80): when the image illumination is low, K may take a smaller value, and when it is high, a larger value. \(\mu \in (0,1]\) is a given constant (0.8 by experience). \(\varDelta \) is given by the following equation:
where A is the ambient light, discussed in Sect. 2.4. With the more adaptable illumination map T, Eq. (4) can be written as:

\(J(x) = 1 - \frac{1 - L(x) - A\left( 1 - T(x) \right) }{\max \left( T(x), \varepsilon _0 \right) }\)
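The constraint and the final reconstruction combine into one small numpy function. This is a sketch under our own naming; since the exact closed form of \(\gamma \) (with K, \(\mu \) and \(\varDelta \)) is not reproduced here, \(\gamma \) is passed in as a precomputed argument:

```python
import numpy as np

def constrain_and_reconstruct(L, t, A, gamma, eps0=1e-3):
    """Apply the exponential-power constraint T = t ** gamma (Eq. (10)),
    then reconstruct the enhanced image with T in place of t."""
    T = np.clip(t, eps0, 1.0) ** gamma
    J = 1.0 - (1.0 - L - A * (1.0 - T)) / np.maximum(T, eps0)
    return np.clip(J, 0.0, 1.0)
```

With gamma = 1 the formula reduces to Eq. (4); since t is in (0, 1], a smaller gamma pushes T toward 1 and weakens the enhancement, while a larger gamma strengthens it.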
2.4 Quadtree Algorithm for Estimating Ambient Light
In this paper, a quadtree algorithm is used to estimate the value of the ambient light. It yields almost the same atmospheric-light estimate as the algorithm of [13] but processes faster. The quadtree estimation process is shown in Fig. 4.
First, the per-pixel channel-minimum map of the image is computed; this avoids erroneous ambient-light estimates caused by a locally maximal value in a single channel.
Second, the image is divided into four blocks and the average intensity of each block is computed. Third, the block with the largest average is again divided into four equal parts. The iteration repeats until the resulting block is smaller than a given threshold; the maximum pixel value in that block is taken as the ambient light.
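The recursion above can be written as a short loop. A minimal numpy sketch (our own function name and stopping threshold), assuming the input is the inverted image, either H x W x 3 or an already-computed channel-minimum map:

```python
import numpy as np

def ambient_light(I, min_size=32):
    """Quadtree estimate of the ambient light A: start from the
    channel-minimum map, repeatedly keep the quadrant with the highest
    mean, and return the maximum pixel of the final block."""
    m = I.min(axis=2) if I.ndim == 3 else I   # channel-minimum map
    while min(m.shape) > min_size:
        h2, w2 = m.shape[0] // 2, m.shape[1] // 2
        blocks = [m[:h2, :w2], m[:h2, w2:], m[h2:, :w2], m[h2:, w2:]]
        m = max(blocks, key=lambda b: b.mean())  # brightest quadrant
    return float(m.max())
```

Each iteration discards three quarters of the remaining pixels, which is why this search is faster than scanning the full dark channel.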
3 Experimental Results
3.1 Algorithm Performance Comparison
The proposed method was tested on several 816 \(\times \) 612 pixel images with various degrees of darkness and noise. To further verify the enhancement effect, we compare the algorithm with LIME [11], Dong et al. [12], Hu et al. [15] and Ren et al. [17] in Fig. 5. All codes run on the MATLAB 2014a platform to ensure fairness of the algorithm and timing comparisons, on Windows 10 with 8 GB RAM and a 3.2 GHz CPU.
The methods of [12, 15] reduce the fidelity of the image, whereas ours recovers structures without sacrificing color fidelity. To further assess performance, we compare the proposed algorithm with the state of the art using the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Entropy, Absolute Mean Brightness Error (AMBE) and Lightness Order Error (LOE) metrics (image size 816 \(\times \) 612).
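For reference, three of these metrics have simple closed forms; a numpy sketch (standard definitions, not tied to the paper's evaluation code), assuming 8-bit images with peak value 255:

```python
import numpy as np

def mse(a, b):
    """Mean Squared Error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10*log10(peak^2 / MSE)."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def ambe(a, b):
    """Absolute Mean Brightness Error: |mean(a) - mean(b)|."""
    return float(abs(a.mean() - b.mean()))
```

Lower MSE, AMBE and LOE and higher PSNR and Entropy indicate better reconstruction.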
In Table 1, LieCNE scores well on several objective indicators. Although its entropy differs from Ren's by 0.32, the remaining indicators are better than those of the other algorithms. It is worth mentioning that, although Ren's method and LieCNE are close on most indicators, our method takes just 1.64 s, running 3.48 times faster than Ren's.
3.2 A New Evaluation Criterion
When dealing with low-light images, it is necessary not only to enhance brightness as much as possible but also to avoid pixel-value overflow, especially for low-light images containing a strong light source. The enhancement effect must therefore be considered from two aspects: the enhancement intensity, i.e. the degree of improvement in brightness contrast, and the spillover intensity near bright spots or local strong light sources. The principle is to avoid excessive overflow while enhancing the darker regions.
Figure 6(a) shows the RGB weight contrast between the algorithms, reflecting overall enhancement strength. Figure 6(b) shows more intuitively the degree of spillover after reconstruction by the different algorithms: a gray level of 256 is considered overflow, and the more pixels gather at this gray level, the more overflow appears in the image. LieCNE has the better overall enhancement effect and also suppresses spillover well. Lacking ground truth, we collected images of the same scene under different lighting conditions. We fixed the camera on a tripod to ensure reliable data and took the images at 14:00, 16:00, 17:00, 17:30 and 18:00. The image taken at 14:00 is used as the ground truth for comparison, as shown in Fig. 6(c).
Figure 7 shows the enhancement effects of the algorithms on the same scene at different light intensities. Rows (1)–(4) are the original images taken at 16:00, 17:00, 17:30 and 18:00 and their corresponding enhanced images. We further propose a new evaluation method that contrasts enhancement under different illumination intensities: the enhancement effect is evaluated by computing the structural similarity (SSIM) between the reconstructed low-light image and the normal-illumination image.
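SSIM compares two images through their means, variances and covariance. The sketch below is a simplified single-window variant computed over the whole image (the standard metric averages the same formula over local windows, e.g. via `skimage.metrics.structural_similarity`); it assumes float images with peak value 1:

```python
import numpy as np

def ssim_global(x, y, peak=1.0):
    """Simplified global SSIM: one luminance/contrast/structure term
    over the whole image instead of a local-window average."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images score 1, and the score falls as structure diverges, which is what makes SSIM against the 14:00 ground truth a usable proxy for enhancement quality.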
In Table 2, the SSIM of LieCNE's enhanced results under different lighting conditions is closer to the normal-light image than that of the other methods. Figure 8 shows the corresponding line graph.
Figure 8 reflects the comparison more directly. First, as image brightness decreases, the SSIM with the normal-light image also decreases. Second, LieCNE exhibits better SSIM with the original image at every illumination level. Finally, the enhancement of the image photographed at 17:00 has a higher SSIM with the original image. As described in Sect. 2.2, the illumination map estimate must be varied appropriately for different light intensities.
4 Discussions and Conclusions
This paper has presented an effective approach to enhancing low-light images and boosting their visual quality. Inspired by the atmospheric haze imaging model and existing low-illumination enhancement algorithms, the illumination map is estimated from the image color channels and optimized using the norm, and the ambient light is determined by the quadtree algorithm. In addition, this paper proposes a method of constraining the illumination map for images of different illumination, as well as a new evaluation method: determining the enhancement effect by comparing image quality under different light intensities. The experimental results show that LieCNE achieves higher efficiency and better enhancement under a variety of light intensities. Some problems remain: low-light images contain substantial noise, which is amplified during enhancement. Filtering this noise while preserving the enhancement effect is the next issue of concern.
References
Liu, Y., Chen, X., Ward, R.K., et al.: Image fusion with convolutional sparse representation. Sig. Process. Lett. 23(12), 1882–1886 (2016)
Kang, J., Lu, S., Gong, W., et al.: Image feature extraction based on spectral graph information. In: 2016 IEEE International Conference on Image Processing (ICIP), pp. 46–50. IEEE (2016)
Wei, M., Ma, X., Li, H., et al.: Application of image recognition algorithm in satellite interference transmitter location. In: 2016 International Conference on Communication Problem-Solving (ICCP), pp. 1–2. IEEE (2016)
Yao, Z., Zhou, Q., Yang, X., et al.: Quadrants histogram equalization with a clipping limit for image enhancement. In: 2016 8th International Conference on Wireless Communications Signal Processing (WCSP), pp. 1–5. IEEE (2016)
Nithyananda, C.R., Ramachandra, A.C.: Review on histogram equalization based image enhancement techniques. In: International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), pp. 2512–2517. IEEE (2016)
Yao, Z., Lai, Z., Wang, C.: Brightness preserving and non-parametric modified bi-histogram equalization for image enhancement. In: 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pp. 1872–1876. IEEE (2016)
Youshi, Y., Qiang, S., Lei, S., et al.: An adaptive dual-platform deep space infrared image enhancement algorithm based on linear gray scale transformation. In: 2014 33rd Chinese Control Conference (CCC), pp. 7434–7438. IEEE (2014)
Okuno, T., Nishitani, T.: Efficient multi-scale retinex algorithm using multi-rate image processing. In: 2009 16th IEEE International Conference on Image Processing (ICIP), pp. 3145–3148. IEEE (2009)
Fu, X., Zhuang, P., Huang, Y., et al.: A retinex-based enhancing approach for single underwater image. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 4572–4576. IEEE (2014)
Fu, H., Ma, H., Wu, S.: Night removal by color estimation and sparse representation. In: 2012 21st International Conference on Pattern Recognition (ICPR), pp. 3656–3659. IEEE (2012)
Guo, X.: LIME: a method for low-light image enhancement. In: Proceedings of the 24th ACM International Conference on Multimedia. ACM (2016)
Dong, X., Wang, G., Pang, Y., et al.: Fast efficient algorithm for enhancement of low lighting video. In: 2011 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. IEEE (2011)
He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011)
Zhang, X., Shen, P., Luo, L., et al.: Enhancement and noise reduction of very low light level images. In: 2012 21st International Conference on Pattern Recognition (ICPR), pp. 2034–2037. IEEE (2012)
Hu, Y., Shang, Y., Fu, X., et al.: A low illumination video enhancement algorithm based on the atmospheric physical model. In: 2015 8th International Congress on Image and Signal Processing (CISP), pp. 119–124. IEEE (2015)
Cai, B., Xu, X., Jia, K., et al.: Dehazenet: an end-to-end system for single image haze removal. IEEE Trans. Image Process. 25(11), 5187–5198 (2016)
Ren, W., Liu, S., Zhang, H., Pan, J., Cao, X., Yang, M.-H.: Single image dehazing via multi-scale convolutional neural networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 154–169. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46475-6_10
Xu, L., Lu, C., Xu, Y., et al.: Image smoothing via L0 gradient minimization. ACM Trans. Graph. (TOG) 30(6), 174 (2011). ACM
Xu, L., Yan, Q., Xia, Y., et al.: Structure extraction from texture via relative total variation. ACM Trans. Graph. (TOG) 31(6), 139 (2012)
Tan, R.T.: Visibility in bad weather from a single image. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008, pp. 1–8. IEEE (2008)
© 2017 Springer Nature Singapore Pte Ltd.
Zhao, T., Ding, H., Shang, Y., Zhou, X. (2017). Low-Light Image Enhancement Based on Constrained Norm Estimation. In: Yang, J., et al. Computer Vision. CCCV 2017. Communications in Computer and Information Science, vol 771. Springer, Singapore. https://doi.org/10.1007/978-981-10-7299-4_30