
CNN Based Image Restoration

Adjusting Ill-Exposed sRGB Images in Post-Processing


Abstract

This work proposes an artificial neural network model to restore images degraded at acquisition time by inadequate sensor exposure, covering both saturation and underexposure. The problem is highly relevant in computer and robotic vision applications, especially when imaging scenes with non-Lambertian surfaces, and in natural images where sensor limitations or the optical arrangement prevent scene details from being adequately represented in the captured image. We model the task with deep neural networks, a choice well suited to the variability in equipment and photographic technique and to the several uncontrolled variables affecting the process. Given a set of synthetic and real image pairs, the representation converges to a robust image enhancement model. The proposal incorporates recent advances made by convolutional networks on problems such as semantic segmentation and image classification. The evaluation is primarily quantitative, with qualitative analysis where appropriate. Measured by several image quality indicators, the proposed neural network model improves damaged images by up to 3% on the PSNR metric in the best scenario.
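The reported gains are expressed on the PSNR metric. For context, the standard definition of PSNR for 8-bit sRGB images is sketched below; this is the textbook formula, not code from the paper, and the function name is illustrative.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image
    and a restored image: 10 * log10(MAX^2 / MSE)."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A 3% improvement on this metric therefore means a modest relative increase in decibels between the degraded input and the restored output, each measured against the well-exposed ground truth.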



Author information

Correspondence to Cristiano R. Steffens.



About this article


Cite this article

Steffens, C.R., Messias, L.R.V., Drews-Jr, P.J.L. et al. CNN Based Image Restoration. J Intell Robot Syst (2020). https://doi.org/10.1007/s10846-019-01124-9


Keywords

  • Image enhancement
  • Image restoration
  • Deep neural networks

Mathematics Subject Classification (2010)

  • MSC 94-04
  • MSC 94D05
  • MSC 68-02
  • MSC 68T45