
Underwater-GAN: Underwater Image Restoration via Conditional Generative Adversarial Network

  • Xiaoli Yu
  • Yanyun Qu
  • Ming Hong
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11188)

Abstract

Underwater image restoration remains a challenging task because underwater images are degraded by the complex underwater imaging environment and poor lighting conditions. The degradation manifests as color distortion, low contrast, and blur. In this paper, we propose Underwater-GAN, a conditional generative adversarial network for underwater image restoration. Underwater-GAN uses a Wasserstein GAN with a gradient penalty term as the backbone network. We design the loss function as the sum of the generative adversarial loss and a perceptual loss. In the discriminator of Underwater-GAN, we use a convolutional PatchGAN classifier to learn a structural loss instead of an image-level or pixel-wise loss. Moreover, we construct an underwater image dataset by synthesizing underwater images according to an underwater imaging model, and we train our model on this simulated dataset. Experimental results show that the proposed method achieves better qualitative and quantitative results than existing methods.
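The objective described above combines a WGAN-GP adversarial term, scored by a convolutional PatchGAN critic, with a perceptual term computed in the feature space of a fixed network. The following is a minimal PyTorch sketch of such a combined loss; the critic D, the feature extractor phi, and the weights lambda_gp and lambda_perc are illustrative placeholders, not the authors' exact implementation or settings.

```python
import torch
import torch.nn.functional as F

def gradient_penalty(D, real, fake):
    """WGAN-GP term: penalize the critic's gradient norm on random interpolates."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

def discriminator_loss(D, real, fake, lambda_gp=10.0):
    """Wasserstein critic loss with gradient penalty. D is assumed to be a
    PatchGAN-style critic returning a map of scores, so .mean() averages patches."""
    fake = fake.detach()  # do not backpropagate into the generator here
    return D(fake).mean() - D(real).mean() + lambda_gp * gradient_penalty(D, real, fake)

def generator_loss(D, phi, restored, target, lambda_perc=100.0):
    """Adversarial term plus a perceptual term measured in the feature space of a
    fixed network phi (e.g., VGG features); lambda_perc is a placeholder weight."""
    adv = -D(restored).mean()
    perc = F.mse_loss(phi(restored), phi(target))
    return adv + lambda_perc * perc
```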
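The simulated training data are produced by degrading clean images according to an underwater imaging model. The sketch below assumes a simplified Jaffe/Koschmieder-style formation model with per-channel attenuation; the attenuation coefficients and ambient-light values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def synthesize_underwater(clean, depth, beta=(0.8, 0.35, 0.2), ambient=(0.05, 0.45, 0.55)):
    """Degrade a clean RGB image with a simplified underwater imaging model:
    I_c = J_c * t_c + B_c * (1 - t_c), where t_c = exp(-beta_c * depth).
    clean: float RGB array in [0, 1], shape (H, W, 3); depth: (H, W) range map.
    beta attenuates red most strongly, as is typical underwater; ambient is a
    bluish-green background light. Both are illustrative values only."""
    t = np.exp(-np.asarray(beta)[None, None, :] * depth[..., None])  # per-channel transmission
    ambient = np.asarray(ambient)[None, None, :]
    return np.clip(clean * t + ambient * (1.0 - t), 0.0, 1.0)
```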

Keywords

Underwater image restoration · Generative adversarial network · Perceptual loss


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Information Science and Engineering, Xiamen University, Xiamen, China
