GAN with Pixel and Perceptual Regularizations for Photo-Realistic Joint Deblurring and Super-Resolution

  • Yong Li
  • Zhenguo Yang
  • Xudong Mao
  • Yong Wang
  • Qing Li
  • Wenyin Liu
  • Ying Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11542)

Abstract

In this paper, we propose a Generative Adversarial Network with Pixel and Perceptual regularizations, denoted as P2GAN, to jointly restore single motion-blurred, low-resolution images into clear, high-resolution images. P2GAN is an end-to-end neural network consisting of a deblurring module and a super-resolution module: the deblurring module first repairs degraded pixels in the motion-blurred images, then passes both the deblurred images and the deblurred features to the super-resolution module for further reconstruction. More specifically, P2GAN simultaneously integrates a pixel-wise loss at the pixel level with contextual and adversarial losses at the perceptual level, guiding the deblurring and super-resolution reconstruction of raw blurry, low-resolution images toward realistic results. Extensive experiments conducted on a real-world dataset demonstrate the effectiveness of the proposed approach, which outperforms state-of-the-art models.
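To make the composite objective concrete, below is a minimal PyTorch-style sketch of the three loss terms the abstract names: a pixel-wise L1 loss, a contextual loss in the spirit of Mechrez et al., and a standard adversarial loss. This is an illustrative sketch under stated assumptions, not the authors' implementation: the feature extractor `vgg`, the choice of supervising the deblurring branch with a bicubically downsampled HR image, and the weights `w_pix`, `w_ctx`, `w_adv` are all assumptions.

```python
import torch
import torch.nn.functional as F

def contextual_loss(feat_x, feat_y, h=0.5, eps=1e-5):
    """Simplified contextual loss (after Mechrez et al.): match each target
    feature to its most similar source feature under relative cosine distance."""
    x = feat_x.flatten(2)                        # (B, C, N) feature vectors
    y = feat_y.flatten(2)
    mu = y.mean(dim=2, keepdim=True)             # centre both sets on the target mean
    x = F.normalize(x - mu, dim=1)
    y = F.normalize(y - mu, dim=1)
    cos = torch.bmm(x.transpose(1, 2), y)        # (B, N, N) pairwise cosine similarity
    d = (1.0 - cos) / 2.0                        # cosine distance in [0, 1]
    d_rel = d / (d.min(dim=2, keepdim=True).values + eps)  # relative distance
    w = torch.exp((1.0 - d_rel) / h)             # bandwidth-scaled similarity
    cx = w / w.sum(dim=2, keepdim=True)          # row-normalised affinity
    cx_best = cx.max(dim=1).values.mean(dim=1)   # best match per target feature
    return torch.mean(-torch.log(cx_best + eps))

def generator_loss(deblurred, sr, hr, disc_logits, vgg,
                   w_pix=1.0, w_ctx=0.1, w_adv=1e-3):
    """Combine pixel-level and perceptual-level terms as the abstract outlines."""
    # Assumption: supervise the deblurring branch with a downsampled sharp image.
    lr_sharp = F.interpolate(hr, size=deblurred.shape[-2:],
                             mode='bicubic', align_corners=False)
    pixel = F.l1_loss(deblurred, lr_sharp) + F.l1_loss(sr, hr)   # pixel level
    ctx = contextual_loss(vgg(sr), vgg(hr))                      # perceptual level
    adv = F.binary_cross_entropy_with_logits(                    # adversarial term
        disc_logits, torch.ones_like(disc_logits))
    return w_pix * pixel + w_ctx * ctx + w_adv * adv
```

In a training loop, `deblurred` and `sr` would be the outputs of the deblurring and super-resolution modules, `hr` the sharp high-resolution ground truth, and `disc_logits` the discriminator's response to `sr`; the discriminator itself would be trained with the usual real/fake objective in alternation.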

Keywords

Image deblurring · Super-resolution · GANs · Pixel loss · Contextual loss

Notes

Acknowledgment

This work is supported by the National Natural Science Foundation of China (No. 61703109, No. 91748107), and the Guangdong Innovative Research Team Program (No. 2014ZT05G157).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. 1.School of Computer Science and TechnologyGuangdong University of TechnologyGuangzhouChina
  2. 2.Department of Computer ScienceCity University of Hong KongKowloonHong Kong
  3. 3.Department of ComputingThe Hong Kong Polytechnic UniversityHong KongChina
