FG-SRGAN: A Feature-Guided Super-Resolution Generative Adversarial Network for Unpaired Image Super-Resolution

  • Shuailong Lian
  • Hejian Zhou
  • Yi Sun
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11554)

Abstract

Recently, the performance of single-image super-resolution has been significantly improved by convolutional neural networks (CNNs). However, most of these networks are trained on paired images and take bicubic-downsampled images as inputs. This is impractical for super-resolving real-world low-resolution images, since no ground-truth high-resolution images corresponding to them exist. To tackle this challenge, a Feature-Guided Super-Resolution Generative Adversarial Network (FG-SRGAN) for unpaired image super-resolution is proposed in this paper. A guidance module is introduced in FG-SRGAN to reduce the space of possible mapping functions and help learn the correct mapping from the low-resolution domain to the high-resolution domain. Furthermore, we treat the outputs of the guidance module as fake examples, which can be leveraged by an additional adversarial loss. This benefits the main task, as it forces FG-SRGAN to learn valid representations for super-resolution. When applied to super-resolving real-world low-resolution face images, FG-SRGAN achieves satisfactory performance both qualitatively and quantitatively.
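The objective described above combines the usual adversarial loss on the generator's outputs with a second adversarial loss on the guidance module's outputs, both treated as fake examples by a discriminator. A minimal sketch of how such a combined generator loss could be computed is given below; the `bce` helper, the discriminator scores, and the weight `lambda_guide` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bce(pred, target):
    # Binary cross-entropy between discriminator scores in (0, 1) and labels.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_adv_loss(d_scores_on_fake):
    # Non-saturating GAN loss: the generator is rewarded when the
    # discriminator labels its outputs as real (target = 1).
    return bce(d_scores_on_fake, np.ones_like(d_scores_on_fake))

# Hypothetical discriminator scores for a batch of 4 images.
d_main = rng.uniform(0.1, 0.9, size=4)   # scores on the generator's SR outputs
d_guide = rng.uniform(0.1, 0.9, size=4)  # scores on the guidance module's outputs

lambda_guide = 0.5  # assumed weighting; the abstract does not state one
total = generator_adv_loss(d_main) + lambda_guide * generator_adv_loss(d_guide)
assert total > 0.0
```

In a full training loop, both terms would be backpropagated through the generator, so gradients from the guidance branch also shape the shared feature representations, which matches the abstract's claim that the extra adversarial loss "forces FG-SRGAN to learn valid representations".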

Keywords

Image super-resolution · Unsupervised learning · GAN

Notes

Acknowledgement

This project was partially supported by the National Natural Science Foundation of China (Grant No. 61671104) and the National Major Scientific Instruments Project (Grant No. 2014YQ24044501).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Information and Communication Engineering, Dalian University of Technology, Dalian, China
