
The 2018 PIRM Challenge on Perceptual Image Super-Resolution

  • Yochai Blau
  • Roey Mechrez
  • Radu Timofte
  • Tomer Michaeli
  • Lihi Zelnik-Manor
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11133)

Abstract

This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. In contrast to previous SR challenges, our evaluation methodology jointly quantifies accuracy and perceptual quality, thereby enabling perceptually driven methods to compete alongside algorithms that target PSNR maximization. Twenty-one participating teams introduced algorithms that significantly improved upon the existing state of the art in perceptual SR, as confirmed by a human opinion study. We also analyze popular image quality measures and draw conclusions regarding which of them correlates best with human opinion scores. We conclude with an analysis of the current trends in perceptual SR, as reflected in the leading submissions.
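To make the joint accuracy/perceptual-quality evaluation concrete, the following Python sketch illustrates how a submission could be scored under the challenge protocol. It assumes the perceptual index PI = 0.5 * ((10 - Ma) + NIQE), where Ma is the no-reference measure of Ma et al. and NIQE is the measure of Mittal et al., and it assumes the RMSE region boundaries used by PIRM 2018; the function names, thresholds, and worked numbers are illustrative assumptions, not the normative challenge definition.

```python
# Minimal sketch of the PIRM 2018 scoring, under the stated assumptions:
# a submission is assigned to a region of the perception-distortion plane
# by its RMSE against the ground truth, and ranked within that region by
# a perceptual index (PI) combining Ma et al.'s measure and NIQE.

def perceptual_index(ma_score: float, niqe_score: float) -> float:
    """Perceptual index: lower values indicate better perceptual quality."""
    return 0.5 * ((10.0 - ma_score) + niqe_score)

def assign_region(rmse: float) -> int:
    """Map an RMSE value to one of three regions
    (boundaries of 11.5, 12.5 and 16 assumed here)."""
    if rmse <= 11.5:
        return 1
    if rmse <= 12.5:
        return 2
    if rmse <= 16.0:
        return 3
    raise ValueError("RMSE exceeds the range covered by the challenge regions")

# Example (hypothetical numbers): a submission with Ma = 8.2, NIQE = 3.1
# and RMSE = 12.0 would compete in Region 2 with
# PI = 0.5 * ((10 - 8.2) + 3.1) = 2.45.
```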

Acknowledgments

The 2018 PIRM Challenge on Perceptual SR was sponsored by Huawei and Mediatek.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yochai Blau (1)
  • Roey Mechrez (1)
  • Radu Timofte (2)
  • Tomer Michaeli (1)
  • Lihi Zelnik-Manor (1)

  1. Technion–Israel Institute of Technology, Haifa, Israel
  2. ETH Zurich, Zürich, Switzerland
