Perception-Preserving Convolutional Networks for Image Enhancement on Smartphones

  • Zheng Hui
  • Xiumei Wang
  • Lirui Deng
  • Xinbo Gao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11133)


Although smartphone cameras keep improving, the quality of smartphone photos still cannot match that of DSLR cameras due to limitations of physical space, hardware, and cost. In this work, we present a fast and accurate image enhancement approach based on generative adversarial nets, which elevates the quality of photos taken on smartphones. We propose a lightweight local residual convolutional network to learn the mapping between ordinary photos and DSLR-quality images. To make the generated images look realistic, we introduce the perception-preserving measurement error, which comprises content, color, and adversarial losses. In particular, the content loss combines contextual and SSIM losses, which maintain the natural internal statistics and the structure of images. In addition, we introduce a knowledge transfer strategy to ensure the high performance of the proposed network. Experiments demonstrate that our method produces better results than state-of-the-art approaches, both qualitatively and quantitatively. The code is available at
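The perception-preserving measurement error described in the abstract is a weighted sum of a content term (contextual + SSIM losses), a color term, and an adversarial term. The sketch below, in NumPy rather than a deep-learning framework, illustrates only the structure of that combination: the single-window SSIM formula is standard, but the weights (`w_content`, `w_color`, `w_adv`) are hypothetical placeholders, the color loss is simplified to a plain per-pixel MSE, and the contextual and adversarial terms are passed in as precomputed scalars since their full definitions are not given here.

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (single-window) SSIM between two images with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def color_loss(x, y):
    """Simplified color term: per-pixel MSE (a stand-in for a blurred-image loss)."""
    return float(((x - y) ** 2).mean())

def perception_preserving_loss(enhanced, target, ctx_term, adv_term,
                               w_content=1.0, w_color=0.1, w_adv=1e-3):
    """Weighted sum of content (contextual + SSIM), color, and adversarial terms.

    ctx_term and adv_term are scalars assumed to come from a contextual-loss
    module and a discriminator, respectively; the weights are illustrative only.
    """
    content = ctx_term + (1.0 - ssim(enhanced, target))
    return (w_content * content
            + w_color * color_loss(enhanced, target)
            + w_adv * adv_term)
```

A sanity check of the structure: when the enhanced image equals the target and the contextual and adversarial terms are zero, SSIM is 1 and the total loss is 0, as expected for a distance-like objective.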


Keywords: Image enhancement · Perception-preserving measurement error · Knowledge transfer



This work was supported in part by the National Natural Science Foundation of China under Grants 61472304, 61432914, and U1605252, in part by the Fundamental Research Funds for the Central Universities, and in part by the Innovation Fund of Xidian University.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Electronic Engineering, Xidian University, Xi’an, China
  2. Department of Computer Science and Technology, Tsinghua University, Beijing, China
