
No-reference synthetic image quality assessment with convolutional neural network and local image saliency

  • Xiaochuan Wang
  • Xiaohui Liang
  • Bailin Yang
  • Frederick W. B. Li
Open Access
Research Article

Abstract

Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Synthetic images generated by DIBR-based systems typically contain various distortions, particularly geometric distortions induced by object disocclusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images because they are insensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, using local image saliency as prediction weights. Because suitable training data are lacking, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. The results demonstrate that the proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.
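
The approach the abstract outlines, patch-wise quality prediction by a CNN pooled using local saliency as weights, can be sketched as follows. This is a minimal illustration only: the 32x32 patch size, the layer dimensions, and the saliency source are assumptions made for the example, not the authors' actual architecture.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact model):
# a patch-based CNN predicts a quality score for each 32x32 patch, and the
# per-patch scores are pooled into one image score weighted by local saliency.
import torch
import torch.nn as nn

class PatchQualityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 50, kernel_size=7),  # 32x32 patch -> 26x26 feature maps
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=26),     # global pooling over each patch
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(50, 800),
            nn.ReLU(),
            nn.Linear(800, 1),                # scalar quality score per patch
        )

    def forward(self, patches):               # patches: (N, 1, 32, 32)
        return self.regressor(self.features(patches)).squeeze(1)  # (N,)

def saliency_weighted_score(patch_scores, patch_saliency, eps=1e-8):
    """Pool per-patch scores into an image-level score, weighted by saliency."""
    weights = patch_saliency / (patch_saliency.sum() + eps)
    return (weights * patch_scores).sum()

# Usage with hypothetical data: 64 grayscale patches and one mean saliency
# value per patch (in practice these would come from a saliency detector).
patches = torch.rand(64, 1, 32, 32)
saliency = torch.rand(64)
model = PatchQualityCNN()
image_score = saliency_weighted_score(model(patches), saliency)
```

Weighting by saliency lets the patches viewers are likely to attend to, where disocclusion-induced geometric distortions are most objectionable, dominate the final score, whereas a plain average would treat salient and background patches alike.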

Keywords

image quality assessment; synthetic image; depth-image-based rendering (DIBR); convolutional neural network; local image saliency

Notes

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments. They also thank Kai Wang and Jialei Li for their assistance in constructing and publicly releasing the dataset. This work was sponsored by the National Key R&D Program of China (No. 2017YFB1002702) and the National Natural Science Foundation of China (Nos. 61572058, 61472363).

Copyright information

© The Author(s) 2019

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from https://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

Authors and Affiliations

  • Xiaochuan Wang
    • 1
  • Xiaohui Liang
    • 1
  • Bailin Yang
    • 2
  • Frederick W. B. Li
    • 3
  1. State Key Laboratory of Virtual Reality Technology and System, Beihang University, Beijing, China
  2. School of Computer Science & Information Engineering, Zhejiang Gongshang University, Hangzhou, China
  3. Department of Computer Science, University of Durham, Durham, UK
