
Pair-Comparing Based Convolutional Neural Network for Blind Image Quality Assessment

  • Xue Qin
  • Tao Xiang
  • Ying Yang
  • Xiaofeng Liao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11555)

Abstract

The introduction of convolutional neural networks (CNNs) into no-reference image quality assessment (NR-IQA) has greatly improved prediction accuracy, but CNN performance depends on the number of training samples, and many widely used image quality databases cannot provide enough samples for CNN training. In this paper, we propose a pair-comparing convolutional neural network (PC-CNN) for blind image quality assessment. By taking reference images into consideration, we generate additional training samples in the form of patch pairs drawn from different combinations of distorted images and their reference images. We build a new CNN that takes a patch pair as two inputs and produces two outputs, one predicted quality score per patch. Extensive experiments show that the proposed PC-CNN outperforms many state-of-the-art methods.
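The abstract describes the network only at a high level. The sketch below illustrates the general idea of a two-input, two-output patch-scoring CNN; it is not the paper's architecture. The framework (PyTorch), the 32x32 grayscale patch size, the use of a shared feature extractor, and all layer sizes are assumptions chosen for illustration.

    # Minimal sketch of a pair-comparing patch-scoring CNN (all details hypothetical).
    import torch
    import torch.nn as nn

    class PCCNNSketch(nn.Module):
        """Takes a pair of image patches and predicts one quality score per patch."""
        def __init__(self):
            super().__init__()
            # Convolutional feature extractor, shared across the two inputs
            # (weight sharing is an assumption, not stated in the abstract).
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=7), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),
                nn.AdaptiveMaxPool2d(1),
            )
            # Regression head mapping per-patch features to a scalar score.
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )

        def forward(self, patch_a, patch_b):
            # Two inputs (a patch pair), two outputs (a predicted score per patch).
            score_a = self.head(self.features(patch_a))
            score_b = self.head(self.features(patch_b))
            return score_a, score_b

    # Usage: a batch of 8 patch pairs, e.g. one patch from a distorted image
    # and the co-located patch from its reference image.
    model = PCCNNSketch()
    sa, sb = model(torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32))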

Keywords

No-reference image quality assessment · Convolutional neural network · Deep learning · Human visual system


Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61672118).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. College of Computer Science, Chongqing University, Chongqing, China
