
Journal of Electrical Engineering & Technology, Volume 14, Issue 1, pp 487–496

No-Reference Image Quality Assessment Using Independent Component Analysis and Convolutional Neural Network

  • Chuang Zhang
  • Jiawei Xu
  • Xiaoyu Huang
  • Seop Hyeong Park
Original Article

Abstract

As digital images have become a primary medium across a broad range of applications, there is growing interest in the development of automatic objective image quality assessment (IQA) methods. In this paper, a novel no-reference IQA (NRIQA) algorithm is proposed based on independent component analysis and a convolutional neural network. The proposed NRIQA algorithm consists of three steps: selection of representative patches, extraction of features from the selected patches, and prediction of the image quality from those features. First, an image is divided into non-overlapping patches, and the patches with properties suitable for assessing the overall image quality are selected; we refer to these as image quality patches. The largest infinity norm of the gradient of each patch serves as the selection criterion. Second, independent component analysis (ICA) is employed to extract the features of the image quality patches. Finally, a convolutional neural network (CNN) is applied to the independent component coefficients of the image quality patches to predict the corresponding differential mean opinion score (DMOS). We compared the performance of the proposed NRIQA metric with other IQA metrics in terms of PCC, SROCC, and RMSE on the LIVE2, CSIQ, and TID2008/TID2013 databases. The PCC, SROCC, and RMSE values reach 0.996, 0.999, and 6.011, respectively, on the TID2013 database. The comparison results show that the proposed metric is superior to commonly used IQA metrics.
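The first step of the pipeline — selecting image quality patches by the largest infinity norm of the patch gradient — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a grayscale input, and the patch size and patch count are illustrative choices, since the abstract does not specify them.

```python
import numpy as np

def select_quality_patches(image, patch_size=32, num_patches=16):
    """Select non-overlapping patches with the largest infinity norm of
    the image gradient, following the selection criterion in the abstract.
    `patch_size` and `num_patches` are hypothetical parameters."""
    h, w = image.shape
    # Vertical and horizontal gradient components of the whole image.
    gy, gx = np.gradient(image.astype(np.float64))
    scored = []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            # Infinity norm: the largest absolute gradient value in the patch.
            g_inf = max(np.abs(gx[i:i + patch_size, j:j + patch_size]).max(),
                        np.abs(gy[i:i + patch_size, j:j + patch_size]).max())
            scored.append((g_inf, (i, j)))
    # Keep the patches whose gradient infinity norms are largest.
    scored.sort(key=lambda t: t[0], reverse=True)
    return [image[i:i + patch_size, j:j + patch_size]
            for _, (i, j) in scored[:num_patches]]
```

Patches with a large gradient infinity norm contain strong edges or textures, which tend to be more informative about distortion than smooth regions.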

Keywords

Convolutional neural network · Independent component analysis · No-reference image quality assessment · Patch selection
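The three evaluation criteria used in the performance comparison (PCC, SROCC, and RMSE between predicted scores and subjective DMOS values) are standard and can be computed as below; the function name and interface are illustrative.

```python
import numpy as np
from scipy import stats

def iqa_metrics(predicted, dmos):
    """Pearson correlation (PCC), Spearman rank-order correlation (SROCC),
    and root-mean-square error (RMSE) between predicted quality scores
    and subjective DMOS values."""
    predicted = np.asarray(predicted, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    pcc = stats.pearsonr(predicted, dmos)[0]
    srocc = stats.spearmanr(predicted, dmos)[0]
    rmse = float(np.sqrt(np.mean((predicted - dmos) ** 2)))
    return pcc, srocc, rmse
```

PCC measures prediction accuracy, SROCC measures prediction monotonicity, and RMSE measures prediction consistency; a better IQA metric has PCC and SROCC closer to 1 and a smaller RMSE.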

Notes

Acknowledgements

This research was supported by Jiangsu Key Laboratory of Meteorological Observation and Information Processing Open Project (No. KDXS1805), and by Hallym University Research Fund (HRF-201806-010).


Copyright information

© The Korean Institute of Electrical Engineers 2019

Authors and Affiliations

  • Chuang Zhang (1)
  • Jiawei Xu (1)
  • Xiaoyu Huang (1)
  • Seop Hyeong Park (2)
  1. Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Nanjing University of Information Science and Technology, Nanjing, China
  2. School of Software, Hallym University, Chuncheon, South Korea
