
Cognitive Computation, Volume 10, Issue 6, pp 980–990

A No-Reference Image Quality Measure for Blurred and Compressed Images Using Sparsity Features

  • Kanjar De
  • V. Masilamani

Article

Abstract

Images in the real world can be distorted by many sources, such as faulty sensors, artifacts generated by compression algorithms, defocus, faulty lenses, and poor lighting conditions. The human visual system can judge the quality of an image simply by looking at it, but developing an algorithm to assess image quality is a very challenging task, because an image can be corrupted by different types of distortion whose statistical properties are dissimilar. The main objective of this article is to propose an image quality assessment technique for images corrupted by blurring and compression-based artifacts. Machine learning-based approaches have been used in recent times for this task. Images can be analyzed in different transform domains, such as the discrete cosine transform, wavelet, and curvelet domains, as well as via singular value decomposition; these representations yield sparse matrices. In this paper, we propose no-reference image quality assessment algorithms for images corrupted by blur and by different compression algorithms, using sparsity-based features computed from several domains and pooled by support vector regression. The proposed model has been tested on three standard image quality assessment datasets (LIVE, CSIQ, and TID2013); its correlation with subjective human opinion scores is reported, along with a comparative study against state-of-the-art quality measures. Experiments on these standard databases show that the proposed measure outperforms existing methods.
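The pipeline the abstract describes can be sketched in a few lines. The sketch below uses Hoyer's sparsity measure as one concrete sparsity feature, computed over two of the domains mentioned (2-D DCT coefficients and singular values), with the feature vectors pooled by support vector regression. The specific feature set and the randomly generated training data are illustrative placeholders, not the authors' exact features or experimental setup.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVR

def hoyer_sparsity(x):
    """Hoyer's sparsity measure: 0 for a uniformly dense vector,
    approaching 1 for a maximally sparse one."""
    x = np.abs(x).ravel()
    n = x.size
    l1 = x.sum()
    l2 = np.sqrt((x ** 2).sum())
    if l2 == 0.0:
        return 0.0
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def sparsity_features(img):
    """Sparsity of the image in two transform domains (2-D DCT and SVD).
    A fuller feature set would add wavelet and curvelet coefficients."""
    img = img.astype(np.float64)
    # 2-D DCT via separable 1-D transforms along rows and columns
    dct_coeffs = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    singular_values = np.linalg.svd(img, compute_uv=False)
    return np.array([hoyer_sparsity(dct_coeffs),
                     hoyer_sparsity(singular_values)])

# Pool features with SVR against subjective scores (placeholder data;
# in the paper, y would be the human opinion scores of a standard dataset)
rng = np.random.default_rng(0)
images = [rng.random((32, 32)) for _ in range(20)]
X = np.vstack([sparsity_features(im) for im in images])
y = rng.random(20)  # stand-in for DMOS/MOS values
model = SVR(kernel='rbf').fit(X, y)
pred = model.predict(X[:1])
```

In a real evaluation, the regressor would be trained on distorted images with known opinion scores and its predictions correlated (e.g., via SROCC/PLCC) with the held-out subjective scores.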

Keywords

Image quality assessment · Sparsity · Support vector regression

Notes

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

The authors of this article have not performed any studies with human subjects or animals.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Indian Institute of Information Technology, Design & Manufacturing (IIITD&M) Kancheepuram, Chennai, India
