Human Visual System and Vision Modeling

  • Yong Ding
Chapter

Abstract

Computational modeling of the human visual system (HVS) is closely connected with image quality assessment (IQA), since the quality of a visual signal is ultimately judged by the HVS. Basic knowledge of the HVS, especially of the parts responsible for quality perception, is therefore necessary for studying IQA. This chapter gives a general introduction to the anatomical structure and the important properties of the HVS. The anatomical structure provides a straightforward understanding of the HVS, including the hierarchical flow of signal transmission and processing and the responsibilities of each specific part. The properties of the HVS are abstractions of this biological basis, summarized to offer guidance for the design of objective IQA methods.
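
As a concrete illustration of how such properties can guide objective IQA design, the sketch below weights an image's frequency spectrum by a contrast sensitivity function (CSF), a step that appears in many classical HVS-based metrics. It is a minimal sketch only: the CSF curve used is the commonly cited Mannos-Sakrison approximation, and the pixels-per-degree value is an assumed viewing-condition parameter chosen for illustration, not something prescribed by this chapter.

    import numpy as np

    def csf_weight(freq_cpd):
        # Commonly cited Mannos-Sakrison CSF approximation;
        # freq_cpd is spatial frequency in cycles per degree.
        return 2.6 * (0.0192 + 0.114 * freq_cpd) * np.exp(-(0.114 * freq_cpd) ** 1.1)

    def csf_filter(image, pixels_per_degree=32.0):
        # Attenuate frequency components the eye is less sensitive to.
        # pixels_per_degree is an assumed viewing-condition parameter.
        h, w = image.shape
        fy = np.fft.fftfreq(h) * pixels_per_degree   # cycles/degree, vertical
        fx = np.fft.fftfreq(w) * pixels_per_degree   # cycles/degree, horizontal
        radial = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
        spectrum = np.fft.fft2(image)
        return np.real(np.fft.ifft2(spectrum * csf_weight(radial)))

For a grayscale array img, csf_filter(img) returns a version in which frequency components the eye is less sensitive to are attenuated, so that signal differences computed afterward better reflect visible, rather than purely numerical, distortion.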

Keywords

Human visual system · Anatomy structures · Properties


Copyright information

© Zhejiang University Press, Hangzhou and Springer-Verlag GmbH Germany 2018

Authors and Affiliations

  1. Zhejiang University, Hangzhou, China