Validating the Visual Saliency Model

  • Ali Alsam
  • Puneet Sharma
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7944)

Abstract

Bottom-up attention models suggest that human eye movements can be predicted by algorithms that compute the difference between a region and its surround at several image scales: the more a region differs from its surround, the more salient it is, and hence the more fixations it should attract. Recent studies, however, have demonstrated that a dummy classifier that simply assigns more weight to the center of the image outperforms the best saliency algorithms, calling into question the validity of these algorithms and the bottom-up attention models behind them. In this paper, we performed an experiment using linear discriminant analysis to separate the saliency values obtained for regions that were fixated from those that were not. Our working hypothesis was that the ability to separate the two classes of regions would constitute evidence for the validity of the saliency model. Our results show that the saliency model predicts non-salient and highly salient regions well, but performs no better than a random classifier in the middle range of saliency.
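
As a rough illustration of the validation procedure the abstract describes, the sketch below uses linear discriminant analysis to separate saliency values sampled at fixated locations from values sampled at non-fixated ones. It is not the authors' implementation: the data are synthetic stand-ins, and the use of scikit-learn's LinearDiscriminantAnalysis, the chosen distributions, and the sample sizes are all assumptions made for the example.

    # Minimal sketch (assumed setup, not the authors' code): can a linear
    # discriminant separate saliency values at fixated vs. non-fixated regions?
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic per-region saliency scores: fixated regions are given a
    # slightly higher mean, mimicking a partially informative saliency model.
    fixated = rng.normal(loc=0.6, scale=0.2, size=500)
    non_fixated = rng.normal(loc=0.4, scale=0.2, size=500)

    X = np.concatenate([fixated, non_fixated]).reshape(-1, 1)  # feature: saliency value
    y = np.concatenate([np.ones(500), np.zeros(500)])          # label: 1 = fixated

    # Cross-validated accuracy of the discriminant; chance level is 0.5.
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
    print(f"Mean classification accuracy: {scores.mean():.3f}")

Binning the regions by saliency level before fitting such a classifier would test the pattern reported above: good separation at the low and high extremes, chance-level accuracy in the mid-range.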

Keywords

Saliency, fixations

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Ali Alsam (1)
  • Puneet Sharma (1)

  1. Department of Informatics & e-Learning (AITeL), Sør-Trøndelag University College (HiST), Trondheim, Norway
