Can Saliency Map Models Predict Human Egocentric Visual Attention?

  • Kentaro Yamada
  • Yusuke Sugano
  • Takahiro Okabe
  • Yoichi Sato
  • Akihiro Sugimoto
  • Kazuo Hiraki
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6468)

Abstract

The validity of using conventional saliency map models to predict human attention was investigated for video captured with an egocentric camera. Because conventional visual saliency models do not take into account the visual motion caused by camera motion, they may erroneously assign high saliency to regions that do not actually attract human attention. To evaluate the validity of using saliency map models for egocentric vision, an experiment was carried out to examine the correlation between computed visual saliency maps and gaze points measured during egocentric viewing. The results show that conventional saliency map models predict visually salient regions better than chance for egocentric vision, but that their accuracy decreases significantly as the visual motion induced by egomotion increases, presumably because such motion is compensated for in the human visual system. This finding indicates the need for a visual saliency model that better predicts human visual attention from egocentric video.
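
To make the evaluation concrete: the comparison described above amounts to computing a bottom-up saliency map for each video frame and measuring how well it separates measured gaze points from uniformly sampled control points, where an AUC of 0.5 corresponds to chance. The Python sketch below is an illustration under stated assumptions, not the authors' implementation: it assumes NumPy and OpenCV, uses only an intensity channel with Gaussian-pyramid center-surround differences (one component of the Itti-Koch-Niebur model), and the names intensity_saliency and auc_vs_chance are hypothetical.

    # Minimal sketch (assumes NumPy and OpenCV; not the authors' code):
    # intensity-only center-surround saliency plus a chance-level AUC check.
    import cv2
    import numpy as np

    def intensity_saliency(frame_bgr, levels=6):
        """Center-surround saliency from a Gaussian pyramid (intensity only)."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        pyramid = [gray]
        for _ in range(levels):
            pyramid.append(cv2.pyrDown(pyramid[-1]))
        h, w = gray.shape
        saliency = np.zeros((h, w), np.float32)
        # Center-surround: fine (center) minus coarse (surround) pyramid levels.
        for c in (2, 3):
            for delta in (2, 3):
                center = cv2.resize(pyramid[c], (w, h))
                surround = cv2.resize(pyramid[c + delta], (w, h))
                saliency += np.abs(center - surround)
        return saliency / saliency.max() if saliency.max() > 0 else saliency

    def auc_vs_chance(saliency, gaze_xy, n_random=1000, seed=0):
        """AUC of saliency at gaze points vs. uniform random points (0.5 = chance)."""
        rng = np.random.default_rng(seed)
        h, w = saliency.shape
        pos = np.array([saliency[y, x] for x, y in gaze_xy])
        neg = saliency[rng.integers(0, h, n_random), rng.integers(0, w, n_random)]
        # AUC = probability that a gaze-point value exceeds a random-point value.
        return float((pos[:, None] > neg[None, :]).mean())

Averaging auc_vs_chance(intensity_saliency(frame), gaze_points) over all frames gives the kind of better-than-chance comparison reported above; given the abstract's finding, one would expect this score to drop on segments with strong egomotion.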

Keywords

Visual Attention · Visual Saliency · Gaussian Pyramid · Egocentric Perspective · Scene Camera

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Kentaro Yamada (1)
  • Yusuke Sugano (1)
  • Takahiro Okabe (1)
  • Yoichi Sato (1)
  • Akihiro Sugimoto (2)
  • Kazuo Hiraki (3)
  1. The University of Tokyo, Tokyo, Japan
  2. National Institute of Informatics, Tokyo, Japan
  3. The University of Tokyo, Tokyo, Japan