
Generating Sequence of Eye Fixations Using Decision-Theoretic Attention Model

  • Erdan Gu
  • Jingbin Wang
  • Norman I. Badler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4840)

Abstract

Human eyes scan images with a series of eye fixations. We propose a novel attention selectivity model for the automatic generation of eye fixations on 2D static scenes. An activation map is first computed by extracting primary visual features and detecting meaningful objects in the scene. An adaptable retinal filter is then applied to this map to generate “Regions of Interest” (ROIs), whose locations correspond to activation peaks and whose sizes are estimated by an iterative adjustment algorithm. The focus of attention is moved serially over the detected ROIs by a decision-theoretic mechanism: the sequence of eye fixations is determined by a perceptual benefit function that weighs perceptual rewards against costs, while the distribution of fixation time across ROIs is estimated by a memory learning and decaying model. Finally, to demonstrate the effectiveness of the proposed attention model, the simulated fixation shifts are compared with gaze-tracking results from different human subjects.
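
To make the decision-theoretic fixation loop concrete, the Python sketch below illustrates one possible reading of the abstract: ROIs are reduced to centers with scalar activation values, the perceptual reward of an ROI is its activation attenuated by a memory term (so recently attended regions are suppressed and gradually recover), and the perceptual cost is proportional to the saccade distance from the current gaze position. The function name, the linear benefit formula, and all constants are illustrative assumptions, not the paper's actual formulation.

import numpy as np

# Hypothetical sketch of the decision-theoretic fixation loop: at each step the
# ROI with the highest perceptual benefit (reward minus cost) is attended, and a
# decaying memory term suppresses recently visited ROIs until they recover.

def select_fixation_sequence(rois, start, n_fixations=5, cost_weight=0.5, decay=0.7):
    """rois: list of dicts with 'center' (x, y) and 'activation' (peak value).
    Returns the ordered list of ROI indices attended."""
    memory = np.zeros(len(rois))            # 1.0 = just attended, 0.0 = forgotten
    pos = np.asarray(start, dtype=float)
    sequence = []
    for _ in range(n_fixations):
        best_idx, best_benefit = None, -np.inf
        for i, roi in enumerate(rois):
            center = np.asarray(roi['center'], dtype=float)
            # Reward: the ROI's activation, attenuated while it is still remembered.
            reward = roi['activation'] * (1.0 - memory[i])
            # Cost: proportional to the saccade distance from the current gaze point.
            cost = cost_weight * np.linalg.norm(center - pos)
            benefit = reward - cost
            if benefit > best_benefit:
                best_idx, best_benefit = i, benefit
        sequence.append(best_idx)
        pos = np.asarray(rois[best_idx]['center'], dtype=float)
        memory *= decay                     # memories of earlier fixations decay
        memory[best_idx] = 1.0              # the ROI just attended is fully remembered
    return sequence

if __name__ == "__main__":
    # Toy scene with three ROIs; coordinates are normalized to the unit square.
    rois = [
        {'center': (0.2, 0.3), 'activation': 0.9},
        {'center': (0.8, 0.3), 'activation': 0.7},
        {'center': (0.5, 0.85), 'activation': 0.5},
    ]
    print(select_fixation_sequence(rois, start=(0.5, 0.5)))

Because the memory term decays, previously attended ROIs eventually become rewarding again, which produces the revisits and varying dwell times that the abstract attributes to the memory learning and decaying model.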

Keywords

Visual Attention · Scene Image · Active Appearance Model · Coherence Result · Perceptual Cost



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Erdan Gu (1)
  • Jingbin Wang (2)
  • Norman I. Badler (1)
  1. University of Pennsylvania, Philadelphia, PA 19104-6389, USA
  2. Boston University, Boston, MA 02215, USA
