
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3562)

Abstract

Visual attention is the ability of a vision system, be it biological or artificial, to rapidly detect potentially relevant parts of a visual scene. The saliency-based model of visual attention is widely used to simulate this visual mechanism on computers. Though biologically inspired, this model has been only partially assessed in comparison with human behavior. The research described in this paper aims to assess its performance in the case of natural scenes, i.e. real 3D color scenes. The evaluation is based on the comparison of computer saliency maps with human visual attention derived from fixation patterns recorded while subjects look at the scenes. The paper presents a number of experiments involving natural scenes and computer models differing in their capacity to deal with color and depth. The results point to the large range of scene-specific performance variations and provide typical quantitative performance values for models of different complexity.
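As a rough illustration of the kind of saliency-map versus fixation comparison the abstract describes, the sketch below scores a saliency map by the mean saliency it assigns at recorded fixation locations, normalized by the map's overall mean so that 1.0 corresponds to chance. This metric and the function name `fixation_saliency_score` are assumptions for illustration only; the paper's own evaluation procedure may differ.

```python
# Minimal sketch: compare a computed saliency map with human fixation data.
# Assumption: score = mean saliency at fixations / mean saliency of the map,
# so values above 1.0 indicate fixations fall on salient regions more often
# than chance. This is an illustrative metric, not the paper's exact method.
import numpy as np

def fixation_saliency_score(saliency_map: np.ndarray,
                            fixations: np.ndarray) -> float:
    """saliency_map: 2D array of saliency values.
    fixations: (N, 2) array of (row, col) fixation coordinates.
    Returns mean saliency at fixations divided by the map's mean saliency."""
    rows = np.clip(fixations[:, 0].astype(int), 0, saliency_map.shape[0] - 1)
    cols = np.clip(fixations[:, 1].astype(int), 0, saliency_map.shape[1] - 1)
    at_fixations = saliency_map[rows, cols].mean()
    baseline = saliency_map.mean()
    return float(at_fixations / baseline) if baseline > 0 else 0.0

# Usage with synthetic data (a random map and random fixations).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smap = rng.random((480, 640))          # stand-in for a computed saliency map
    fix = rng.integers(0, [480, 640], (20, 2))  # stand-in for recorded fixations
    print(f"score (1.0 ~ chance): {fixation_saliency_score(smap, fix):.2f}")
```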

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hügli, H., Jost, T., Ouerhani, N. (2005). Model Performance for Visual Attention in Real 3D Color Scenes. In: Mira, J., Álvarez, J.R. (eds) Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach. IWINAC 2005. Lecture Notes in Computer Science, vol 3562. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11499305_48

  • DOI: https://doi.org/10.1007/11499305_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26319-7

  • Online ISBN: 978-3-540-31673-2

  • eBook Packages: Computer Science (R0)
