Abstract
Visual attention is the ability of a vision system, biological or artificial, to rapidly detect potentially relevant parts of a visual scene. The saliency-based model of visual attention is widely used to simulate this mechanism on computers. Though biologically inspired, the model has been only partially assessed against human behavior. The research described in this paper assesses its performance on natural scenes, i.e. real 3D color scenes. The evaluation compares computer saliency maps with human visual attention, derived from fixation patterns recorded while subjects look at the scenes. The paper presents a number of experiments involving natural scenes and computer models that differ in their ability to handle color and depth. The results point to a large range of scene-specific performance variations and provide typical quantitative performance values for models of different complexity.
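The comparison described above can be illustrated with a minimal sketch. It scores a saliency map against human fixation locations by sampling the map, normalized to zero mean and unit variance, at each fixation; a score near zero indicates chance-level agreement, while higher values indicate that fixations land on salient regions. This is one common style of metric; the paper's own comparison measure may differ, and the function and array shapes here are illustrative assumptions.

```python
import numpy as np

def fixation_saliency_score(saliency_map, fixations):
    """Score how well a saliency map predicts human fixations.

    Returns the mean saliency at the fixated locations, expressed
    in standard deviations above the map's mean (chance level is
    roughly 0; higher means fixations fall on salient regions).
    """
    s = np.asarray(saliency_map, dtype=float)
    # Normalize the map to zero mean and unit variance.
    s = (s - s.mean()) / s.std()
    # Sample the normalized saliency at each fixation (row, col).
    values = [s[r, c] for r, c in fixations]
    return float(np.mean(values))

# Toy example: a map with one salient blob and fixations on it.
smap = np.zeros((10, 10))
smap[4:6, 4:6] = 1.0
score = fixation_saliency_score(smap, [(4, 4), (5, 5)])
print(score > 0)  # fixations on the salient blob score above chance
```

In practice the map would come from a computational attention model and the fixations from eye-tracking recordings, with scores averaged over subjects and scenes.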
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Hügli, H., Jost, T., Ouerhani, N. (2005). Model Performance for Visual Attention in Real 3D Color Scenes. In: Mira, J., Álvarez, J.R. (eds) Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach. IWINAC 2005. Lecture Notes in Computer Science, vol 3562. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11499305_48
DOI: https://doi.org/10.1007/11499305_48
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-26319-7
Online ISBN: 978-3-540-31673-2
eBook Packages: Computer Science (R0)