An Attentional System Combining Top-Down and Bottom-Up Influences

  • Babak Rasolzadeh
  • Alireza Tavakoli Targhi
  • Jan-Olof Eklundh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4840)

Abstract

Attention plays an important role in human processing of sensory information as a means of focusing resources on the most important inputs at any given moment. It has in particular been shown to be a key component of vision, where attentional processes are argued to be crucial for dealing with the complexity of real-world scenes. The problem has often been posed in terms of visual search tasks. It has been shown that both the use of prior task and context information (top-down influences) and the favoring of information that stands out clearly in the visual field (bottom-up influences) can make such search more efficient. In a generic scene-analysis situation one presumably has a combination of these influences, and a computational model of visual attention should therefore contain a mechanism for their integration. Such models are abundant for human vision, but relatively few attempts have been made to define models that apply to computer vision.

In this article we describe a model that performs such a combination in a principled way. The system learns an optimal representation of the influences of task and context and thereby constructs a biased saliency map representing the top-down information. This map is combined with bottom-up saliency maps in a process that evolves over time as a function of the input. The system is applied to search tasks in single images as well as in real scenes, in the latter case using an active vision system capable of shifting its gaze. The proposed model is shown to have the desired qualities and to go beyond earlier proposed systems.
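As a rough illustration of the kind of combination the abstract describes, the sketch below blends a bottom-up saliency map with a task-biased top-down map using a scalar weight that may vary over time. The function names, the linear weighting scheme, and the random stand-in feature maps are illustrative assumptions, not the authors' implementation or learned representation.

```python
# Minimal sketch (not the authors' system): combining bottom-up and
# top-down saliency maps with a time-varying weight k in [0, 1].
import numpy as np

def normalize(m):
    """Scale a map to [0, 1]; return zeros if the map is flat."""
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def bottom_up_saliency(feature_maps):
    """Unweighted combination of feature conspicuity maps
    (e.g. colour, intensity, orientation), as in classical saliency models."""
    return normalize(sum(normalize(f) for f in feature_maps))

def top_down_saliency(feature_maps, task_weights):
    """Bias each feature map by a task/context weight (assumed learned offline)."""
    return normalize(sum(w * normalize(f)
                         for f, w in zip(feature_maps, task_weights)))

def combined_saliency(feature_maps, task_weights, k):
    """Blend the two maps; k = 0 is purely bottom-up, k = 1 purely top-down,
    and k may evolve over time as a function of the input."""
    s_bu = bottom_up_saliency(feature_maps)
    s_td = top_down_saliency(feature_maps, task_weights)
    return (1.0 - k) * s_bu + k * s_td

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = [rng.random((64, 64)) for _ in range(3)]   # stand-in feature maps
    weights = [0.7, 0.1, 0.2]                         # hypothetical task bias
    s = combined_saliency(maps, weights, k=0.5)
    y, x = np.unravel_index(np.argmax(s), s.shape)    # next fixation candidate
    print("most salient location:", (x, y))
```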

Keywords

Visual Attention · Texture Descriptor · Visual Search Task · Salient Region · Spiking Neural Network

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Babak Rasolzadeh¹
  • Alireza Tavakoli Targhi¹
  • Jan-Olof Eklundh¹
  1. Computer Vision and Active Perception Laboratory, CSC, KTH, SE-100 44 Stockholm, Sweden
