Visual Attention Model with a Novel Learning Strategy and Its Application to Target Detection from SAR Images

  • Conference paper
  • In: Advances in Brain Inspired Cognitive Systems (BICS 2016)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10023)

Abstract

The selective visual attention mechanism in the human visual system helps humans act efficiently when dealing with massive amounts of visual information. Over the last two decades, biologically inspired attention models have drawn considerable research interest and many models have been proposed. However, the top-down cues in the human brain are still not fully understood, which limits the biological plausibility of existing top-down models. This paper proposes an attention model containing both a bottom-up stage and a top-down stage for target detection in SAR (synthetic aperture radar) images. The bottom-up stage is based on the biologically inspired Itti model and is modified to fully account for the characteristics of SAR images. The top-down stage contains a novel learning strategy that makes full use of prior information; it is an extension of the bottom-up process and is more biologically plausible. Experiments on vehicle detection in different scenes validate the proposed model through comparison with the well-known CFAR (constant false alarm rate) algorithm.
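
To make the bottom-up stage concrete, the sketch below illustrates an Itti-style center-surround saliency computation on a single-channel SAR intensity image. It is a minimal illustration under assumed choices (an intensity-contrast feature only, hand-picked Gaussian scales, simple max normalisation) and does not reproduce the authors' SAR-specific modifications or the top-down learning strategy.

    # Minimal sketch of an Itti-style bottom-up saliency map for a SAR image.
    # The feature set (intensity contrast only), the scale parameters and the
    # normalisation are illustrative assumptions, not the paper's exact model.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bottom_up_saliency(img, center_sigmas=(1.0, 2.0), surround_offsets=(3.0, 4.0)):
        """Multi-scale center-surround intensity contrast summed into one map."""
        img = img.astype(np.float64)
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # scale to [0, 1]
        saliency = np.zeros_like(img)
        for c in center_sigmas:                      # fine ("center") scales
            center = gaussian_filter(img, sigma=c)
            for d in surround_offsets:               # coarse ("surround") scales
                surround = gaussian_filter(img, sigma=c + d)
                # Bright, locally contrasting regions (e.g. vehicles against
                # clutter) yield large center-surround differences.
                saliency += np.abs(center - surround)
        return saliency / (saliency.max() + 1e-12)

    # Typical use: threshold the map (e.g. with Otsu's method, ref. 22) to obtain
    # candidate target regions, which a top-down stage could then re-rank using
    # prior information about the targets.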

This work was supported by the National Natural Science Foundation of China (61071139; 61471019; 61171122; 61501011), the Aeronautical Science Foundation of China (20142051022), the Pre-research Project (9140A07040515HK01009), the National Natural Science Foundation of China (NNSFC) under the RSE-NNSFC Joint Project (2012-2014) (61211130210) with Beihang University, and the RSE-NNSFC Joint Project (2012-2014) (61211130309) with Anhui University.

References

  1. Borji, A., Itti, L.: State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 35, 185–207 (2013)

  2. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12, 97–136 (1980)

  3. Koch, C., Ullman, S.: Shifts in selective visual attention: towards the underlying neural circuitry. Hum. Neurobiol. 4, 219–227 (1985)

  4. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)

  5. Hopfinger, J.B., Buonocore, M.H., Mangun, G.R.: The neural mechanisms of top-down attentional control. Nat. Neurosci. 3, 284–291 (2000)

  6. Zhang, L., Lin, W.: Computational models for top-down visual attention. In: Selective Visual Attention: Computational Models and Applications, pp. 167–205. Wiley-IEEE Press (2013)

  7. Wolfe, J.M.: Guided search 2.0: a revised model of visual search. Psychon. Bull. Rev. 1, 202–238 (1994)

  8. Navalpakkam, V., Itti, L.: An integrated model of top-down and bottom-up attention for optimizing detection speed. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), pp. 2049–2056 (2006)

  9. Frintrop, S.: VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. LNCS (LNAI), vol. 3899. Springer, Heidelberg (2006)

  10. Armanfard, Z., Bahmani, H., Nasrabadi, A.M.: A novel feature fusion technique in saliency-based visual attention. In: International Conference on Advances in Computational Tools for Engineering Applications (ACTEA 2009), pp. 230–233 (2009)

  11. Han, H., Tcheang, L., Walsh, V., Gao, X.: A novel feature combination methods for saliency-based visual attention. In: 2009 Fifth International Conference on Natural Computation, pp. 18–22 (2009)

  12. Tsotsos, J.K., Culhane, S.M., Wai, W.Y.K., Lai, Y., Davis, N., Nuflo, F.: Modeling visual attention via selective tuning. Artif. Intell. 78, 507–545 (1995)

  13. Kim, B., Ban, S.W., Lee, M.: Growing fuzzy topology adaptive resonance theory models with a push–pull learning algorithm. Neurocomputing 74, 646–655 (2011)

  14. Borji, A., Ahmadabadi, M.N., Araabi, B.N.: Cost-sensitive learning of top-down modulation for attentional control. Mach. Vis. Appl. 22, 61–76 (2011)

  15. Ban, S.W., Kim, B., Lee, M.: Top-down visual selective attention model combined with bottom-up saliency map for incremental object perception. In: The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2010)

  16. Yang, J., Yang, M.H.: Top-down visual saliency via joint CRF and dictionary learning. IEEE Trans. Pattern Anal. Mach. Intell. PP, 1 (2016)

  17. Triesch, J., Ballard, D.H., Hayhoe, M.M., Sullivan, B.T.: What you see is what you need. IEEE Comput. Soc. 3, 102 (2003)

  18. Najemnik, J., Geisler, W.S.: Optimal eye movement strategies in visual search. Nature 434, 387–391 (2005)

  19. Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32, 1627–1645 (2010)

  20. Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vis. 57, 137–154 (2004)

  21. Itti, L., Koch, C.: Computational modelling of visual attention. Nat. Rev. Neurosci. 2, 194–203 (2001)

  22. Ohtsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979)

Author information


Corresponding author

Correspondence to Jun Wang.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Gao, F., Xue, X., Wang, J., Sun, J., Hussain, A., Yang, E. (2016). Visual Attention Model with a Novel Learning Strategy and Its Application to Target Detection from SAR Images. In: Liu, CL., Hussain, A., Luo, B., Tan, K., Zeng, Y., Zhang, Z. (eds) Advances in Brain Inspired Cognitive Systems. BICS 2016. Lecture Notes in Computer Science (LNAI), vol. 10023. Springer, Cham. https://doi.org/10.1007/978-3-319-49685-6_14

  • DOI: https://doi.org/10.1007/978-3-319-49685-6_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-49684-9

  • Online ISBN: 978-3-319-49685-6

  • eBook Packages: Computer Science, Computer Science (R0)
