Abstract
The selective visual attention mechanism of the human visual system allows humans to act efficiently when confronted with massive amounts of visual information. Over the last two decades, biologically inspired attention models have drawn considerable research interest and many models have been proposed. However, the top-down cues in the human brain are still not fully understood, which limits the biological plausibility of existing top-down models. This paper proposes an attention model comprising both a bottom-up stage and a top-down stage for target detection from SAR (synthetic aperture radar) images. The bottom-up stage is based on the biologically inspired Itti model, modified to fully account for the characteristics of SAR images. The top-down stage contains a novel learning strategy that makes full use of prior information; it extends the bottom-up process and is more biologically plausible. The experiments in this work detect vehicles in different scenes to validate the proposed model against the well-known CFAR (constant false alarm rate) algorithm.
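The bottom-up stage builds on the Itti saliency model, whose core operation is centre-surround contrast. The following is a minimal sketch of that idea for the intensity channel only, assuming a grayscale image as a 2-D NumPy array; the full Itti model additionally uses colour and orientation channels computed over 9-level Gaussian pyramids, and the paper's SAR-specific modifications are not reproduced here. Centre-surround contrast is approximated by differencing a finely and a coarsely box-blurred version of the image; the window sizes `center_k` and `surround_k` are illustrative choices, not values from the paper.

```python
import numpy as np

def box_blur(img, k):
    """Blur `img` with a k x k averaging window (edge padding)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def saliency_intensity(img, center_k=3, surround_k=9):
    """Centre-surround contrast: |fine blur - coarse blur|, scaled to [0, 1]."""
    center = box_blur(img.astype(float), center_k)
    surround = box_blur(img.astype(float), surround_k)
    s = np.abs(center - surround)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else s
```

A small bright region on a dark background (the typical appearance of a vehicle in a SAR amplitude image) produces a high response around that region, which is the property the bottom-up stage exploits before the top-down stage re-weights the features using prior information.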
This work was supported by the National Natural Science Foundation of China (61071139; 61471019; 61171122; 61501011), the Aeronautical Science Foundation of China (20142051022), the Pre-research Project (9140A07040515HK01009), the National Natural Science Foundation of China (NNSFC) under the RSE-NNSFC Joint Project (2012-2014) (61211130210) with Beihang University, and the RSE-NNSFC Joint Project (2012-2014) (61211130309) with Anhui University.
References
Borji, A., Itti, L.: State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 35, 185–207 (2013)
Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12, 97–136 (1980)
Koch, C., Ullman, S.: Shifts in selective visual attention: towards the underlying neural circuitry. Hum. Neurobiol. 4, 219–227 (1985)
Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)
Hopfinger, J.B., Buonocore, M.H., Mangun, G.R.: The neural mechanisms of top-down attentional control. Nat. Neurosci. 3, 284–291 (2000)
Zhang, L., Lin, W.: Computational models for top-down visual attention. In: Selective Visual Attention: Computational Models and Applications, pp. 167–205. Wiley-IEEE Press (2013)
Wolfe, J.M.: Guided Search 2.0: a revised model of visual search. Psychon. Bull. Rev. 1, 202–238 (1994)
Navalpakkam, V., Itti, L.: An integrated model of top-down and bottom-up attention for optimizing detection speed. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2006, pp. 2049–2056 (2006)
Frintrop, S.: VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. LNCS (LNAI), vol. 3899. Springer, Heidelberg (2006)
Armanfard, Z., Bahmani, H., Nasrabadi, A.M.: A novel feature fusion technique in saliency-based visual attention. In: International Conference on Advances in Computational Tools for Engineering Applications, ACTEA 2009, pp. 230–233 (2009)
Han, H., Tcheang, L., Walsh, V., Gao, X.: A novel feature combination method for saliency-based visual attention. In: 2009 Fifth International Conference on Natural Computation, pp. 18–22 (2009)
Tsotsos, J.K., Culhane, S.M., Wai, W.Y.K., Lai, Y., Davis, N., Nuflo, F.: Modeling visual attention via selective tuning. Artif. Intell. 78, 507–545 (1995)
Kim, B., Ban, S.W., Lee, M.: Growing fuzzy topology adaptive resonance theory models with a push–pull learning algorithm. Neurocomputing 74, 646–655 (2011)
Borji, A., Ahmadabadi, M.N., Araabi, B.N.: Cost-sensitive learning of top-down modulation for attentional control. Mach. Vis. Appl. 22, 61–76 (2011)
Ban, S.W., Kim, B., Lee, M.: Top-down visual selective attention model combined with bottom-up saliency map for incremental object perception. In: The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2010)
Yang, J., Yang, M.H.: Top-down visual saliency via joint CRF and dictionary learning. IEEE Trans. Pattern Anal. Mach. Intell. PP, 1 (2016)
Triesch, J., Ballard, D.H., Hayhoe, M.M., Sullivan, B.T.: What you see is what you need. J. Vis. 3, 86–94 (2003)
Najemnik, J., Geisler, W.S.: Optimal eye movement strategies in visual search. Nature 434, 387–391 (2005)
Felzenszwalb, P.F., Girshick, R.B., Mcallester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32, 1627–1645 (2010)
Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vis. 57, 137–154 (2004)
Itti, L., Koch, C.: Computational modelling of visual attention. Nat. Rev. Neurosci. 2, 194–203 (2001)
Ohtsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979)
Copyright information
© 2016 Springer International Publishing AG
Cite this paper
Gao, F., Xue, X., Wang, J., Sun, J., Hussain, A., Yang, E. (2016). Visual Attention Model with a Novel Learning Strategy and Its Application to Target Detection from SAR Images. In: Liu, CL., Hussain, A., Luo, B., Tan, K., Zeng, Y., Zhang, Z. (eds) Advances in Brain Inspired Cognitive Systems. BICS 2016. Lecture Notes in Computer Science(), vol 10023. Springer, Cham. https://doi.org/10.1007/978-3-319-49685-6_14
Print ISBN: 978-3-319-49684-9
Online ISBN: 978-3-319-49685-6