
Bottom-up saliency detection for attention determination


In this paper, a saliency detection technique is proposed to model the human ability to attend to regions of interest. The intelligent saliency search scheme has two phases: saliency filtering and saliency refinement. In saliency filtering, non-salient regions of a scene image are filtered out by measuring information entropy and biological color sensitivity: the information entropy evaluates the level of knowledge and energy a region contains, and the color sensitivity measures the biological stimulation that a presented scene evokes. In saliency refinement, the candidate salient regions are refined into a good representation of saliency by extracting salient objects, mirroring the way people perceive a scene. The performance of the proposed technique is studied on noiseless and noisy natural scenes and evaluated against eye fixation data. The evaluation demonstrates the effectiveness of the approach in discovering salient regions and objects in scene images. Robustness to geometric transformation and illumination variance is also investigated.
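The saliency-filtering phase described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden simplification, not the paper's exact formulation: it uses only the information-entropy cue (no color-sensitivity term), scores fixed non-overlapping patches, and uses a hypothetical threshold; the function names `patch_entropy` and `entropy_saliency_filter` are illustrative.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (in bits) of a patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # ignore empty bins; 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def entropy_saliency_filter(image, patch=16, threshold=2.0):
    """Filter out non-salient regions: keep only patches whose
    intensity entropy exceeds the threshold. Returns a boolean mask
    over the image (True = candidate salient region)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if patch_entropy(image[y:y + patch, x:x + patch]) >= threshold:
                mask[y:y + patch, x:x + patch] = True
    return mask
```

A flat (uniform) patch concentrates its histogram in one bin and scores near-zero entropy, so it is filtered out; a textured patch spreads mass across bins and survives as a candidate for the refinement phase.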




Author information

Correspondence to Shuzhi Sam Ge.

Cite this article

Ge, S.S., He, H. & Zhang, Z. Bottom-up saliency detection for attention determination. Machine Vision and Applications 24, 103–116 (2013). https://doi.org/10.1007/s00138-011-0372-6



Keywords

  • Saliency
  • Informational entropy
  • Color sensitivity
  • Graph cuts
  • Social robot