An Approach for Preparing Groundtruth Data and Evaluating Visual Saliency Models

  • Rajarshi Pal
  • Jayanta Mukherjee
  • Pabitra Mitra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5909)


Abstract

Evaluation is a key part of proposing a new model. To evaluate visual saliency models, one needs to compare a model's output with the salient locations in an image. This paper proposes an approach to identify these salient locations, i.e., the groundtruth for experiments with visual saliency models. The proposed technique, based on human hand-eye coordination, is found to be a viable alternative to costly pupil-tracking systems. In addition, an evaluation metric suited to the needs of saliency models is proposed.
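To make the evaluation idea concrete, a common way to score a saliency map against point-wise groundtruth (e.g., eye fixations or, as proposed here, hand-eye coordination data such as mouse clicks) is to sample the normalised map at the groundtruth locations. The sketch below uses the standard Normalised Scanpath Saliency (NSS) style of scoring as an illustration; it is not necessarily the metric proposed in this paper, and the function and variable names are placeholders.

```python
import numpy as np

def groundtruth_score(saliency_map, groundtruth_points):
    """Mean standardised saliency sampled at groundtruth (row, col) points.

    Higher values mean the model assigns above-average saliency to the
    locations that humans marked as salient (NSS-style scoring).
    """
    s = np.asarray(saliency_map, dtype=float)
    s = (s - s.mean()) / s.std()  # zero mean, unit variance
    rows, cols = zip(*groundtruth_points)
    return float(s[list(rows), list(cols)].mean())

# Toy example: a map peaked at (1, 1) scores high for a point at the peak
# and below average for a point in a flat corner.
smap = np.array([[0.1, 0.1, 0.1],
                 [0.1, 0.9, 0.1],
                 [0.1, 0.1, 0.1]])
print(groundtruth_score(smap, [(1, 1)]))  # positive (well above average)
print(groundtruth_score(smap, [(0, 0)]))  # negative (below average)
```

Standardising the map before sampling makes scores comparable across models whose raw saliency values live on different scales.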


Keywords: Evaluation, Visual saliency model, Groundtruth


References

  1. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(11), 1254–1259 (1998)
  2. Kadir, T., Brady, M.: Saliency, scale and image description. International Journal of Computer Vision 45(2), 83–105 (2001)
  3. Sun, Y., Fisher, R.: Object-based visual attention for computer vision. Artificial Intelligence 146, 77–123 (2003)
  4. Yu, Z., Wong, H.S.: A rule based technique for extraction of visual attention regions based on real-time clustering. IEEE Transactions on Multimedia 9(4), 766–784 (2007)
  5. Minut, S., Mahadevan, S.: A reinforcement learning model of selective visual attention. In: Proceedings of 15th International Conference on Autonomous Agents, pp. 457–464 (2001)
  6. Meur, O.L., Callet, P.L., Barba, D., Thoreau, D.: A coherent computational approach to model bottom-up visual attention. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(5), 802–817 (2006)
  7. Bruce, N.D.B.: Features that draw visual attention: an information theoretic perspective. Neurocomputing 65-66, 125–133 (2005)
  8. Gao, D., Vasconcelos, N.: Bottom-up saliency is a discriminant process. In: Proceedings of IEEE 11th International Conference on Computer Vision, pp. 1–6 (2007)
  9. Parkhurst, D., Law, K., Niebur, E.: Modeling the role of salience in the allocation of overt visual attention. Vision Research 42, 107–123 (2002)
  10. Meur, O.L., Thoreau, D., Callet, P.L., Barba, D.: A spatio-temporal model of the selective human visual attention. In: Proceedings of IEEE International Conference on Image Processing, pp. III–1188–1191 (2005)
  11. Itti, L., Koch, C.: Feature combination strategies for saliency based visual attention systems. Journal of Electronic Imaging 10(1), 161–169 (2001)
  12. Schaefer, G., Stich, M.: UCID - an uncompressed color image database. In: SPIE Storage and Retrieval Methods and Applications for Multimedia, vol. 5307, pp. 472–480 (2004)
  13. Frey, H.P., Konig, P., Einhauser, W.: The role of first- and second-order stimulus features for human overt attention. Perception and Psychophysics 69, 153–161 (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Rajarshi Pal (1)
  • Jayanta Mukherjee (1)
  • Pabitra Mitra (1)

  1. Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India
