What Do Datasets Say About Saliency Models?

  • Conference paper
Pattern Recognition and Image Analysis (IbPRIA 2017)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10255)

Abstract

Given the number and variety of saliency models, knowledge of their pros and cons, of the applications each is best suited for, and of which scenes are the most challenging for each of them would be very useful for progress in the field. This assessment can be made by exploiting the link between algorithms and public datasets. On the one hand, the performance scores of algorithms can be used to cluster video samples according to the pattern of difficulties they pose to models. On the other hand, cluster labels can be combined with video annotations to select discriminant attributes for each cluster. In this work we seek this link and try to describe each cluster of videos in a few words.
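The two-step pipeline sketched in the abstract — cluster videos by the pattern of scores the models achieve on them, then describe each cluster by its most discriminant annotation attribute — can be illustrated with a small self-contained sketch. All data, attribute names, and the single-linkage/mean-gap choices below are illustrative assumptions, not the paper's actual method or datasets:

```python
# Illustrative sketch (not the paper's implementation):
# 1) cluster videos by the pattern of saliency-model scores they produce,
# 2) describe each cluster by its most discriminant annotation attribute.
import itertools

# Hypothetical AUC-like scores: rows = videos, columns = saliency models.
scores = [
    [0.85, 0.80, 0.90],   # video 0: easy for all models
    [0.88, 0.82, 0.87],   # video 1
    [0.90, 0.79, 0.91],   # video 2
    [0.55, 0.50, 0.48],   # video 3: hard for all models
    [0.52, 0.47, 0.53],   # video 4
    [0.50, 0.53, 0.49],   # video 5
]

# Hypothetical binary annotations per video (attribute name -> values).
attributes = {
    "camera_motion": [0, 0, 0, 1, 1, 1],
    "faces_present": [1, 0, 1, 1, 0, 1],
}

def dist(a, b):
    """Euclidean distance between two score vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(points, k):
    """Agglomerative clustering: merge closest clusters until k remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        # Find the pair of clusters with the smallest minimum pairwise distance.
        i, j = min(
            itertools.combinations(range(len(clusters)), 2),
            key=lambda ij: min(
                dist(points[a], points[b])
                for a in clusters[ij[0]] for b in clusters[ij[1]]
            ),
        )
        clusters[i] += clusters.pop(j)
    labels = [0] * len(points)
    for lab, members in enumerate(clusters):
        for m in members:
            labels[m] = lab
    return labels

def discriminant_attribute(labels, cluster):
    """Attribute whose mean differs most between a cluster and the rest."""
    def gap(vals):
        inside = [v for v, l in zip(vals, labels) if l == cluster]
        outside = [v for v, l in zip(vals, labels) if l != cluster]
        return abs(sum(inside) / len(inside) - sum(outside) / len(outside))
    return max(attributes, key=lambda name: gap(attributes[name]))

labels = single_linkage(scores, k=2)
for c in sorted(set(labels)):
    print(c, discriminant_attribute(labels, c))
```

Here the "hard" videos group together purely from their score patterns, and `camera_motion` emerges as the attribute that best separates that cluster; the paper instead uses hierarchical clustering with an optimal-cluster-count criterion [37] and lasso-based attribute selection [38] on real eye-tracking datasets.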



Acknowledgements

This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2016–2019, ED431G/08) and the European Regional Development Fund (ERDF).

Author information

Correspondence to Xosé M. Pardo.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Pardo, X.M., Fdez-Vidal, X.R. (2017). What Do Datasets Say About Saliency Models? In: Alexandre, L., Salvador Sánchez, J., Rodrigues, J. (eds.) Pattern Recognition and Image Analysis. IbPRIA 2017. Lecture Notes in Computer Science, vol. 10255. Springer, Cham. https://doi.org/10.1007/978-3-319-58838-4_12

  • DOI: https://doi.org/10.1007/978-3-319-58838-4_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-58837-7

  • Online ISBN: 978-3-319-58838-4

  • eBook Packages: Computer Science (R0)
