Benchmarking Saliency Detection Methods on Multimodal Image Data

  • Hanan Anzid (corresponding author)
  • Gaetan Le Goic
  • Aissam Bekkari
  • Alamin Mansouri
  • Driss Mammass
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10884)

Abstract

Saliency detection is an important task in image processing. Most existing work is tailored to a specific application and to the dataset available for it. The present work is a comparative analysis of saliency detection methods on a multimodal image dataset. Saliency detection has been studied for several types of images, such as multispectral, natural, and 3D images; this work presents a first focused study of saliency detection on multimodal images. Our database was extracted from acquisitions of cultural heritage wall paintings and contains four modalities: UV, IR, visible, and fluorescence. We analyze several saliency detection methods and evaluate the performance of each using the NSS similarity metric. The results show that the best methods are [16] for the visible modality and [20] for the UV, IR, and fluorescence modalities.
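The NSS (Normalized Scanpath Saliency) metric used for the evaluation above scores a predicted saliency map against recorded human fixations: the map is normalized to zero mean and unit standard deviation, and NSS is the mean normalized saliency value at the fixation locations [19]. A minimal sketch (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency, following [19].

    saliency_map: 2D float array of predicted saliency values.
    fixation_map: 2D binary array, nonzero at human fixation points.
    """
    # Normalize the saliency map to zero mean, unit standard deviation.
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    # Average the normalized saliency at the fixated pixels.
    return float(s[fixation_map.astype(bool)].mean())
```

A positive NSS means the model assigns above-average saliency to fixated locations; values near zero indicate chance-level prediction.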

Keywords

Saliency map · Multimodal images

Notes

Acknowledgments

The authors thank the Chateau de Germolles managers for providing data and expertise and the COST Action TD1201 Colour and Space in Cultural Heritage (COSCH) (www.cosch.info) for supporting this case study. The authors also thank the PHC Toubkal/16/31: 34676YA program for the financial support.

References

  1. Smith, T.F., Waterman, M.S.: Identification of common molecular subsequences. J. Mol. Biol. 147, 195–197 (1981)
  2. Torralba, A., Oliva, A., Castelhano, M.S., Henderson, J.M.: Contextual guidance of attention and eye movements in real-world scenes: the role of global features in object search. Psychol. Rev. 113, 766–786 (2006)
  3. Anzid, H., Le Goic, G., Bekkari, A., Mansouri, A., Mammass, D.: Improving point matching on multimodal images using distance and orientation automatic filtering. In: 2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA), pp. 1–8 (2016)
  4. Bharath, R., Nicholas, L.Z., Cheng, X.: Scalable scene understanding using saliency-guided object localization. In: 2013 10th IEEE International Conference on Control and Automation (ICCA), pp. 1503–1508 (2013)
  5. Bruce, N., Tsotsos, J.: Saliency based on information maximization. In: Advances in Neural Information Processing Systems, pp. 155–162 (2006)
  6. Murabito, F., Spampinato, C., Palazzo, S., Pogorelov, K., Riegler, M.: Top-down saliency detection driven by visual classification. arXiv preprint arXiv:1709.05307 (2017)
  7. Goferman, S., Zelnik-Manor, L., Tal, A.: Context-aware saliency detection. IEEE Trans. Pattern Anal. Mach. Intell. 34, 1915–1926 (2012)
  8. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Advances in Neural Information Processing Systems, pp. 545–552 (2007)
  9. He, S., Lau, R.W., Yang, Q.: Exemplar-driven top-down saliency detection via deep association. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5723–5732 (2016)
  10. Hou, X., Zhang, L.: Saliency detection: a spectral residual approach. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8 (2007)
  11. Hou, X., Harel, J., Koch, C.: Image signature: highlighting sparse salient regions. IEEE Trans. Pattern Anal. Mach. Intell. 34, 194–201 (2012)
  12. Hu, Y., Xie, X., Ma, W.-Y., Chia, L.-T., Rajan, D.: Salient region detection using weighted feature maps based on the human visual attention model. In: Aizawa, K., Nakamura, Y., Satoh, S. (eds.) PCM 2004, Part II. LNCS, vol. 3332, pp. 993–1000. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30542-2_122
  13. Han, J., Ngan, K.N., Li, M., Zhang, H.J.: Unsupervised extraction of visual attention objects in color images. IEEE Trans. Circ. Syst. Video Technol. 16, 141–145 (2006)
  14. Itti, L., Koch, C.: A saliency-based search mechanism for overt and covert shifts of visual attention. Vis. Res. 40, 1489–1506 (2000)
  15. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)
  16. Le Moan, S., Mansouri, A., Hardeberg, J.Y., Voisin, Y.: Saliency for spectral image analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 6, 2472–2479 (2012)
  17. Sharma, P., Cheikh, F.A., Hardeberg, J.Y.: Saliency map for human gaze prediction in images. In: Sixteenth Color Imaging Conference, Portland, Oregon, USA (2008)
  18. Perazzi, F., Krähenbühl, P., Pritch, Y., Hornung, A.: Saliency filters: contrast based filtering for salient region detection. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 733–740 (2012)
  19. Peters, R.J., Iyer, A., Itti, L., Koch, C.: Components of bottom-up gaze allocation in natural images. Vis. Res. 45, 2397–2416 (2005)
  20. Rahtu, E., Kannala, J., Salo, M., Heikkilä, J.: Segmenting salient objects from images and videos. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part V. LNCS, vol. 6315, pp. 366–379. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15555-0_27
  21. Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: IEEE 12th International Conference on Computer Vision, pp. 2106–2113 (2009)
  22. Torralba, A.: Contextual priming for object detection. IJCV 53, 169–191 (2003)
  23. Torralba, A.: Modeling global scene factors in attention. JOSA 20, 1407–1418 (2003)
  24. Rutishauser, U., Walther, D., Koch, C., Perona, P.: Is bottom-up attention useful for object recognition? In: CVPR, pp. 37–44 (2004)
  25. Mahadevan, V., Vasconcelos, N.: Saliency-based discriminant tracking. In: CVPR (2009)
  26. Wei, Y., Wen, F., Zhu, W., Sun, J.: Geodesic saliency using background priors. In: ECCV (2012)
  27. Yu, J.-G., Xia, G.S., Gao, C., Samal, A.: A computational model for object-based visual saliency: spreading attention along gestalt cues. IEEE Trans. Multimed. 18, 273–286 (2016)
  28. Chen, Z., Tu, Y., Wang, L.: An improved saliency detection algorithm based on Itti’s model. Tehn. Vjesnik 21, 1337–1344 (2014)
  29. Hou, X., Zhang, L.: Saliency detection: a spectral approach. In: IEEE Conference on Computer Vision and Pattern Recognition (2007)
  30. Zhang, L., Guo, C.: A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression. IEEE Trans. Image Process. 19, 185–198 (2010)
  31. Chen, Z., Tu, Y., Wang, L.: An improved saliency detection algorithm based on Itti’s model. Techn. Gaz. 21, 1337–1344 (2014)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Hanan Anzid (1, 2), corresponding author
  • Gaetan Le Goic (2)
  • Aissam Bekkari (1)
  • Alamin Mansouri (2)
  • Driss Mammass (1)
  1. IRF-SIC Laboratory, Faculty of Science, Agadir, Morocco
  2. LE2I Laboratory, University of Burgundy-Franche-Comte, Dijon, France
