FqSD: Full-Quaternion Saliency Detection in Images

  • Reynolds León Guerra
  • Edel B. García Reyes
  • Annette M. González Quevedo
  • Heydi Méndez Vázquez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11401)


Saliency detection aims to segment the pixels of an image into two groups: important and less important visual information. The important information can be used to detect semantically meaningful objects for computer vision tasks. In this paper, we develop a saliency detection method based on full quaternions. The proposed method combines local and global approaches. Local features are obtained at the patch level using Modulus Local Binary Patterns and a comparison between feature vectors (modulus and phase). The salient object is obtained by a weighted combination of saliency maps, to which a center-bias and refinement function is applied. To verify the effectiveness of our method, it is validated with the mean absolute error metric on the ECSSD-1000 and DUT-OMRON datasets and compared against other state-of-the-art algorithms. A statistical analysis using the Wilcoxon signed-rank test assesses the significance of the differences among color spaces. The results show that the HSV color space is more effective than the others.


Keywords: Full-quaternion · Salient map · Color · Feature · Object
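The core ingredients named in the abstract (a full-quaternion pixel representation, modulus and phase features, and the mean absolute error metric) can be sketched as follows. This is a minimal illustration, not the authors' implementation: in particular, using the per-pixel channel mean as the quaternion's real part is an assumption made here for clarity, and the patch-level Modulus Local Binary Pattern step is omitted.

```python
import numpy as np

def to_full_quaternion(img_hsv):
    """Encode each HSV pixel as a full quaternion q = a + b*i + c*j + d*k.
    Taking the channel mean as the real part `a` is an illustrative choice;
    the paper's exact construction may differ."""
    h, s, v = img_hsv[..., 0], img_hsv[..., 1], img_hsv[..., 2]
    a = (h + s + v) / 3.0
    return np.stack([a, h, s, v], axis=-1)  # shape (..., 4): [a, b, c, d]

def modulus(q):
    """|q| = sqrt(a^2 + b^2 + c^2 + d^2)."""
    return np.sqrt((q ** 2).sum(axis=-1))

def phase(q):
    """Angle between the real part and the imaginary (vector) part of q."""
    return np.arctan2(np.sqrt((q[..., 1:] ** 2).sum(axis=-1)), q[..., 0])

def mae(saliency, ground_truth):
    """Mean absolute error between a saliency map and a binary mask in [0, 1]."""
    return float(np.abs(saliency - ground_truth).mean())
```

In a pipeline of this kind, modulus and phase would be aggregated into per-patch feature vectors and compared locally and globally to build the saliency maps, and `mae` corresponds to the evaluation metric reported on ECSSD-1000 and DUT-OMRON.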



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Reynolds León Guerra (1)
  • Edel B. García Reyes (1)
  • Annette M. González Quevedo (1)
  • Heydi Méndez Vázquez (1)

  1. Advanced Technologies Application Center (CENATAV), Havana, Cuba
