Vision Saliency Feature Extraction Based on Multi-scale Tensor Region Covariance

  • Shimin Wang
  • Mingwen Wang
  • Jihua Ye
  • Anquan Jie
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10390)

Abstract

When region covariance is used to extract image saliency features, the low-level higher-order data (color, intensity, orientation) are usually vectorized, and this vectorization can destroy the structure of the data, weakening the representation and degrading overall performance. In this paper we introduce a sparse-representation approach to region covariance that preserves the inherent structure of the image. The approach first computes the low-level image data (color, intensity, orientation), then applies a multi-scale transform to extract multi-scale features and construct a tensor space, and finally applies tensor sparse coding to extract the low-level image features from the region covariance. We compare our experimental results with those of commonly used feature extraction algorithms; the experiments show that the proposed algorithm follows the actual object boundaries more closely and achieves better results.
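
To make the pipeline described above concrete, the following is a minimal, illustrative sketch in Python with NumPy of a region covariance descriptor computed over a small feature stack at several scales. The feature set (intensity, two color-opponent channels, gradient-based orientation cues), the naive subsampling standing in for the multi-scale transform, and all function names are assumptions made for illustration; the paper's tensor sparse coding stage is not reproduced here.

    import numpy as np

    def feature_stack(img):
        # Per-pixel feature vector F(x, y): intensity, red-green and
        # blue-yellow opponents, and first-order gradients as crude
        # orientation cues. The paper's exact feature set may differ.
        img = img.astype(np.float64)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        intensity = (r + g + b) / 3.0
        rg = r - g                       # red-green opponent channel
        by = b - (r + g) / 2.0           # blue-yellow opponent channel
        gy, gx = np.gradient(intensity)  # simple edge/orientation cues
        return np.stack([intensity, rg, by, gx, gy], axis=-1)

    def region_covariance(features, y0, y1, x0, x1):
        # d x d covariance of the d-dimensional features inside the
        # rectangular region [y0:y1, x0:x1] -- the region covariance
        # descriptor; rows of `region` are per-pixel observations.
        region = features[y0:y1, x0:x1].reshape(-1, features.shape[-1])
        return np.cov(region, rowvar=False)

    # Usage: one descriptor per level of a 3-level pyramid, with naive
    # subsampling standing in for the paper's multi-scale transform.
    rng = np.random.default_rng(0)
    img = rng.uniform(0.0, 255.0, size=(64, 64, 3))
    for scale in (1, 2, 4):
        small = img[::scale, ::scale]
        feats = feature_stack(small)
        C = region_covariance(feats, 0, small.shape[0], 0, small.shape[1])
        print(scale, C.shape)  # (5, 5) covariance at every scale

In the full method, descriptors like these would be arranged into a tensor across scales and encoded with tensor sparse coding rather than used directly.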

Keywords

Saliency feature · Region covariance · Tensor space · Multi-scale transform

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Shimin Wang (1)
  • Mingwen Wang (1)
  • Jihua Ye (1)
  • Anquan Jie (1)

  1. College of Computer Information and Engineering, Jiangxi Normal University, Nanchang, China
