A Spatio Temporal Texture Saliency Approach for Object Detection in Videos

  • Conference paper
  • First Online:
Computer Vision, Graphics, and Image Processing (ICVGIP 2016)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10481)

Abstract

Detecting what attracts human attention is a vital task in visual processing. Saliency detection locates the foci of attention on conspicuous objects in images and video sequences. In videos, however, temporal information plays a major role in how human visual perception locates salient objects. This paper presents a novel approach that detects salient objects in videos using spatio-temporal textural saliency, thereby incorporating this temporal information. In this work, context-driven static saliency, extracted from the Lab color space in the XY plane, is combined with dynamic saliency driven by local phase quantization on three orthogonal planes (LPQ-TOP) to obtain the spatio-temporal saliency of a video. The dynamic saliency is obtained by fusing two temporal saliency maps, extracted from the XT and YT planes using the LPQ texture feature, which captures the temporally salient regions. The approach is evaluated on a benchmark dataset, and the results show that it yields promising performance.
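
The pipeline described in the abstract can be sketched in Python as follows. This is only an illustrative sketch, not the authors' implementation: the paper's context-aware static saliency is approximated here by a simple Lab-space center-surround difference, the LPQ-TOP dynamic cue is reduced to LPQ codes computed on the XT and YT slices followed by a code-rarity measure, and the fusion is a plain average. All function names and parameters below are hypothetical; the sketch assumes numpy, scipy, and opencv-python are available.

# Illustrative sketch only (see caveats above); not the authors' method.
import numpy as np
import cv2
from scipy.signal import convolve2d


def lpq_codes(plane, win=7):
    """8-bit Local Phase Quantization codes: quantize the signs of the real
    and imaginary parts of four low-frequency STFT components computed over
    a win x win neighbourhood via separable 1-D convolutions."""
    a = 1.0 / win
    x = np.arange(-(win // 2), win // 2 + 1)[np.newaxis, :]
    w0 = np.ones_like(x, dtype=complex)   # frequency 0
    w1 = np.exp(-2j * np.pi * a * x)      # frequency +a
    w2 = np.conj(w1)                      # frequency -a

    p = plane.astype(float)
    f1 = convolve2d(convolve2d(p, w0.T, mode='same'), w1, mode='same')
    f2 = convolve2d(convolve2d(p, w1.T, mode='same'), w0, mode='same')
    f3 = convolve2d(convolve2d(p, w1.T, mode='same'), w1, mode='same')
    f4 = convolve2d(convolve2d(p, w1.T, mode='same'), w2, mode='same')

    freq = np.stack([f1, f2, f3, f4])                        # (4, H, W)
    bits = np.concatenate([freq.real > 0, freq.imag > 0])    # (8, H, W)
    return (bits * (2 ** np.arange(8))[:, None, None]).sum(axis=0)


def static_saliency(frame_bgr):
    """Static (XY) cue: center-surround difference of the Lab channels,
    standing in for the context-aware saliency used in the paper."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    fine = cv2.GaussianBlur(lab, (3, 3), 0)
    coarse = cv2.GaussianBlur(lab, (31, 31), 0)
    sal = np.linalg.norm(fine - coarse, axis=2)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)


def dynamic_saliency(volume):
    """Dynamic cue: LPQ-code rarity on the XT and YT slices of a (T, H, W)
    grayscale volume, evaluated at the most recent frame and averaged."""
    T, H, W = volume.shape
    xt, yt = np.zeros((H, W)), np.zeros((H, W))
    for y in range(H):                                   # XT slice at row y: (T, W)
        codes = lpq_codes(volume[:, y, :]).astype(int)
        hist = np.bincount(codes.ravel(), minlength=256) / codes.size
        xt[y, :] = 1.0 - hist[codes[-1]]                 # rare codes -> salient
    for x in range(W):                                   # YT slice at column x: (T, H)
        codes = lpq_codes(volume[:, :, x]).astype(int)
        hist = np.bincount(codes.ravel(), minlength=256) / codes.size
        yt[:, x] = 1.0 - hist[codes[-1]]
    return 0.5 * (xt + yt)                               # fuse the two temporal cues


def spatio_temporal_saliency(frames_bgr):
    """Fuse the static (XY) and dynamic (XT/YT) cues for the latest frame."""
    gray = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames_bgr])
    s = static_saliency(frames_bgr[-1])
    d = cv2.normalize(dynamic_saliency(gray), None, 0.0, 1.0, cv2.NORM_MINMAX)
    return 0.5 * (s + d)

Calling spatio_temporal_saliency on a short sliding window of frames would return a per-pixel map for the latest frame, which could then be thresholded to localize the salient object; the loop over rows and columns is written for clarity rather than speed.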


Author information

Corresponding author

Correspondence to A. Sasithradevi.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Sasithradevi, A., Mohamed Mansoor Roomi, S., Sanofer, I. (2017). A Spatio Temporal Texture Saliency Approach for Object Detection in Videos. In: Mukherjee, S., et al. Computer Vision, Graphics, and Image Processing. ICVGIP 2016. Lecture Notes in Computer Science, vol. 10481. Springer, Cham. https://doi.org/10.1007/978-3-319-68124-5_6

  • DOI: https://doi.org/10.1007/978-3-319-68124-5_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68123-8

  • Online ISBN: 978-3-319-68124-5

  • eBook Packages: Computer Science, Computer Science (R0)
