
Visual Saliency Based Video Summarization: A Case Study For Preview Video Generation

  • Conference paper

Information, Photonics and Communication

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 79)

Abstract

A research direction for visual content-driven videos has been to facilitate a short preview of each video through summarization, largely by combining short-duration sequences from each scene captured with a stationary camera. This work uses visual saliency features to trace scene-change positions in the video. The saliency features are built from color and intensity information, and an accumulated difference measure (forward and backward) over these features is used to filter out false-positive scene-change detections. The results are quite satisfactory and closely match the exact scene-change positions in the video. Significant accuracy is observed for videos captured with stationary cameras; for moving or non-stationary cameras, video summarization remains a challenging problem. The proposed method has been successfully tested on visual content-driven videos ranging from underwater scenes and fight sequences to surveillance footage for preview video generation.
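
As an illustration only (this is not the authors' implementation; the saliency proxy, window size, and spike threshold below are assumptions), the following Python/OpenCV sketch shows the general idea described above: compute a crude color-and-intensity saliency map per frame, accumulate forward and backward frame-to-frame differences in that map, and flag a scene change where the local difference spikes well above its accumulated context.

# Minimal sketch of the idea in the abstract -- NOT the authors' implementation.
# The saliency proxy, window size, and threshold k are illustrative assumptions.
import cv2
import numpy as np

def saliency_proxy(frame_bgr):
    """Crude color + intensity conspicuity map: per-pixel distance from the frame mean in Lab space."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean = lab.reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(lab - mean, axis=2)

def detect_scene_changes(video_path, window=5, k=3.0):
    """Flag frame indices where the saliency difference spikes above its accumulated
    forward + backward context (a simple stand-in for an accumulated difference measure)."""
    cap = cv2.VideoCapture(video_path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        sal = cv2.resize(saliency_proxy(frame), (64, 36))  # downscale for speed
        if prev is not None:
            diffs.append(float(np.abs(sal - prev).mean()))  # diffs[i] compares frames i and i+1
        prev = sal
    cap.release()

    diffs = np.array(diffs)
    changes = []
    for i in range(window, len(diffs) - window):
        backward = diffs[i - window:i].sum()         # accumulated difference before the candidate
        forward = diffs[i + 1:i + 1 + window].sum()  # accumulated difference after the candidate
        context = max(backward + forward, 1e-6) / (2 * window)
        if diffs[i] > k * context:                   # sharp local spike => likely scene change
            changes.append(i + 1)                    # cut occurs between frames i and i+1
    return changes

# Example usage (hypothetical file name):
# print(detect_scene_changes("underwater_clip.mp4"))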

Author information

Corresponding author

Correspondence to Subhash Kulkarni.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Ramya, G., Kulkarni, S. (2020). Visual Saliency Based Video Summarization: A Case Study For Preview Video Generation. In: Mandal, J., Bhattacharya, K., Majumdar, I., Mandal, S. (eds) Information, Photonics and Communication. Lecture Notes in Networks and Systems, vol 79. Springer, Singapore. https://doi.org/10.1007/978-981-32-9453-0_16

  • DOI: https://doi.org/10.1007/978-981-32-9453-0_16

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-32-9452-3

  • Online ISBN: 978-981-32-9453-0

  • eBook Packages: Engineering, Engineering (R0)
