Automatic Video Abstraction via the Progress of Story

  • Songhao Zhu
  • Zhiwei Liang
  • Yuncai Liu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6297)

Abstract

In this paper, an automatic video abstraction scheme for continuously recorded video, such as movie video, is proposed. Conventional methods address video abstraction at the scene level, whereas the proposed method attempts to comprehend video content through the progress of the overall story and the viewer's semantic understanding. The generated dynamic abstraction not only provides a bird's-eye view of the original video but also helps a viewer follow the progress of the overall story. Furthermore, different types of video abstraction can be generated for different user-defined durations. Experimental results show that the proposed scheme is a feasible solution for the effective management of video repositories and for online review services.
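
The abstract notes that abstractions of different types can be generated to match a user-defined duration. As a purely illustrative sketch, and not the authors' published method, the Python fragment below shows one way such a duration-constrained selection could look: the `Shot` record, its per-shot `importance` score, and the greedy `select_abstraction` helper are all hypothetical names introduced here for illustration only.

```python
"""Illustrative sketch only: the paper publishes no code, so the scoring and
greedy selection below are assumptions, not the authors' actual algorithm."""

from dataclasses import dataclass


@dataclass
class Shot:
    start: float       # shot start time in seconds
    end: float         # shot end time in seconds
    importance: float  # assumed per-shot importance score in [0, 1]

    @property
    def duration(self) -> float:
        return self.end - self.start


def select_abstraction(shots: list[Shot], target_duration: float) -> list[Shot]:
    """Greedily pick the most important shots until the user-defined duration
    budget is filled, then restore chronological order so the resulting
    excerpt still follows the progress of the story."""
    chosen: list[Shot] = []
    used = 0.0
    for shot in sorted(shots, key=lambda s: s.importance, reverse=True):
        if used + shot.duration <= target_duration:
            chosen.append(shot)
            used += shot.duration
    return sorted(chosen, key=lambda s: s.start)


if __name__ == "__main__":
    shots = [Shot(0, 8, 0.2), Shot(8, 20, 0.9), Shot(20, 27, 0.6), Shot(27, 40, 0.4)]
    for s in select_abstraction(shots, target_duration=20):
        print(f"{s.start:5.1f}s - {s.end:5.1f}s  importance={s.importance}")
```

A shorter target duration simply admits fewer shots, which is one plausible reading of how "different types of video abstraction" could follow from different user-defined durations.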

Keywords

Automatic video abstraction · Progress of a story · Semantic understanding

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Songhao Zhu (1)
  • Zhiwei Liang (1)
  • Yuncai Liu (2)
  1. Nanjing University of Posts and Telecommunications, Nanjing, P.R. China
  2. Shanghai Jiao Tong University, Shanghai, P.R. China