
Multimedia Tools and Applications, Volume 34, Issue 2, pp 249–266

OM-based video shot retrieval by one-to-one matching

  • Yuxin Peng
  • Chong-Wah Ngo
  • Jianguo Xiao
Article

Abstract

This paper proposes a new approach to shot-based retrieval by optimal matching (OM), which provides an effective mechanism for measuring shot similarity and ranking shots through one-to-one matching. In the proposed approach, a weighted bipartite graph is constructed to model the color similarity between two shots. OM based on the Kuhn–Munkres algorithm is then employed to compute the maximum weight of the constructed bipartite graph, which serves as the shot similarity value obtained by one-to-one matching among frames. To improve the computational efficiency of OM, two accelerated variants are also proposed: bipartite graph construction based on subshots, and bipartite graph construction based on the same number of keyframes per shot. Besides color similarity, a motion feature is also employed in the shot similarity measure: a motion histogram is constructed for each shot, and the motion similarity between two shots is measured by the intersection of their motion histograms. Finally, the overall shot similarity is computed as a linear combination of the color and motion similarities. Experimental results indicate that the proposed approach achieves better performance than existing methods in terms of ranking and retrieval capability.
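To make the matching step concrete, the sketch below is a hypothetical illustration (not the authors' implementation): it builds the weighted bipartite graph from per-frame color histograms and computes the maximum-weight one-to-one matching with SciPy's Hungarian (Kuhn–Munkres) solver. The histogram intersection measure, the normalization of the matching score, and the combination weight `alpha` are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def hist_intersection(h1, h2):
    # Histogram intersection, normalized by the total mass of h2.
    return np.minimum(h1, h2).sum() / h2.sum()


def color_similarity(frames_a, frames_b):
    # frames_a, frames_b: lists of per-frame color histograms (1-D arrays).
    # Edge weights of the bipartite graph: frame-to-frame color similarity.
    weights = np.array([[hist_intersection(fa, fb) for fb in frames_b]
                        for fa in frames_a])
    # Kuhn-Munkres (Hungarian) algorithm: maximum-weight one-to-one matching.
    rows, cols = linear_sum_assignment(weights, maximize=True)
    # Normalize by the number of matched pairs so the score lies in [0, 1].
    return weights[rows, cols].sum() / min(len(frames_a), len(frames_b))


def motion_similarity(mh_a, mh_b):
    # Intersection of the two shots' motion histograms.
    return hist_intersection(mh_a, mh_b)


def shot_similarity(frames_a, frames_b, mh_a, mh_b, alpha=0.5):
    # Linear combination of color and motion similarity; alpha is an
    # assumed weighting parameter, not a value taken from the paper.
    return (alpha * color_similarity(frames_a, frames_b)
            + (1.0 - alpha) * motion_similarity(mh_a, mh_b))
```

Under this reading, the two speed-up variants described above would only change how the bipartite graph's vertex sets are formed (subshot-level histograms, or an equal number of keyframes per shot); the matching step itself stays the same.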

Keywords

Shot-based retrieval · OM · Color and motion similarity



Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  1. Institute of Computer Science and Technology, Peking University, Beijing, China
  2. Department of Computer Science, City University of Hong Kong, Kowloon, China
