Improving Retake Detection by Adding Motion Feature

  • Hiep Van Hoang
  • Duy-Dinh Le
  • Shin’ichi Satoh
  • Quang Hong Nguyen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6979)

Abstract

Retake detection is useful for many video summarization applications. It is a challenging task because different takes of the same scene usually differ in length or have been recorded under different environmental conditions. A general approach to this problem is to decompose the input video sequence into sub-sequences and then group these sub-sequences into clusters. Combined with temporal information, the clustering result is used to find take and scene boundaries. One of the most difficult steps in this approach is clustering the sub-sequences. Most previous approaches use only a single keyframe to represent each sub-sequence and extract features such as color and texture from this keyframe for clustering. We propose an approach that improves performance by combining the motion feature extracted from each sub-sequence with the features extracted from each representative keyframe. Experiments on the standard benchmark dataset of TRECVID BBC Rushes 2007 show the effectiveness of the proposed method.
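The pipeline the abstract describes (keyframe appearance features combined with a per-sub-sequence motion feature, followed by clustering) can be sketched as below. This is a minimal illustration, not the authors' exact formulation: the feature names, the motion weight `w_motion`, and the greedy distance-threshold grouping are all illustrative assumptions.

```python
# Hedged sketch: fuse keyframe appearance features with a weighted motion
# descriptor, then group sub-sequences whose combined features are close.
# Weights, thresholds, and the greedy grouping rule are assumptions.

def combine_features(color_hist, texture_hist, motion_vec, w_motion=0.5):
    """Concatenate keyframe color/texture features with a weighted
    per-sub-sequence motion descriptor into one feature vector."""
    return list(color_hist) + list(texture_hist) + [w_motion * m for m in motion_vec]

def l2(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_subsequences(features, threshold):
    """Greedy grouping: assign each sub-sequence to the first cluster
    whose representative vector lies within `threshold`, else start a
    new cluster. Returns one cluster label per sub-sequence."""
    clusters = []  # list of (representative vector, member indices)
    labels = []
    for i, feat in enumerate(features):
        for cid, (rep, members) in enumerate(clusters):
            if l2(feat, rep) <= threshold:
                members.append(i)
                labels.append(cid)
                break
        else:
            clusters.append((feat, [i]))
            labels.append(len(clusters) - 1)
    return labels
```

In this sketch, two takes of the same scene (near-identical appearance and motion) fall into one cluster, while a sub-sequence from a different scene starts a new one; the paper's actual clustering and temporal-alignment steps are more involved.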

References

  1. Bailer, W., Lee, F., Thallinger, G.: A distance measure for repeated takes of one scene. The Visual Computer: International Journal of Computer Graphics 25, 53–68 (2008)
  2. Dumont, E., Merialdo, B.: Rushes video summarization and evaluation. Multimedia Tools and Applications 48, 51–68 (2010)
  3. Wang, F., Ngo, C.W.: Rushes video summarization by object and event understanding. In: TVS 2007 Proceedings of the International Workshop on TRECVID Video Summarization, pp. 25–29 (2007)
  4. Rand, W.M.: Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association 66, 846–850 (1971)
  5. Flickner, M., Sawhney, H.S., Ashley, J., Huang, Q., Dom, B., Gorkani, M., Hafner, J., Lee, D., Petkovic, D., Steele, D., Yanker, P.: Query by image and video content: The QBIC system. IEEE Computer 28, 23–32 (1995)
  6. Stricker, M.A., Orengo, M.: Similarity of color images. In: Proc. of SPIE, Storage and Retrieval for Image and Video Databases III, vol. 2420, pp. 381–392 (1995)
  7. Ojala, T., Pietikainen, M., Maenpaa, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. on Pattern Analysis and Machine Intelligence 24, 971–987 (2002)
  8. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of Image Understanding Workshop, pp. 121–130 (1981)
  9. Smith, T.F., Waterman, M.S.: Identification of common molecular subsequences. Journal of Molecular Biology 147, 195–197 (1981)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Hiep Van Hoang (1)
  • Duy-Dinh Le (2)
  • Shin’ichi Satoh (2)
  • Quang Hong Nguyen (1)
  1. Hanoi University of Science and Technology, Hanoi, Vietnam
  2. National Institute of Informatics, Chiyoda-ku, Japan