Automatic Video Indexing Based on Shot Classification

  • Ichiro Ide
  • Koji Yamamoto
  • Hidehiko Tanaka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1554)

Abstract

Automatic indexing of video data is in strong demand to cope with its increasing volume. We propose an automatic indexing method for television news video that indexes shots based on the correspondence between image content and the semantic attributes of keywords. This is realized by first (1) classifying shots by graphical features, and (2) analyzing the semantic attributes of the accompanying captions. Keywords are then selectively indexed to shots according to the appropriate correspondence between typical shot classes and the semantic attributes of keywords. The method was applied to 75 minutes of actual news video, and successfully indexed approximately 50% of the typical shots (60% of all shots were classified as typical), and 80% of the typical shots where captions existed.
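The selective indexing step described above can be sketched as a simple table lookup: a keyword from a caption is attached to a shot only when the keyword's semantic attribute is appropriate for the shot's class. The class names, attribute labels, and correspondence table below are illustrative assumptions, not the paper's actual taxonomy.

```python
# Hypothetical sketch of correspondence-based selective indexing.
# Shot classes, attributes, and the table are assumed for illustration.

# Assumed correspondence between typical shot classes and the
# semantic attributes of keywords that may be indexed to them.
CORRESPONDENCE = {
    "anchor":    {"person", "organization"},
    "gathering": {"person", "location"},
    "report":    {"location", "event"},
}

def index_shot(shot_class, caption_keywords):
    """Return the (keyword, attribute) pairs whose attribute matches
    the shot class; unmatched keywords are simply not indexed."""
    allowed = CORRESPONDENCE.get(shot_class, set())
    return [(kw, attr) for kw, attr in caption_keywords if attr in allowed]

# Example: a gathering shot whose caption yields keywords of mixed attributes.
keywords = [("Prime Minister", "person"),
            ("Tokyo", "location"),
            ("budget", "event")]
print(index_shot("gathering", keywords))
# Only the person and location keywords are indexed to the gathering shot.
```

The point of the table is that an atypical pairing (e.g. an event keyword on a gathering shot) is discarded rather than indexed, which is how the method trades recall for precision on typical shots.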

Keywords

Facial Region, Semantic Attribute, News Video, Video Database, Gathering Shot

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Ichiro Ide ¹
  • Koji Yamamoto ¹
  • Hidehiko Tanaka ¹
  1. Graduate School of Electrical Engineering, The University of Tokyo, Tokyo, Japan
