Multimedia Tools and Applications

Volume 63, Issue 2, pp 547–567

Multimodal genre classification of TV programs and YouTube videos

  • Hazım Kemal Ekenel
  • Tomas Semela


Abstract

This paper presents an automatic video genre classification system that utilizes several low-level audio-visual features as well as cognitive and structural information, and, in the case of web videos, tag-based features, to classify the genres of TV programs and YouTube videos. Classification is performed using an ensemble of support vector machines. The visual descriptors consist of color- and texture-based features, which are often used to represent the concepts appearing in a video. The audio descriptors are signal energy, zero-crossing rate, fundamental frequency, and mel-frequency cepstral coefficients, representing a wide range of perceptual cues available in the audio signal. Cognitive descriptors correspond to information derived from a face detector, whereas structural descriptors are related to the shot editing of the video. A tag descriptor, based on the term frequency-inverse document frequency (tf-idf) measure, is additionally used for the genre classification of YouTube videos. For each feature and genre, a separate support vector machine classifier is trained following the one-vs-all scheme. The outputs of these classifiers are then combined to yield the final classification result. The proposed system is extensively evaluated on complete TV programs from the Italian RAI TV channel, programs from a French TV channel, and videos from YouTube. Using only the audio-visual cues together with the cognitive and structural information, correct classification rates of 99.2%, 94.5%, and 87.3% are attained on these three datasets, respectively. These results show that the developed system can reliably determine the genre of TV programs. Incorporating the tag feature into the content-based features increases the YouTube genre classification performance from 87.3% to 89.7%. Further experiments indicate that video quality does not influence the results significantly.
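Two of the audio descriptors named above, signal energy and zero-crossing rate, are computed on short overlapping frames of the waveform. A minimal sketch of these per-frame computations (the frame length and hop size here are illustrative choices, not values taken from the paper):

```python
def frame_signal(samples, frame_len=1024, hop=512):
    """Split a mono sample sequence into overlapping frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def short_time_energy(frame):
    """Mean squared amplitude of one frame."""
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)
```

Intuitively, silence yields near-zero energy, and noise-like signals such as unvoiced speech produce a markedly higher zero-crossing rate than music or voiced speech, which is what makes these cheap descriptors useful for genre cues.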
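The tag descriptor is based on the tf-idf measure. A minimal sketch of the standard weighting over a corpus of per-video tag lists (the function name is illustrative, and the exact tf-idf variant used in the paper may differ):

```python
import math

def tf_idf_vector(tags, corpus):
    """Weight each tag of one video against a corpus of tag lists.

    tags   -- list of tags for the video being described
    corpus -- list of tag lists, one per video in the collection
    """
    n_docs = len(corpus)
    vector = {}
    for tag in set(tags):
        # term frequency: share of this video's tags that are `tag`
        tf = tags.count(tag) / len(tags)
        # document frequency: number of videos in the corpus using `tag`
        doc_freq = sum(1 for doc in corpus if tag in doc)
        idf = math.log(n_docs / (1 + doc_freq))  # +1 avoids division by zero
        vector[tag] = tf * idf
    return vector
```

Tags that occur in most videos of the corpus receive an idf near zero and are effectively suppressed, while tags distinctive to a few videos are emphasized.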
The performance drop in classifying the genres of YouTube videos is found to be mainly due to the large variety of content contained in these videos. In summary, this study shows that the proposed low-level visual feature set, which we have used to represent the concepts appearing in a video, also provides robust cues for genre classification. In addition, the obtained genre information is expected to provide additional cues that can be used to improve a concept detection system's performance. It has also been shown that an ensemble of support vector machine classifiers outperforms the neural-network-based classification proposed in previous state-of-the-art genre classification systems (Montagnuolo and Messina, AIIA, LNAI 4733:730–741, 2007; Multimed Tools Appl 41(1):125–159, 2009). Besides the improvements in the employed feature set and classification scheme, the experimental framework of the study is notable for the extensive tests conducted on different domains, ranging from TV programs from different countries to web videos.
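The late-fusion scheme described above, one one-vs-all SVM per feature and genre whose outputs are combined into a final decision, can be sketched as follows, assuming each per-feature classifier already yields a probability-like score per genre (e.g. via Platt scaling). The averaging combination rule here is an illustrative choice; the paper does not fix the rule in the abstract:

```python
def fuse_scores(per_feature_scores):
    """Combine one-vs-all classifier outputs across features.

    per_feature_scores -- list of dicts, one per feature modality,
                          mapping genre name -> score in [0, 1]
    Returns the genre with the highest mean score.
    """
    genres = per_feature_scores[0].keys()
    fused = {g: sum(s[g] for s in per_feature_scores) / len(per_feature_scores)
             for g in genres}
    return max(fused, key=fused.get)
```

For example, with three modalities scoring {"news": 0.9, "sports": 0.2}, {"news": 0.4, "sports": 0.7}, and {"news": 0.8, "sports": 0.1}, the fused decision is "news", even though one modality prefers "sports".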


Keywords

Genre classification · Content-based descriptors · Tag descriptors · TV programs · YouTube videos



Acknowledgements

The authors would like to thank Alberto Messina and Maurizio Montagnuolo from the RAI Centre for Research and Technological Innovation for their contributions to the study and for providing the TV program data. The authors would also like to thank INA (French National Audiovisual Institute) for providing the corpus used in the Quaero evaluations. This study is funded by OSEO, the French State agency for innovation, as part of the Quaero Programme.


References

  1. Borth D et al (2009) TubeFiler—an automatic web video categorizer. In: Proc. of ACM Multimedia, Beijing, China, pp 1111–1112
  2. Cao J, Zhang YD, Song YC, Chen ZN, Zhang X, Li JT (2009) MCG-WEBV: a benchmark dataset for web video analysis. Technical Report ICT-MCG-09-001, Institute of Computing Technology
  3. Campbell M et al (2006) IBM research TRECVID-2006 video retrieval system. In: Proc. of NIST TRECVID workshop, Gaithersburg, USA
  4. Chang C-C, Lin C-J (2001) LIBSVM: a library for support vector machines. Software available at
  5. Ekenel HK, Fischer M, Gao H, Kilgour K, Marcos JS, Stiefelhagen R (2007) Universität Karlsruhe (TH) at TRECVID 2007. In: Proc. of NIST TRECVID workshop, Gaithersburg, MD
  6. Ekenel HK, Gao H, Stiefelhagen R (2008) Universität Karlsruhe (TH) at TRECVID 2008. In: Proc. of NIST TRECVID workshop, Gaithersburg, MD
  7. Fischer S, Lienhart R, Effelsberg W (1995) Automatic recognition of film genres. In: Proc. of ACM Multimedia, San Francisco, USA, pp 295–304
  8. Huang J, Kumar SR, Mitra M, Zhu W-J, Zabih R (1997) Image indexing using color correlograms. In: Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, San Juan, pp 762–768
  9. Lin H-T, Lin C-J, Weng RC (2007) A note on Platt's probabilistic outputs for support vector machines. Mach Learn 68(3):267–276
  10. Lu L, Zhang H-J, Li SZ (2003) Content-based audio classification and segmentation by using support vector machines. Multimed Syst 8(6):482–492
  11. Montagnuolo M, Messina A (2007) TV genre classification using multimodal information and multilayer perceptrons. AIIA, LNAI 4733:730–741
  12. Montagnuolo M, Messina A (2008) Fuzzy mining of multimedia genre applied to television archives. In: Proc. of IEEE Intl. Conference on Multimedia and Expo, pp 117–120
  13. Montagnuolo M, Messina A (2009) Parallel neural networks for multimodal video genre classification. Multimed Tools Appl 41(1):125–159
  14. Multimedia Grand Challenge (2009, 2010)
  15. Quaero Programme website (2011)
  16. Saunders J (1996) Real-time discrimination of broadcast speech/music. In: Proc. of the Acoustics, Speech, and Signal Processing Conference, Washington, pp 993–996
  17. Song Y, Zhang Y, Zhang X, Cao J, Li J (2009) Google challenge: incremental-learning for web video categorization on robust semantic feature space. In: Proc. of ACM Multimedia, Beijing, China, pp 1113–1114
  18. Song Y, Zhao M, Yagnik J, Wu X (2010) Taxonomic classification for web-based videos. In: Proc. of Computer Vision and Pattern Recognition (CVPR), pp 871–878
  19. Stricker M, Orengo M (1995) Similarity of color images. In: Proc. SPIE Storage and Retrieval for Image and Video Databases, vol 2420, San Jose, USA, pp 381–392
  20. Swain MJ, Ballard DH (1991) Color indexing. Int J Comput Vis 7(1):11–32
  21. Talkin D (1995) A robust algorithm for pitch tracking (RAPT). In: Speech Coding & Synthesis, pp 495–518
  22. Tzanetakis G, Cook P (2002) Musical genre classification of audio signals. IEEE Trans Speech Audio Process 10(5):293–302
  23. Viola P, Jones MJ (2004) Robust real-time face detection. Int J Comput Vis 57(2):137–154
  24. VOICEBOX speech processing toolbox for MATLAB (2011)
  25. Wang J, Xu C, Chng E (2006) Automatic sports video genre classification using pseudo-2D-HMM. In: Proc. of Intl. Conf. on Pattern Recognition, Washington DC, USA, pp 778–781
  26. Wang Z, Zhao M, Song Y, Kumar S, Li B (2010) YouTubeCat: learning to categorize wild web videos. In: Proc. of Computer Vision and Pattern Recognition (CVPR), pp 879–886
  27. Wu T-F, Lin C-J, Weng RC (2004) Probability estimates for multi-class classification by pairwise coupling. J Mach Learn Res 5:975–1005
  28. Wu X, Zhao WL, Ngo CW (2009) Towards Google challenge: combining contextual and social information for web video categorization. In: Proc. of ACM Multimedia, Beijing, China, pp 1109–1110
  29. Yang L, Liu J, Yang X, Hua XS (2007) Multi-modality web video categorization. In: Proc. of Multimedia Information Retrieval, MIR '07, Augsburg, Germany, pp 265–274

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Institute of Anthropomatics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
