Global Affective Video Content Regression Based on Complementary Audio-Visual Features

  • Conference paper
MultiMedia Modeling (MMM 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11962)

Abstract

In this paper, we propose a new framework for global affective video content regression based on five complementary audio-visual features. For the audio modality, we select the global audio feature eGeMAPS and two deep features, SoundNet and VGGish. For the visual modality, key frames of the original images and of the optical flow images are both used to extract VGG-19 features with fine-tuned models, so as to represent the original visual cues together with motion information. In the experiments, we evaluate the selected audio and visual features on the dataset of the Emotional Impact of Movies Task 2016 (EIMT16), and compare our results with those of the competitive teams in EIMT16 and with the state-of-the-art method. The experimental results show that fusing the five features achieves better regression results in both the arousal and valence dimensions, indicating that the five selected features complement each other across the audio-visual modalities. Furthermore, the proposed approach outperforms the state-of-the-art method on both evaluation metrics, MSE and PCC, in the arousal dimension, and achieves comparable MSE results in the valence dimension. Although our approach obtains a slightly lower PCC than the state-of-the-art method in the valence dimension, the fused feature vector used in our framework has a much lower dimensionality, 1752 in total, only about five-thousandths of the feature dimensionality used by the state-of-the-art method, which greatly reduces the memory requirements and computational burden.
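To make the described pipeline concrete, below is a minimal sketch of late fusion over five feature blocks followed by regression and MSE/PCC evaluation, assuming precomputed per-clip features. Only the fused total of 1752 dimensions is stated in the abstract; eGeMAPS (88-dim) and VGGish (128-dim) sizes are standard, while the SoundNet and VGG-19 block sizes below are hypothetical choices that make the blocks sum to 1752, and scikit-learn's SVR is a stand-in for whichever regressor the paper actually employs.

```python
# Minimal sketch: late fusion of five audio-visual feature blocks, then
# regression and MSE/PCC evaluation. Random arrays stand in for real
# features from eGeMAPS, SoundNet, VGGish, and fine-tuned VGG-19 applied
# to key frames of RGB and optical-flow images. Block sizes other than
# eGeMAPS (88) and VGGish (128) are assumptions chosen so the fused
# vector totals 1752; SVR is a stand-in regressor, not the paper's own.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_train, n_test = 200, 50

dims = {"eGeMAPS": 88, "SoundNet": 512, "VGGish": 128,
        "VGG19_rgb": 512, "VGG19_flow": 512}   # 88+512+128+512+512 = 1752

def fused_features(n):
    # Concatenate the per-modality blocks into one 1752-dim vector per clip.
    return np.hstack([rng.normal(size=(n, d)) for d in dims.values()])

X_train, X_test = fused_features(n_train), fused_features(n_test)
y_train = rng.uniform(-1.0, 1.0, n_train)      # e.g. valence annotations
y_test = rng.uniform(-1.0, 1.0, n_test)

model = SVR(kernel="rbf").fit(X_train, y_train)
pred = model.predict(X_test)

print("MSE:", mean_squared_error(y_test, pred))  # lower is better
print("PCC:", pearsonr(y_test, pred)[0])         # higher is better
```

With real features, each block would typically be standardized or L2-normalized before concatenation, and the same fused vector would be regressed separately for the arousal and valence dimensions.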

This work is supported by the National Natural Science Foundation of China under Grant Nos. 61801440 and 61631016, and the Fundamental Research Funds for the Central Universities under Grant Nos. 2018XNG1824 and YLSZ180226.

Author information

Correspondence to Wei Zhong.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Guo, X., Zhong, W., Ye, L., Fang, L., Heng, Y., Zhang, Q. (2020). Global Affective Video Content Regression Based on Complementary Audio-Visual Features. In: Ro, Y., et al. (eds) MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, vol 11962. Springer, Cham. https://doi.org/10.1007/978-3-030-37734-2_44

  • DOI: https://doi.org/10.1007/978-3-030-37734-2_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37733-5

  • Online ISBN: 978-3-030-37734-2

  • eBook Packages: Computer Science (R0)
