Video Affective Content Analysis Based on Protagonist via Convolutional Neural Network

  • Conference paper
Advances in Multimedia Information Processing - PCM 2016 (PCM 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9916)

Abstract

Affective recognition is an important and challenging task in video content analysis. Affective information in videos is closely related to viewers’ feelings and emotions, so video affective content analysis has great potential value. However, most previous methods focus on how to effectively extract features from videos for affective analysis, and several issues remain worth investigating: for example, what information is used to express emotions in videos, and which information actually affects the audience’s emotions. Taking these issues into account, in this paper we propose a new video affective content analysis method based on protagonist information via a Convolutional Neural Network (CNN). The proposed method is evaluated on the largest video emotion dataset and compared with previous work. The experimental results show that our protagonist-based affective analysis method achieves the best performance in emotion classification and prediction.
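
Since the full text is not reproduced here, the following is a minimal, assumption-based sketch of what a protagonist-centred CNN pipeline could look like: the largest detected face in each sampled frame stands in as a proxy for the protagonist, a small CNN classifies the cropped faces, and frame-level predictions are averaged into a video-level one. The face-detection proxy, the EmotionCNN architecture, and the two-class valence output are all hypothetical illustrations, not the authors' actual method.

```python
# Illustrative sketch only: the protagonist proxy (largest face), the
# architecture, and the two-class valence output are assumptions, not
# the method described in the paper.
import cv2
import torch
import torch.nn as nn

def extract_protagonist_crops(video_path, crop_size=224, max_frames=64):
    """Proxy for 'protagonist information': keep the largest detected face
    per frame, assuming the protagonist dominates screen time."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    crops = []
    while len(crops) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        # Largest bounding box is taken as the protagonist for this frame.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        face = cv2.resize(frame[y:y + h, x:x + w], (crop_size, crop_size))
        crops.append(torch.from_numpy(face).permute(2, 0, 1).float() / 255.0)
    cap.release()
    return torch.stack(crops) if crops else None

class EmotionCNN(nn.Module):
    """Small CNN classifier; the paper's actual model may differ."""
    def __init__(self, n_classes=2):  # e.g. positive vs. negative valence
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Video-level prediction: average the per-frame logits over all crops.
# crops = extract_protagonist_crops("movie_clip.mp4")
# logits = EmotionCNN()(crops).mean(dim=0)
```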



Acknowledgement

This work was supported by the National Natural Science Foundation of China (No. 61502311, No. 61373122), the Natural Science Foundation of Guangdong Province (No. 2016A030310053, No. 2016A030313043), the Science and Technology Innovation Commission Foundation of Shenzhen (No. JCYJ20150324141711640, No. JCYJ20150324141711630), the Strategic Emerging Industry Research Foundation of Shenzhen (No. JCYJ20160226191842793), the Strategic Emerging Industry Development Foundation of Shenzhen (No. JCY20130326105637578), the Shenzhen University Research Funding (201535), NSFC-Guangdong Joint Fund for supercomputing application (Stage II), the National Supercomputing Center in Guangzhou (No. NSFC2015_275), and the Tencent Rhinoceros Birds Scientific Research Foundation (2015).

Author information

Corresponding author

Correspondence to Sheng-hua Zhong.

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Zhu, Y., Jiang, Z., Peng, J., Zhong, Sh. (2016). Video Affective Content Analysis Based on Protagonist via Convolutional Neural Network. In: Chen, E., Gong, Y., Tie, Y. (eds) Advances in Multimedia Information Processing - PCM 2016. PCM 2016. Lecture Notes in Computer Science, vol 9916. Springer, Cham. https://doi.org/10.1007/978-3-319-48890-5_17

  • DOI: https://doi.org/10.1007/978-3-319-48890-5_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-48889-9

  • Online ISBN: 978-3-319-48890-5

  • eBook Packages: Computer Science (R0)
