Two-Level Attention with Multi-task Learning for Facial Emotion Estimation

  • Xiaohua Wang
  • Muzi Peng
  • Lijuan Pan
  • Min Hu (corresponding author)
  • Chunhua Jin
  • Fuji Ren
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11295)


The Valence-Arousal model can represent complex human emotions, including subtle changes of emotion. Most prior work on facial emotion estimation considered only laboratory data and relied on video, speech, or other multi-modal features; how these methods perform on static images in the real world is unknown. In this paper, a two-level attention with multi-task learning (MTL) framework is proposed for facial emotion estimation on static images. A first-level attention mechanism automatically extracts and enhances the features of the corresponding facial regions, and a dedicated structure then processes these features. Next, a Bi-directional Recurrent Neural Network (Bi-RNN) with self-attention (second-level attention) adaptively exploits the relationships among these features, combining global and local information. In addition, MTL is used to estimate valence and arousal simultaneously, exploiting the correlation between the two tasks. Quantitative results on the AffectNet dataset demonstrate the superiority of the proposed framework, and extensive experiments analyze the effectiveness of its individual components.


Keywords: Facial emotion estimation · Attention mechanism · Multi-task learning
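The second-level attention and the multi-task output described in the abstract can be sketched in plain Python. This is an illustrative simplification, not the authors' implementation: it uses a single attention head with the region features serving as queries, keys, and values, and a hypothetical fixed weight `alpha` to combine the valence and arousal losses.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def self_attention(features):
    """Scaled dot-product self-attention over region feature vectors.

    `features` is a list of equal-length vectors (one per facial region).
    Queries, keys, and values are the features themselves -- a simplified
    stand-in for the paper's second-level attention layer.
    """
    d = len(features[0])
    out = []
    for q in features:
        # Attention scores of this region against every region, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in features]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, features))
                    for j in range(d)])
    return out


def multitask_loss(pred_v, pred_a, true_v, true_a, alpha=0.5):
    """Joint squared-error loss for valence and arousal.

    `alpha` is an assumed trade-off weight; the point is only that both
    dimensions are optimized by one objective, as in multi-task learning.
    """
    return alpha * (pred_v - true_v) ** 2 + (1 - alpha) * (pred_a - true_a) ** 2
```

With identical region features the attention weights are uniform and the output equals the input, which is a quick sanity check that the weighting sums to one.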



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Xiaohua Wang (1, 2)
  • Muzi Peng (1)
  • Lijuan Pan (1)
  • Min Hu (1, corresponding author)
  • Chunhua Jin (2)
  • Fuji Ren (1, 3)
  1. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
  2. The Laboratory for Internet of Things and Mobile Internet Technology of Jiangsu Province, Huaiyin Institute of Technology, Huai'an, China
  3. Faculty of Engineering, University of Tokushima, Tokushima, Japan
