Human Emotion Recognition Using Combination of Shape-Texture Signature Feature

  • Paramartha Dutta
  • Asit Barman
Chapter
Part of the Cognitive Intelligence and Robotics book series (CIR)

Abstract

Chapters 2, 3, and 4 introduced the individual feature descriptors, namely the distance signature, shape signature, and texture signature, while Chaps. 5 and 6 considered the combined descriptors distance-shape (D-S) and distance-texture (D-T). This chapter develops the Shape-Texture (S-T) signature, in which the respective stability indices and statistical measures supplement the signature features with a view to enhancing facial expression classification performance. The inclusion of these supplementary features is justified through an extensive experimental study and analysis of the results obtained.
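As an illustration only, the combination described above might be sketched as follows. The chapter's exact construction is not reproduced here: the choice of statistical measures (mean, standard deviation, skewness, kurtosis), the signature lengths, and the function names are all assumptions made for this sketch.

```python
import numpy as np

def statistical_measures(signature: np.ndarray) -> np.ndarray:
    """Supplementary statistical measures for a signature vector.

    Mean, standard deviation, skewness, and kurtosis are assumed
    here as illustrative examples; the book's own measures may differ.
    """
    mean = signature.mean()
    std = signature.std()
    centered = signature - mean
    skew = (centered ** 3).mean() / (std ** 3 + 1e-12)
    kurt = (centered ** 4).mean() / (std ** 4 + 1e-12)
    return np.array([mean, std, skew, kurt])

def shape_texture_feature(shape_sig: np.ndarray,
                          texture_sig: np.ndarray) -> np.ndarray:
    """Form a combined S-T feature vector: both signatures plus
    their supplementary statistical measures, concatenated."""
    return np.concatenate([
        shape_sig,
        texture_sig,
        statistical_measures(shape_sig),
        statistical_measures(texture_sig),
    ])

# Placeholder signatures with arbitrary lengths for demonstration.
rng = np.random.default_rng(0)
shape_sig = rng.random(10)
texture_sig = rng.random(59)
feat = shape_texture_feature(shape_sig, texture_sig)
print(feat.shape)  # (77,)
```

The resulting feature vector would then be fed to a classifier (e.g. an MLP or SVM) trained on facial expression labels; the classifier choice is not fixed by this sketch.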

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Computer and Systems Sciences, Visva-Bharati University, Santiniketan, India
  2. Department of Computer Science and Engineering and Information Technology, Siliguri Institute of Technology, Siliguri, India
