
Distance-Texture Signature Duo for Determination of Human Emotion

  • Chapter

Part of the book series: Cognitive Intelligence and Robotics (CIR)

Abstract

The preceding Chaps. 2, 3, and 4 presented three feature descriptors individually, namely the distance signature, the shape signature, and the texture signature, while Chap. 5 considered a combined descriptor, the distance-shape (D-S) signature duo. The present chapter explores the distance-texture (D-T) signature duo, in which the respective stability indices and statistical measures supplement the signature features with a view to enhancing the performance of facial expression classification. The incorporation of these supplementary features is duly justified through an extensive study and analysis of the results obtained.
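The abstract already outlines the pipeline: an individual distance signature and an individual texture signature are fused, supplemented with statistical measures, and passed to a classifier. Below is a minimal illustrative sketch of that general idea, not the chapter's actual implementation: the helper names (distance_signature, texture_signature, statistical_supplement), the choice of an LBP histogram as the texture descriptor, the moment-based statistics, and the SVM classifier are all assumptions made for demonstration, and the demo runs on synthetic data rather than on face images.

# Minimal sketch (assumed design, not the authors' method): fuse a landmark-distance
# signature with an LBP texture signature, supplement both with simple statistical
# measures, and classify. All helper names here are illustrative.
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import local_binary_pattern
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def distance_signature(landmarks, reference_idx=0):
    # Normalized Euclidean distances from a reference landmark to all landmarks.
    d = np.linalg.norm(landmarks - landmarks[reference_idx], axis=1)
    return d / (d.max() + 1e-8)

def texture_signature(gray_patch, points=8, radius=1):
    # Histogram of uniform local binary patterns over a grayscale face patch.
    lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def statistical_supplement(signature):
    # Stand-in for the chapter's "statistical measures": four basic moments.
    return np.array([signature.mean(), signature.std(),
                     skew(signature), kurtosis(signature)])

def duo_feature_vector(landmarks, gray_patch):
    # Concatenate both signatures with their statistical supplements.
    d = distance_signature(landmarks)
    t = texture_signature(gray_patch)
    return np.concatenate([d, statistical_supplement(d),
                           t, statistical_supplement(t)])

# Demonstration on synthetic inputs; a real system would extract facial landmarks
# and face patches from expression images (e.g., from CK+, JAFFE, MMI, or MUG).
rng = np.random.default_rng(0)
X = np.stack([duo_feature_vector(rng.random((68, 2)) * 100.0,
                                 rng.integers(0, 256, (48, 48)).astype(np.uint8))
              for _ in range(200)])
y = rng.integers(0, 6, 200)  # six basic emotion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("demo accuracy on random data:", clf.score(X_te, y_te))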

Author information

Correspondence to Paramartha Dutta.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Dutta, P., Barman, A. (2020). Distance-Texture Signature Duo for Determination of Human Emotion. In: Human Emotion Recognition from Face Images. Cognitive Intelligence and Robotics. Springer, Singapore. https://doi.org/10.1007/978-981-15-3883-4_6


  • DOI: https://doi.org/10.1007/978-981-15-3883-4_6

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-3882-7

  • Online ISBN: 978-981-15-3883-4

  • eBook Packages: Computer Science, Computer Science (R0)
