
Dynamic Eyes and Mouth Reinforced LBP Histogram Descriptors Based on Emotion Classification in Video Sequences

  • Chapter
  • First Online:
Smart Techniques for a Smarter Planet

Part of the book series: Studies in Fuzziness and Soft Computing ((STUDFUZZ,volume 374))

Abstract

Classifying emotions from face images is a challenging task in visual technology. Recent surveys have focused on capturing signatures from the whole face, yet the mouth and eyes are the most informative facial components for classifying emotions. This paper proposes an approach to emotion classification that uses dynamic eye and mouth signatures to achieve high performance in minimal time. First, each eye and mouth image extracted from the video sequence is divided into non-intersecting regions, and each region is further divided into small intersecting sub-regions. Dynamic reinforced local binary pattern (LBP) signatures are extracted from the eye and mouth sub-regions across subsequent frames, capturing the dynamic changes in eye and mouth appearance, respectively. In each sub-region, the dynamic eye and mouth signatures are normalized using the Z-score and then converted to binary signatures by thresholding. From the binary signatures of the pixels in each eye and mouth region, histogram signatures are computed, and the histograms from all regions are concatenated into a single enhanced signature. Finally, the discriminative dynamic signatures are classified into seven emotions using a multi-class AdaBoost classifier.
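The pipeline described in the abstract can be sketched in code. The following is a minimal illustration in Python/NumPy under simplifying assumptions, not the paper's actual implementation: it uses a plain 8-neighbour LBP over the Z-score-thresholded temporal difference between two frames, splits into non-intersecting row regions only (no overlapping sub-regions), and fixes the binarization threshold at zero. The function names `lbp_codes` and `dynamic_signature` are hypothetical.

```python
import numpy as np

def lbp_codes(img):
    # 8-neighbour local binary pattern codes for the interior pixels:
    # each neighbour >= centre contributes one bit to an 8-bit code
    h, w = img.shape
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def dynamic_signature(prev_frame, curr_frame, n_regions=4):
    # temporal difference captures the dynamic change between frames
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    # Z-score normalise, then threshold to a binary signature
    z = (diff - diff.mean()) / (diff.std() + 1e-8)
    binary = (z > 0).astype(np.uint8) * 255
    codes = lbp_codes(binary)
    # split into non-intersecting regions, histogram each, concatenate
    hists = []
    for region in np.array_split(codes, n_regions, axis=0):
        hist, _ = np.histogram(region, bins=256, range=(0, 256))
        hists.append(hist)
    return np.concatenate(hists)
```

For a 32x32 eye or mouth patch and 4 regions, `dynamic_signature` returns a 1024-dimensional vector (4 regions x 256 histogram bins). The concatenated signatures from all eye and mouth regions would then be fed to a multi-class AdaBoost classifier to predict one of the seven emotions, as the abstract describes.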




Correspondence to Ithaya Rani Panneer Selvam.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Panneer Selvam, I.R., Hari Prasath, T. (2019). Dynamic Eyes and Mouth Reinforced LBP Histogram Descriptors Based on Emotion Classification in Video Sequences. In: Mishra, M., Mishra, B., Patel, Y., Misra, R. (eds) Smart Techniques for a Smarter Planet. Studies in Fuzziness and Soft Computing, vol 374. Springer, Cham. https://doi.org/10.1007/978-3-030-03131-2_10
