Detecting Happiness in Human Face Using Minimal Feature Vectors

  • Manoj Prabhakaran Kumar
  • Manoj Kumar Rajagopal
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 490)

Abstract

Estimating human emotion from the face is more effective than other modes of emotion extraction owing to its robustness, high accuracy and better efficiency. This paper proposes detecting happiness in the human face using minimal facial feature vectors derived from a geometric deformable model and a supervised classifier. First, face detection and tracking are performed with a constrained local model (CLM). From the CLM grid nodes, facial feature extraction yields the displacements of both the entire and the minimal feature vectors. The minimal feature vectors, rather than the entire feature set, are used for detecting happiness in order to improve accuracy. Facial animation parameters (FAPs) identify the facial feature movements that form the feature vector displacements. The feature vector displacements are fed to a supervised bilinear support vector machine (SVM) classifier to detect happiness in frontal face image sequences. This paper focuses on minimal feature vectors of happiness (frontal face) in both the training and testing phases. The MMI facial expression database is used for training, and real-time data are used for testing. As a result, an overall happiness-detection accuracy of 91.66% is achieved using minimal feature vectors.
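To make the pipeline concrete, the sketch below illustrates how a displacement feature vector could be formed from tracked landmarks and classified. It is a minimal illustration, not the authors' implementation: the landmark indices, the inter-ocular normalisation, and scikit-learn's linear SVC (standing in for the paper's bilinear SVM over CLM/FAP-derived features) are all assumptions.

    # Minimal sketch, not the paper's implementation: form a displacement
    # feature vector from tracked CLM landmarks and classify it with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical subset of grid nodes around the mouth (68-point
    # convention), standing in for the paper's FAP-guided minimal features.
    MINIMAL_POINTS = [48, 51, 54, 57, 62, 66]

    def displacement_vector(neutral, apex, points=MINIMAL_POINTS):
        """Stack (dx, dy) landmark displacements between the neutral frame
        and the expression apex frame of a sequence.

        neutral, apex: (n_landmarks, 2) arrays of CLM landmark coordinates.
        """
        d = apex[points] - neutral[points]
        # Normalise by inter-ocular distance (outer eye corners 36, 45) for
        # scale invariance; the paper may normalise differently.
        iod = max(np.linalg.norm(neutral[45] - neutral[36]), 1e-8)
        return (d / iod).ravel()

    def train_happiness_svm(X, y):
        """X: rows of displacement vectors; y: 1 for happiness, 0 otherwise."""
        clf = SVC(kernel="linear")  # linear stand-in for the bilinear SVM
        clf.fit(X, y)
        return clf

A trained classifier would then label a new sequence with clf.predict([displacement_vector(neutral, apex)]).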

Keywords

Constrained local model (CLM) · Facial animation parameters (FAPs) · Minimal feature vector displacement · Support vector machines (SVMs)

Notes

Acknowledgements

The authors would like to thank their research colleagues at Vellore Institute of Technology, Chennai, for the real-time dataset.

References

  1. Kollias S, Karpouzis K (2005) Multimodal emotion recognition and expressivity analysis. In: 2005 IEEE international conference on multimedia and expo. IEEE, pp 779–783
  2. Ekman P, Sorenson ER, Friesen WV et al (1969) Pan-cultural elements in facial displays of emotion. Science 164(3875):86–88
  3. Mase K (1991) Recognition of facial expression from optical flow. IEICE Trans Inf Syst 74(10):3474–3483
  4. Samal A, Iyengar PA (1992) Automatic recognition and analysis of human faces and facial expressions: a survey. Pattern Recogn 25(1):65–77
  5. Bartlett MS et al (2003) Real time face detection and facial expression recognition: development and applications to human computer interaction. In: Conference on computer vision and pattern recognition workshop (CVPRW'03), vol 5. IEEE, pp 53–53
  6. Lucey P et al (2010) The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE computer society conference on computer vision and pattern recognition workshops. IEEE, pp 94–101
  7. Kotsia I, Pitas I (2007) Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans Image Process 16(1):172–187
  8. Zhang Y et al (2008) Dynamic facial expression analysis and synthesis with MPEG-4 facial animation parameters. IEEE Trans Circuits Syst Video Technol 18(10):1383–1396
  9. Pantic M, Patras I (2006) Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences. IEEE Trans Syst Man Cybern Part B (Cybern) 36(2):433–449
  10. Okada T, Takiguchi T, Ariki Y (2010) Video searching system based on human face identification and facial expression recognition using MSM and AAM. Far East J Electron Commun 4(1):41–48
  11. Tian Y-L et al (2001) Recognizing action units for facial expression analysis. IEEE Trans Pattern Anal Mach Intell 23(2):97–115
  12. Tekalp AM, Ostermann J (2000) Face and 2-D mesh animation in MPEG-4. Signal Process Image Commun 15(4):387–421
  13. Salam H (2013) Multi-object modelling of the face. Ph.D. thesis, Supelec
  14. Saragih JM et al (2011) Deformable model fitting by regularized landmark mean-shift. Int J Comput Vis 91(2):200–215
  15. Ventura D (2009) SVM example. Lecture notes, Mar 2009
  16. Vapnik VN (1998) Statistical learning theory, vol 1. Wiley, New York
  17. Cristinacce D, Cootes TF (2006) Feature detection and tracking with constrained local models. In: BMVC, vol 1, p 3
  18. Cheng Y (1995) Mean shift, mode seeking, and clustering. IEEE Trans Pattern Anal Mach Intell 17(8):790–799
  19. Valstar M, Pantic M (2010) Induced disgust, happiness and surprise: an addition to the MMI facial expression database. In: Proceedings of the 3rd international workshop on EMOTION (satellite of LREC): corpora for research on emotion and affect, p 65

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Manoj Prabhakaran Kumar¹
  • Manoj Kumar Rajagopal¹

  1. School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
