Facial Expression Recognition Based on Quaternion-Space and Multi-features Fusion

  • Yong Yang
  • Shubo Cai
  • Qinghua Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9436)

Abstract

There is an increasing trend toward using feature fusion techniques in facial expression recognition. However, traditional serial or parallel feature fusion methods suffer from high feature dimensionality and insufficient fusion of the available feature categories. To address these problems, a novel facial expression recognition method based on quaternion space and multi-feature fusion is proposed. First, four kinds of expression features are extracted, namely Gabor wavelet, LBP, LPQ and DCT features, and a PCA+CCA framework is proposed to reduce the dimensionality of the four original features. Second, quaternions are used to construct the combinative features. Third, a novel quaternion-space HDA method is proposed and used to reduce the dimensionality of the combinative features. Finally, an SVM is used as the classifier. Experimental results indicate that the proposed method fuses the four kinds of features more effectively and achieves higher recognition rates than traditional feature fusion methods.

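The following is a minimal sketch of the pipeline described above, not the authors' implementation: synthetic random matrices stand in for the Gabor wavelet, LBP, LPQ and DCT features, plain PCA stands in for the proposed PCA+CCA stage, the four quaternion components are simply stacked into one real-valued vector, and ordinary real-valued LDA replaces the proposed quaternion-space HDA; only the final SVM classifier matches the paper directly. All array sizes and parameter values are illustrative assumptions.

```python
# Minimal sketch of the fusion pipeline, with stand-ins for the paper's
# components (see the note above). Requires numpy and scikit-learn.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_classes = 210, 7      # e.g. seven basic expressions
y = rng.integers(0, n_classes, n_samples)

# Placeholder feature matrices; in the real system these would come from
# Gabor wavelet, LBP, LPQ and DCT extractors applied to face images.
# The feature dimensions are illustrative assumptions.
raw_features = [rng.normal(size=(n_samples, d)) for d in (1024, 256, 256, 64)]

# Stage 1: reduce each feature set to a common dimension k.
# (PCA alone here; the paper uses a PCA+CCA framework for this stage.)
k = 32
reduced = [PCA(n_components=k).fit_transform(F) for F in raw_features]

# Stage 2: build the combinative feature. Conceptually each sample becomes a
# length-k quaternion vector q = f_gabor + f_lbp*i + f_lpq*j + f_dct*k; here
# the four components are simply stacked into one real vector of length 4k.
fused = np.concatenate(reduced, axis=1)

# Stage 3: discriminative reduction of the fused feature
# (real-valued LDA as a stand-in for the paper's quaternion-space HDA).
X_tr, X_te, y_tr, y_te = train_test_split(fused, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit(X_tr, y_tr)

# Stage 4: SVM classification, as in the paper.
clf = SVC(kernel="rbf").fit(lda.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(lda.transform(X_te), y_te))
```

The point of the sketch is the data flow (per-feature dimensionality reduction, quaternion-style packing of the four reduced vectors, a second discriminative reduction of the fused representation, SVM classification) rather than the accuracy it prints, which is chance level on random data.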
Keywords

Facial expression recognition · Multi-features fusion · Quaternion · Dimensional reduction · Quaternion-space HDA

Copyright information

© Springer International Publishing Switzerland 2015

Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, People’s Republic of China
  2. School of Information and Communication Engineering, Inha University, Incheon, Korea
