Analysis, Interpretation, and Recognition of Facial Action Units and Expressions Using Neuro-Fuzzy Modeling

  • Mahmoud Khademi
  • Mohammad Hadi Kiapour
  • Mohammad T. Manzuri-Shalmani
  • Ali A. Kiaei
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5998)

Abstract

In this paper, an accurate real-time sequence-based system for representation, recognition, interpretation, and analysis of facial action units (AUs) and expressions is presented. Our system has the following characteristics: 1) employing adaptive-network-based fuzzy inference systems (ANFIS) and temporal information, we develop a classification scheme based on neuro-fuzzy modeling of AU intensity that is robust to intensity variations; 2) using both geometric and appearance-based features, and applying efficient dimension reduction techniques, our system is robust to illumination changes and can capture the subtle changes as well as the temporal information involved in the formation of facial expressions; and 3) using continuous intensity values and top-down hierarchical rule-based classifiers, we can build accurate, human-interpretable AU-to-expression converters. Extensive experiments on the Cohn-Kanade database show the superiority of the proposed method over support vector machine, hidden Markov model, and neural network classifiers.
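
To make the pipeline described above concrete, the sketch below (Python) illustrates the kind of computation involved: a first-order Takagi-Sugeno inference step of the sort an ANFIS network performs to estimate a continuous AU intensity from feature values, followed by a single threshold rule mapping AU intensities to an expression label. This is an illustrative sketch only, not the authors' implementation; the membership-function parameters, consequent coefficients, feature values, and the AU6/AU12-to-happiness rule are hypothetical placeholders.

    # Minimal, illustrative sketch of the two stages described in the abstract.
    # NOT the authors' implementation: all parameters and the single
    # AU-to-expression rule below are hypothetical placeholders.

    import numpy as np

    def gauss_mf(x, center, sigma):
        # Gaussian membership function used in the rule premises
        return np.exp(-0.5 * ((x - center) / sigma) ** 2)

    def sugeno_au_intensity(features, rules):
        # First-order Takagi-Sugeno inference (the forward pass an ANFIS
        # network computes): linear rule consequents averaged with weights
        # given by the product of premise membership degrees.
        firing, consequents = [], []
        for rule in rules:
            w = np.prod([gauss_mf(x, c, s)
                         for x, (c, s) in zip(features, rule["mfs"])])
            y = float(np.dot(rule["coeffs"], features) + rule["bias"])
            firing.append(w)
            consequents.append(y)
        firing = np.asarray(firing)
        return float(np.dot(firing, consequents) / (firing.sum() + 1e-12))

    # Two hypothetical rules over two (geometric/appearance) feature values.
    rules = [
        {"mfs": [(0.2, 0.1), (0.3, 0.15)], "coeffs": [0.5, 0.4], "bias": 0.0},
        {"mfs": [(0.8, 0.2), (0.7, 0.2)],  "coeffs": [0.9, 0.8], "bias": 0.1},
    ]

    # Continuous AU intensities produced by (hypothetical) per-AU models.
    au_intensity = {
        "AU6":  sugeno_au_intensity(np.array([0.75, 0.70]), rules),  # cheek raiser
        "AU12": sugeno_au_intensity(np.array([0.80, 0.65]), rules),  # lip corner puller
    }

    # One illustrative top-down rule of an AU-to-expression converter:
    # strong AU6 together with strong AU12 is read as "happiness".
    if au_intensity["AU6"] > 0.5 and au_intensity["AU12"] > 0.5:
        expression = "happiness"
    else:
        expression = "other / further rules needed"

    print(au_intensity, expression)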

Keywords

biased discriminant analysis (BDA) · classifier design and evaluation · facial action units (AUs) · hybrid learning · neuro-fuzzy modeling

References

  1. Mehrabian, A.: Communication without words. Psychology Today 2(4), 53–56 (1968)
  2. Pantic, M., Rothkrantz, L.J.M.: Automatic analysis of facial expressions: the state of the art. IEEE Transactions on PAMI 22(12) (2000)
  3. Fasel, B., Luettin, J.: Automatic facial expression analysis: a survey. Pattern Recognition 36(1), 259–275 (2003)
  4. Lyons, M., Akamatsu, S., Kamachi, M., Gyoba, J.: Coding facial expressions with Gabor wavelets. In: 3rd IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 200–205 (1998)
  5. Cohen, I., Sebe, N., Cozman, F., Cirelo, M., Huang, T.: Coding, analysis, interpretation, and recognition of facial expressions. Journal of Computer Vision and Image Understanding, Special Issue on Face Recognition (2003)
  6. Rosenblum, M., Yacoob, Y., Davis, L.: Human expression recognition from motion using a radial basis function network architecture. IEEE Transactions on Neural Networks 7(5), 1121–1138 (1996)
  7. Cohn, J., Kanade, T., Moriyama, T., Ambadar, Z., Xiao, J., Gao, J., Imamura, H.: A comparative study of alternative FACS coding algorithms. Technical Report CMU-RI-TR-02-06, Robotics Institute, Carnegie Mellon University, Pittsburgh (2001)
  8. Ekman, P., Friesen, W.: The facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press, San Francisco (1978)
  9. Tian, Y., Kanade, T., Cohn, J.: Recognizing action units for facial expression analysis. IEEE Transactions on PAMI 23(2) (2001)
  10. Zhou, X.S., Huang, T.S.: Small sample learning during multimedia retrieval using BiasMap. In: IEEE Int. Conf. on Computer Vision and Pattern Recognition, Hawaii (2001)
  11. Lu, Y., Yu, J., Sebe, N., Tian, Q.: Two-dimensional adaptive discriminant analysis. In: IEEE Int. Conf. on Acoustics, Speech and Signal Processing, vol. 1, pp. 985–988 (2007)
  12. Yang, J., Zhang, D., Frangi, A.F., Yang, J.: Two-dimensional PCA: A new approach to appearance-based face representation and recognition. IEEE Transactions on PAMI 26(1), 131–137 (2004)
  13. Kotsia, I., Pitas, I.: Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Transactions on Image Processing 16(1) (2007)
  14. Wiskott, L., Fellous, J.-M., Krüger, N., von der Malsburg, C.: Face recognition by elastic bunch graph matching. IEEE Transactions on PAMI 19(7), 775–779 (1997)
  15. Bouguet, J.: Pyramidal implementation of the Lucas-Kanade feature tracker: description of the algorithm. Technical Report, Intel Corporation, Microprocessor Research Labs (1999)
  16. Jang, J.-S.R.: ANFIS: Adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man and Cybernetics 23(3), 665–685 (1993)
  17. Quinlan, J.R.: C4.5: Programs for machine learning. Morgan Kaufmann Publishers, San Mateo (1993)
  18. Kanade, T., Cohn, J., Tian, Y.: Comprehensive database for facial expression analysis. In: IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 46–53 (2000)
  19. Bartlett, M., Braathen, B., Littlewort-Ford, G., Hershey, J., Fasel, I., Marks, T., Smith, E., Sejnowski, T., Movellan, J.: Automatic analysis of spontaneous facial behavior: A final project report. Technical Report INC-MPLab-TR-2001.08, UCSD (2001)
  20. Tian, Y., Kanade, T., Cohn, J.: Evaluation of Gabor-wavelet-based facial action unit recognition in image sequences of increasing complexity. In: IEEE Int. Conf. on Automatic Face and Gesture Recognition (2002)
  21. Takagi, T., Sugeno, M.: Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man and Cybernetics 15(1) (1985)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Mahmoud Khademi (1)
  • Mohammad Hadi Kiapour (2)
  • Mohammad T. Manzuri-Shalmani (1)
  • Ali A. Kiaei (1)
  1. DSP Lab, Sharif University of Technology, Tehran, Iran
  2. Institute for Studies in Fundamental Sciences (IPM), Tehran, Iran
