
Pose-Invariant Facial Expression Recognition Using Variable-Intensity Templates

  • Shiro Kumano
  • Kazuhiro Otsuka
  • Junji Yamato
  • Eisaku Maeda
  • Yoichi Sato
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4843)

Abstract

In this paper, we propose a method for pose-invariant facial expression recognition from monocular video sequences. Unlike existing methods, ours uses a very simple model, called the variable-intensity template, to describe facial expressions, so a model can be prepared for each person with little time and effort. Variable-intensity templates describe how the intensities of multiple points, defined in the vicinity of facial parts, vary across facial expressions. By using this model in the framework of a particle filter, our method estimates facial pose and expression simultaneously. Experiments demonstrate the effectiveness of our method: a recognition rate of over 90% was achieved for horizontal facial orientations within ±40 degrees of the frontal view.
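The abstract describes two components: a variable-intensity template, which stores per-expression statistics of the image intensity at interest points defined near facial parts, and a particle filter whose state jointly carries a head pose and an expression label. The Python sketch below is only a rough illustration of how such a loop might be organized under those assumptions; the function names, template statistics, noise magnitudes, and expression-switching rate are all hypothetical, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code) of joint pose/expression
# estimation with a particle filter scored against a hypothetical
# variable-intensity template.

N_PARTICLES = 500
N_POINTS = 100                 # interest points near facial parts
EXPRESSIONS = ["neutral", "happiness", "surprise", "anger", "sadness"]

rng = np.random.default_rng(0)

# Variable-intensity template: expected intensity (and variance) at each
# interest point, per expression; assumed learned beforehand from a few
# example images of the target person.
template_mean = rng.random((len(EXPRESSIONS), N_POINTS))
template_var = np.full((len(EXPRESSIONS), N_POINTS), 0.05)

def sample_intensities(frame, pose):
    """Project the interest points into `frame` under `pose` and read the
    image intensities there (placeholder: returns random values)."""
    return rng.random(N_POINTS)

def log_likelihood(obs, expr):
    """Gaussian log-likelihood of the observed intensities under the
    variable-intensity template of expression hypothesis `expr`."""
    d = obs - template_mean[expr]
    return -0.5 * np.sum(d * d / template_var[expr])

def step(frame, poses, exprs):
    """One filtering step: predict, weight, resample."""
    # Predict: diffuse each particle's 6-DOF pose with Gaussian noise and
    # let a small fraction of particles switch their expression label.
    poses = poses + rng.normal(0.0, 0.02, poses.shape)
    exprs = exprs.copy()
    flip = rng.random(N_PARTICLES) < 0.05
    exprs[flip] = rng.integers(len(EXPRESSIONS), size=flip.sum())

    # Weight: score each (pose, expression) hypothesis by how well its
    # expression's template explains the intensities seen under its pose.
    logw = np.array([log_likelihood(sample_intensities(frame, p), e)
                     for p, e in zip(poses, exprs)])
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Resample to focus particles on likely (pose, expression) pairs.
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)
    return poses[idx], exprs[idx]

# Usage: start particles from a face-detector pose and a neutral label,
# then call step() once per frame. The recognized expression is the label
# carried by the most particles; the pose estimate is the particle mean.
poses = np.zeros((N_PARTICLES, 6))
exprs = np.zeros(N_PARTICLES, dtype=int)
poses, exprs = step(frame=None, poses=poses, exprs=exprs)
print(EXPRESSIONS[np.bincount(exprs).argmax()])
```

Coupling the discrete expression label into the particle state is what lets pose and expression be estimated simultaneously rather than in separate stages, which is the point the abstract emphasizes.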

Keywords

Facial Expression · Video Sequence · Recognition Rate · Face Image · Interest Point

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Shiro Kumano (1)
  • Kazuhiro Otsuka (2)
  • Junji Yamato (2)
  • Eisaku Maeda (2)
  • Yoichi Sato (1)

  1. Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
  2. NTT Communication Science Laboratories, NTT, 3-1 Morinosato-Wakamiya, Atsugi-shi, Kanagawa 243-0198, Japan