
Extracting Facial Motion Parameters by Tracking Feature Points

  • Takahiro Otsuka
  • Jun Ohya
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1554)

Abstract

A method for extracting facial motion parameters is proposed. The method consists of three steps. First, feature points of the face, selected automatically in the first frame, are tracked through successive frames. Second, the feature points are connected by Delaunay triangulation so that the motion of each point relative to its surrounding points can be computed. Finally, muscle motions are estimated from the motions of the feature points located near each muscle. Experiments show that the proposed method extracts facial motion parameters accurately. In addition, the extracted parameters are used to render a facial animation sequence.
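As a rough sketch, the first two steps of this pipeline map naturally onto standard computer-vision primitives: Shi-Tomasi corner selection [10], pyramidal Lucas-Kanade tracking, and a Delaunay triangulation over the tracked points. The snippet below is an illustrative approximation using OpenCV, not the authors' implementation; the input file name and the corner-detection thresholds are hypothetical, and the per-muscle estimation of the third step is only indicated.

```python
import cv2
import numpy as np

# Illustrative sketch of steps 1 and 2 (assumed OpenCV primitives,
# not the paper's original code).

cap = cv2.VideoCapture("face_sequence.mp4")   # hypothetical input sequence
ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Step 1: select feature points automatically in the first frame
# (Shi-Tomasi "good features to track" [10]); thresholds are illustrative.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.01, minDistance=7)

# Step 2: connect the points with a Delaunay triangulation and record,
# for each point, the indices of its neighbouring points.
h, w = prev_gray.shape
subdiv = cv2.Subdiv2D((0, 0, w, h))
flat = pts.reshape(-1, 2)
for x, y in flat:
    subdiv.insert((float(x), float(y)))
index_of = {(round(x, 1), round(y, 1)): i for i, (x, y) in enumerate(flat)}
neighbours = {i: set() for i in range(len(flat))}
for x1, y1, x2, y2, x3, y3 in subdiv.getTriangleList():
    tri = [index_of.get((round(x, 1), round(y, 1)))
           for x, y in ((x1, y1), (x2, y2), (x3, y3))]
    if None in tri:            # skip triangles touching the bounding box
        continue
    for a in tri:
        neighbours[a].update(b for b in tri if b != a)

# Track the points with pyramidal Lucas-Kanade and compute each point's
# motion relative to the mean motion of its Delaunay neighbours; these
# relative motions would drive step 3 (per-muscle parameter estimation).
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    motion = (new_pts - pts).reshape(-1, 2)
    relative = np.array([
        motion[i] - motion[list(nb)].mean(axis=0) if nb else motion[i]
        for i, nb in neighbours.items()])
    pts, prev_gray = new_pts, gray
cap.release()
```

Referencing each displacement to the mean of its triangulation neighbours, as the abstract describes, factors out shared motion (e.g. of the head as a whole) before local muscle activity is estimated.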

Keywords

Facial expression · Feature point · Delaunay triangulation · Circular muscle · Facial motion


References

  1. Massaro, D.W.: Perceiving Talking Faces. MIT Press (1998).
  2. Ekman, P., Friesen, W.V.: The Facial Action Coding System. Consulting Psychologists Press (1978).
  3. Terzopoulos, D., Waters, K.: Physically-based facial modeling, analysis, and animation. The J. of Visualization and Computer Animation 1(2) (1990) 73–80.
  4. Terzopoulos, D., Waters, K.: Analysis and synthesis of facial image sequences using physical and anatomical models. IEEE Trans. on Pattern Analysis and Machine Intelligence 15(6) (1993) 569–579.
  5. Mase, K.: Recognition of facial expression from optical flow. IEICE Trans. E74(10) (1991) 3474–3483.
  6. Essa, I.A., Pentland, A.: Coding, Analysis, Interpretation, and Recognition of Facial Expressions. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7) (1997).
  7. Otsuka, T., Ohya, J.: Recognizing Multiple Persons' Facial Expressions Using HMM Based on Automatic Extraction of Significant Frames from Image Sequences. ICIP'97, vol. II (1997) 546–549.
  8. DeCarlo, D., Metaxas, D.: Deformable Model-Based Shape and Motion Analysis from Images Using Motion Residual Error. Proc. ICCV'98 (1998) 113–119.
  9. Cambridge Digital Research Laboratory: FaceWorks. URL http://www.interface.digital.com/.
  10. Shi, J., Tomasi, C.: Good features to track. Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (1994) 593–600.
  11. Shapiro, L., Zisserman, A., Brady, M.: 3D motion recovery via affine epipolar geometry. Int. J. Computer Vision 16(2) (1995) 147–182.
  12. Lawson, C.L.: Transforming triangulations. Discrete Math. 3 (1972) 365–372.
  13. de Berg, M., van Kreveld, M., Overmars, M., Schwarzkopf, O.: Computational Geometry, Chapter 9. Springer-Verlag (1997).
  14. Pelachaud, C., Badler, N.I., Steedman, M.: Generating facial expressions for speech. Cognitive Science 20(1) (1996) 1–46.
  15. Otsuka, T., Ohya, J.: Converting Facial Expressions Using Recognition-Based Analysis of Image Sequences. ACCV'98, vol. II (1998) 703–710.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Takahiro Otsuka¹
  • Jun Ohya¹

  1. ATR Media Integration & Communications Research Laboratories, Kyoto, Japan
