Journal of Computer Science and Technology, Volume 19, Issue 5, pp.618–625

Surface detail capturing for realistic facial animation

  • Pei-Hsuan Tu
  • I-Chen Lin
  • Jeng-Sheng Yeh
  • Rung-Huei Liang
  • Ming Ouhyoung


In this paper, a facial animation system is proposed that simultaneously captures, from video clips, both the geometric information and the illumination changes of surface details, called expression details; the captured data can be widely applied to different 2D face images and 3D face models. While tracking the geometric data, we record the expression details as ratio images. For 2D facial animation synthesis, these ratio images are used to generate dynamic textures. Because a ratio image is obtained by dividing the colors of an expressive face by those of a neutral face, pixels with a ratio value smaller than one are where a wrinkle or crease appears. Therefore, the gradients of the ratio value at each pixel of a ratio image are regarded as changes of the face surface, and the original normals on the surface can be adjusted according to these gradients. Based on this idea, we can convert the ratio images into a sequence of normal maps and then apply them to animated 3D model rendering. With this expression detail mapping, the resulting facial animations are more life-like and more expressive.
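The ratio-image-to-normal-map conversion described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the `strength` parameter, and the specific gradient-to-normal perturbation are assumptions for demonstration.

```python
import numpy as np

def ratio_image(expressive, neutral, eps=1e-6):
    """Per-pixel ratio of expressive-face colors to neutral-face colors.
    Values below one mark darkened regions where wrinkles or creases appear."""
    return expressive / np.maximum(neutral, eps)

def ratio_to_normal_map(ratio, strength=1.0):
    """Treat the gradients of the ratio image as slope changes of the face
    surface and perturb a flat tangent-space normal (0, 0, 1) accordingly."""
    gy, gx = np.gradient(ratio)            # per-pixel gradients (rows, cols)
    n = np.dstack([-strength * gx,         # tilt the normal against the
                   -strength * gy,         # direction of increasing ratio
                   np.ones_like(ratio)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)  # renormalize to unit length
    return n
```

In a renderer, the resulting per-frame normal maps would be bound as bump/normal textures so that lighting picks up the wrinkle detail without changing the underlying mesh geometry.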


Keywords: facial animation; facial expression; deformations; morphing; bump mapping





Copyright information

© Science Press, Beijing, China and Allerton Press Inc. 2004

Authors and Affiliations

  • Pei-Hsuan Tu¹
  • I-Chen Lin¹
  • Jeng-Sheng Yeh¹
  • Rung-Huei Liang¹
  • Ming Ouhyoung¹

  1. Communication and Multimedia Laboratory, Department of Computer Science and Information Engineering, National Taiwan University, China
