
Data-driven facial expression synthesis via Laplacian deformation

Published in Multimedia Tools and Applications

Abstract

Realistic talking heads have important uses in interactive multimedia applications. This paper presents a novel framework for synthesizing realistic facial animations driven by motion capture data using Laplacian deformation. We first capture the facial expression of a performer and then decompose the motion data into two components: the rigid movement of the head and the change in facial expression. Exploiting the local-detail-preserving property of Laplacian coordinates, we clone the captured expression onto a neutral 3D facial model via Laplacian deformation. We choose expression-independent points in the facial model as the fixed points when solving the Laplacian deformation equations. Experimental results show that our approach synthesizes realistic facial expressions in real time while preserving facial details. We compare our method with state-of-the-art facial expression synthesis methods to verify its advantages. Our approach can be applied in real-time multimedia systems.
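The abstract outlines the core computation: encode the captured expression in Laplacian (differential) coordinates, then solve for new vertex positions on the neutral model while pinning the expression-independent points. The paper's exact formulation is not reproduced on this page, so the following is only a minimal sketch of that kind of solve under assumed simplifications (uniform Laplacian weights, hard constraints by substitution, a generic sparse least-squares solver); all names here (laplacian_clone, neighbors, fixed_idx, ...) are hypothetical, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_clone(verts, neighbors, fixed_idx, fixed_pos, target_delta):
    """Deform a neutral face mesh so its Laplacian coordinates match
    target_delta, holding expression-independent vertices fixed.

    verts:        (n, 3) neutral-pose vertex positions
    neighbors:    list of 1-ring neighbor index lists, one per vertex
    fixed_idx:    indices of the fixed (expression-independent) vertices
    fixed_pos:    (m, 3) positions to pin those vertices to
    target_delta: (n, 3) desired Laplacian coordinates; in use this would
                  come from the captured expression mesh, e.g. L @ captured_verts
                  when both meshes share the same connectivity
    """
    n = len(verts)
    # Uniform-weight graph Laplacian: L = I - D^{-1} A
    # (the paper may well use a different weighting, e.g. cotangent weights)
    rows, cols, vals = [], [], []
    for i, nbrs in enumerate(neighbors):
        rows.append(i); cols.append(i); vals.append(1.0)
        for j in nbrs:
            rows.append(i); cols.append(j); vals.append(-1.0 / len(nbrs))
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

    free = np.setdiff1d(np.arange(n), fixed_idx)
    # Hard constraints by substitution: move the fixed columns to the
    # right-hand side and solve the overdetermined system for the free
    # vertices in the least-squares sense, one coordinate at a time.
    A = L[:, free]
    B = target_delta - L[:, fixed_idx] @ fixed_pos
    x = np.array(verts, dtype=float)
    x[fixed_idx] = fixed_pos
    for k in range(3):
        x[free, k] = spla.lsqr(A, B[:, k])[0]
    return x
```

Because the Laplacian matrix depends only on mesh connectivity, its factorization (or the lsqr setup) can be reused across frames, which is what makes real-time synthesis plausible for a fixed facial mesh.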



Acknowledgements

The authors are grateful to our anonymous reviewers for their insightful and constructive comments. We thank Dr. Yuwei Meng for performing the facial expressions. Special thanks go to Professor Chiew-Lan Tai from the Hong Kong University of Science and Technology for discussions on the project. Xiaogang Jin was supported by the National Key Basic Research Foundation of China (Grant No. 2009CB320801), the NSFC-MSRA Joint Funding (Grant No. 60970159), the National Natural Science Foundation of China (Grant No. 60933007), and the Key Technology R&D Program (Grant No. 2007BAH11B03). Xianmei Wan was supported by the Scientific Research Fund of Zhejiang Provincial Education Department (Grant No. Y201017097).

Author information

Correspondence to Xiaogang Jin.


About this article

Cite this article

Wan, X., Jin, X. Data-driven facial expression synthesis via Laplacian deformation. Multimed Tools Appl 58, 109–123 (2012). https://doi.org/10.1007/s11042-010-0688-7
