Computer Facial Animation: A Survey

Chapter in: Data-Driven 3D Facial Animation

Copyright information

© 2008 Springer-Verlag London Limited

About this chapter

Cite this chapter

Deng, Z., Noh, J. (2008). Computer Facial Animation: A Survey. In: Deng, Z., Neumann, U. (eds) Data-Driven 3D Facial Animation. Springer, London. https://doi.org/10.1007/978-1-84628-907-1_1

  • DOI: https://doi.org/10.1007/978-1-84628-907-1_1

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84628-906-4

  • Online ISBN: 978-1-84628-907-1

  • eBook Packages: Computer Science
