Modelling Facial Communication Between an Animator and a Synthetic Actor in Real Time

Conference paper in: Modeling in Computer Graphics

Part of the book series: IFIP Series on Computer Graphics (IFIP SER. COMP.)

Abstract

This paper describes methods for acquiring and analyzing the motion of human faces in real time. It proposes a model based on snakes and image-processing techniques, explains how to generate real-time facial animation corresponding to the recorded motion, and proposes a strategy for communication between animators and synthetic actors.
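The tracking model mentioned in the abstract is built on snakes (active contours). Purely as an illustration of that underlying technique, and not of the authors' actual system, the sketch below shows a minimal Kass-Witkin-Terzopoulos-style snake in Python: a closed contour (for instance sampled around the lips or an eyebrow) is relaxed onto strong image edges with a semi-implicit update. The function names, parameter values, and NumPy/SciPy implementation are all assumptions made for this example.

    import numpy as np
    from scipy import ndimage

    def snake_system(n, alpha, beta, gamma):
        # Circulant second-difference matrix D2 for a closed contour of n points.
        d2 = np.roll(np.eye(n), -1, axis=0) + np.roll(np.eye(n), 1, axis=0) - 2.0 * np.eye(n)
        # Internal-energy stiffness: -alpha*D2 + beta*D2^2 (elasticity + rigidity terms).
        A = -alpha * d2 + beta * (d2 @ d2)
        # Pre-invert the semi-implicit system matrix (A + gamma*I).
        return np.linalg.inv(A + gamma * np.eye(n))

    def evolve_snake(image, contour, iters=200, alpha=0.1, beta=0.5, gamma=1.0):
        # contour: (n, 2) array of (row, col) points placed near the feature to track.
        smoothed = ndimage.gaussian_filter(image.astype(float), sigma=2.0)
        gy, gx = np.gradient(smoothed)
        edge = gx**2 + gy**2                      # edge strength: large on contours
        fy, fx = np.gradient(edge)                # external force pulls points toward edges
        inv = snake_system(len(contour), alpha, beta, gamma)
        v = contour.astype(float)
        for _ in range(iters):
            r = np.clip(v[:, 0], 0, image.shape[0] - 1).astype(int)
            c = np.clip(v[:, 1], 0, image.shape[1] - 1).astype(int)
            force = np.column_stack([fy[r, c], fx[r, c]])
            v = inv @ (gamma * v + force)         # one semi-implicit relaxation step
        return v

In a performance-driven setting such as the one the abstract describes, a contour of this kind could be re-initialized from the previous frame and the fitted points mapped to facial animation parameters, though the paper's own mapping is not reproduced here.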




Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Thalmann, N.M., Cazedevals, A., Thalmann, D. (1993). Modelling Facial Communication Between an Animator and a Synthetic Actor in Real Time. In: Falcidieno, B., Kunii, T.L. (eds) Modeling in Computer Graphics. IFIP Series on Computer Graphics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-78114-8_24

  • DOI: https://doi.org/10.1007/978-3-642-78114-8_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-78116-2

  • Online ISBN: 978-3-642-78114-8
