
On the Development of a Talking Head System Based on the Use of PDE-Based Parametric Surfaces

  • Michael Athanasopoulos
  • Hassan Ugail
  • Gabriela González Castro
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6670)

Abstract

In this work we propose a talking head system that animates facial expressions using a template face generated from a Partial Differential Equation (PDE). The system uses a set of pre-configured curves, which serve as boundary conditions for the chosen PDE, to compute an internal template face surface. This surface is then used to associate various facial features with a given 3D face object, and motion retargeting transfers the deformations in these areas from the template to the target object. The procedure continues until all the expressions in the database have been computed and transferred to the target 3D human face object. Additionally, the system interacts with the user through an artificial intelligence (AI) chatterbot that generates a textual response to a given input. Speech and facial animation are synchronized using the Microsoft Speech API, which converts the chatterbot's response to speech.
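The "PDE method" named here and surveyed in the references is commonly the Bloor-Wilson formulation: a fourth-order elliptic PDE, (∂²/∂u² + a²∂²/∂v²)²X = 0, solved over a parameter patch subject to boundary curves (positions and u-derivatives at u = 0 and u = 1). The sketch below is illustrative only, not the authors' implementation; the function names, the toy boundary curves (two circles blended into a tube-like surface), and the smoothing parameter a = 1 are our assumptions. For a Fourier mode f(u)·cos(nv) the PDE reduces to an ODE whose four coefficients are fixed by a 4×4 linear system:

```python
import numpy as np

def pde_mode_coeffs(n, a, p0, p1, d0, d1):
    """Coefficients of one Fourier mode of the Bloor-Wilson PDE
    (d^2/du^2 + a^2 d^2/dv^2)^2 X = 0 on u in [0, 1], given boundary
    positions p0, p1 and u-derivatives d0, d1 at u = 0 and u = 1."""
    if n == 0:
        # n = 0 reduces to f'''' = 0: a cubic, basis {1, u, u^2, u^3}
        M = np.array([[1.0, 0, 0, 0],    # f(0)
                      [1.0, 1, 1, 1],    # f(1)
                      [0.0, 1, 0, 0],    # f'(0)
                      [0.0, 1, 2, 3]])   # f'(1)
    else:
        # n > 0: f'''' - 2(an)^2 f'' + (an)^4 f = 0,
        # basis {e^{wu}, u e^{wu}, e^{-wu}, u e^{-wu}} with w = a*n
        w = a * n
        e, em = np.exp(w), np.exp(-w)
        M = np.array([[1.0, 0.0, 1.0, 0.0],
                      [e, e, em, em],
                      [w, 1.0, -w, 1.0],
                      [w * e, e + w * e, -w * em, em - w * em]])
    return np.linalg.solve(M, np.array([p0, p1, d0, d1], float))

def eval_mode(c, n, a, u):
    """Evaluate a mode's u-profile f(u) from its four coefficients."""
    if n == 0:
        return c[0] + c[1] * u + c[2] * u ** 2 + c[3] * u ** 3
    w = a * n
    return (c[0] + c[1] * u) * np.exp(w * u) + (c[2] + c[3] * u) * np.exp(-w * u)

# Toy boundary conditions: blend a circle of radius 1.0 at u = 0 into a
# circle of radius 0.4 at u = 1, lifted by 1 in z, zero derivatives.
a = 1.0
cx = pde_mode_coeffs(1, a, 1.0, 0.4, 0.0, 0.0)  # radial profile, mode n=1
cz = pde_mode_coeffs(0, a, 0.0, 1.0, 0.0, 0.0)  # height profile, mode n=0

u = np.linspace(0.0, 1.0, 21)
v = np.linspace(0.0, 2.0 * np.pi, 41)
U, V = np.meshgrid(u, v)
X = eval_mode(cx, 1, a, U) * np.cos(V)
Y = eval_mode(cx, 1, a, U) * np.sin(V)
Z = eval_mode(cz, 0, a, U)
# (X, Y, Z) samples a smooth PDE surface interpolating both boundary curves.
```

A full template face is built the same way, with many Fourier modes extracted from the pre-configured facial boundary curves rather than two analytic circles.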

Keywords

Facial animation, Speech animation, Motion re-targeting, PDE method, Parametric surface representation, Virtual interactive environments


References

  1. Lee, Y., Terzopoulos, D., Waters, K.: Realistic Modeling for Facial Animation. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pp. 55–62 (1995)
  2. Wallace, R.S.: The Anatomy of A.L.I.C.E.
  3. Kim, S.W., et al.: A Talking Head System for Korean Text. World Academy of Science, Engineering and Technology 50 (2005)
  4. Pasquariello, S., Pelachaud, C.: Greta: A Simple Facial Animation Engine. In: Proceedings of the 6th Online World Conference on Soft Computing in Industrial Applications (2001)
  5. Huang, Y., et al.: Real-time Lip Synchronization Based on Hidden Markov Models. In: The 5th Asian Conference on Computer Vision, Melbourne, Australia (2002)
  6. Maddock, S., Edge, J., Sanchez, M.I.: Movement Realism in Computer Facial Animation. In: Workshop on Human-Animated Characters Interaction, vol. 4 (2005)
  7. Fedorov, A., et al.: Talking Head: Synthetic Video Facial Animation in MPEG-4. In: International Conference Graphicon, Moscow, Russia (2003)
  8. Balcı, K.: Xface: MPEG-4 Based Open Source Toolkit for 3D Facial Animation. In: Proceedings of the Working Conference on Advanced Visual Interfaces, pp. 399–402 (2004)
  9. González Castro, G., et al.: A Survey of Partial Differential Equations in Geometric Design. The Visual Computer 24(3), 213–225 (2008)
  10. Ugail, H., Bloor, M.I.G., Wilson, M.J.: Manipulation of PDE Surfaces Using an Interactively Defined Parameterization. Computers and Graphics 23(4), 525–534 (1999)
  11. Deng, Z., Noh, J.: Computer Facial Animation: A Survey. In: Data-Driven 3D Facial Animation. Springer, Heidelberg (2007)
  12. Marschner, S.R., Guenter, B., Raghupathy, R.: Modeling and Rendering for Realistic Facial Animation. In: Proceedings of the Eurographics Workshop on Rendering Techniques, pp. 231–242 (2000)
  13. Haber, J., et al.: Face to Face: From Real Humans to Realistic Facial Animation. In: Proceedings of the Israel-Korea Binational Conference on Geometrical Modeling and Computer Graphics, pp. 73–82 (2001)
  14. Sheng, Y., et al.: PDE-Based Facial Animation: Making the Complex Simple. In: Proceedings of the 4th International Symposium on Advances in Visual Computing, pp. 723–732 (2008)
  15. Dutoit, T.: High-Quality Text-to-Speech Synthesis. Springer, Heidelberg (2001)
  16. Galvão, A.M., Barros, F.A., Neves, A.M.M., Ramalho, G.L.: Adding Personality to Chatterbots Using the Persona-AIML Architecture. In: Lemaître, C., Reyes, C.A., González, J.A. (eds.) IBERAMIA 2004. LNCS (LNAI), vol. 3315, pp. 963–973. Springer, Heidelberg (2004)
  17. Wallace, R.: Artificial Intelligence Markup Language (AIML) v1.0.1. A.L.I.C.E. AI Foundation Working Draft (2001)
  18. Mana, M., Pianesi, F.: HMM-based Synthesis of Emotional Facial Expressions during Speech in Synthetic Talking Heads. In: Proceedings of the 8th International Conference on Multimodal Interfaces, pp. 380–387 (2006)
  19. Galvão, A.M., et al.: Persona-AIML: An Architecture for Developing Chatterbots with Personality. In: Third International Joint Conference on Autonomous Agents and Multiagent Systems, New York, USA, vol. 3 (2004)
  20. Giacomo, T.D., Garchery, S., Thalmann, N.M.: Expressive Visual Speech Generation. In: Data-Driven 3D Facial Animation. Springer, Heidelberg (2007)
  21. Fonte, F.A.M.: TQ-Bot: An AIML-based Tutor and Evaluator Bot. Journal of Universal Computer Science 15(7)
  22. Pighin, F., et al.: Synthesizing Realistic Facial Expressions from Photographs. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pp. 75–84 (1998)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Michael Athanasopoulos (1)
  • Hassan Ugail (1)
  • Gabriela González Castro (1)

  1. Centre for Visual Computing, University of Bradford, Bradford, United Kingdom
