Laughter Animation Generation

Reference work entry

Abstract

Laughter is an important communicative and social signal in human-human interaction. It involves the whole body, from lip motion and facial expression to rhythmic body and shoulder movement, and it can convey a wide range of meanings, such as extreme happiness, social bonding, politeness, or irony. To enhance human-machine interaction, efforts have been made to endow embodied conversational agents (ECAs) with laughing capabilities. Recently, motion capture technologies have been applied to record laughter behaviors, including facial expressions and body movements; such data make it possible to investigate the temporal relationships among laughter behaviors in detail. Building on the available data, researchers have developed models for the automatic generation of laughter animations. These models control the multimodal behaviors of ECAs, including lip motions, upper facial expressions, head rotations, shoulder shaking, and torso movements. The underlying idea of these works is a statistical framework that automatically captures the correlation between laughter audio and multimodal behaviors; in the synthesis phase, the captured correlation is used to render synthesized animations from the laughter audio given as input. This chapter reviews existing work on the automatic generation of laughter animation.
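
The pipeline shared by these data-driven approaches can be pictured as a two-phase mapping: a training phase that learns the statistical correlation between frame-level laughter audio features and synchronized motion-capture animation parameters, and a synthesis phase that applies the learned mapping to new laughter audio. The minimal sketch below illustrates this idea in Python; the feature set, dimensions, regressor choice, and smoothing step are assumptions made for illustration and do not reproduce any specific model reviewed in this chapter.

```python
# Illustrative sketch of an audio-driven laughter animation mapping.
# All names, dimensions, and the regressor are hypothetical stand-ins.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for a motion-capture corpus: N frames of audio features paired
# with synchronized animation parameters recorded from a laughing subject.
N_FRAMES, N_AUDIO, N_ANIM = 2000, 4, 6
audio_feats = rng.normal(size=(N_FRAMES, N_AUDIO))  # e.g., energy, pitch, ...
anim_params = rng.normal(size=(N_FRAMES, N_ANIM))   # e.g., lip, head, shoulder

# Training phase: statistically capture the audio-to-motion correlation.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(audio_feats, anim_params)

# Synthesis phase: given new laughter audio, predict animation parameters
# frame by frame, then smooth them into continuous motion curves.
new_audio = rng.normal(size=(100, N_AUDIO))
pred = model.predict(new_audio)

def smooth(x: np.ndarray, win: int = 5) -> np.ndarray:
    """Moving-average smoothing along time, one column per parameter."""
    kernel = np.ones(win) / win
    return np.column_stack([np.convolve(x[:, j], kernel, mode="same")
                            for j in range(x.shape[1])])

motion_curves = smooth(pred)  # curves to drive an ECA's animation channels
print(motion_curves.shape)    # (100, 6)
```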

Keywords

Laughter · Laughter animation · Communicative behaviors · Human communication · Animation synthesis · Statistical framework · Data driven · Prosody · Nonverbal behaviors · Virtual character · Facial expression


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Yu Ding (1)
  • Thierry Artières (2, 3)
  • Catherine Pelachaud (4)
  1. University of Houston, Houston, USA
  2. Ecole Centrale Marseille, Marseille, France
  3. Laboratoire d’Informatique Fondamentale (LIF), UMR CNRS 7279, Université Aix-Marseille, Marseille, France
  4. CNRS - ISIR, Université Pierre et Marie Curie, Paris, France

Section editors and affiliations

  • Zhigang Deng (1)

  1. Department of Computer Science, University of Houston, Houston, USA
