
Laughter Animation Generation

  • Living reference work entry

Abstract

Laughter is an important communicative and social signal in human-human interaction. It involves the whole body, from lip motion and facial expressions to rhythmic body and shoulder movements, and it may convey a wide range of meanings (e.g., extreme happiness, social bonding, politeness, irony). To enhance human-machine interaction, efforts have been made to endow embodied conversational agents (ECAs) with laughing capabilities. Recently, motion capture technologies have been applied to record laughter behaviors, including facial expressions and body movements; such recordings allow the temporal relationships among laughter behaviors to be investigated in detail. Based on the available data, researchers have developed models for the automatic generation of laughter animations. These models control the multimodal behaviors of ECAs, including lip motion, upper facial expressions, head rotations, shoulder shaking, and torso movements. The underlying idea of these works is a statistical framework that automatically captures the correlation between laughter audio and multimodal behaviors. In the synthesis phase, the captured correlation is used to render synthesized animations from the laughter audio given as input. This chapter reviews existing work on the automatic generation of laughter animation.
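
The models described above are data-driven: they learn a statistical mapping from laughter audio to animation parameters and then drive the agent from new audio. The sketch below illustrates that pipeline in Python under loud simplifying assumptions: random arrays stand in for aligned audio features and motion-capture parameters, and a plain ridge regression with post-hoc smoothing stands in for the richer statistical models the entry reviews; it is not the authors' method.

# Minimal illustrative sketch, not the models surveyed in this entry: a
# frame-wise statistical mapping from laughter audio features to multimodal
# animation parameters, fit on placeholder data standing in for aligned
# audio/motion-capture recordings. Feature and parameter names, array shapes,
# and the choice of ridge regression are assumptions made for illustration.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder training data: T frames of audio features (e.g., MFCCs + energy)
# aligned with animation parameters (lip opening, head pitch, shoulder lift, ...).
T, n_audio, n_anim = 2000, 14, 6
audio_feats = rng.normal(size=(T, n_audio))   # stands in for extracted audio features
anim_params = rng.normal(size=(T, n_anim))    # stands in for mocap-derived parameters

# Training phase: capture the audio-to-motion correlation with a simple
# regularized, multi-output linear regression.
scaler = StandardScaler().fit(audio_feats)
model = Ridge(alpha=1.0).fit(scaler.transform(audio_feats), anim_params)

# Synthesis phase: predict animation trajectories for the ECA from the audio
# features of a new, unseen laugh, frame by frame.
new_audio = rng.normal(size=(300, n_audio))
predicted = model.predict(scaler.transform(new_audio))

# Light temporal smoothing, since independent frame-wise prediction ignores
# the dynamics that dedicated sequence models capture explicitly.
kernel = np.ones(5) / 5.0
smoothed = np.vstack(
    [np.convolve(predicted[:, j], kernel, mode="same") for j in range(n_anim)]
).T
print(smoothed.shape)  # (300, 6): one trajectory per animation parameter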


Author information


Correspondence to Yu Ding.



Copyright information

© 2017 Springer International Publishing AG

About this entry

Cite this entry

Ding, Y., Artières, T., Pelachaud, C. (2017). Laughter Animation Generation. In: Müller, B., et al. Handbook of Human Motion. Springer, Cham. https://doi.org/10.1007/978-3-319-30808-1_190-1

  • DOI: https://doi.org/10.1007/978-3-319-30808-1_190-1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-30808-1

  • Online ISBN: 978-3-319-30808-1
