Individualized Gesturing Outperforms Average Gesturing – Evaluating Gesture Production in Virtual Humans

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6356)

Abstract

How does a virtual agent’s gesturing behavior influence the user’s perception of communication quality and of the agent’s personality? This question was investigated in an evaluation study of co-verbal iconic gestures produced with GNetIc, a Bayesian network-based production model. A network learned from a corpus of several speakers was compared with networks learned from individual speakers’ data, as well as with two control conditions. Results showed that gestures generated automatically with GNetIc increased the perceived quality of an object description given by a virtual human. Moreover, gesturing behavior generated with individual speaker networks was rated more positively in terms of likeability, competence, and human-likeness.




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bergmann, K., Kopp, S., Eyssel, F. (2010). Individualized Gesturing Outperforms Average Gesturing – Evaluating Gesture Production in Virtual Humans. In: Allbeck, J., Badler, N., Bickmore, T., Pelachaud, C., Safonova, A. (eds) Intelligent Virtual Agents. IVA 2010. Lecture Notes in Computer Science, vol 6356. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15892-6_11

  • DOI: https://doi.org/10.1007/978-3-642-15892-6_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15891-9

  • Online ISBN: 978-3-642-15892-6

  • eBook Packages: Computer Science (R0)
