Pragmatic Multimodality: Effects of Nonverbal Cues of Focus and Certainty in a Virtual Human

Conference paper. In: Intelligent Virtual Agents (IVA 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10498)

Abstract

In pragmatic multimodality, modal (pragmatic) information is conveyed multimodally through cues in gesture, facial expression, head movement and prosody. We observed these cues in natural interaction data: they can convey positive and negative focus, emphasising or de-emphasising a piece of information, and they can convey uncertainty. In this work, we test the effects of these cues on a human user's perception and recall when they are performed by a virtual human. The nonverbal behaviour of the virtual human was modelled from motion capture data, ensuring a fully multimodal appearance. The results show that the virtual human was perceived as highly competent and as saying something important. One particular type of de-emphasising cue led to lower content recall.



Author information

Corresponding author: Farina Freigang


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Freigang, F., Klett, S., Kopp, S. (2017). Pragmatic Multimodality: Effects of Nonverbal Cues of Focus and Certainty in a Virtual Human. In: Beskow, J., Peters, C., Castellano, G., O'Sullivan, C., Leite, I., Kopp, S. (eds.) Intelligent Virtual Agents. IVA 2017. Lecture Notes in Computer Science, vol. 10498. Springer, Cham. https://doi.org/10.1007/978-3-319-67401-8_16

  • DOI: https://doi.org/10.1007/978-3-319-67401-8_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67400-1

  • Online ISBN: 978-3-319-67401-8
