Development of a Repository of Virtual 3D Conversational Gestures and Expressions

  • Conference paper
  • Published in: Applied Physics, System Science and Computers III (APSAC 2018)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 574)

Abstract

This paper outlines a novel framework designed to create a repository of gestures for embodied conversational agents. Using this framework, virtual agents can compose conversational expressions that incorporate both verbal and non-verbal cues. The 3D representations of gestures are captured in the EVA Corpus and then stored as a repository of motor skills in the form of expressively tunable templates.
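
The paper does not specify the template format beyond this summary. As a rough illustration only, the following Python sketch models one plausible shape for an "expressively tunable" motor-skill template: per-joint keyframe tracks plus amplitude and speed parameters that can be rescaled at realization time. All class names, fields, and values here are hypothetical assumptions, not the authors' actual data model.

```python
# Hypothetical sketch of an expressively tunable gesture template.
# Not the authors' implementation; names and parameters are illustrative.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class Keyframe:
    """One pose sample for a single joint at a normalized time."""
    time: float                            # 0.0 .. 1.0 within the gesture
    rotation: Tuple[float, float, float]   # joint rotation (Euler angles, degrees)


@dataclass
class GestureTemplate:
    """A gesture stored in the repository as an expressively tunable template."""
    name: str
    phase: str                           # e.g. "preparation", "stroke", "retraction"
    joints: Dict[str, List[Keyframe]]    # joint name -> keyframe track
    amplitude: float = 1.0               # default spatial-extent scaling
    speed: float = 1.0                   # default temporal scaling

    def realize(self, amplitude: Optional[float] = None,
                speed: Optional[float] = None) -> Dict[str, List[Keyframe]]:
        """Return keyframe tracks rescaled to the requested expressive settings."""
        amp = self.amplitude if amplitude is None else amplitude
        spd = self.speed if speed is None else speed
        return {
            joint: [Keyframe(time=kf.time / spd,
                             rotation=tuple(a * amp for a in kf.rotation))
                    for kf in track]
            for joint, track in self.joints.items()
        }


# Example: a tiny "beat" stroke for the right forearm, realized with a
# larger spatial extent and a slower tempo than its neutral form.
beat = GestureTemplate(
    name="beat_stroke",
    phase="stroke",
    joints={"right_forearm": [Keyframe(0.0, (0.0, 0.0, 0.0)),
                              Keyframe(1.0, (25.0, 0.0, 10.0))]},
)
print(beat.realize(amplitude=1.4, speed=0.8))
```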

Acknowledgments

This work was partially funded by the European Regional Development Fund and the Ministry of Education, Science and Sport of Slovenia (project SAIAL).

This work was partially funded by the European Regional Development Fund and the Republic of Slovenia (project IQHOME).

Author information

Corresponding author

Correspondence to Izidor Mlakar.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Mlakar, I., Kačič, Z., Borko, M., Zögling, A., Rojc, M. (2019). Development of a Repository of Virtual 3D Conversational Gestures and Expressions. In: Ntalianis, K., Vachtsevanos, G., Borne, P., Croitoru, A. (eds) Applied Physics, System Science and Computers III. APSAC 2018. Lecture Notes in Electrical Engineering, vol 574. Springer, Cham. https://doi.org/10.1007/978-3-030-21507-1_16
