Abstract
This paper outlines a novel framework for building a repository of gestures for embodied conversational agents. Using the repository, virtual agents can compose conversational expressions that combine verbal and non-verbal cues. Gestures observed in the EVA Corpus are modeled as 3D representations and stored as a repository of motor skills in the form of expressively tunable templates.
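The paper itself does not include code. Purely as an illustration of the idea of "expressively tunable templates", the sketch below shows one hypothetical way such a template and repository could be represented in Python; all class names, fields, and parameters (Keyframe, GestureTemplate, amplitude, tempo, power) are invented for this sketch and are not the EVA framework's actual data model.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class Keyframe:
    """A single pose sample: joint rotations (Euler angles, degrees) at a normalized time."""
    time: float                                    # 0.0 = stroke start, 1.0 = stroke end
    joint_rotations: Dict[str, Tuple[float, ...]]  # e.g. {"r_wrist": (10.0, -5.0, 0.0)}


@dataclass
class GestureTemplate:
    """An expressively tunable gesture template (illustrative only)."""
    name: str                          # e.g. "beat_right_hand"
    phase_keyframes: List[Keyframe]    # canonical motion trajectory
    amplitude: float = 1.0             # scales spatial extent of the movement
    tempo: float = 1.0                 # scales playback speed of the stroke
    power: float = 1.0                 # scales perceived force of the stroke

    def instantiate(self,
                    amplitude: Optional[float] = None,
                    tempo: Optional[float] = None) -> List[Keyframe]:
        """Return keyframes re-scaled by the requested expressive parameters."""
        amp = self.amplitude if amplitude is None else amplitude
        spd = self.tempo if tempo is None else tempo
        return [
            Keyframe(
                time=k.time / spd,
                joint_rotations={j: tuple(a * amp for a in r)
                                 for j, r in k.joint_rotations.items()},
            )
            for k in self.phase_keyframes
        ]


# A repository is then simply a name-indexed collection of such templates.
repository: Dict[str, GestureTemplate] = {}
```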
Acknowledgments
This work was partially funded by the European Regional Development Fund and the Ministry of Education, Science and Sport of Slovenia (project SAIAL), and by the European Regional Development Fund and the Republic of Slovenia (project IQHOME).
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Mlakar, I., Kačič, Z., Borko, M., Zögling, A., Rojc, M. (2019). Development of a Repository of Virtual 3D Conversational Gestures and Expressions. In: Ntalianis, K., Vachtsevanos, G., Borne, P., Croitoru, A. (eds) Applied Physics, System Science and Computers III. APSAC 2018. Lecture Notes in Electrical Engineering, vol. 574. Springer, Cham. https://doi.org/10.1007/978-3-030-21507-1_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-21506-4
Online ISBN: 978-3-030-21507-1
eBook Packages: Physics and Astronomy (R0)