Abstract
Humanoid robot companions intended for natural and fluent human-robot interaction should combine speech with non-verbal modalities to achieve comprehensible and believable behavior. We present an approach that enables the humanoid robot ASIMO to flexibly produce and synchronize speech and co-verbal gestures at run-time, without being limited to a predefined repertoire of motor actions. Since this research challenge has already been tackled in various ways within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for our virtual human Max. As one of the most sophisticated multi-modal schedulers, the Articulated Communicator Engine (ACE) replaces lexicons of canned behaviors with on-the-spot production of flexibly planned behavior representations. We explain how ACE, as the underlying action generation architecture, draws upon a tight, bi-directional coupling of ASIMO's perceptuo-motor system with multi-modal scheduling, via both efferent control signals and afferent feedback.
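The bi-directional coupling described above can be sketched as a scheduler loop that issues efferent motor commands and uses afferent feedback to retime the co-verbal speech, so that a gesture stroke still peaks on its affiliated word when the robot's arm lags behind the plan. This is a minimal illustrative sketch, not the actual ACE/ASIMO API; all class and function names here are hypothetical.

```python
# Hypothetical sketch of afferent-feedback-driven cross-modal rescheduling.
# The scheduler compares planned vs. observed gesture progress and delays
# the affiliated word accordingly. Names are illustrative, not ACE's API.

from dataclasses import dataclass


@dataclass
class GestureChunk:
    stroke_onset: float   # planned stroke onset, seconds
    duration: float       # planned stroke duration, seconds


@dataclass
class SpeechChunk:
    affiliate_onset: float  # planned onset of the affiliated word, seconds


def reschedule_speech(gesture: GestureChunk, speech: SpeechChunk,
                      observed_progress: float, elapsed: float) -> float:
    """Return a corrected speech onset given afferent feedback.

    observed_progress: fraction of the gesture stroke actually completed,
                       as reported by the motor system (afferent signal).
    elapsed: wall-clock time since the stroke onset.
    """
    if observed_progress <= 0.0:
        # No feedback yet: keep the planned timing.
        return speech.affiliate_onset
    # Extrapolate the actual stroke duration from observed progress.
    estimated_duration = elapsed / observed_progress
    lag = estimated_duration - gesture.duration
    # Delay the affiliated word by the observed lag; never move it earlier.
    return speech.affiliate_onset + max(0.0, lag)
```

For example, if a stroke planned at 0.5 s is only 40% complete after 0.3 s, the estimated stroke duration is 0.75 s, so a word planned at 1.0 s would be delayed to 1.25 s. Adapting speech to the slower, physically constrained modality (the robot's body) mirrors the general scheduling strategy described in the abstract, where control and feedback flow in both directions between the scheduler and the perceptuo-motor system.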
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this chapter
Salem, M., Kopp, S., Wachsmuth, I., Joublin, F. (2009). Towards Meaningful Robot Gesture. In: Ritter, H., Sagerer, G., Dillmann, R., Buss, M. (eds) Human Centered Robot Systems. Cognitive Systems Monographs, vol 6. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-10403-9_18
DOI: https://doi.org/10.1007/978-3-642-10403-9_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-10402-2
Online ISBN: 978-3-642-10403-9
eBook Packages: Engineering (R0)