Abstract
Embodied Conversational Agents (ECAs) are life-like CG characters that interact with human users in face-to-face conversation. To converse naturally, they must understand input from human users, deliberate on appropriate responses, and realize those responses across multiple modalities. Such agents are sophisticated systems that require many building components and are therefore difficult for individual research groups to develop. To facilitate result sharing and rapid prototyping in ECA research, our group is developing a Generic ECA Framework intended to integrate ECA components seamlessly. The framework comprises a low-level communication platform (the GECA Platform), a set of communication API libraries (GECA Plugs), and a high-level protocol (the GECA Protocol, GECAP).
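The integration style described above can be illustrated with a minimal sketch. The example below is a hypothetical Python model of a blackboard-style message server of the kind the GECA Platform provides: components subscribe to message types and the server routes each published message to every subscriber of that type. The class and message-type names are illustrative assumptions, not the actual GECAP API.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Message:
    """A typed message exchanged between ECA components."""
    msg_type: str                     # e.g. "input.speech", "output.animation"
    payload: dict = field(default_factory=dict)

class Blackboard:
    """Routes each published message to all handlers subscribed to its type."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Message], None]]] = defaultdict(list)

    def subscribe(self, msg_type: str, handler: Callable[[Message], None]) -> None:
        self._subscribers[msg_type].append(handler)

    def publish(self, msg: Message) -> None:
        for handler in self._subscribers[msg.msg_type]:
            handler(msg)

# Example wiring: a speech-recognition component publishes recognized text,
# and a dialogue component consumes it without either knowing the other.
board = Blackboard()
received = []
board.subscribe("input.speech", lambda m: received.append(m.payload["text"]))
board.publish(Message("input.speech", {"text": "hello"}))
```

Decoupling components behind typed messages like this is what lets heterogeneous assemblies (speech recognizers, dialogue managers, animation players) be swapped in and out without changing one another's code.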
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Huang, H.H., Cerekovic, A., Pandzic, I.S., Nakano, Y., Nishida, T. (2007). A Script Driven Multimodal Embodied Conversational Agent Based on a Generic Framework. In: Pelachaud, C., Martin, J.C., André, E., Chollet, G., Karpouzis, K., Pelé, D. (eds) Intelligent Virtual Agents. IVA 2007. Lecture Notes in Computer Science, vol 4722. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74997-4_49
Publisher: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74996-7
Online ISBN: 978-3-540-74997-4