A Script Driven Multimodal Embodied Conversational Agent Based on a Generic Framework

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4722)

Abstract

Embodied Conversational Agents (ECAs) are life-like CG characters that interact with human users in face-to-face conversation. To achieve natural conversation, they must understand the inputs from human users, deliberate on responding behaviors, and realize those behaviors in multiple modalities. Such agents are sophisticated systems that require many building components and are therefore difficult for individual research groups to develop. To facilitate result sharing and rapid prototyping in ECA research, our group is developing a Generic ECA Framework that is meant to integrate ECA components seamlessly. This framework is composed of a low-level communication platform (the GECA Platform), a set of communication API libraries (GECA Plugs), and a high-level protocol (the GECA Protocol, GECAP).
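The three-layer design named in the abstract (a shared platform, per-component "plug" libraries, and messages in a common protocol) can be illustrated with a minimal publish/subscribe sketch. All class names, message types, and payloads below are illustrative assumptions for exposition; they are not taken from the paper or from GECAP itself.

```python
from collections import defaultdict

class Platform:
    """Minimal in-memory stand-in for a GECA-style communication
    platform: routes messages by type to subscribed components.
    (Illustrative only; the actual GECA Platform is not specified here.)"""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, message_type, handler):
        self._subscribers[message_type].append(handler)

    def publish(self, message_type, payload):
        for handler in self._subscribers[message_type]:
            handler(payload)

class Plug:
    """Stand-in for a communication API library ("plug") through which a
    component sends and receives messages without knowing platform internals."""
    def __init__(self, platform, name):
        self.platform = platform
        self.name = name
        self.received = []

    def listen(self, message_type):
        self.platform.subscribe(message_type, self.received.append)

    def send(self, message_type, payload):
        self.platform.publish(message_type, payload)

# Wire a hypothetical input-understanding component to an animation component.
platform = Platform()
animator = Plug(platform, "animator")
animator.listen("behavior")

speech_input = Plug(platform, "speech-input")
speech_input.send("behavior", "<behavior><speech>Hello!</speech></behavior>")

print(animator.received)
```

In this sketch the platform only dispatches by message type, so new components can be attached by subscribing a plug, which is one plausible reading of how a generic framework keeps ECA components loosely coupled.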




Editor information

Catherine Pelachaud, Jean-Claude Martin, Elisabeth André, Gérard Chollet, Kostas Karpouzis, Danielle Pelé


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Huang, HH., Cerekovic, A., Pandzic, I.S., Nakano, Y., Nishida, T. (2007). A Script Driven Multimodal Embodied Conversational Agent Based on a Generic Framework. In: Pelachaud, C., Martin, JC., André, E., Chollet, G., Karpouzis, K., Pelé, D. (eds) Intelligent Virtual Agents. IVA 2007. Lecture Notes in Computer Science(), vol 4722. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74997-4_49


  • DOI: https://doi.org/10.1007/978-3-540-74997-4_49

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74996-7

  • Online ISBN: 978-3-540-74997-4

  • eBook Packages: Computer Science (R0)
