
Requirements for an Architecture for Embodied Conversational Characters

  • J. Cassell
  • T. Bickmore
  • L. Campbell
  • K. Chang
  • H. Vilhjálmsson
  • H. Yan
Part of the Eurographics book series (EUROGRAPH)

Abstract

In this paper we describe the computational and architectural requirements for systems that support real-time multimodal interaction with an embodied conversational character. We argue that the three primary design drivers are real-time multithreaded entrainment, processing of both interactional and propositional information, and an approach based on a functional understanding of human face-to-face conversation. We then present an architecture that meets these requirements, along with an initial conversational character we have developed, which is capable of increasingly sophisticated multimodal input and output in a limited application domain.

Keywords

Interactional Information · Multimodal Interface · Input Manager · Virtual Human · Conversational Agent

Copyright information

© Springer-Verlag Wien 1999

Authors and Affiliations

  • J. Cassell (1)
  • T. Bickmore (1)
  • L. Campbell (1)
  • K. Chang (1)
  • H. Vilhjálmsson (1)
  • H. Yan (1)

  1. MIT Media Laboratory, Gesture Narrative Language Group, Cambridge, USA
