Abstract
This contribution deals with the requirements for representation languages employed in planning and displaying the communicative multimodal behaviour of embodied conversational agents (ECAs). We focus on the role of behaviour representation frameworks as part of the processing chain from intent planning to the planning and generation of multimodal communicative behaviours. On the one hand, the field is fragmented, with almost everybody working on ECAs developing their own tailor-made representations, which is reflected, among other things, in the extensive list of references. On the other hand, there are general aspects that need to be modelled in order to generate multimodal behaviour. Throughout the chapter, we take different perspectives on existing representation languages and outline the fundamentals of a common framework.
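To make the processing chain concrete: behaviour representation languages such as BML (Kopp et al. 2006) typically encode a planned utterance as an XML document in which speech and nonverbal behaviours are separate elements synchronised via cross-references. The following Python sketch builds such a minimal BML-style specification; the tag and attribute names are simplified illustrations in the spirit of the language, not the normative BML schema.

```python
import xml.etree.ElementTree as ET

def plan_utterance(text, gesture_lexeme):
    """Build a minimal, BML-like behaviour specification (illustrative only).

    The gesture's start is tied to the speech element's start via a
    sync-point reference ("s1:start"), the core idea behind cross-modal
    synchronisation in behaviour markup languages.
    """
    bml = ET.Element("bml", id="bml1")
    speech = ET.SubElement(bml, "speech", id="s1")
    speech.text = text
    ET.SubElement(bml, "gesture", id="g1",
                  lexeme=gesture_lexeme, start="s1:start")
    return ET.tostring(bml, encoding="unicode")

print(plan_utterance("Good morning!", "beat"))
```

A behaviour realiser would consume such a document and schedule speech synthesis and animation so that the declared sync points are honoured.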
Notes
2. http://www.vhml.org, Virtual Human Markup Language
References
Allbeck J, Badler N (2002) Towards representing agent behaviours modified by personality and emotion, In: Marriott A, Pelachaud C, Rist T, Ruttkay Z, Vilhjalmsson H (eds) Embodied conversational agents: let’s specify and compare Them!, workshop notes, autonomous agents and multiagent systems 2002, July 16, University of Bologna, Bologna, Italy
André E, Rist T (2000) Presenting through performing: on the use of life-like characters in knowledge-based presentation systems. In: Proceedings of the 2000 international conference on intelligent user interfaces, New Orleans, LA, USA, 9–12 January 2000
André E, Rist T, van Mulken S, Klesen M, Baldes S (2000) The automated design of believable dialogues for animated presentation teams. In: Cassell J, Sullivan J, Prevost S, Churchill E (eds) Embodied conversational agents. MIT Press, Cambridge, MA
André E, Klesen M, Gebhard P, Allen S, Rist T (2000) Integrating models of personality and emotions into life-like characters. In: Paiva A (ed) Affective interactions: towards a new generation of computer interfaces. Lecture Notes in Computer Science, Vol 1814, Springer, Berlin
Arafa Y, Kamyab K, Mamdani E, Kshirsagar S, Guye-Vuilléme A, Thalmann D (2002) Two approaches to scripting character animation. In: Marriott A, Pelachaud C, Rist T, Ruttkay Z, Vilhjálmsson H (eds) Embodied conversational agents: let’s specify and compare them!, workshop notes, autonomous agents and multiagent systems 2002, July 16. University of Bologna, Bologna, Italy
Arafa Y, Kamyab K, Mamdani E (2003) Character animation scripting languages: a comparison. In: Rosenschein JS et al. (eds) Proceedings of the second international joint conference on autonomous agents and multiagent systems (AAMAS 2003), July 14–18, Melbourne, Australia. ACM Press, New York, NY, pp 920–921
Ball G, Breese J (2000) Emotion and personality in a conversational agent. In: Cassell J, Sullivan J, Prevost S, Churchill E (eds) Embodied conversational agents. MIT Press, Cambridge, pp 189–219
Beard S, Reid D (2002) MetaFace and VHML: A first implementation of the virtual human markup language. In: Marriott A, Pelachaud C, Rist T, Ruttkay Z, Vilhjalmsson H (eds) Embodied conversational agents: let’s specify and compare them!, workshop notes, autonomous agents and multiagent systems 2002, July 16. University of Bologna, Bologna, Italy
Bickmore T, Cassell J (2005) Social dialogue with embodied conversational agents. In: van Kuppevelt J, Dybkjaer L, Bernsen N (eds) Advances in natural, multimodal dialogue systems. Kluwer, New York, NY
Black AW, Taylor PA (1997) The festival speech synthesis system: system documentation. Technical Report HCRC/TR-83, Human Communication Research Centre, University of Edinburgh, Scotland, UK. http://www.cstr.ed.ac.uk/projects/festival/. Accessed 3 May 2010
Brooks R (ed) (2002) Human Markup Language Primary Base Specification 1.0, OASIS HumanMarkupTC. http://www.oasis-open.org/committees/download.php/60/HM.Primary-Base-Spec-1.0.html. Accessed 31 May 2010
Calbris G (1990) The semiotics of French gestures. Indiana University Press, Bloomington, IN
Cassell J, Pelachaud C, Badler N, Steedman M, Achorn B, Becket T, Douville B, Prevost S, Stone M (1994) Animated conversation: rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents. In: Proceedings of Siggraph 94, ACM SIGGRAPH, Addison Wesley, Massachusetts, pp 413–420
Cassell J, Bickmore T, Billinghurst M, Campbell L, Chang K, Vilhjálmsson H, Yan H (1999) Embodiment in conversational interfaces: Rea. In: Proceedings of the CHI’99 Conference, Pittsburgh, PA, pp 520–527
Cassell J, Stone M, Yan H (2000) Coordination and context-dependence in the generation of embodied conversation. In: First international natural language generation conference (INLG’2000), June 12, Mitzpe Ramon, Israel, pp 171–178
Cassell J, Sullivan J, Prevost S, Churchill E (eds) (2000) Embodied conversational agents. MIT Press, Cambridge, MA
Cassell J, Vilhjálmsson H, Bickmore T (2001) BEAT: The behavior expression animation toolkit. In: Proceedings of SIGGRAPH ’01, Los Angeles, CA, pp 477-486, August 12–17
Cowie R, Sussman N, Ben-Ze’ev A (2010) Emotions: concepts and definitions. In: this volume
De Carolis B, Pelachaud C, Poggi I, De Rosis F (2001) Behavior planning for a reflexive agent. In: Proceedings of IJCAI 2001, Oporto, Portugal, April, 2001
De Carolis B, Pelachaud C, Poggi I, Steedman M (2004) APML, a mark-up language for believable behavior generation. In: Prendinger H, Ishizuka M (eds) Life-like characters. tools, affective functions and applications, Springer, Berlin, pp 65–85
de Rosis F, Pelachaud C, Poggi I, Carofiglio V, De Carolis N (2003) From Greta’s mind to her face: modeling the dynamics of affective states in a conversational embodied agent. Special issue on “Applications of affective computing in human-computer interaction”. Int J Human-Comput Stud 59(1–2): 81–118
Doenges P, Capin TK, Lavagetto F, Ostermann J, Pandzic IS, Petajan E (1997) MPEG-4: Audio/video and synthetic graphics/ audio for real-time, interactive media delivery, signal processing. Image Commun J. 9(4): 433–463
Egges A, Kshirsagar S, Magnenat-Thalmann N (2003) A model for personality and emotion simulation. In: Knowledge-based intelligent information and engineering systems. Lect Notes Comput Sci 2773/2003: 453–461
Egges A, Kshirsagar S, Magnenat-Thalmann N (2004) Generic personality and emotion simulation for conversational agents. J Visual Comput Animation 15(1): 1–13
Ekman P (1993) Facial expression of emotion. Am Psychol 48: 384–392
Ekman P, Friesen W (1978) Facial action coding system. Consulting Psychologists Press, Palo Alto, CA
Ekman P, Friesen W, Hager J (2002) Facial action coding system: the manual. A Human Face, Salt Lake City
Elliott CD (1992) The affective reasoner: a process model of emotions in a multi-agent system. Ph.D. Thesis, Northwestern University, Illinois
Elliott R, Glauert J R W, Jennings V, Kennaway J R (2004) An overview of the SiGML notation and SiGMLSigning software system. In: Streiter O, Vettori C (eds) 4th international conference on language resources and evaluation (LREC 2004), Lisbon, Portugal, May 26–28, 2004, pp 98–104
Gebhard P (2005) ALMA – a layered model of affect. In: Proceedings of the fourth international joint conference on autonomous agents and multiagent systems (AAMAS’05), Utrecht University, Utrecht July 25-29, 2005, pp 29–36
Gebhard P, Kipp M, Klesen M, Rist T (2003) Adding the emotional dimension to scripting character dialogues. In: Proceedings of the 4th international working conference on intelligent virtual agents (IVA’03), – Irsee, Germany, 15-17 September, 2003, pp 48–56
Hall L, Vala M, Hall M, Webster M, Woods S, Gordon A, Aylett R (2006) FearNot’s appearance: reflecting children’s expectations and perspectives. In: Gratch J, Young M, Aylett R, Ballin D, Olivier P (eds) 6th international conference on intelligent virtual agents (IVA’06), Springer, Berlin, LNAI 4133, pp 407–419
Hanke T (2004) HamNoSys: representing sign language data in language resources and language processing contexts. In: Proceedings of 4th international conference on language resources and evaluation (LREC 2004), Lisbon, Portugal, 26–28 May, 2004, pp 1–6
Hartmann B, Mancini M, Pelachaud C (2002) Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis. In: Proceedings of computer animation 2002 (CA 2002), Geneva, Switzerland, 19–21 June 2002, p 111
Heylen D, Kopp S, Marsella S, Pelachaud C, Vilhjalmsson H (eds) (2008) Why conversational agents do what they do. Functional representations for generating conversational agent behavior, AAMAS 2008 Workshop 2, April 9, Estoril
Huang Z, Eliens A, Visser C (2003) XSTEP: a markup language for embodied agents. In: Proceedings of the 16th international conference on computer animation and social agents (CASA’2003), May 8–9, Rutgers University, New-Brunswick, NJ, USA, IEEE Computer Society, Washington, DC, pp 105–110
Huang Z, Eliens A, Visser C (2004) STEP: a scripting language for embodied agents. In: Prendinger H, Ishizuka M (eds) Life-like characters, tools, affective functions and applications, Springer, Berlin
Johns M, Silverman, BG (2001) How emotions and personality effect the utility of alternative decisions: a terrorist target selection case study. In: Tenth conference on computer generated forces and behavioral representation. SISO. Norfolk, Virginia, pp 55–64
Johnson JH, Vilhjálmsson H, Marsella S (2005) Serious games for language learning: how much game, how much AI? In: 12th international conference on artificial intelligence in education, Amsterdam, The Netherlands, 18–22 July, 2005
Joshi AK, Levy L, Takahashi M (1975) Tree adjunct grammars. J Comput Syst Sci 10: 136–163
Kamp H, Reyle U (1993) From discourse to logic. Kluwer, Dordrecht
Kendon A (1990) Conducting interaction. Cambridge University Press, Cambridge
Klesen M, Gebhard P (2004) Player markup language. Version 1.2.4, DFKI, internal document
Kölsch M, Martell C (2006) Toward a common human gesture description language, workshop on specification of mixed reality user interfaces: approaches, languages, standardization, IEEE Virtual Reality Conference (VR 06), Alexandria, VA, 25 March, 2006
Kipp M (2001) Anvil – a generic annotation tool for multimodal dialogue. In: Proceedings of the 7th European conference on speech communication and technology (Eurospeech), Aalborg, Denmark, 3–7 September 2001, pp 1367–1370
Kopp S, Jung B (2000) An anthropomorphic assistant for virtual assembly: max. In: Proceedings of the autonomous agents ’00 workshop: communicative agents in intelligent environments, Barcelona, Spain
Kopp S, Wachsmuth I (2004) Synthesizing multimodal utterances for conversational agents. J Comput Animation Virtual Worlds 15(1): 39–52
Kopp S, Krenn B, Marsella S, Marshall A, Pelachaud C, Pirker H, Thórisson K, Vilhjálmsson H (2006) Towards a common framework for multimodal generation: the behaviour markup language. In: Gratch J et al (eds) Intelligent virtual agents 2006, LNAI 4133. Springer, Berlin, pp 205–217
Kopp S, Pfeiffer-Leßmann N (2008) Functions of speaking and acting. In: Heylen D, Kopp S, Marsella S, Pelachaud C, Vilhjalmsson H (eds) Why conversational agents do what they do. Functional representations for generating conversational agent behavior, AAMAS 2008 Workshop 2, April 9, Estoril
Kranstedt A, Kopp S, Wachsmuth I (2002) MURML: a multimodal utterance representation markup language for conversational agents. In: Marriott A, Pelachaud C, Rist T, Ruttkay Z, Vilhjalmsson H (eds) Embodied conversational agents: let’s specify and compare them!, workshop notes, autonomous agents and multiagent systems 2002, University of Bologna, Bologna, Italy, 16 July, 2002
Krenn B, Pirker H (2004) Defining the gesticon: language and gesture coordination for interacting embodied agents. In: Proceedings of the AISB-2004 symposium on language, speech and gesture for expressive characters, University of Leeds, UK, March 29–April 1, 2004, pp 107–115
Krenn B, Sieber G (2008) Functional mark-up for behaviour planning. Theory and practice. In: Heylen D, Kopp S, Marsella S, Pelachaud C, Vilhjalmsson H (eds) Why conversational agents do what they do. Functional representations for generating conversational agent behavior, AAMAS 2008 Workshop 2, April 9, Estoril
Krenn B, Grice M, Piwek P, Schröder M, Klesen M, Baumann S, Pirker H, van Deemter K, Gstrein E (2002) Generation of multi-modal dialogue for net environments. In: Proceedings of KONVENS-02, Saarbrücken, Germany, September 30–October 2, 2002
Kshirsagar S, Magnenat-Thalmann N (2002) A multilayer personality model. In: Proceedings of 2nd international symposium on smart graphics, ACM Press, New York, NY, pp 107–115
Lee J, DeVault D, Marsella S, Traum D (2008) Thoughts on FML: behavior generation in the virtual human communication architecture. In: Heylen D, Kopp S, Marsella S, Pelachaud C, Vilhjalmsson H (eds) Why conversational agents do what they do. Functional representations for generating conversational agent behavior, AAMAS 2008 Workshop 2, 9 April 2008, Estoril
Louis V, Martinez T (2007) JADE semantics framework. In: Developing multi-agent systems with jade. Wiley, Chichester, pp 225–246
Mancini M, Pelachaud C (2008) The FML-APML language. In: Heylen D, Kopp S, Marsella S, Pelachaud C, Vilhjalmsson H (eds) Why conversational agents do what they do. Functional representations for generating conversational agent behavior, AAMAS 2008 Workshop 2, 9 April 2008, Estoril
Marsella S (2003) Interactive pedagogical drama: Carmen’s bright IDEAS assessed. In: Proceedings of the 4th international working conference on intelligent virtual agents (IVA’03), Irsee, Germany, 15–17 September 2003, pp 1–4
Marsella S, Gratch J (2009) EMA: A model of emotional dynamics. J Cogn Syst Res 10(1): 70–90
Marsella S, Johnson WL, LaBore C (2003) Interactive pedagogical drama for health interventions. In: Proceedings of the 11th international conference on artificial intelligence in education AIED 2003, Sidney, Australia, 20–24 September 2003
Martell C (2002) FORM: an extensible, kinematically-based gesture annotation scheme. In: Proceedings of ICSLP-2002, Denver, Colorado, 16–20 September, 2002, pp 353–356
Martell C, Howard P, Osborn C, Britt L, Myers K (2003) FORM2 kinematic gesture. Video recording and annotation. Linguistic Data Consortium LDC, Philadelphia, PA
Mateas M, Stern A (2003) Facade: an experiment in building a fully-realized interactive drama. In: Game Developer’s Conference: Game Design Track, San Jose, CA, USA, 20–24, March 2003
Mateas M, Stern A (2004) A behaviour language: joint action and behavioural Idioms. In: Prendinger H, Ishizuka M (eds) Life-like characters. Tools, affective functions, and applications. Springer, Berlin, pp 19–38
Matheson C, Pelachaud C, de Rosis F, Rist T (2003) MagiCster: believable agents and dialogue. In: Künstliche Intelligenz, special issue on “Embodied Conversational Agents”, November 2003, 4, pp 24–29
McCrae R R, Costa P T Jr. (1996) Toward a new generation of personality theories: theoretical contexts for the five-factor model. In: Wiggins SJ (ed) The five-factor model of personality: theoretical perspectives. Guilford, NY, pp 51–87
McNeill D (1992) Hand and mind – what gestures reveal about thought. The University of Chicago Press, Chicago, IL
Moffat D (1995) Personality parameters and programs. In: Lecture notes in artificial intelligence: creating personalities for synthetic actors: towards autonomous personality agents. LNCS. doi:10.1007/BFb0030565, pp 120–165
Moreno, R (2007) Animated software pedagogical agents: how do they help students construct knowledge from interactive multimedia games? In: Lowe R, Schnotz W (eds) Learning with animation. Cambridge University Press, New York, NY, pp 183–207
Mori K, Jatowt A, Ishizuka M. (2003) Enhancing conversational flexibility in multimodal interactions with embodied lifelike agents. In: Proceedings of the International conference on intelligent user interfaces (IUI 2003), Miami, Florida, 12–15 January 2003, pp 270–272
Niewiadomski R, Pelachaud C (2007) Model of facial expressions management for an embodied conversational agent. In: Proceedings of ACII 2007, September 12-14, Lisbon. LNCS. doi:10.1007/978-3-540-74889-2, pp 12–23
Nijholt A (2006) Towards the automatic generation of virtual presenter agents. In: Proceedings of InSITE 2006, June 25-28, Salford. Infor Sci 9: 97–115
Nischt M, Prendinger H, André E, Ishizuka M (2006) Creating three-dimensional animated characters: an experience report and recommendations of good practice. Upgrade: virtual environments 7(2): 35–41. http://www.upgrade-cepis.org/issues/2006/2/upgrade-vol-VII-2.pdf. Accessed 31 May 2010
Nischt M, Prendinger H, André E, Ishizuka M (2006) MPML3D: a reactive framework for the multimodal presentation markup language. In: Proceedings of the 6th international conference on intelligent virtual agents (IVA’06), August 21–23, Marina del Rey, CA, LNCS. doi:10.1007/11821830, pp 218–229
Noot H, Ruttkay Z (2004) Gesture in style. In: Camurri A, Volpe G (eds) Gesture-based communication in human-computer interaction – GW 2003. LNCS vol 2915, Springer, Berlin, pp 471–472
Ochs M, Pelachaud C, Sadek D (2008) An empathic virtual dialog agent to improve human-machine interaction. In: Seventh international joint conference on autonomous agents and multi-agent systems, AAMAS’08, Estoril, Portugal, 12–16 May 2008, pp 89–96
Ortony A (2003) On making believable emotional agents believable. In: Trappl R, Petta P, Payr S (eds) Emotions in humans and artefacts. MIT Press, Cambridge, MA, pp 189–212
Ortony A, Clore GL, Collins A (1988) The cognitive structure of emotions. Cambridge University Press, Cambridge
Ostermann J (1998) Animation of synthetic faces in MPEG-4. In: Proceedings of the computer animation’ 98, Philadelphia, PA, USA, 8–10 June 1998, pp 49–51
Pandzic IS, Forchheimer R (eds) (2002) MPEG4 facial animation – the standard, implementations and applications. Wiley, New York, NY
Paradiso A, L’Abbate M (2001) A model for the generation and combination of emotional expressions. In: Proceedings of the AA’ 01 workshop on multimodal communication and context in embodied agents, Montreal, Canada, 29 May 2001
Pelachaud C (2005) Multimodal expressive embodied conversational agents. In: Proceedings of the 13th annual ACM international conference on multimedia. SESSION: brave new topics 2: affective multimodal human-computer interaction; Singapore, 6–11, November 2005, pp 683–689
Peltz J, Kumar Thunga R (2005) HumanML: the vision. HumanML report white paper, July 12. http://www.oasis-open.org/committees/download.php/13625/HumanMLReport-WhitePaper.pdf Accessed 31 May 2010
Piwek P (2003) An annotated bibliography of affective natural language generation. version 1.3. (version 1.0 appeared in 2002 as ITRI Technical Report ITRI-02-02, University of Brighton). http://www.itri.brighton.ac.uk/projects/neca/affect-bib.pdf Accessed 31 May 2010
Piwek P, Krenn B, Schröder M, Grice M, Baumann S, Pirker H (2002) RRL: a rich representation language for the description of agent behaviour in NECA. In: Marriott A, Pelachaud C, Rist T, Ruttkay Z, Vilhjalmsson H (eds) Embodied conversational agents: let’s specify and compare them!, workshop notes, Autonomous Agents and Multiagent Systems 2002, University of Bologna, Bologna, Italy, 16 July, 2002
Poggi I, Pelachaud C, de Rosis F (2000) Eye communication in a conversational 3D synthetic agent. AI Commun 13(3): 169–182
Prendinger H, Ishizuka M (eds) (2004) Life-like characters. Cognitive technologies. Springer, Berlin
Prendinger H, Saeyor S, Ishizuka M (2004) MPML and SCREAM: scripting the bodies and minds of life-like characters. In: Prendinger H, Ishizuka M (eds) Life-like characters. Cognitive technologies. Springer, Berlin, pp 213–242
Prillwitz S, Leven R, Zienert H, Hanke T, Henning J (1989) Hamburg notation system for sign languages: an introductory guide. In: International studies on sign language and communication of the deaf, vol 5. Signum Press, Hamburg, Germany
Rank S, Petta P (2005) Appraisal for a character-based story-world. In: Panayiotopoulos T et al (eds) Intelligent virtual agents, 5th international working conference, IVA 2005. Kos, Greece, September 12–14. Springer, Berlin pp 495–496
Rehm M, André E (2005) From chatterbots to natural interaction – face to face communication with embodied conversational agents. IEICE transactions on information and systems, special issue on life-like agents and communication. Oxford University Press, Oxford, UK, pp 2445–2452
Rehm M, André E (2005) Catch me if you can: exploring lying agents in social settings. In: Proceedings of the fourth international joint conference on autonomous agents and multiagent systems AAMAS ’05. July 25 – 29, Utrecht, The Netherlands. ACM, New York, NY, pp 937–944
Rehm M, André E (2005) Informing the design of embodied conversational agents by analysing multimodal politeness behaviors. In: AISB symposium for conversational informatics, University of Hertfordshire, Hatfield, England, 12–15 April 2005
Rickel J, Johnson WL (1998) STEVE: a pedagogical agent for virtual reality. In: Sierra C et al (eds) Proceedings of the second international conference on autonomous agents (Agents’98), 9–13, Minneapolis/St. Paul, MN, USA. ACM Press, New York, NY, pp 332–333
Rist T (2004) Issues in the design of scripting and representation languages for life-like characters. In: Prendinger H, Ishizuka M (eds) Life-like characters. Tools, affective functions, and applications. Springer, Berlin, pp 463–468
Schabes Y (1990) Mathematical and computational aspects of lexicalized grammars. Ph.D. thesis, Computer Science Department, University of Pennsylvania
Scherer K (2000) Emotion. In: Hewstone M, Stroebe W (eds) Introduction to social psychology: a European perspective. Wiley-Blackwell, Oxford, UK, pp 151–191
Schröder M (2004) Speech and emotion research: an overview of research frameworks and a dimensional approach to emotional speech synthesis (Ph.D thesis). vol 7 of Phonus, Research Report of the Institute of Phonetics, Saarland University
Schröder M, Trouvain J (2003) The German text-to-speech synthesis system MARY: a tool for research, development and teaching. Int J Speech Tech 6: 365–377
Schröder M, Cowie R, Douglas-Cowie E, Westerdijk M, Gielen S (2001) Acoustic correlates of emotion dimensions in view of speech synthesis. In: Proceedings of Eurospeech 2001, Aalborg, Denmark, 3–7 September 2001, pp 87–90
Schröder M, Pirker H, Lamolle, Burkhardt F, Peter C, Zovato E (2010) Representing emotions and related states in technological systems. In: this volume
Searle J R (1969) Speech acts: an essay in the philosophy of language. Cambridge University Press, Cambridge
Serenari M, Dybkjaer L, Heid U, Kipp M, Reithinger N (2002) Survey of existing gesture, facial expression, and cross-modality coding schemes. IST-2000-26095 Deliverable D2.1, Project NITE
Stokoe WC (1978) Sign language structure: an outline of the communicative systems of the American deaf. Linstock Press, Silver Spring
Stone M, Bleam T, Doran C, Palmer M (2000) Lexicalized grammar and the description of motion events. In: TAG+5, Workshop on tree-adjoining grammar and related formalisms, Paris, France, 25–27 May 2000
Thiébaux M, Marsella S, Marshall AN, Kallmann M (2008) SmartBody: behavior realization for embodied conversational agents. In: Padgham L, Parkes DC, Müller J, Parsons S (eds) Proceedings of conference on autonomous agents and multi-agent systems (AAMAS08), Estoril, Portugal, 12–16 May 2008, pp 151–158
Trippel T, Gibbon D, Thies A, Milde JT, Looks K, Hell B, Gut U (2004) CoGesT: a formal transcription system for conversational gesture. In: Proceedings of 4th international conference on language resources and evaluation (LREC 2004), Lisbon, Portugal, 26–28 May 2004
Tsutsui T, Saeyor S, Ishizuka M (2000) MPML: a multimodal presentation markup language with character agent control functions. In: Proceedings of the world conference on the WWW and internet (WebNet 2000), San Antonio, TX, USA, October 30–November 4, 2000
van Deemter K, Krenn B, Piwek P, Klesen M, Schröder M, Baumann S (2008) Fully generated scripted dialogue for embodied agents. Artif Intell J 172(10):1219–1244
Vilhjálmsson H, Cantelmo N, Cassell J, Chafai NE, Kipp M, Kopp S, Mancini M, Marsella S, Marshall AN, Pelachaud C, Ruttkay Z, Thórisson KR, van Welbergen H, van der Werf RJ (2007) The behavior markup language: recent developments and challenges. In: Pelachaud C et al (eds) Intelligent virtual agents. Springer, Berlin, pp 99–111
Walker M, Cahn J, Whittaker S (1996) Linguistic style improvisation for lifelike computer characters. In: Proceedings of the AAAI Workshop on AI, Alife and Entertainment. August, Portland, Oregon, USA
Wiggins J (1996) The five-factor model of personality: theoretical perspectives. The Guilford Press, New York, NY
Yue B, de Byl P (2006) The state of the art in game AI standardisation. In: Proceedings of the 2006 international conference on game research and development. December 4, Perth, Australia, ACM International conference proceedings series Vol 223. Murdoch University, Australia, pp 41–46
Zong Y, Dohi H, Ishizuka M (2000) Multimodal presentation markup language MPML with emotion expression functions attached. In: Proceedings of the international symposium on multimedia software engineering (IEEE Computer Soc), Taipei, Taiwan
© 2011 Springer-Verlag Berlin Heidelberg
Krenn, B., Pelachaud, C., Pirker, H., Peters, C. (2011). Embodied Conversational Characters: Representation Formats for Multimodal Communicative Behaviours. In: Cowie, R., Pelachaud, C., Petta, P. (eds) Emotion-Oriented Systems. Cognitive Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15184-2_20
DOI: https://doi.org/10.1007/978-3-642-15184-2_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-15183-5
Online ISBN: 978-3-642-15184-2