Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent

Chapter in Emotion-Oriented Systems (Cognitive Technologies), Springer, 2011

Abstract

To be believable, embodied conversational agents (ECAs) must express emotions consistently and naturally across modalities. An ECA has to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires studying and representing emotions and the coordination of modalities during non-basic, realistic human behaviour; defining languages for representing the behaviours the ECA is to display; and having access to mono-modal representations such as gesture repositories. This chapter is concerned with coordinating the generation of signs in multiple modalities in such an affective agent: designers need to know how the agent should coordinate its facial expression, speech, gestures and other modalities to show emotion, since this synchronisation of modalities is a central feature of emotion.
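
As a concrete illustration of what "coordinated signs" can mean computationally, the sketch below shows one minimal way a multimodal behaviour plan could be represented and time-aligned. This is an assumption for illustration only: ECA research typically uses dedicated XML-based representation languages for this purpose, and every name here (Signal, BehaviourPlan, the signal labels, the synchronise rule) is hypothetical rather than taken from this chapter.

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    """One mono-modal sign, e.g. a facial expression or a gesture."""
    modality: str   # "speech", "face", "gesture", "gaze", ...
    content: str    # a label from a mono-modal repository, e.g. a gesture id
    start: float    # onset, in seconds from the start of the utterance
    end: float      # offset, in seconds

@dataclass
class BehaviourPlan:
    """A set of signals the ECA should display as one emotional behaviour."""
    emotion: str
    signals: list = field(default_factory=list)

    def synchronise(self, anchor: str = "speech") -> None:
        """Naive coordination rule: shift every signal so its onset
        coincides with the anchor modality's onset, making all signs
        of the emotion start together."""
        anchors = [s for s in self.signals if s.modality == anchor]
        if not anchors:
            return
        t0 = anchors[0].start
        for s in self.signals:
            duration = s.end - s.start
            s.start, s.end = t0, t0 + duration

# Coordinated signs of anger across three modalities (hypothetical labels).
plan = BehaviourPlan("anger", [
    Signal("speech", "utterance_01", 0.0, 2.0),
    Signal("face", "frown_AU4", 0.3, 1.8),
    Signal("gesture", "fist_beat", 0.5, 1.2),
])
plan.synchronise()
for s in plan.signals:
    print(f"{s.modality:8s} {s.content:13s} {s.start:.1f}-{s.end:.1f} s")
```

Real systems coordinate at a finer grain than whole-signal onsets, for instance aligning a gesture's stroke phase with the pitch accent of the accompanying speech; the sketch only makes the basic alignment problem tangible.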

Author information

Correspondence to Jean-Claude Martin.

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Martin, JC. et al. (2011). Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent. In: Cowie, R., Pelachaud, C., Petta, P. (eds) Emotion-Oriented Systems. Cognitive Technologies. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15184-2_18

  • DOI: https://doi.org/10.1007/978-3-642-15184-2_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15183-5

  • Online ISBN: 978-3-642-15184-2

  • eBook Packages: Computer Science, Computer Science (R0)
