Multimodal Authoring Tool for Populating a Database of Emotional Reactive Animations

  • Alejandra García-Rojas
  • Mario Gutiérrez
  • Daniel Thalmann
  • Frédéric Vexo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3869)

Abstract

We aim to create a model of emotionally reactive virtual humans. This model will help to define realistic behavior for virtual characters based on emotions and on events in the Virtual Environment to which they react. A large set of pre-recorded animations will be used to obtain such a model. We have defined a knowledge-based system to store animations of reflex movements, taking into account personality and emotional state. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that addresses this problem. Our multimodal tool combines motion capture equipment, a handheld device, and a large projection screen.
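The abstract describes a knowledge base that indexes pre-recorded reflex animations by emotional state and personality, so that a character's reaction can be selected from its current affective state. The paper does not specify the storage model at this level of detail; the sketch below is a hypothetical minimal illustration of such an index, with invented names (`AnimationEntry`, `AnimationDatabase`, `query`) and Five-Factor Model traits encoded as values in [0, 1].

```python
from dataclasses import dataclass

@dataclass
class AnimationEntry:
    """A pre-recorded reflex animation annotated with affective metadata.

    Hypothetical structure: field names are illustrative, not the paper's.
    """
    name: str          # motion-capture clip identifier
    emotion: str       # emotional state the reaction expresses
    personality: dict  # Five-Factor Model traits, each in [0.0, 1.0]

class AnimationDatabase:
    """Stores annotated clips and retrieves those matching a character state."""

    def __init__(self):
        self.entries = []

    def add(self, entry):
        self.entries.append(entry)

    def query(self, emotion, trait, min_level):
        """Return clips for an emotion whose trait annotation meets a threshold."""
        return [e for e in self.entries
                if e.emotion == emotion
                and e.personality.get(trait, 0.0) >= min_level]

# Populate the database with two reactions to the same emotion,
# differentiated by the personality of the character they suit.
db = AnimationDatabase()
db.add(AnimationEntry("startle_jump", "fear",
                      {"neuroticism": 0.8, "extraversion": 0.3}))
db.add(AnimationEntry("calm_step_back", "fear",
                      {"neuroticism": 0.2, "extraversion": 0.5}))

# Select a fear reaction for a highly neurotic character.
clips = db.query("fear", "neuroticism", 0.5)
```

In this sketch a highly neurotic character retrieves only the startled reaction, illustrating how the same triggering event can map to different animations depending on personality.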

Keywords

Virtual Environment · Multiagent System · Motion Capture · Five Factor Model · International Joint Conference



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Alejandra García-Rojas (1)
  • Mario Gutiérrez (1)
  • Daniel Thalmann (1)
  • Frédéric Vexo (1)

  1. Virtual Reality Laboratory (VRlab), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland