Abstract
Designing affective Human–Computer Interfaces such as Embodied Conversational Agents requires modeling the relations between spontaneous emotions and behaviors across several modalities. There has been considerable psychological research on emotion and nonverbal communication, yet these studies have been based mostly on acted basic emotions. This paper explores how manual annotation and image processing might cooperate toward the representation of spontaneous emotional behavior in low-resolution TV videos. We describe a corpus of TV interviews and the manual annotations that have been defined for it. We explain the image processing algorithms designed for the automatic estimation of movement quantity. Finally, we explore several ways of comparing the manual annotations with the cues extracted by image processing.
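The abstract does not specify the paper's exact movement-quantity algorithm, but a common baseline for this kind of cue extraction is frame differencing: counting how many pixels change noticeably between consecutive video frames. The sketch below is a minimal, hypothetical illustration of that idea (the function name `movement_quantity` and the intensity threshold are assumptions, not the authors' method):

```python
import numpy as np

def movement_quantity(prev_frame, frame, threshold=15):
    """Estimate movement quantity as the fraction of pixels whose
    grayscale intensity changed by more than `threshold` between two
    consecutive frames (simple frame differencing; illustrative only,
    not the algorithm described in the paper)."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > threshold).mean())

# Synthetic example: a blank frame vs. one with a bright moving patch.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 50:80] = 200  # simulated moving region (20 x 30 pixels)
print(movement_quantity(prev, curr))  # fraction of changed pixels
```

On real low-resolution TV footage, such a raw per-frame score would typically be smoothed over time and normalized per shot before being compared against manual annotations.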
Copyright information
© 2006 International Federation for Information Processing
Cite this paper
Martin, J.-C., Caridakis, G., Devillers, L., Karpouzis, K., Abrilian, S. (2006). Manual Annotation and Automatic Image Processing of Multimodal Emotional Behaviors in TV Interviews. In: Maglogiannis, I., Karpouzis, K., Bramer, M. (eds) Artificial Intelligence Applications and Innovations. AIAI 2006. IFIP International Federation for Information Processing, vol 204. Springer, Boston, MA. https://doi.org/10.1007/0-387-34224-9_42
Print ISBN: 978-0-387-34223-8
Online ISBN: 978-0-387-34224-5