
Continuous Analysis of Affect from Voice and Face

Abstract

Human affective behavior is multimodal, continuous and complex. Despite major advances in the affective computing research field, modeling, analyzing, interpreting and responding to human affective behavior remain challenging for automated systems. Affective and behavioral computing researchers have therefore recently invested increased effort in exploring how best to model, analyze and interpret the subtlety, complexity and continuity of affective behavior in terms of latent dimensions (e.g., arousal, power and valence) and appraisals, rather than a small number of discrete emotion categories (e.g., happiness and sadness). This chapter aims to (i) give a brief overview of existing efforts and major accomplishments in modeling and analyzing emotional expressions in dimensional and continuous space, focusing on open issues and new challenges in the field, and (ii) introduce a representative approach for multimodal continuous analysis of affect from voice and face.
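The contrast the abstract draws between a few discrete emotion categories and a continuous, dimensional representation can be sketched in code. The following is a minimal illustration, not the chapter's method: the `AffectFrame` class, the category-to-coordinate placements, and the 40 ms frame rate are assumptions, and only two of the three dimensions mentioned (valence and arousal) are shown.

```python
from dataclasses import dataclass

# Hypothetical placements of two discrete categories on the
# valence-arousal circumplex; numbers are illustrative only.
CATEGORY_TO_VA = {
    "happiness": (0.8, 0.5),    # positive valence, fairly high arousal
    "sadness": (-0.7, -0.4),    # negative valence, low arousal
}

@dataclass
class AffectFrame:
    """One time step of a continuous, dimensional affect annotation."""
    t: float        # time in seconds
    valence: float  # displeasure (-1.0) .. pleasure (+1.0)
    arousal: float  # calm (-1.0) .. excited (+1.0)

def from_category(label: str, t: float = 0.0) -> AffectFrame:
    """A discrete label collapses to a single point in V-A space."""
    valence, arousal = CATEGORY_TO_VA[label]
    return AffectFrame(t=t, valence=valence, arousal=arousal)

# A continuous annotation, by contrast, is a trajectory of frames
# (one frame per 40 ms here; the values are made up for illustration):
trace = [AffectFrame(t=0.04 * i, valence=0.1 * i, arousal=0.05 * i)
         for i in range(5)]

print(from_category("happiness"))
print(len(trace), "frames")
```

The point of the sketch is that a trajectory like `trace` preserves subtle, gradual changes in expressed affect that a single categorical label would discard, which is what makes continuous prediction in dimensional space possible.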


Notes

  1. http://www.semaine-project.eu


Acknowledgements

This work has been funded by EU [FP7/2007-2013] Grant agreement No. 211486 (SEMAINE) and the ERC Starting Grant agreement No. ERC-2007-StG-203143 (MAHNOB).

Correspondence to Hatice Gunes.


Copyright information

© 2011 Springer-Verlag London Limited

About this chapter

Cite this chapter

Gunes, H., Nicolaou, M.A., Pantic, M. (2011). Continuous Analysis of Affect from Voice and Face. In: Salah, A., Gevers, T. (eds) Computer Analysis of Human Behavior. Springer, London. https://doi.org/10.1007/978-0-85729-994-9_10

  • DOI: https://doi.org/10.1007/978-0-85729-994-9_10

  • Publisher Name: Springer, London

  • Print ISBN: 978-0-85729-993-2

  • Online ISBN: 978-0-85729-994-9

  • eBook Packages: Computer Science (R0)
