Continuous Analysis of Affect from Voice and Face

  • Hatice Gunes
  • Mihalis A. Nicolaou
  • Maja Pantic

Abstract

Human affective behavior is multimodal, continuous and complex. Despite major advances in affective computing, modeling, analyzing, interpreting and responding to human affective behavior remains a challenge for automated systems. Affective and behavioral computing researchers have therefore increasingly explored how best to model, analyze and interpret the subtlety, complexity and continuity of affective behavior in terms of latent dimensions (e.g., arousal, power and valence) and appraisals, rather than in terms of a small number of discrete emotion categories (e.g., happiness and sadness). This chapter aims to (i) give a brief overview of existing efforts and major accomplishments in the modeling and analysis of emotional expressions in dimensional and continuous space, focusing on open issues and new challenges in the field, and (ii) introduce a representative approach for multimodal continuous analysis of affect from voice and face.
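
To make the dimensional, continuous view concrete, the sketch below (our illustration, not the chapter's implementation) shows how a predicted continuous valence trace might be scored against an annotator's trace using root mean square error and Pearson correlation, two measures commonly reported for this task. All data, the sampling rate, and the trace shapes are hypothetical.

```python
# A minimal sketch of continuous dimensional affect evaluation: each
# affective dimension (e.g., valence or arousal) is a real-valued trace
# over time, and predictions are compared against continuous annotations.
# The annotation and prediction below are synthetic toy data.

import numpy as np


def rmse(prediction: np.ndarray, annotation: np.ndarray) -> float:
    """Root mean square error between two equal-length traces."""
    return float(np.sqrt(np.mean((prediction - annotation) ** 2)))


def correlation(prediction: np.ndarray, annotation: np.ndarray) -> float:
    """Pearson correlation coefficient between two traces."""
    return float(np.corrcoef(prediction, annotation)[0, 1])


# Hypothetical example: a 10 s valence trace sampled at 25 Hz,
# with values in [-1, 1] (negative to positive valence).
t = np.linspace(0.0, 10.0, 250)
annotated_valence = np.tanh(np.sin(0.5 * t))            # toy ground truth
predicted_valence = annotated_valence + 0.1 * np.random.randn(t.size)

print(f"RMSE:        {rmse(predicted_valence, annotated_valence):.3f}")
print(f"Correlation: {correlation(predicted_valence, annotated_valence):.3f}")
```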

Keywords

Root Mean Square Error, Facial Expression, Emotion Category, Affect Recognition, Arousal Dimension

Acknowledgements

This work has been funded by the European Union's Seventh Framework Programme [FP7/2007-2013] under Grant agreement No. 211486 (SEMAINE) and by the European Research Council under Starting Grant agreement No. ERC-2007-StG-203143 (MAHNOB).

Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  • Hatice Gunes (1)
  • Mihalis A. Nicolaou (2)
  • Maja Pantic (2, 3)

  1. Queen Mary University of London, London, UK
  2. Imperial College, London, UK
  3. University of Twente, Twente, The Netherlands
