
Affect Recognition in Real Life Scenarios

  • Theodoros Kostoulas
  • Todor Ganchev
  • Nikos Fakotakis
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6456)

Abstract

Affect awareness is important not only for improving human-computer interaction, but also for facilitating the detection of atypical behaviours, danger, or crisis situations in surveillance and human-behaviour monitoring applications. The present work aims at the detection and recognition of specific affective states, such as panic, anger, and happiness, in close-to-real-world conditions. The affect recognition scheme investigated here relies on an utterance-level audio parameterization technique and a robust pattern recognition scheme based on the Gaussian Mixture Model with Universal Background Model (GMM-UBM) paradigm. We evaluate the applicability of the suggested architecture on the PROMETHEUS database, which covers a number of indoor and outdoor conditions. The experimental results demonstrate the potential of the suggested architecture on the challenging task of affect recognition in real-world conditions; however, further enhancement of the affect recognition performance would be needed before deployment in practical applications.
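
To make the GMM-UBM paradigm mentioned above more concrete, the sketch below shows one way such a recognizer could be assembled with scikit-learn. It is an illustrative approximation, not the authors' implementation: the utterance-level feature extraction is assumed to happen elsewhere, the number of mixture components and the relevance factor are placeholder values, and only the mixture means are adapted (a common simplification of Reynolds-style MAP adaptation).

from sklearn.mixture import GaussianMixture

def train_ubm(pooled_features, n_components=64, seed=0):
    # Fit a Universal Background Model on class-independent features.
    # pooled_features: (N, D) array of utterance-level acoustic parameters.
    ubm = GaussianMixture(n_components=n_components, covariance_type='diag',
                          random_state=seed)
    ubm.fit(pooled_features)
    return ubm

def map_adapt_means(ubm, class_features, relevance=16.0):
    # Derive a class-specific model by MAP-adapting only the UBM means;
    # weights and covariances are copied unchanged from the UBM.
    resp = ubm.predict_proba(class_features)             # (N, K) component posteriors
    n_k = resp.sum(axis=0) + 1e-10                       # soft counts per component
    ml_means = (resp.T @ class_features) / n_k[:, None]  # per-component ML means
    alpha = (n_k / (n_k + relevance))[:, None]           # data-dependent adaptation weight
    adapted = GaussianMixture(n_components=ubm.n_components, covariance_type='diag')
    adapted.weights_ = ubm.weights_
    adapted.covariances_ = ubm.covariances_
    adapted.precisions_cholesky_ = ubm.precisions_cholesky_
    adapted.means_ = alpha * ml_means + (1.0 - alpha) * ubm.means_
    return adapted

def recognize(utterance_features, class_models, ubm):
    # Score each affect model against the UBM (average log-likelihood ratio)
    # and return the label of the best-scoring model.
    scores = {label: model.score(utterance_features) - ubm.score(utterance_features)
              for label, model in class_models.items()}
    return max(scores, key=scores.get)

In such a setup, one UBM would be trained on pooled speech from all conditions, one adapted model would be derived per target state (e.g. panic, anger, happiness, neutral), and each test utterance would be assigned to the state with the largest log-likelihood ratio against the UBM.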

Keywords

affect recognition, emotion recognition, real-world data



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Theodoros Kostoulas (1)
  • Todor Ganchev (1)
  • Nikos Fakotakis (1)
  1. Wire Communications Laboratory, Department of Electrical and Computer Engineering, University of Patras, Rion-Patras, Greece
