
Can an Affect-Sensitive System Afford to Be Context Independent?

  • Andreas Marpaung
  • Avelino Gonzalez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10257)

Abstract

There has been a wave of interest in affect recognition among researchers in the field of affective computing. Most of this research uses a context-independent approach. Since humans may misunderstand others' observed facial, vocal, or body behavior without any contextual knowledge, we question whether any of these human-centric affect-sensitive systems can be robust without such knowledge. To answer this question, we conducted a study using previously studied audio files under three different settings: no contextual indication, one level of contextual knowledge (either action or relationship/environment), and two levels of contextual knowledge (both action and relationship/environment). Our work confirms that contextual knowledge can indeed improve the recognition of human emotion.
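
The three experimental settings described above lend themselves to a compact illustration. The following Python sketch is purely hypothetical (the condition names, emotion labels, and listener responses are assumptions for illustration, not the authors' data or code); it shows how recognition accuracy could be compared across the no-context, one-level, and two-level conditions.

# Minimal sketch (hypothetical names and data, not the authors' implementation):
# comparing listener recognition accuracy across the three context conditions.
from enum import Enum


class ContextCondition(Enum):
    NONE = "no contextual indication"
    ONE_LEVEL = "action OR relationship/environment"
    TWO_LEVELS = "action AND relationship/environment"


def accuracy(responses, ground_truth):
    """Fraction of listener labels that match the intended emotion of each clip."""
    hits = sum(1 for r, g in zip(responses, ground_truth) if r == g)
    return hits / len(ground_truth)


# Hypothetical intended emotions for four audio clips, and the labels a listener
# might assign under each level of contextual knowledge.
ground_truth = ["anger", "joy", "sadness", "fear"]
responses_by_condition = {
    ContextCondition.NONE:       ["joy", "joy", "fear", "fear"],
    ContextCondition.ONE_LEVEL:  ["anger", "joy", "fear", "fear"],
    ContextCondition.TWO_LEVELS: ["anger", "joy", "sadness", "fear"],
}

for condition, responses in responses_by_condition.items():
    print(f"{condition.value}: accuracy = {accuracy(responses, ground_truth):.2f}")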

Keywords

Affect recognition · Affective computing · Speech · Paralinguistic · Context-centric · Contextual knowledge


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Intelligent System Lab, Department of Computer Science, University of Central Florida, Orlando, USA
