
Investigating Acoustic Cues in Automatic Detection of Learners’ Emotion from Auto Tutor

  • Conference paper
Affective Computing and Intelligent Interaction (ACII 2011)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 6975)


Abstract

This study investigates the emotion-discriminant ability of acoustic cues extracted from speech collected with the automatic computer tutoring system AutoTutor. Its purpose is twofold: to examine acoustic cues for detecting learner emotion from the speech channel of the tutoring system, and to compare the emotion-discriminant performance of these acoustic cues with that of the conversational cues reported in previous work. The comparison of classification performance shows that the emotions of flow and boredom are better captured by acoustic cues than by conversational cues, while conversational cues play a more important role in multiple-emotion classification.
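To make the evaluation setting concrete, the following is a minimal, hypothetical Python sketch of how the emotion-discriminant performance of a feature set can be estimated: each utterance is represented by a fixed-length acoustic feature vector, and a classifier is scored with k-fold cross-validation. The feature values, labels, dimensions, and classifier choice (logistic regression via scikit-learn) are illustrative placeholders, not the study's actual features, models, or data.

    # Hypothetical sketch (not the authors' pipeline): estimating how well a set of
    # acoustic features discriminates two learner emotions, e.g. boredom vs. flow.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder data: each row summarizes one utterance with a fixed-length
    # acoustic feature vector (e.g., pitch, energy, and voice-quality statistics).
    n_utterances, n_features = 200, 12
    X = rng.normal(size=(n_utterances, n_features))   # synthetic feature vectors
    y = rng.integers(0, 2, size=n_utterances)         # 0 = boredom, 1 = flow (synthetic labels)

    # Cross-validated accuracy is one common way to compare feature sets
    # (e.g., acoustic cues vs. conversational cues) on the same classification task.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Mean 5-fold accuracy: {scores.mean():.2f}")

With real data, the placeholder arrays would be replaced by per-utterance acoustic features and annotated emotion labels, and the same cross-validation protocol could be rerun with conversational features to compare the two cue types.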





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sun, R., Moore, E. (2011). Investigating Acoustic Cues in Automatic Detection of Learners’ Emotion from Auto Tutor. In: D’Mello, S., Graesser, A., Schuller, B., Martin, JC. (eds) Affective Computing and Intelligent Interaction. ACII 2011. Lecture Notes in Computer Science, vol 6975. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24571-8_10


  • DOI: https://doi.org/10.1007/978-3-642-24571-8_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24570-1

  • Online ISBN: 978-3-642-24571-8

