Categorical vs. Dimensional Representations in Multimodal Affect Detection during Learning

  • Conference paper
Intelligent Tutoring Systems (ITS 2012)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 7315)

Abstract

Learners experience a variety of emotions during learning sessions with Intelligent Tutoring Systems (ITS). The research community is building systems that are aware of these experiences, generally represented either as a category or as a point in a low-dimensional space. State-of-the-art systems detect these affective states from multimodal data in naturalistic scenarios. This paper provides evidence of how the choice of representation affects the quality of the detection system. We present a user-independent model for detecting learners’ affective states from video and physiological signals using both the categorical and dimensional representations. Machine learning techniques are used to select the best subset of features and to classify the various degrees of emotion for both representations. We provide evidence that the dimensional representation, particularly valence, produces higher accuracy.
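As an illustration of the kind of pipeline the abstract describes, the sketch below runs the same feature-selection and classification steps under a categorical labelling and under a binned valence (dimensional) labelling, with participant-grouped folds to keep the evaluation user-independent. It is a minimal sketch only: the synthetic data, the five-category label set, the valence bins, and the choice of scikit-learn's SelectKBest and SVC are assumptions for illustration, not the authors' actual features, classifiers, or results.

    # Minimal sketch (assumed setup): categorical vs. dimensional (valence) labels
    # run through one feature-selection + classification pipeline.
    # Features, labels, and classifier choices are placeholders, not the paper's.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    n_windows, n_features = 600, 40                      # fused video + physiology features (synthetic)
    X = rng.normal(size=(n_windows, n_features))
    participants = rng.integers(0, 20, size=n_windows)   # participant id for each window

    # Two label views of the same learning episodes:
    y_categorical = rng.integers(0, 5, size=n_windows)   # e.g. boredom, flow, confusion, ...
    valence = rng.uniform(-1.0, 1.0, size=n_windows)     # continuous valence rating
    y_valence = np.digitize(valence, bins=[-0.33, 0.33]) # binned into low / neutral / high valence

    pipeline = Pipeline([
        ("select", SelectKBest(f_classif, k=15)),  # keep the most discriminative features
        ("clf", SVC(kernel="rbf")),                # any standard classifier could stand in here
    ])

    cv = GroupKFold(n_splits=5)  # hold out whole participants -> user-independent evaluation
    for name, y in [("categorical", y_categorical), ("valence (dimensional)", y_valence)]:
        scores = cross_val_score(pipeline, X, y, cv=cv, groups=participants)
        print(f"{name:<22s} accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

Discretizing valence into bins is only one way to cast the dimensional representation as a classification problem; regressing on the raw valence scores would be another reasonable reading of the same setup.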

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hussain, M.S., Monkaresi, H., Calvo, R.A. (2012). Categorical vs. Dimensional Representations in Multimodal Affect Detection during Learning. In: Cerri, S.A., Clancey, W.J., Papadourakis, G., Panourgia, K. (eds) Intelligent Tutoring Systems. ITS 2012. Lecture Notes in Computer Science, vol 7315. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30950-2_11

  • DOI: https://doi.org/10.1007/978-3-642-30950-2_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-30949-6

  • Online ISBN: 978-3-642-30950-2
