Marker-Less Gesture and Facial Expression Based Affect Modeling
Many affective Intelligent Tutoring Systems (ITSs) today use multi-modal approaches to recognize student affect. These approaches have achieved promising results, but each has its limitations: most are difficult to deploy because they require special equipment that is disruptive to student activities. This work is an effort towards developing affective ITSs that are easy to deploy, scalable, accurate, and inexpensive. The study uses a webcam and a Microsoft Kinect to detect the facial expressions and body gestures of students, respectively. A corpus for eight students was built; the SVM PolyKernel, SVM PUK, SVM RBF, LogitBoost, and Multilayer Perceptron machine learning algorithms were applied to discover patterns in the facial expression data, while C4.5 was used for the body gesture features. The body gesture and facial point distance data sets were used to build user-specific models. The F-measure for the fusion of gesture and face ranges from 0.017 to 0.342.
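The modeling pipeline described above (per-user feature vectors fed to SVMs with different kernels, evaluated by F-measure) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses scikit-learn's `SVC` as a stand-in for the Weka-style classifiers named in the abstract, and synthetic random features in place of the real facial point distance corpus.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Synthetic stand-in for facial point distance features
# (e.g. 20 inter-landmark distances per frame for one student).
X = rng.normal(size=(200, 20))
# Hypothetical binary affect label (e.g. engaged vs. not engaged).
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train one user-specific model per kernel and score it with F-measure,
# mirroring the SVM PolyKernel / SVM RBF comparison in the study.
scores = {}
for kernel in ("poly", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    scores[kernel] = f1_score(y_test, clf.predict(X_test))
    print(f"{kernel}: F-measure = {scores[kernel]:.3f}")
```

With real data, one such model would be trained per student (user-specific models), and the face and gesture predictions would then be fused before computing the reported F-measures.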
Keywords: Facial Expression, Emotion Recognition, Neutral Position, Confusion Matrix, Intelligent Tutoring System