Marker-Less Gesture and Facial Expression Based Affect Modeling

  • Sherlo Yvan Cantos
  • Jeriah Kjell Miranda
  • Melisa Renee Tiu
  • Mary Czarinelle Yeung
Part of the Proceedings in Information and Communications Technology book series (PICT, volume 7)


Many affective Intelligent Tutoring Systems (ITSs) today use multi-modal approaches to recognize student affect. These studies have achieved promising results, but each has its limitations: most are difficult to deploy because they require special equipment that disrupts student activities. This work is an effort toward developing affective ITSs that are easy to deploy, scalable, accurate, and inexpensive. The study uses a webcam to detect the students' facial expressions and a Microsoft Kinect to detect their body gestures. A corpus was built for eight students; five machine learning algorithms (SVM PolyKernel, SVM PUK, SVM RBF, LogitBoost, and Multilayer Perceptron) were applied to discover patterns in the facial expression data, while C4.5 was used for the body gesture features. The body gesture and facial point distance data sets were used to build user-specific models. The F-measure for the fusion of gesture and face ranges from 0.017 to 0.342.
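The models above are evaluated with the F-measure, the harmonic mean of precision and recall computed from a confusion matrix. As a minimal sketch of how that metric is derived (the counts below are invented for illustration and are not taken from the study's data):

```python
# F-measure from per-class confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives.
def f_measure(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0  # degenerate case: no correct predictions for this class
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one affect class (not from the paper):
# precision = 8/11, recall = 8/13, F-measure = their harmonic mean.
print(f_measure(tp=8, fp=3, fn=5))
```

An F-measure as low as 0.017, the bottom of the range reported for the fused gesture-and-face models, indicates that precision and/or recall for some affect classes were near zero.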


Keywords: Facial Expression · Emotion Recognition · Neutral Position · Confusion Matrix · Intelligent Tutoring System
These keywords were added by machine and not by the authors; the process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer Tokyo 2013

Authors and Affiliations

  • Sherlo Yvan Cantos (1)
  • Jeriah Kjell Miranda (1)
  • Melisa Renee Tiu (1)
  • Mary Czarinelle Yeung (1)
  1. De La Salle University-Manila, Manila, Philippines
