
Emotion Determination in eLearning Environments Based on Facial Landmarks

  • Conference paper
  • In: Learning Technology for Education in Cloud – The Changing Face of Education (LTEC 2016)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 620)

Abstract

Massive Open Online Courses (MOOCs) are a new kind of e-Learning environment that makes it possible to address vast numbers of students. MOOCs allow students all over the world to participate in lectures independently of place and time. The sessions, which are in some cases joined by more than 100,000 students, are based on small units of teaching material containing videos or texts.

However, today's MOOCs are static environments that do not take the diversity of the students and their situational context into account. Current MOOCs can be seen as mass processing rather than as individual treatment of individual students. Thus MOOCs need to be personalized in addition to being massive.

In order to personalize an e-Learning environment, it is first necessary to collect data, or personal factors, about the student, his or her current environment, and his or her situational context. This data can then be processed and used as input for adaptive functions. Many input factors are conceivable, such as cognitive style, prior knowledge, the currently used device, or personal goals. The input factors can be grouped into technical, personal, and situational factors, as sketched below. Situational factors in particular may help to support students in different learning situations.
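
As a minimal illustrative sketch (in Java, the language of the paper's server component), such a grouping could be modeled as follows; all type and field names here are hypothetical and not taken from the paper:

```java
// Hypothetical sketch (Java 16+ for records) of the factor grouping
// described above; the names are illustrative, not from the paper.
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

enum FactorGroup { TECHNICAL, PERSONAL, SITUATIONAL }

// One input factor for the adaptive functions,
// e.g. new InputFactor("mood", FactorGroup.SITUATIONAL, "happy").
record InputFactor(String name, FactorGroup group, String value) {}

class StudentContext {
    private final Map<FactorGroup, List<InputFactor>> factors =
            new EnumMap<>(FactorGroup.class);

    void add(InputFactor f) {
        factors.computeIfAbsent(f.group(), g -> new ArrayList<>()).add(f);
    }

    List<InputFactor> get(FactorGroup g) {
        return factors.getOrDefault(g, List.of());
    }
}
```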

This paper describes an approach to detecting the student's current mood as a situational input factor. The mood of a student in a learning situation may be an interesting feature that can serve as instant feedback on the teaching materials currently in use. The proposed approach relies on the widespread availability of built-in cameras in the devices students use, such as smartphones, tablets, or laptop computers. The captured frames from these devices are processed by a Java-based server component that detects selected facial landmarks. Based on the relative positions of these landmarks, the emotion potentially being shown is determined; a sketch of such a pipeline is given below.
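
To make the pipeline concrete, the following is a minimal sketch, not the author's actual implementation. It assumes OpenCV 3.x or later with its Java bindings on the classpath and a Haar cascade file on disk; since no Java call for the landmark step is given in the paper, the landmark coordinates below are illustrative placeholders derived from the detected face rectangle.

```java
// A minimal sketch of the described pipeline, NOT the author's actual code.
// Assumptions: OpenCV 3.x+ Java bindings, a Haar cascade file on disk; the
// landmark points are placeholders, since the landmark detector itself
// (e.g. flandmark/clandmark) is not shown here.
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class EmotionSketch {

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Preprocess a captured frame: grayscale conversion and histogram
        // equalization, typical normalization steps before face detection.
        Mat frame = Imgcodecs.imread("frame.jpg");
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.equalizeHist(gray, gray);

        // Detect face regions with a Viola-Jones style Haar cascade.
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_alt.xml");
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(gray, faces);

        for (Rect face : faces.toArray()) {
            // Placeholder for the landmark step: a real system would run a
            // landmark detector on the face region; these points are fixed
            // fractions of the face rectangle, purely for illustration.
            Point leftCorner  = new Point(face.x + 0.30 * face.width,
                                          face.y + 0.72 * face.height);
            Point rightCorner = new Point(face.x + 0.70 * face.width,
                                          face.y + 0.72 * face.height);
            Point mouthCenter = new Point(face.x + 0.50 * face.width,
                                          face.y + 0.75 * face.height);
            System.out.println(classify(leftCorner, rightCorner, mouthCenter));
        }
    }

    // Toy heuristic on relative landmark positions: mouth corners clearly
    // above the mouth center (smaller y in image coordinates) suggest a
    // smile, clearly below suggest a frown; anything else counts as neutral.
    static String classify(Point left, Point right, Point center) {
        double avgCornerY = (left.y + right.y) / 2.0;
        double delta = center.y - avgCornerY; // positive: corners raised
        if (delta > 3.0)  return "happy";
        if (delta < -3.0) return "sad";
        return "neutral";
    }
}
```

A production system would replace the fixed thresholds and placeholder points with the output of a trained landmark detector and a classifier over the resulting geometry.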

The output of the system may be used to adjust the difficulty level of tests or to determine the preferred media type, as illustrated in the sketch below.
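
As a hypothetical example of such an adaptive function, again in Java, the mapping below is illustrative only and not taken from the paper:

```java
// Hypothetical adaptive function fed by the detected emotion; the specific
// mapping from emotions to difficulty changes is illustrative only.
public class DifficultyAdapter {
    static int adjustDifficulty(String emotion, int currentLevel) {
        switch (emotion) {
            case "happy":   return currentLevel + 1;              // engaged: raise difficulty
            case "sad":
            case "angry":   return Math.max(1, currentLevel - 1); // struggling: ease off
            default:        return currentLevel;                  // neutral: keep the level
        }
    }

    public static void main(String[] args) {
        System.out.println(adjustDifficulty("happy", 3)); // prints 4
    }
}
```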




Corresponding author

Correspondence to Tobias Augustin.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Augustin, T. (2016). Emotion Determination in eLearning Environments Based on Facial Landmarks. In: Uden, L., Liberona, D., Feldmann, B. (eds) Learning Technology for Education in Cloud – The Changing Face of Education. LTEC 2016. Communications in Computer and Information Science, vol 620. Springer, Cham. https://doi.org/10.1007/978-3-319-42147-6_11


  • DOI: https://doi.org/10.1007/978-3-319-42147-6_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-42146-9

  • Online ISBN: 978-3-319-42147-6

  • eBook Packages: Computer Science, Computer Science (R0)
