Inferring Human Knowledgeability from Eye Gaze in Mobile Learning Environments

  • Oya Celiktutan
  • Yiannis Demiris
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11134)

Abstract

What people look at during a visual task reflects an interplay between oculomotor functions and cognitive processes. In this paper, we study the links between eye gaze and cognitive states to investigate whether eye gaze reveals information about an individual’s knowledgeability. We focus on a mobile learning scenario in which a user and a virtual agent play a quiz game on a hand-held mobile device. To the best of our knowledge, this is the first attempt to predict a user’s knowledgeability from eye gaze using a noninvasive eye tracking method on mobile devices: we perform gaze estimation with the device’s front-facing camera rather than specialised eye tracking hardware. First, we define a set of eye movement features that are discriminative for inferring a user’s knowledgeability. Next, we train a model to predict the user’s knowledgeability while they respond to a question. Using eye movement features only, we obtain a classification accuracy of 59.1%, on par with human performance. This has implications for (1) adapting the virtual agent’s behaviour to the user’s needs (e.g., the agent can give hints) and (2) personalising quiz questions to the user’s perceived knowledgeability.
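The pipeline the abstract outlines — estimating gaze with the front-facing camera, summarising it into eye movement features, and classifying knowledgeability per question — can be sketched as follows. This is a minimal illustration, not the authors’ implementation: it assumes per-frame gaze angles are already available (e.g., from a gaze estimation toolkit such as OpenFace), and the specific features, the fixation threshold, and the SVM classifier are illustrative choices.

```python
# Minimal sketch (illustrative, not the paper's implementation): summarise
# per-frame gaze angle estimates into eye movement features and train a
# binary knowledgeability classifier. Feature set, threshold, and classifier
# are assumptions made for this example.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def eye_movement_features(gaze, fix_threshold=0.05):
    """Summarise a (T, 2) sequence of gaze angles (yaw, pitch in radians)
    into a fixed-length descriptor. The threshold separating fixation-like
    from saccade-like samples is an assumed value."""
    step = np.linalg.norm(np.diff(gaze, axis=0), axis=1)  # frame-to-frame gaze shift
    fixating = step < fix_threshold                       # low-velocity samples
    return np.array([
        step.mean(), step.std(), step.max(),  # gaze-shift statistics
        fixating.mean(),                      # fraction of fixation-like frames
        gaze[:, 0].std(), gaze[:, 1].std(),   # horizontal/vertical dispersion
    ])

# Toy data standing in for one gaze track per quiz question, with binary
# labels (1 = user knew the answer).
rng = np.random.default_rng(0)
tracks = [rng.normal(scale=0.1, size=(150, 2)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.stack([eye_movement_features(t) for t in tracks])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=5).mean())  # chance-level on toy data
```

On real data one would extract one gaze track per question and evaluate with per-user cross-validation so that the classifier cannot exploit identity cues.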

Keywords

Assistive mobile applications · Noninvasive gaze tracking · Analysis of eye movements · Human knowledgeability prediction

Acknowledgements

This work was funded by the Horizon 2020 Framework Programme of the European Union under grant agreement no. 643783 (project PAL).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Personal Robotics Laboratory, Department of Electrical and Electronic Engineering, Imperial College London, London, UK