VIP: A Unifying Framework for Computational Eye-Gaze Research

Conference paper in: Human Behavior Understanding (HBU 2013)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 8212)

Abstract

Eye-gaze is an emerging modality in many research areas and applications. We present our VIP framework, which captures the dependence of eye-gaze on the Visual stimulus, the Intent, and the Person. This unifying framework characterizes current computational eye-gaze models and allows computer scientists to formally define their research problems and compare them with other work. We review the state-of-the-art in computational eye-gaze research and applications with reference to our framework. Using the framework, we identify gaps in eye-gaze research and present our work on the new research problem of attribute classification, achieving an accuracy of 0.92 for Introvert/Extrovert classification.
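
The attribute-classification result mentioned above (0.92 accuracy for Introvert/Extrovert) suggests a simple pipeline: summarize a viewer's gaze record into aggregate features and train a binary classifier on them. The sketch below is a minimal illustration of that idea, not the paper's actual method; the feature set (fixation durations, saccade amplitudes, pupil diameter) and the linear SVM with cross-validation are assumptions chosen for the example.

```python
# Hypothetical sketch: classify Introvert vs. Extrovert from aggregate gaze features.
# The features and classifier here are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def gaze_features(fixations, saccades, pupil):
    """Summarize one viewing session into a fixed-length feature vector.

    fixations: fixation durations (ms)
    saccades:  saccade amplitudes (deg)
    pupil:     pupil diameter samples (mm)
    """
    return np.array([
        fixations.mean(), fixations.std(), len(fixations),
        saccades.mean(), saccades.std(),
        pupil.mean(), pupil.std(),
    ])


# Synthetic stand-in data: one feature vector per viewer, with a binary
# Introvert (0) / Extrovert (1) label. A real experiment would use recorded gaze.
rng = np.random.default_rng(0)
X = np.vstack([
    gaze_features(rng.gamma(2.0, 150.0, 80),   # fixation durations
                  rng.gamma(2.0, 2.5, 80),     # saccade amplitudes
                  rng.normal(3.5, 0.3, 80))    # pupil diameters
    for _ in range(40)
])
y = rng.integers(0, 2, size=40)  # placeholder labels

# Linear SVM on standardized features, evaluated with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

With the synthetic labels above, accuracy hovers around chance; the point is only to show the shape of the pipeline, in which the Person component of the framework becomes the prediction target.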

Copyright information

© 2013 Springer International Publishing Switzerland

About this paper

Cite this paper

Ma, KT., Sim, T., Kankanhalli, M. (2013). VIP: A Unifying Framework for Computational Eye-Gaze Research. In: Salah, A.A., Hung, H., Aran, O., Gunes, H. (eds) Human Behavior Understanding. HBU 2013. Lecture Notes in Computer Science, vol 8212. Springer, Cham. https://doi.org/10.1007/978-3-319-02714-2_18

  • DOI: https://doi.org/10.1007/978-3-319-02714-2_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-02713-5

  • Online ISBN: 978-3-319-02714-2

  • eBook Packages: Computer Science, Computer Science (R0)
