
Gaze behavior data profiling and analysis system platform based on visual content representation

Published in: Multimedia Tools and Applications

Abstract

This paper presents a real-time system platform for profiling and analyzing gaze behavior based on visual content. The proposed system captures gaze information from multiple users and, through statistical analysis, measures the degree to which users perceive the visual content. A visual content representation scheme is presented for capturing and annotating gaze behavior effectively. An information correlation property among multiple image frames is defined so that a user's pattern of perception can be analyzed over complex visual content. To monitor users' gaze behavior in real time, monitoring rules are incorporated into the representation template. Because the number of users and the duration of observation can significantly increase the profile data size, an adaptive data compression technique is incorporated to alleviate this problem. The capabilities and functionality of the proposed system are verified with multiple users exhibiting different gaze behavior on a sequence of commercial image data.
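The pipeline the abstract describes (capturing gaze samples from multiple users, aggregating per-frame perception statistics, and compressing the growing profile data) can be sketched roughly as follows. The abstract does not specify the actual data model, annotation format, or compression scheme, so the `GazeSample` fields and the delta plus run-length encoder below are illustrative assumptions only, not the paper's method.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class GazeSample:
    """One captured gaze event (fields are assumed, not from the paper)."""
    user_id: int
    frame_id: int
    region: str   # annotated region of the visual content the gaze fell on
    t_ms: int     # capture timestamp in milliseconds

def perception_profile(samples):
    """Aggregate dwell counts per annotated region, grouped by frame.

    A higher count for a region suggests stronger perception of that
    part of the visual content across all observed users.
    """
    profile: dict[int, Counter] = {}
    for s in samples:
        profile.setdefault(s.frame_id, Counter())[s.region] += 1
    return profile

def delta_rle(timestamps):
    """Delta-encode a sorted timestamp stream, then run-length-encode it.

    Regularly sampled gaze data (e.g. a fixed eye-tracker frame rate)
    produces long runs of identical deltas, which compress well.
    Returns a list of [delta, run_length] pairs.
    """
    deltas = [timestamps[0]] + [b - a for a, b in zip(timestamps, timestamps[1:])]
    runs: list[list[int]] = []
    for d in deltas:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1
        else:
            runs.append([d, 1])
    return runs
```

For a tracker sampling at a steady 30 Hz, `delta_rle` collapses thousands of near-identical 33 ms gaps into a handful of run pairs, which is the kind of size reduction an adaptive profile-compression stage would aim for.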

Figures 1–21 appear in the full article.



Author information

Corresponding author: Nammee Moon.

Additional information

This research was supported by the 'Cross-Ministry Giga KOREA Project' of the Ministry of Science, ICT and Future Planning, Republic of Korea (GK14P0100, Development of Tele-Experience Service SW Platform based on Giga Media). This paper is an extended and improved version of work accepted at the KCIC-2013 and FCC-2014 conferences.


Cite this article

Oh, JM., Hong, S. & Moon, N. Gaze behavior data profiling and analysis system platform based on visual content representation. Multimed Tools Appl 75, 15211–15227 (2016). https://doi.org/10.1007/s11042-014-2285-7
