The Analysis and Prediction of Eye Gaze When Viewing Statistical Graphs

  • Andre Harrison
  • Mark A. Livingston
  • Derek Brock
  • Jonathan Decker
  • Dennis Perzanowski
  • Christopher Van Dolson
  • Joseph Mathews
  • Alexander Lulushi
  • Adrienne Raglin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10284)

Abstract

Statistical graphs are images that display quantitative information in a visual format intended to allow easy and consistent interpretation. Statistical graphs often take the form of line graphs or bar graphs. In fields such as cybersecurity, sets of statistical graphs are used to present complex information; however, the interpretation of these more complex graphs is often not obvious, and unless the viewer has been trained to understand each graph used, the interpretation of the data may be limited or incomplete [1]. To study the perception of statistical graphs, we tracked participants' eye movements while they studied simple statistical graphs. Each participant studied a graph and later viewed a second graph purporting to show a subset of the same data; they were asked to look for a substantive change in the meaning of the second graph compared to the first.

To model where the participants would direct their attention, we ran several visual saliency models over the graphs [2, 3, 4]. Visual saliency models try to predict where people will look in an image; however, they are typically designed and evaluated to predict where people look in natural images (images of natural or real-world scenes), which contain a great deal of potential information, invite subjective interpretations, and are not typically very quantitative. The ideal observer model [2], unlike most saliency models, tries to predict where people look based on the amount of information contained at each location in an image. The underlying theory of the ideal observer model is that when a person sees a new image, they want to understand it as quickly as possible; to do so, the observer directs attention first to the locations in the image that provide the most information (i.e., those that give the best understanding of the content).
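The ideal observer model itself is specified in [2]; purely as an illustration of the information-driven idea, the sketch below (a toy single-scale version in plain NumPy, not the authors' implementation) scores each pixel of a grayscale image by the Shannon entropy of the gray-level distribution in its local neighborhood, so that hard-to-predict regions such as text labels and data marks score higher than uniform background.

```python
import numpy as np

def local_entropy_saliency(gray, patch=16, bins=32):
    """Toy information-based saliency: each pixel is scored by the
    Shannon entropy (in bits) of gray levels in the surrounding patch.
    High-entropy neighborhoods are less predictable, i.e. carry more
    information, and so are treated as more likely fixation targets."""
    h, w = gray.shape
    half = patch // 2
    padded = np.pad(np.asarray(gray, dtype=float), half, mode="reflect")
    sal = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = padded[y:y + patch, x:x + patch]
            hist, _ = np.histogram(win, bins=bins, range=(0.0, 256.0))
            p = hist[hist > 0] / win.size          # empirical patch distribution
            sal[y, x] = -np.sum(p * np.log2(p))    # Shannon entropy in bits
    return sal / sal.max() if sal.max() > 0 else sal
```

A full model such as [2] operates over multiple scales and feature channels and is far more efficient; this loop is only meant to make the "attend where the information is" principle concrete.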

In this paper, we analyze the eye gaze data from a study on statistical graphs to evaluate both the consistency among participants in the way they gazed at the graphs and how well a saliency model can predict where people are likely to look in a graph. During the study, as a mental diversion from the primary task, participants also looked at natural images between each set of graphs. When the participants looked at these images, they did so without guidance; that is, they were not told to look at the images for any particular reason or objective. This allowed the viewing patterns for the graphs to be compared with the eye gaze data for the natural images, while also revealing differences in the processing of simple graphs versus complex natural images.

An interesting result is that viewers processed the graphs differently than natural images: the center of the graph was not a strong predictor of attention. In natural images, a Gaussian kernel at the center of an image can achieve a receiver operating characteristic (ROC) score of over 80% due to an inherent center bias in both the selection of natural images and the gaze patterns of participants [5]. This viewing pattern was present when participants looked at the natural images during the diversion task, but it was absent when they studied the graphs. The study also found fairly consistent, but unusually low, inter-subject consistency ROC scores, where inter-subject consistency is the ability to predict one participant's gaze locations from the gaze positions of the other (n − 1) participants [3]. The saliency model itself was, by default, an inconsistent predictor of participants' eye gaze. Like the participants, the model identified titles and axis labels as salient; it also found the bars and lines on the graphs salient, yet the eye gaze of most participants rarely fell on, or lingered over, the bars and lines themselves. This may be due to the simplicity of the graphs, implying that very little time or attention needed to be directed at the actual bar or line marks in order to remember them.
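To make these two measures concrete, the sketch below is a minimal illustration (assuming fixations are given as (row, column) pixel coordinates; it uses NumPy, SciPy, and scikit-learn, and is not the study's actual analysis code). It builds a Gaussian center-prior baseline, scores any saliency map against a set of fixations with the standard ROC procedure (fixated pixels as positives, all remaining pixels as negatives), and computes leave-one-out inter-subject consistency in the sense of [3].

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.metrics import roc_auc_score

def gaussian_center_map(h, w, sigma_frac=0.25):
    """Center-bias baseline: a 2-D Gaussian over the image, peaked at its center."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-(((ys - cy) / (sigma_frac * h)) ** 2
                    + ((xs - cx) / (sigma_frac * w)) ** 2) / 2.0)

def fixation_auc(sal_map, fixations):
    """ROC score for a saliency map: saliency values at fixated pixels
    are the positives, values at all remaining pixels the negatives."""
    h, w = sal_map.shape
    labels = np.zeros(h * w, dtype=int)
    for y, x in fixations:
        labels[y * w + x] = 1
    return roc_auc_score(labels, sal_map.ravel())

def intersubject_auc(fixations_per_subject, h, w, blur_sigma=25):
    """Leave-one-out inter-subject consistency: predict each participant's
    fixations from a blurred fixation map built from the other n - 1
    participants, then average the resulting ROC scores."""
    scores = []
    for i, held_out in enumerate(fixations_per_subject):
        counts = np.zeros((h, w))
        for j, fixes in enumerate(fixations_per_subject):
            if j != i:
                for y, x in fixes:
                    counts[y, x] += 1
        scores.append(fixation_auc(gaussian_filter(counts, blur_sigma), held_out))
    return float(np.mean(scores))
```

With natural images, fixation_auc(gaussian_center_map(h, w), fixations) would be expected to approach the 80%+ center-bias scores reported in [5]; with the graph stimuli described above, it should fall much closer to chance (50%).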

Keywords

Cognitive modeling · Perception · Emotion and interaction · Understanding human cognition and behavior in complex tasks and environments · Visual salience · Information theory · Statistical graphics

References

  1. Kosslyn, S.M.: Understanding charts and graphs. Appl. Cogn. Psychol. 3, 185–225 (1989)
  2. Harrison, A., Etienne-Cummings, R.: An entropy based ideal observer model for visual saliency. In: 2012 46th Annual Conference on Information Sciences and Systems (CISS), pp. 1–6. IEEE (2012)
  3. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Advances in Neural Information Processing Systems, pp. 545–552 (2007)
  4. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998)
  5. Zhang, L., Tong, M.H., Marks, T.K., Shan, H., Cottrell, G.W.: SUN: a Bayesian framework for saliency using natural statistics. J. Vis. 8, 32.1–32.20 (2008)
  6. Gattis, M., Holyoak, K.J.: How graphs mediate analog and symbolic representation. In: Proceedings of the 16th Annual Conference of the Cognitive Science Society (1994)
  7. Aumer-Ryan, P.: Visual rating system for HFES graphics: design and analysis. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 2124–2128 (2006)
  8. Fausset, C.B., Rogers, W.A., Fisk, A.D.: Visual graph display guidelines. Atlanta, GA (2008)
  9. Gillan, D.J., Wickens, C.D., Hollands, J.G., Carswell, C.M.: Guidelines for presenting quantitative data in HFES publications. Hum. Factors: J. Hum. Factors Ergon. Soc. 40, 28–41 (1998)
  10. Petkosek, M.A., Moroney, W.F.: Guidelines for constructing graphs. In: Human Factors and Ergonomics Society Annual Meeting, pp. 1006–1010 (2004)
  11. Borkin, M.A., Bylinskii, Z., Kim, N.W., Bainbridge, C.M., Yeh, C.S., Borkin, D., Pfister, H., Oliva, A.: Beyond memorability: visualization recognition and recall. IEEE Trans. Vis. Comput. Graph. 22, 519–528 (2016)
  12. Borkin, M.A., Vo, A.A., Bylinskii, Z., Isola, P., Sunkavalli, S., Oliva, A., Pfister, H.: What makes a visualization memorable? IEEE Trans. Vis. Comput. Graph. 19, 2306–2315 (2013)
  13. Vessey, I., Galletta, D.: Cognitive fit: an empirical study of information acquisition. Inf. Syst. Res. 2, 63–84 (1991)
  14. Vessey, I.: Cognitive fit: a theory-based analysis of the graphs versus tables literature. Decis. Sci. 22, 219–240 (1991)
  15. Ngo, D.C.L., Samsudin, A., Abdullah, R.: Aesthetic measures for assessing graphic screens. J. Inf. Sci. Eng. 16, 97–116 (2000)
  16. Zen, M., Vanderdonckt, J.: Towards an evaluation of graphical user interfaces aesthetics based on metrics (2014)
  17. Acartürk, C.: Towards a systematic understanding of graphical cues in communication through statistical graphs. J. Vis. Lang. Comput. 25, 76–88 (2014)
  18. Greenberg, R.A.: Graph comprehension: difficulties, individual differences, and instruction (2014)
  19. Halford, G.S., Baker, R., McCredden, J.E., Bain, J.D.: How many variables can humans process? Psychol. Sci. 16, 70–76 (2005)
  20. Pinker, S.: A theory of graph comprehension (1990)
  21. Trickett, S.B., Trafton, J.G.: Toward a comprehensive model of graph comprehension: making the case for spatial cognition. In: Barker-Plummer, D., Cox, R., Swoboda, N. (eds.) Diagrams 2006. LNCS (LNAI), vol. 4045, pp. 286–300. Springer, Heidelberg (2006). doi:10.1007/11783183_38
  22. Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12, 97–136 (1980)
  23. Oliva, A., Torralba, A., Castelhano, M.S., Henderson, J.M.: Top-down control of visual attention in object detection. In: Proceedings 2003 International Conference on Image Processing, pp. I-253–I-256. IEEE (2003)
  24. Gao, D., Vasconcelos, N.: Integrated learning of saliency, complex features, and object detectors from cluttered scenes. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 282–287. IEEE (2005)
  25. Gao, D., Vasconcelos, N.: Discriminant saliency for visual recognition from cluttered scenes. Adv. Neural Inf. Process. Syst. 17, 1 (2004)
  26. Torralba, A., Oliva, A., Castelhano, M.S., Henderson, J.M.: Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychol. Rev. 113, 766–786 (2006)
  27. Parkhurst, D.J., Law, K., Niebur, E.: Modeling the role of salience in the allocation of overt visual attention. Vis. Res. 42, 107–123 (2002)
  28. Itti, L., Koch, C.: A saliency-based search mechanism for overt and covert shifts of visual attention. Vis. Res. 40, 1489–1506 (2000)
  29. Zhao, Q., Koch, C.: Learning a saliency map using fixated locations in natural scenes. J. Vis. 11, 1–15 (2011)
  30. Chauvin, A., Herault, J., Marendaz, C., Peyrin, C.: Natural scene perception: visual attractors and images processing. In: Connectionist Models of Cognition and Perception – Proceedings of the Seventh Neural Computation and Psychology Workshop, pp. 236–248. World Scientific, Singapore (2002)
  31. Lin, Y., Fang, B., Tang, Y.: A computational model for saliency maps by using local entropy. In: AAAI Conference on Artificial Intelligence (2010)
  32. Koch, C., Ullman, S.: Shifts in selective visual attention: towards the underlying neural circuitry. Hum. Neurobiol. 4, 219–227 (1985)
  33. Peters, R.J., Iyer, A., Itti, L., Koch, C.: Components of bottom-up gaze allocation in natural images. Vis. Res. 45, 2397–2416 (2005)
  34. Kadir, T., Brady, M.: Saliency, scale and image description. Int. J. Comput. Vis. 45, 83–105 (2001)
  35. Tamayo, N., Traver, V.J.: Entropy-based saliency computation in log-polar images. In: Proceedings of the International Conference on Computer Vision Theory and Applications, pp. 501–506 (2008)
  36. Wang, W., Wang, Y., Huang, Q., Gao, W.: Measuring visual saliency by site entropy rate. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2368–2375. IEEE (2010)
  37. Bruce, N.D.B., Tsotsos, J.K.: Saliency based on information maximization. Adv. Neural Inf. Process. Syst. 18, 155–162 (2006)
  38. Bruce, N.D.B., Tsotsos, J.K.: Saliency, attention, and visual search: an information theoretic approach. J. Vis. 9, 5.1–5.24 (2009)
  39. Itti, L., Baldi, P.: Bayesian surprise attracts human attention. Adv. Neural Inf. Process. Syst. 18, 547 (2006)
  40. Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 2106–2113. IEEE (2009)
  41. Dahlberg, J.: Eye tracking with eye glasses (2010)
  42. Tatler, B.W.: The central fixation bias in scene viewing: selecting an optimal viewing position independently of motor biases and image feature distributions. J. Vis. 7, 4 (2007)
  43. 't Hart, B.M., Vockeroth, J., Schumann, F., Bartl, K., Schneider, E., König, P., Einhäuser, W.: Gaze allocation in natural stimuli: comparing free exploration to head-fixed viewing conditions. Vis. Cogn. 17, 1132–1158 (2009)
  44. Schumann, F., Einhäuser-Treyer, W., Vockeroth, J., Bartl, K., Schneider, E., König, P.: Salient features in gaze-aligned recordings of human visual input during free exploration of natural environments. J. Vis. 8, 12.1–12.17 (2008)
  45. Bylinskii, Z., Borkin, M.A.: Eye fixation metrics for large scale analysis of information visualizations. In: ETVIS Workshop on Eye Tracking and Visualization (2015)
  46. Borji, A., Itti, L.: State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 35, 185–207 (2013)
  47. Cooper, R.A., Plaisted-Grant, K.C., Baron-Cohen, S., Simons, J.S.: Eye movements reveal a dissociation between memory encoding and retrieval in adults with autism. Cognition 159, 127–138 (2017)
  48. Judd, T., Ehinger, K., Durand, F., Torralba, A.: Learning to predict where humans look. In: Proceedings of IEEE International Conference on Computer Vision, pp. 2106–2113 (2009)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Andre Harrison (1)
  • Mark A. Livingston (2)
  • Derek Brock (2)
  • Jonathan Decker (2)
  • Dennis Perzanowski (2)
  • Christopher Van Dolson (2)
  • Joseph Mathews (2)
  • Alexander Lulushi (2)
  • Adrienne Raglin (1)

  1. Army Research Laboratory, Adelphi, USA
  2. Naval Research Laboratory, Washington, DC, USA
