Patterns of Attention: How Data Visualizations Are Read

  • Laura E. Matzen
  • Michael J. Haass
  • Kristin M. Divis
  • Mallory C. Stites
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10284)

Abstract

Data visualizations are used to communicate information to people in a wide variety of contexts, but few tools are available to help visualization designers evaluate the effectiveness of their designs. Visual saliency maps, which predict the regions of an image most likely to draw a viewer's attention, could be a useful evaluation tool, but existing saliency models often make poor predictions for abstract data visualizations because they do not account for the importance of features such as text. In this paper, we use data from two eye tracking experiments to investigate attention to text in data visualizations. The data sets were collected under two different task conditions: a memory task and a free viewing task. Across both tasks, the text elements in the visualizations consistently drew attention, especially during the early stages of viewing. These findings highlight the need to incorporate additional features into saliency models intended for data visualizations.
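The abstract's central claim, that saliency models applied to visualizations should account for text, can be made concrete with a small sketch. The code below is illustrative only and is not the authors' model: the difference-of-Gaussians contrast measure stands in for a full bottom-up saliency model, and the text_mask input and text_weight parameter are assumptions standing in for a real text detector and a fitted weight.

# A minimal sketch, NOT the authors' model: it blends a crude bottom-up
# contrast map with a text-region prior, illustrating the abstract's point
# that saliency models for visualizations should weight text regions.
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance(image):
    """Rec. 601 luma from an RGB image with values in [0, 255]."""
    img = image.astype(np.float64) / 255.0
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def contrast_saliency(image, sigma_center=2.0, sigma_surround=16.0):
    """Center-surround (difference-of-Gaussians) contrast: a stand-in
    for a classic bottom-up saliency model such as Itti-Koch."""
    lum = luminance(image)
    dog = np.abs(gaussian_filter(lum, sigma_center)
                 - gaussian_filter(lum, sigma_surround))
    return dog / (dog.max() + 1e-12)  # normalize to [0, 1]

def text_weighted_saliency(image, text_mask, text_weight=0.5):
    """Blend bottom-up saliency with a binary text mask. The mask is
    assumed to come from a separate text detector (not shown), and
    text_weight is an illustrative parameter, not a fitted value."""
    bottom_up = contrast_saliency(image)
    return (1.0 - text_weight) * bottom_up + text_weight * text_mask

Raising text_weight shifts the predicted fixation map toward text regions, mirroring the paper's finding that text reliably attracts attention early in viewing.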

Keywords

Data visualizations · Text · Eye tracking

Acknowledgements

This work was funded by the Laboratory Directed Research and Development (LDRD) Program at Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the Department of Energy’s National Nuclear Security Administration under Contract DE-AC04-94AL85000.

The authors would like to thank Deborah Cronin and Jim Crowell for collecting the eye tracking data at the University of Illinois at Urbana-Champaign, as well as Hank Kaczmarski and Camille Goudeseune for their support.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Laura E. Matzen¹
  • Michael J. Haass¹
  • Kristin M. Divis¹
  • Mallory C. Stites¹

  1. Sandia National Laboratories, Albuquerque, USA
