Patterns of Attention: How Data Visualizations Are Read
Data visualizations are used to communicate information in a wide variety of contexts, but few tools exist to help visualization designers evaluate the effectiveness of their designs. Visual saliency maps, which predict the regions of an image most likely to draw a viewer's attention, could serve as such an evaluation tool, but existing models of visual saliency often make poor predictions for abstract data visualizations. In particular, these models do not account for the importance of features such as text, which can lead to inaccurate saliency maps for visualizations. In this paper we use data from two eye tracking experiments to investigate attention to text in data visualizations. The data sets were collected under two different task conditions: a memory task and a free-viewing task. Across both tasks, the text elements in the visualizations consistently drew attention, especially during the early stages of viewing. These findings highlight the need to incorporate text and other visualization-specific features into saliency models that will be applied to visualizations.
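To make the proposal concrete, one simple way to incorporate text into a saliency model is to blend a baseline saliency map with a mask marking detected text regions. The sketch below is purely illustrative and assumes hypothetical inputs (`saliency`, `text_mask`) and a mixing weight that the paper does not prescribe; it only demonstrates the kind of feature combination the findings motivate.

```python
import numpy as np

def augment_saliency(saliency, text_mask, text_weight=0.4):
    """Blend a baseline saliency map with a binary text-region mask.

    saliency:    2-D array of non-negative baseline saliency values.
    text_mask:   2-D binary array (1 where text was detected).
    text_weight: hypothetical mixing weight -- the paper argues only
                 that text features matter, not how to weight them.
    """
    s = saliency / (saliency.max() + 1e-9)            # normalize to [0, 1]
    blended = (1.0 - text_weight) * s + text_weight * text_mask
    return blended / (blended.max() + 1e-9)           # renormalize

# Toy example: a flat baseline map plus a text region in the top row.
base = np.full((4, 6), 0.5)
mask = np.zeros((4, 6))
mask[0, :] = 1.0
out = augment_saliency(base, mask)                    # top row now most salient
```

In a real pipeline the mask would come from a text detector and the blend weight would be fit to eye tracking data such as the fixation maps collected in these experiments.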
Keywords: Data visualizations · Text · Eye tracking
This work was funded by the Laboratory Directed Research and Development (LDRD) Program at Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the Department of Energy’s National Nuclear Security Administration under Contract DE-AC04-94AL85000.
The authors would like to thank Deborah Cronin and Jim Crowell for collecting the eye tracking data at the University of Illinois at Urbana-Champaign, as well as Hank Kaczmarski and Camille Goudeseune for their support.