
Scanpath Complexity: Modeling Reading/Annotation Effort Using Gaze Information


Part of the book series: Cognitive Intelligence and Robotics (CIR)

Abstract

In the previous chapter, we discussed how cognitive information derived from the eye-movement patterns of annotators can be used to model annotation complexity for translation and sentiment annotation. We observed that gaze data, a form of subconscious annotation, is useful for labeling training data with complexity scores when manually assigning such labels becomes extremely difficult because of their highly subjective nature. So far we have relied on simplistic gaze-based measures, such as total fixation duration, to label our data, and have then predicted these labels from derivable textual features. While measuring annotation complexity through total fixation/saccade duration may seem robust under the assumption that “complex tasks require more time,” it is more intuitive to consider the complexity of the eye-movement patterns in their entirety when deriving such labels.
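As a rough illustration of this labeling-and-prediction setup, the sketch below sums fixation durations per sentence to form a gaze-derived complexity label and then fits a simple regressor on textual features. The feature set, the toy data, and the use of numpy/scikit-learn are illustrative assumptions, not the chapter's actual pipeline.

# Minimal sketch (assumed, not the chapter's exact pipeline): derive a
# complexity label per sentence as total fixation duration, then fit a
# regressor that predicts this label from derivable textual features.
from dataclasses import dataclass
from typing import Dict, List

import numpy as np
from sklearn.linear_model import LinearRegression

@dataclass
class Fixation:
    sentence_id: int    # which sentence the fixation fell on
    word: str           # fixated word (kept only for readability)
    duration_ms: float  # fixation duration in milliseconds

def total_fixation_duration(fixations: List[Fixation]) -> Dict[int, float]:
    """Sum fixation durations per sentence: the 'simplistic' gaze-based label."""
    totals: Dict[int, float] = {}
    for f in fixations:
        totals[f.sentence_id] = totals.get(f.sentence_id, 0.0) + f.duration_ms
    return totals

def textual_features(sentence: str) -> List[float]:
    """Toy derivable textual features: token count and mean word length."""
    tokens = sentence.split()
    return [float(len(tokens)), float(np.mean([len(t) for t in tokens]))]

# Illustrative data: three sentences and a handful of recorded fixations.
sentences = {
    0: "The cat sat on the mat",
    1: "Quantum entanglement defies classical locality",
    2: "She read the short note quickly",
}
fixations = [
    Fixation(0, "cat", 180), Fixation(0, "mat", 210),
    Fixation(1, "entanglement", 420), Fixation(1, "locality", 390),
    Fixation(2, "note", 200), Fixation(2, "quickly", 230),
]

labels = total_fixation_duration(fixations)
ids = sorted(sentences)
X = np.array([textual_features(sentences[i]) for i in ids])
y = np.array([labels[i] for i in ids])

# Train on gaze-derived labels; at prediction time only the text is needed.
model = LinearRegression().fit(X, y)
print(dict(zip(ids, model.predict(X).round(1))))

In the chapter's setting, a richer scanpath-based score would replace total fixation duration on the labeling side; the prediction side remains text-only.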

Declaration: Consent has been obtained from the subjects who participated in the eye-tracking experiments conducted to collect the data used for the work reported in this chapter.


Notes

1. To obtain a nonzero product, attributes whose values are zero are discarded.

2. We chose the asymmetric Gaussian over other similar distributions because it allows the shapes of the left and right parts of the distribution to be controlled independently.

3. Integrating \(P(x_{t_{i+1}})\) from \(-\infty\) to \(\infty\) and equating the result to 1 yields \(Z\); a worked normalization under an assumed parametrization appears after these notes.

4. https://en.wikipedia.org/.

5. https://simple.wikipedia.org/.

6. http://www.cfilt.iitb.ac.in/cognitive-nlp.

7. The parafovea, or parafoveal belt, is a region of the retina that captures information within two degrees (approximately 6–8 characters) of the point of fixation being processed in foveal vision.

8. Too insignificant to report in Table 4.3.

9. Labels to train \(ScaComp_{l}\) are obviously not available for this experiment.
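For concreteness, the normalization in note 3 can be worked out under one common parametrization of the asymmetric Gaussian, with mean \(\mu\) and separate left/right spreads \(\sigma_l\) and \(\sigma_r\); this specific form is an assumption, since the chapter's exact parametrization is not reproduced here:

\[
P(x_{t_{i+1}}) = \frac{1}{Z}
\begin{cases}
\exp\!\left(-\dfrac{(x_{t_{i+1}}-\mu)^2}{2\sigma_l^2}\right), & x_{t_{i+1}} < \mu,\\[6pt]
\exp\!\left(-\dfrac{(x_{t_{i+1}}-\mu)^2}{2\sigma_r^2}\right), & x_{t_{i+1}} \ge \mu,
\end{cases}
\qquad
\int_{-\infty}^{\infty} P(x_{t_{i+1}})\,\mathrm{d}x_{t_{i+1}} = 1
\;\Rightarrow\;
Z = \sqrt{\frac{\pi}{2}}\,\bigl(\sigma_l + \sigma_r\bigr).
\]

Each half-Gaussian integrates to \(\sigma\sqrt{\pi/2}\), which is where the sum \(\sigma_l + \sigma_r\) comes from.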



Author information

Correspondence to Abhijit Mishra.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Mishra, A., Bhattacharyya, P. (2018). Scanpath Complexity: Modeling Reading/Annotation Effort Using Gaze Information. In: Cognitively Inspired Natural Language Processing. Cognitive Intelligence and Robotics. Springer, Singapore. https://doi.org/10.1007/978-981-13-1516-9_4


  • DOI: https://doi.org/10.1007/978-981-13-1516-9_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-1515-2

  • Online ISBN: 978-981-13-1516-9

  • eBook Packages: Computer Science, Computer Science (R0)
