Preliminary Study on Visual Attention Maps of Experts and Nonexperts When Examining Pathological Microscopic Images

  • Wangyang Yu
  • Menghan Hu (email author)
  • Shuning Xu
  • Qingli Li
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1181)

Abstract

Pathological microscopic images are regarded as a gold standard for the diagnosis of disease, and eye tracking technology is considered a very effective tool for medical education. It is therefore of great interest to use eye tracking to predict where pathologists or doctors, as well as persons with little or no experience, look in a pathological microscopic image. In the current work, we first establish a pathological microscopic image database with the eye movement data of experts and nonexperts (PMIDE), comprising a total of 425 pathological microscopic images. A statistical analysis is then conducted on PMIDE to examine the difference in eye movement behavior between experts and nonexperts. The results show that although there is no significant difference overall, experts focus on a broader scope than nonexperts. This inspires us to develop separate saliency models for experts and nonexperts. Furthermore, 10 existing saliency models are tested on PMIDE, and the performance of all of them is unacceptable, with AUC, CC, NSS and SAUC below 0.73, 0.47, 0.78 and 0.52, respectively. This study indicates that saliency models specific to pathological microscopic images urgently need to be developed using our database, PMIDE, or other related databases.
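The abstract reports saliency-model performance in terms of CC and NSS (among other metrics). As background, these two metrics have standard definitions: CC is the Pearson correlation between the predicted saliency map and a ground-truth fixation density map, and NSS is the mean of the z-scored predicted saliency values at the fixated pixels. The sketch below is a minimal NumPy illustration of those standard definitions, not the authors' evaluation code; the array shapes and variable names are assumptions for the example.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency: mean of the z-scored saliency
    values at fixated locations (fixation_map is a binary map)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return s[fixation_map.astype(bool)].mean()

def cc(saliency_map, gt_density):
    """Pearson linear correlation coefficient between the predicted
    saliency map and the ground-truth fixation density map."""
    a = (saliency_map - saliency_map.mean()) / saliency_map.std()
    b = (gt_density - gt_density.mean()) / gt_density.std()
    return (a * b).mean()

# Toy example on hypothetical 8x8 maps
rng = np.random.default_rng(0)
pred = rng.random((8, 8))          # hypothetical predicted saliency map
fix = np.zeros((8, 8))             # hypothetical binary fixation map
fix[2, 3] = 1
fix[5, 6] = 1
dens = rng.random((8, 8))          # hypothetical fixation density map
print("NSS:", nss(pred, fix), "CC:", cc(pred, dens))
```

Both scores are unbounded above by the map statistics: a CC of 1 means the prediction matches the density map up to an affine transform, and an NSS well above 0 means fixations land on above-average saliency values, which is the sense in which the thresholds reported in the abstract indicate poor performance.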

Keywords

Pathological microscopic images · Visual attention map · Saliency model · Database

Acknowledgement

This work is sponsored by the Shanghai Sailing Program (No. 19YF1414100), the National Natural Science Foundation of China (No. 61831015, No. 61901172), the STCSM (No. 18DZ2270700), and the China Postdoctoral Science Foundation funded project (No. 2016M600315).


Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Wangyang Yu¹
  • Menghan Hu¹ (email author)
  • Shuning Xu¹
  • Qingli Li¹

  1. Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
