A Pathology Image Diagnosis Network with Visual Interpretability and Structured Diagnostic Report

  • Kai Ma
  • Kaijie Wu
  • Hao Cheng
  • Chaochen Gu
  • Rui Xu
  • Xinping Guan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11306)

Abstract

Despite recent advances in the medical diagnosis domain, many challenges remain in reaching more accurate conclusions and in presenting semantically and visually interpretable results during the diagnosis process. We propose an interpretable diagnosis process implemented as a deep learning model consisting of three interrelated parts: an image model, an attention model, and a conclusion model. The image model extracts semantic features using convolutional neural networks (CNNs). The conclusion model, integrated with the semantic-attribute attention model, predicts the conclusion label with a long short-term memory (LSTM) network, which captures the discriminative relationships between semantic attributes. The network is trained end-to-end with a different loss weight for each model. On a dataset of cervical intraepithelial neoplasia images, diagnostic reports, and labels (CINDRAL), the approach demonstrates a significant improvement over the baseline in conclusion prediction.
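As a rough illustration of the architecture the abstract describes, the sketch below wires together the three interrelated parts in PyTorch: a CNN image model, an attention model over spatial features, and an LSTM conclusion model, trained jointly with a weighted per-model loss. All module shapes, the attribute and conclusion counts, the attention formulation, and the loss weights (`w_attr`, `w_concl`) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the three-part diagnosis network from the abstract:
# image model (CNN) -> attention model -> conclusion model (LSTM),
# trained end-to-end with a weighted loss per sub-model.
# All sizes, names, and weights are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagnosisNet(nn.Module):
    def __init__(self, num_attributes=10, num_conclusions=4, feat_dim=256):
        super().__init__()
        # Image model: a small CNN standing in for the paper's feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Attention model: scores each spatial location against a query
        # derived from the pooled image feature.
        self.attn_query = nn.Linear(feat_dim, feat_dim)
        self.attr_head = nn.Linear(feat_dim, num_attributes)  # semantic attributes
        # Conclusion model: an LSTM consumes the attended feature together
        # with the attribute predictions, then emits the conclusion label.
        self.lstm = nn.LSTM(feat_dim + num_attributes, feat_dim, batch_first=True)
        self.conclusion_head = nn.Linear(feat_dim, num_conclusions)

    def forward(self, images):
        fmap = self.cnn(images)                        # (B, C, H, W)
        B, C, H, W = fmap.shape
        locs = fmap.flatten(2).transpose(1, 2)         # (B, H*W, C)
        query = self.attn_query(locs.mean(dim=1))      # (B, C)
        scores = (locs @ query.unsqueeze(2)).squeeze(2)
        attn = F.softmax(scores, dim=1)                # (B, H*W)
        attended = (attn.unsqueeze(2) * locs).sum(dim=1)  # (B, C)
        attr_logits = self.attr_head(attended)         # attribute predictions
        step = torch.cat([attended, torch.sigmoid(attr_logits)], dim=1)
        out, _ = self.lstm(step.unsqueeze(1))          # single-step LSTM here
        concl_logits = self.conclusion_head(out[:, -1])
        # The attention map is what would back the visual interpretability.
        return attr_logits, concl_logits, attn.view(B, H, W)

def loss_fn(attr_logits, concl_logits, attr_targets, concl_targets,
            w_attr=0.5, w_concl=1.0):                  # weights are assumptions
    # End-to-end training with a different weight per sub-model's loss,
    # as the abstract notes.
    attr_loss = F.binary_cross_entropy_with_logits(attr_logits, attr_targets)
    concl_loss = F.cross_entropy(concl_logits, concl_targets)
    return w_attr * attr_loss + w_concl * concl_loss
```

In this sketch the returned attention map is what would be rendered as the interpretability heatmap; in the paper the attention is guided by semantic attributes, whereas the self-derived query above is only a simple stand-in.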

Keywords

Deep learning · Visual interpretability · Pathology diagnosis process

Acknowledgements

This work was supported by the National Key Scientific Instruments and Equipment Development Program of China (2013YQ03065101) and by the National Natural Science Foundation of China under Grants 61521063 and 61503243.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Kai Ma (1)
  • Kaijie Wu (1)
  • Hao Cheng (1)
  • Chaochen Gu (1)
  • Rui Xu (1)
  • Xinping Guan (1)

  1. Shanghai Jiao Tong University, Shanghai, China