
MULAN: Multitask Universal Lesion Analysis Network for Joint Lesion Detection, Tagging, and Segmentation

  • Ke Yan (corresponding author)
  • Youbao Tang
  • Yifan Peng
  • Veit Sandfort
  • Mohammadhadi Bagheri
  • Zhiyong Lu
  • Ronald M. Summers
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

When reading medical images such as computed tomography (CT) scans, radiologists generally search across the image to find lesions, characterize and measure them, and then describe them in the radiological report. To automate this process, we propose a multitask universal lesion analysis network (MULAN) for joint detection, tagging, and segmentation of lesions in a variety of body parts, greatly extending existing single-task work that analyzes lesions in specific body parts. MULAN is based on an improved Mask R-CNN framework with three head branches and a 3D feature fusion strategy. It achieves state-of-the-art accuracy in the detection and tagging tasks on the DeepLesion dataset, which contains 32K lesions from the whole body. We also analyze the relationship between the three tasks and show that tag predictions can improve detection accuracy via a score refinement layer.
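The abstract describes the architecture only at a high level. As a concrete illustration, below is a minimal sketch of how three head branches and a tag-based score refinement layer could sit on top of shared RoI features in a Mask R-CNN-style detector. It assumes PyTorch, and every name and dimension here (MULANHeads, feat_dim, num_tags, the 14×14 RoI map) is an illustrative placeholder, not the paper's actual implementation.

    import torch
    import torch.nn as nn

    class MULANHeads(nn.Module):
        """Illustrative sketch: three head branches over shared RoI features."""

        def __init__(self, feat_dim=1024, num_tags=171):
            # feat_dim and num_tags are placeholder sizes, not the paper's values.
            super().__init__()
            # Detection branch: lesion/background score and box regression.
            self.det_score = nn.Linear(feat_dim, 2)
            self.box_reg = nn.Linear(feat_dim, 4)
            # Tagging branch: multi-label logits, one per tag.
            self.tag_head = nn.Linear(feat_dim, num_tags)
            # Score refinement layer: re-estimates the detection score from the
            # initial score concatenated with the predicted tag probabilities.
            self.refine = nn.Linear(2 + num_tags, 2)
            # Segmentation branch: small conv stack producing a per-RoI mask.
            self.mask_head = nn.Sequential(
                nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(256, 256, kernel_size=2, stride=2), nn.ReLU(),
                nn.Conv2d(256, 1, kernel_size=1),
            )

        def forward(self, roi_vec, roi_map):
            det = self.det_score(roi_vec)    # (N, 2)
            boxes = self.box_reg(roi_vec)    # (N, 4)
            tags = self.tag_head(roi_vec)    # (N, num_tags)
            # Tag predictions feed back into the detection score.
            det = self.refine(torch.cat([det, torch.sigmoid(tags)], dim=1))
            masks = self.mask_head(roi_map)  # (N, 1, 2H, 2W)
            return det, boxes, tags, masks

    # Usage on dummy RoI features: a pooled vector per RoI for the
    # detection/tagging branches, a spatial map for the mask branch.
    heads = MULANHeads()
    det, boxes, tags, masks = heads(torch.randn(8, 1024),
                                    torch.randn(8, 256, 14, 14))

The design point the abstract highlights is the refinement step: because tags (e.g., body part, lesion type, attributes) correlate with whether a proposal is a true lesion, feeding the tag probabilities back into the detection score is one plausible way the tagging branch can inform detection.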


Acknowledgments

This research was supported by the Intramural Research Programs of the National Institutes of Health (NIH) Clinical Center and National Library of Medicine (NLM). It was also supported by NLM of NIH under award number K99LM013001. We thank NVIDIA for GPU card donations.

Supplementary material

Supplementary material 1 (PDF, 840 KB)

References

  1. Diamant, I., et al.: Improved patch-based automated liver lesion classification by separate analysis of the interior and boundary regions. IEEE J. Biomed. Health Inform. 20(6), 1585–1594 (2016)
  2. Eisenhauer, E.A., et al.: New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur. J. Cancer 45(2), 228–247 (2009)
  3. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: ICCV, pp. 2980–2988 (2017)
  4. Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected convolutional networks. In: CVPR (2017)
  5. Liao, F., Liang, M., Li, Z., Hu, X., Song, S.: Evaluate the malignancy of pulmonary nodules using the 3D deep leaky noisy-or network. IEEE Trans. Neural Netw. Learn. Syst. (2019)
  6. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: CVPR (2017)
  7. Ribli, D., Horváth, A., Unger, Z., Pollner, P., Csabai, I.: Detecting and classifying lesions in mammograms with deep learning. Sci. Rep. 8(1), 4165 (2018)
  8. Sahiner, B., et al.: Deep learning in medical imaging and radiation therapy. Med. Phys. 46(1), e1–e36 (2019)
  9. Tang, Y., Harrison, A.P., Bagheri, M., Xiao, J., Summers, R.M.: Semi-automatic RECIST labeling on CT scans with cascaded convolutional neural networks. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11073, pp. 405–413. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00937-3_47, http://arxiv.org/abs/1806.09507
  10. Tang, Y., Yan, K., Tang, Y.X., Liu, J., Xiao, J., Summers, R.M.: ULDor: a universal lesion detector for CT scans with pseudo masks and hard negative example mining. In: ISBI (2019)
  11. Wu, B., Zhou, Z., Wang, J., Wang, Y.: Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. In: ISBI, pp. 1109–1113 (2018)
  12. Yan, K., Bagheri, M., Summers, R.M.: 3D context enhanced region-based convolutional neural network for end-to-end lesion detection. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11070, pp. 511–519. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00928-1_58
  13. Yan, K., Peng, Y., Sandfort, V., Bagheri, M., Lu, Z., Summers, R.M.: Holistic and comprehensive annotation of clinically significant findings on diverse CT images: learning from radiology reports and label ontology. In: CVPR (2019)
  14. Yan, K., Wang, X., Lu, L., Summers, R.M.: DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. J. Med. Imaging 5(3) (2018). https://doi.org/10.1117/1.JMI.5.3.036501

Copyright information

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2019

Authors and Affiliations

  • Ke Yan (1) (corresponding author)
  • Youbao Tang (1)
  • Yifan Peng (2)
  • Veit Sandfort (1)
  • Mohammadhadi Bagheri (1)
  • Zhiyong Lu (2)
  • Ronald M. Summers (1)
  1. Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, National Institutes of Health, Bethesda, USA
  2. National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, USA
