
Improving Pathological Structure Segmentation via Transfer Learning Across Diseases

  • Conference paper
Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data (DART 2019, MIL3ID 2019)

Abstract

One of the biggest challenges in developing robust machine learning techniques for medical image analysis is the lack of access to the large-scale annotated image datasets needed for supervised learning. When the task is to segment pathological structures (e.g. lesions, tumors) from patient images, training on a dataset with few samples is very challenging due to the large class imbalance and inter-subject variability. In this paper, we explore how to best leverage a segmentation model that has been pre-trained on a large dataset of patient images with one disease in order to successfully train a deep learning pathology segmentation model for a different disease, for which only a relatively small patient dataset is available. Specifically, we train a UNet model on a large-scale, proprietary, multi-center, multi-scanner Multiple Sclerosis (MS) clinical trial dataset containing over 3500 multi-modal MRI samples with expert-derived lesion labels. We explore several transfer learning approaches to leverage the learned MS model for the task of multi-class brain tumor segmentation on the BraTS 2018 dataset. Our results indicate that adapting and fine-tuning both the encoder and the decoder of the network trained on the larger MS dataset leads to improved brain tumor segmentation when few instances are available. This type of transfer learning outperforms training and testing the network on the BraTS dataset from scratch, as well as several other transfer learning approaches, particularly when only a small subset of the dataset is available.
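The fine-tuning strategy described above (initialise from the MS-trained network, then adapt both encoder and decoder on the tumor data) can be sketched in PyTorch, the framework the paper's footnote points to. The tiny 2D network, layer sizes, data shapes, and optimizer settings below are illustrative assumptions for the sketch, not the paper's actual 3D U-Net or training configuration.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy encoder-decoder standing in for the paper's 3D U-Net."""
    def __init__(self, in_ch=4, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, n_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# 1. Source model, assumed already pre-trained on the large MS lesion dataset.
source_model = TinyUNet()

# 2. "FT-All" style transfer: initialise the tumor model from the MS weights
#    and keep every layer trainable (the best-performing setting in the paper).
target_model = TinyUNet()
target_model.load_state_dict(source_model.state_dict())

# A variant that transfers only the encoder would freeze it instead:
#   for p in target_model.encoder.parameters():
#       p.requires_grad = False

optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
x = torch.randn(2, 4, 32, 32)          # fake multi-modal MRI slices
y = torch.randint(0, 4, (2, 32, 32))   # fake 4-class tumor labels
loss = nn.CrossEntropyLoss()(target_model(x), y)
loss.backward()
optimizer.step()                        # one fine-tuning step
```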


Notes

  1. Please note that the predictions made on the BraTS 2018 Validation set must contain all four tumor sub-classes; these predictions are then uploaded to the BraTS web portal for evaluation.

  2. http://pytorch.org/.

  3. http://cim.mcgill.ca/~barleenk/MICCAI2019_transfer_appendix.pdf.

  4. The percentage improvement is calculated as the difference between the FT-All and baseline Dice scores, divided by the baseline Dice score.
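The calculation in note 4 can be made concrete with a short Python snippet; the Dice scores used below are hypothetical values for illustration, not results from the paper.

```python
def percent_improvement(baseline_dice, ft_all_dice):
    """Percentage improvement of FT-All over the from-scratch baseline:
    (FT-All - baseline) / baseline * 100, as in note 4."""
    return 100.0 * (ft_all_dice - baseline_dice) / baseline_dice

# Hypothetical Dice scores, for illustration only:
print(percent_improvement(0.50, 0.60))
```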


Download references

Acknowledgments

The MS dataset was provided through an award from the International Progressive MS Alliance (PA-1603-08175). The authors would also like to thank Nicholas J. Tustison for his guidance on using the ANTs tool.

Author information


Corresponding author

Correspondence to Barleen Kaur.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 910 KB)


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Kaur, B., et al. (2019). Improving Pathological Structure Segmentation via Transfer Learning Across Diseases. In: Wang, Q., et al. (eds.) Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data. DART MIL3ID 2019. Lecture Notes in Computer Science, vol 11795. Springer, Cham. https://doi.org/10.1007/978-3-030-33391-1_11


  • DOI: https://doi.org/10.1007/978-3-030-33391-1_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33390-4

  • Online ISBN: 978-3-030-33391-1

  • eBook Packages: Computer Science (R0)
