
Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 (MICCAI 2019)

Abstract

Transfer learning from natural images to medical images has become one of the most practical paradigms in deep learning for medical image analysis. However, to fit this paradigm, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and inevitably compromising performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models). Our extensive experiments demonstrate that Models Genesis significantly outperform learning from scratch in all five target 3D applications, covering both segmentation and classification. More importantly, while simply training a model from scratch in 3D may not necessarily outperform transfer learning from ImageNet in 2D, our Models Genesis consistently outperform all 2D approaches, including fine-tuning models pre-trained on ImageNet and fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and the significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated yet recurrent anatomy in medical images can serve as a strong supervision signal for deep models to learn common anatomical representations automatically via self-supervision. As open science, all pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
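The core recipe behind such self-supervised frameworks is to manufacture training pairs from unlabeled volumes: deliberately corrupt a 3D sub-volume and train an encoder-decoder to restore the original, so the recurrent anatomy itself provides the supervision signal. The sketch below illustrates this general idea with one simple corruption (shuffling voxels inside small random blocks); the function names and the specific corruption are illustrative assumptions, not the paper's exact transformation suite.

```python
import numpy as np

def local_shuffle(volume, num_blocks=50, block=(4, 4, 4), rng=None):
    """Corrupt a 3D sub-volume by permuting voxels inside small random blocks.

    The corrupted copy becomes the model input; the untouched original is the
    restoration target, so no manual labels are required.
    """
    rng = np.random.default_rng(rng)
    out = volume.copy()
    d, h, w = volume.shape
    bd, bh, bw = block
    for _ in range(num_blocks):
        # Pick a random block location that fits inside the volume.
        z = rng.integers(0, d - bd)
        y = rng.integers(0, h - bh)
        x = rng.integers(0, w - bw)
        patch = out[z:z + bd, y:y + bh, x:x + bw]
        flat = patch.reshape(-1)          # copy (slice is non-contiguous)
        rng.shuffle(flat)                  # permute voxels within the block
        out[z:z + bd, y:y + bh, x:x + bw] = flat.reshape(patch.shape)
    return out

def make_pair(volume, rng=None):
    """Return (model_input, restoration_target) for self-supervised training."""
    return local_shuffle(volume, rng=rng), volume

# Example: a random stand-in for a 64^3 CT sub-volume.
vol = np.random.default_rng(0).random((64, 64, 64)).astype(np.float32)
x, y = make_pair(vol, rng=0)
```

Because every corruption is a within-block permutation, the corrupted input keeps the same voxel-intensity multiset as the target; only spatial structure is destroyed, which is exactly what the restoration network must learn to recover.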


Notes

  1. https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI.

  2. https://biometry.nci.nih.gov/cdas/nlst/.

  3. https://nihcc.app.box.com/v/ChestXray-NIHCC.

  4. Appendix can be found in the full version at tinyurl.com/ModelsGenesisFullVersion.

  5. NiftyNet Model Zoo: https://github.com/NifTK/NiftyNetModelZoo.

  6. 3D U-Net Convolution Neural Network: https://github.com/ellisdg/3DUnetCNN.

  7. Segmentation Models: https://github.com/qubvel/segmentation_models.


Acknowledgments

This research has been supported partially by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and partially by NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH.

Author information

Corresponding author

Correspondence to Jianming Liang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 20,553 KB)


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhou, Z. et al. (2019). Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis. In: Shen, D., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science, vol. 11767. Springer, Cham. https://doi.org/10.1007/978-3-030-32251-9_42


  • DOI: https://doi.org/10.1007/978-3-030-32251-9_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32250-2

  • Online ISBN: 978-3-030-32251-9

  • eBook Packages: Computer Science, Computer Science (R0)
