How to Learn from Unlabeled Volume Data: Self-supervised 3D Context Feature Learning

  • Maximilian Blendowski
  • Hannes Nickisch
  • Mattias P. Heinrich
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

The vast majority of 3D medical images lack detailed, image-based expert annotations. Ongoing advances in deep convolutional neural networks clearly demonstrate the benefit of supervised learning for extracting relevant anatomical information and aiding image-based analysis and interventions, but supervised training relies heavily on labeled data. Self-supervised learning, which requires no expert labels, offers an appealing way to discover data-inherent patterns and to leverage anatomical information freely available from the medical images themselves. In this work, we propose a new approach to training effective convolutional feature extractors, based on the concept of image-intrinsic spatial offset relations combined with an auxiliary heatmap regression loss. The learned features capture semantic, anatomical information and achieve state-of-the-art accuracy on a k-NN-based one-shot segmentation task without any subsequent fine-tuning.
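
To make the pretext task concrete, the sketch below illustrates the general idea of offset learning with a heatmap regression loss. It is a deliberately simplified 2D PyTorch toy, not the authors' implementation: the patch size, encoder architecture, heatmap grid size, and Gaussian width are assumptions chosen purely for illustration.

    # Minimal sketch of an offset-based pretext task (2D toy version, PyTorch).
    # The patch size, encoder depth, heatmap grid size and Gaussian sigma below are
    # hypothetical illustration choices, not the architecture used in the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def gaussian_heatmap(offset, grid_size=15, sigma=1.5):
        """Encode a 2D offset (given in heatmap cells) as a Gaussian target heatmap."""
        cy, cx = offset
        ys = torch.arange(grid_size, dtype=torch.float32).view(-1, 1)
        xs = torch.arange(grid_size, dtype=torch.float32).view(1, -1)
        return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

    class PatchEncoder(nn.Module):
        """Small CNN that maps an image patch to a compact feature descriptor."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim))
        def forward(self, x):
            return self.net(x)

    class OffsetHeatmapHead(nn.Module):
        """Predict a heatmap over discretised offsets from two patch descriptors."""
        def __init__(self, feat_dim=64, grid_size=15):
            super().__init__()
            self.grid_size = grid_size
            self.mlp = nn.Sequential(
                nn.Linear(2 * feat_dim, 256), nn.ReLU(),
                nn.Linear(256, grid_size * grid_size))
        def forward(self, f1, f2):
            h = self.mlp(torch.cat([f1, f2], dim=1))
            return h.view(-1, self.grid_size, self.grid_size)

    # One toy training step on randomly generated patch pairs.  In practice the two
    # patches would be cropped from the same (unlabeled) volume at a known offset,
    # which serves as the free, image-intrinsic supervision signal.
    encoder, head = PatchEncoder(), OffsetHeatmapHead()
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

    patch_a = torch.randn(8, 1, 32, 32)              # stand-in for cropped patches
    patch_b = torch.randn(8, 1, 32, 32)              # patches displaced by known offsets
    offsets = torch.randint(0, 15, (8, 2)).float()   # ground-truth offsets (heatmap cells)

    target = torch.stack([gaussian_heatmap(o) for o in offsets])
    pred = head(encoder(patch_a), encoder(patch_b))
    loss = F.mse_loss(pred, target)                  # auxiliary heatmap regression loss
    opt.zero_grad()
    loss.backward()
    opt.step()

Once pretrained in this way, the encoder would be used as a frozen feature extractor; in the k-NN one-shot setting, each query descriptor could then simply be matched against the descriptors of a single labeled reference scan, with no further fine-tuning.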

Keywords

Self-supervised learning · Volumetric image segmentation

Acknowledgements

This work was supported by the German Research Foundation (DFG) under grant number 320997906 (HE 7364/2-1). We gratefully acknowledge the support of NVIDIA Corporation through the donation of GPUs used for this research.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute of Medical Informatics, University of Lübeck, Lübeck, Germany
  2. Philips Research Hamburg, Hamburg, Germany