
Transfer Learning for Breast Cancer Malignancy Classification Based on Dynamic Contrast-Enhanced MR Images

  • Christoph Haarburger
  • Peter Langenberg
  • Daniel Truhn
  • Hannah Schneider
  • Johannes Thüring
  • Simone Schrading
  • Christiane K. Kuhl
  • Dorit Merhof
Conference paper
Part of the Informatik aktuell book series (INFORMAT)

Abstract

In clinical contexts with very limited annotated data, such as breast cancer diagnosis, training state-of-the-art deep neural networks from scratch is not feasible. As a solution, we transfer parameters of networks pretrained on natural RGB images to malignancy classification of breast lesions in dynamic contrast-enhanced MR images. Since DCE-MR images comprise several contrasts and timepoints, direct fine-tuning of pretrained networks that expect three input channels is not possible. Based on the hypothesis that a subset of the acquired image data is sufficient for computer-aided diagnosis, we provide an experimental comparison of all possible subsets of MR image contrasts and determine the best combination for malignancy classification. A subset of three timepoints of the dynamic T1-weighted images, which closely corresponds to human interpretation, performs best with an AUC of 0.839.
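The approach described above can be illustrated with a minimal sketch (not the authors' code): an ImageNet-pretrained ResNet is fine-tuned after three selected DCE-MRI timepoints of a lesion patch are stacked into the three input channels the network expects. PyTorch/torchvision, the choice of ResNet-18, the timepoint indices, patch size, and hyperparameters are assumptions for illustration only.

# Sketch only: fine-tune a pretrained 3-channel network on three DCE-MRI timepoints.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone expects 3 input channels, so three DCE timepoints
# (e.g. pre-contrast, early and late post-contrast; indices are assumed)
# are stacked as the RGB channels.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def make_input(dce_volume, timepoints=(0, 2, 5)):
    """Stack three selected timepoints of a lesion patch (T, H, W)
    into a 3-channel tensor the pretrained network can consume."""
    channels = [dce_volume[t] for t in timepoints]
    return torch.stack(channels, dim=0)  # (3, H, W)

# One illustrative training step on a dummy batch of lesion patches.
dce_batch = torch.rand(8, 6, 224, 224)   # (batch, timepoints, H, W), synthetic data
labels = torch.randint(0, 2, (8,))       # 0 = benign, 1 = malignant
inputs = torch.stack([make_input(v) for v in dce_batch])

model.train()
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()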



Copyright information

© Springer-Verlag GmbH Deutschland 2018

Authors and Affiliations

  • Christoph Haarburger (1)
  • Peter Langenberg (1)
  • Daniel Truhn (2)
  • Hannah Schneider (2)
  • Johannes Thüring (2)
  • Simone Schrading (2)
  • Christiane K. Kuhl (2)
  • Dorit Merhof (1)
  1. Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
  2. Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
