Classification of contrast-enhanced spectral mammography (CESM) images
Contrast-enhanced spectral mammography (CESM) is a recently developed breast imaging technique. CESM relies on dual-energy acquisition following contrast agent injection to improve mammography sensitivity. It is comparable to contrast-enhanced MRI in terms of sensitivity, at a fraction of the cost. However, because lesion variability is large, differentiation between benign and malignant enhancement remains inaccurate even with the improved visibility that CESM provides, and a biopsy is usually performed for final assessment. Breast biopsies are stressful for the patient and expensive for healthcare systems. Moreover, since most biopsy results turn out to be benign, an improvement in the specificity of the radiologist's diagnosis is needed. This work presents a deep learning-based decision support system that aims to improve the specificity of breast cancer diagnosis by CESM without affecting sensitivity.
We compare two analysis approaches for classifying CESM breast masses as benign or malignant: fine-tuning a pretrained network and fully training a convolutional neural network. The Breast Imaging Reporting and Data System (BI-RADS) is a radiological lexicon used with breast images to categorize lesions. We improve each classification network by incorporating BI-RADS textual features as an additional input, and we evaluate two ways of fusing the BI-RADS input into the network: feature fusion and decision fusion. This leads to multimodal network architectures. At classification time, we also exploit information from apparently normal breast tissue in the CESM of the considered patient, leading to a patient-specific classification.
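The distinction between the two fusion strategies can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, the random linear heads, and the BI-RADS encoding below are hypothetical stand-ins chosen only to make the two wiring patterns concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two modalities:
# a CNN image-feature vector (e.g. a penultimate-layer activation)
# and an encoded vector of BI-RADS lexicon descriptors.
img_features = rng.standard_normal(128)
birads_features = np.array([0.0, 1.0, 0.0, 0.0, 1.0])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Feature fusion: concatenate the modalities first,
# then apply a single shared classifier head.
w_fused = rng.standard_normal(img_features.size + birads_features.size)
p_feature_fusion = sigmoid(np.concatenate([img_features, birads_features]) @ w_fused)

# Decision fusion: score each modality with its own head,
# then combine the per-modality decisions (here, by averaging).
w_img = rng.standard_normal(img_features.size)
w_birads = rng.standard_normal(birads_features.size)
p_decision_fusion = 0.5 * (sigmoid(img_features @ w_img)
                           + sigmoid(birads_features @ w_birads))
```

Feature fusion lets the classifier learn cross-modal interactions, while decision fusion keeps each modality's pathway independent until the final combination step.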
We evaluate performance using fivefold cross-validation on 129 randomly selected breast lesions annotated by an experienced radiologist. Each annotation includes a contour of the mass in the image, a biopsy-proven benign or malignant label, and BI-RADS descriptors. At 100% sensitivity, a specificity of 66% was achieved using a multimodal network that combines inputs at the feature level with patient-specific classification.
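Reporting specificity at 100% sensitivity corresponds to choosing the most aggressive decision threshold that still flags every malignant lesion, and then measuring how many benign lesions fall below it. A minimal sketch of that operating-point computation, on hypothetical classifier scores rather than the study's data:

```python
import numpy as np

def specificity_at_full_sensitivity(scores, labels):
    """Set the threshold to the lowest score among malignant cases,
    so every malignant lesion is detected (100% sensitivity), then
    return the fraction of benign lesions scored below it."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)  # 1 = malignant, 0 = benign
    threshold = scores[labels == 1].min()   # keep all malignant at/above the cut
    benign_scores = scores[labels == 0]
    return float((benign_scores < threshold).mean())

# Hypothetical malignancy scores (not the paper's results)
scores = [0.9, 0.8, 0.5, 0.7, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   0,   0,   0,   0]
print(specificity_at_full_sensitivity(scores, labels))  # 0.8
```

In clinical terms, the specificity at this operating point is the fraction of benign lesions for which a biopsy could in principle be avoided without missing any cancer.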
The presented multimodal network may significantly reduce the number of benign biopsies without compromising sensitivity.
Keywords: Deep learning · Contrast-enhanced spectral mammography (CESM) · Breast cancer · Multimodal neural networks · Computer vision
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
For this type of study, formal consent is not required.