Pre-training on Grayscale ImageNet Improves Medical Image Classification

  • Yiting Xie
  • David Richmond
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11134)


Deep learning is quickly becoming the de facto standard approach for solving a range of medical image analysis tasks. However, large medical image datasets appropriate for training deep neural network models from scratch are difficult to assemble due to privacy restrictions and expert ground truth requirements, with typical open source datasets ranging from hundreds to thousands of images. A standard approach to counteract limited-size medical datasets is to pre-train models on large datasets in other domains, such as ImageNet for classification of natural images, before fine-tuning on the specific medical task of interest. However, ImageNet contains color images, which introduces artefacts and inefficiencies into models that are intended for single-channel medical images. To address this issue, we pre-trained an Inception-V3 model on ImageNet after converting the images to grayscale through a common transformation. Surprisingly, these models do not show a significant degradation in performance on the original ImageNet classification task, suggesting that color is not a critical feature of natural image classification. Furthermore, models pre-trained on grayscale ImageNet outperformed color ImageNet models in terms of both speed and accuracy when refined on disease classification from chest X-ray images.


Keywords: Domain adaptation · Transfer learning
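The grayscale conversion described in the abstract can be sketched as follows. The paper only says the ImageNet images were converted "through a common transformation"; the specific ITU-R BT.601 luminance weights below, and the channel-replication helper for feeding a single-channel image to a 3-channel network input, are assumptions for illustration, not the authors' confirmed implementation.

```python
import numpy as np

# Standard ITU-R BT.601 luma weights -- a common RGB-to-grayscale
# transformation (assumed here; the paper does not specify which one).
BT601_WEIGHTS = np.array([0.299, 0.587, 0.114])

def rgb_to_grayscale(rgb):
    """Collapse an H x W x 3 RGB array to a single-channel H x W array
    via a weighted sum over the color channels."""
    return rgb @ BT601_WEIGHTS

def grayscale_to_three_channel(gray):
    """Replicate the grayscale plane three times, so a grayscale image
    can still be fed to a network whose first layer expects 3 channels."""
    return np.stack([gray] * 3, axis=-1)

# Usage on a dummy 2 x 2 image:
img = np.array([[[255.0, 0.0, 0.0], [0.0, 255.0, 0.0]],
                [[0.0, 0.0, 255.0], [255.0, 255.0, 255.0]]])
gray = rgb_to_grayscale(img)          # shape (2, 2); white pixel -> 255.0
stacked = grayscale_to_three_channel(gray)  # shape (2, 2, 3)
```

Because the BT.601 weights sum to 1, a pure white pixel maps to 255 and grays are preserved; an alternative design is to change the first convolutional layer to accept one channel instead of replicating the plane.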



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. IBM Watson Health, Cambridge, USA
