Investigating the Effects of Transfer Learning on ROI-based Classification of Chest CT Images: A Case Study on Diffuse Lung Diseases

  • Shingo Mabu
  • Ami Atsumo
  • Shoji Kido
  • Takashi Kuremoto
  • Yasushi Hirano


Research on computer-aided diagnosis (CAD) of medical images has been actively conducted to support the decisions of radiologists. Since deep learning has shown distinguished performance in classification, detection, segmentation, and other tasks across a wide range of problems, many CAD studies now rely on it. One reason behind the success of deep learning is the availability of large application-specific annotated datasets. However, annotating hundreds or thousands of medical images is a heavy burden for radiologists, so large-scale annotated datasets are difficult to obtain for many organs and diseases. Many techniques for training deep neural networks effectively under such constraints have therefore been proposed, and one of them is transfer learning. This paper focuses on transfer learning and, in particular, presents a case study on ROI-based opacity classification of diffuse lung diseases in chest CT images. The aim is to clarify which characteristics of the pre-training dataset and which deep neural network structures for fine-tuning enhance the effectiveness of transfer learning. In addition, the number of training samples is varied to evaluate how the effectiveness of transfer learning changes with dataset size. In the experiments, nine transfer-learning conditions and a baseline without transfer learning are compared to identify the most suitable conditions. The results show that pre-training on a dataset with more (and more varied) classes, combined with a compact network structure for fine-tuning, yields the best accuracy in this work.
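The pre-train-then-fine-tune procedure studied in the paper can be sketched in miniature. The following is an illustrative NumPy toy, not the authors' CNN pipeline: a tiny two-layer network is pre-trained on a synthetic many-class "source" task (standing in for the pre-training dataset), its feature layer is reused, and only a new classifier head is trained on a small related "target" task (standing in for the annotated CT ROI patches). All dimensions, class counts, and data distributions here are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(X, y, W1, W2):
    P = softmax(np.tanh(X @ W1) @ W2)
    return -np.mean(np.log(P[np.arange(len(y)), y] + 1e-12))

def train(X, y, W1, W2, lr=0.1, epochs=500, freeze_features=False):
    # Plain batch gradient descent. With freeze_features=True only the
    # classifier head W2 is updated, i.e. fine-tuning on fixed features.
    n, k = len(y), W2.shape[1]
    Y = np.eye(k)[y]
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        P = softmax(H @ W2)
        if not freeze_features:
            GH = ((P - Y) @ W2.T) * (1.0 - H**2)  # backprop through tanh
            W1 -= lr * (X.T @ GH) / n
        W2 -= lr * (H.T @ (P - Y)) / n
    return W1, W2

def make_data(n, means, noise, rng):
    # Gaussian clusters around per-class mean vectors.
    y = rng.integers(0, len(means), n)
    return means[y] + noise * rng.standard_normal((n, means.shape[1])), y

d, h = 8, 16
source_means = rng.standard_normal((10, d))                           # rich 10-class source task
target_means = source_means[:4] + 0.2 * rng.standard_normal((4, d))   # related 4-class target task

# 1) Pre-train on a large source dataset (analogous to the pre-training dataset).
Xs, ys = make_data(2000, source_means, 0.3, rng)
W1 = 0.5 * rng.standard_normal((d, h))
W2s = 0.5 * rng.standard_normal((h, 10))
W1, W2s = train(Xs, ys, W1, W2s)

# 2) Fine-tune: keep the pre-trained feature layer W1, replace the head,
#    and train on only a few target samples with the features frozen.
Xt, yt = make_data(60, target_means, 0.3, rng)
W2t = 0.5 * rng.standard_normal((h, 4))
loss_before = cross_entropy(Xt, yt, W1, W2t)
W1, W2t = train(Xt, yt, W1, W2t, freeze_features=True)
loss_after = cross_entropy(Xt, yt, W1, W2t)
```

With the feature layer frozen, step 2 reduces to fitting a convex softmax classifier on top of the transferred representation, which is why a small number of target samples suffices; the paper's experiments probe exactly this trade-off at realistic scale with CNN features.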


Keywords: Computer-aided diagnosis · Diffuse lung diseases · Transfer learning · Convolutional neural network · CT



This work was financially supported by JSPS Grant-in-Aid for Scientific Research on Innovative Areas, Multidisciplinary Computational Anatomy, JSPS KAKENHI Grant Number 26108009; JSPS KAKENHI Grant Number 16K16116; and JSPS KAKENHI Grant Number 19K12120.

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no conflict of interest.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2020

Authors and Affiliations

  1. Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, Japan
  2. Department of Artificial Intelligence Diagnostic Radiology, Graduate School of Medicine, Osaka University, Osaka, Japan
