Abstract
One characteristic of medical image analysis is that many medical images lie not in the structural domain, like natural images, but in the textural domain. This chapter introduces a new transfer learning method, called “two-stage feature transfer,” for analyzing textural medical images with deep convolutional neural networks. In two-stage feature transfer, a model is successively pre-trained on a natural image dataset and then on a textural image dataset, yielding a feature representation that cannot be derived from either dataset alone. Experimental results show that two-stage feature transfer improves the generalization performance of a convolutional neural network on a textural lung CT pattern classification task. To explain the mechanism of transfer learning in convolutional neural networks, this chapter also analyzes the obtained feature representations, qualitatively with an activation visualization method and quantitatively by measuring the frequency response of the trained networks. These results demonstrate that such successive transfer learning enables networks to capture both structural and textural visual features and helps them extract good features from textural medical images.
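The overall procedure can be sketched in a few lines of PyTorch. This is a minimal, hypothetical illustration rather than the chapter's exact setup: the network (ResNet-18), the class counts, and the dummy data loaders are all placeholder assumptions, and in practice the loaders would be replaced with a textural source dataset and the target lung CT patches (the natural-image stage is approximated here by loading ready-made ImageNet weights).

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

NUM_TEXTURE_CLASSES = 47  # assumption: a DTD-like texture benchmark
NUM_CT_PATTERNS = 7       # assumption: the lung CT opacity patterns

def make_dummy_loader(n_classes, n=32):
    # Placeholder data; replace with the real textural source dataset
    # (stage 2) and the lung CT patch dataset (final fine-tuning).
    x = torch.randn(n, 3, 224, 224)
    y = torch.randint(0, n_classes, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=8)

def train(model, loader, epochs=1, lr=1e-3):
    # Ordinary supervised training loop, shared by both stages.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Stage 1: natural-image pretraining, approximated here by
# loading ready-made ImageNet weights.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Stage 2: continue pretraining on a textural image dataset.
model.fc = nn.Linear(model.fc.in_features, NUM_TEXTURE_CLASSES)
model = train(model, make_dummy_loader(NUM_TEXTURE_CLASSES))

# Final step: fine-tune on the target textural medical images.
model.fc = nn.Linear(model.fc.in_features, NUM_CT_PATTERNS)
model = train(model, make_dummy_loader(NUM_CT_PATTERNS), lr=1e-4)
```

As note 1 below explains, the order of the two pretraining stages matters: reversing them degrades generalization because of catastrophic forgetting.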
Notes
- 1.
Interchanging the order of the source domains for pretraining, i.e., using textural images before natural images, worsens generalization performance because of catastrophic forgetting of knowledge [8]. Pretraining should therefore start with a natural image dataset, which is also preferable given the computational cost of training on natural images and the wide availability of pretrained models.
- 2.
Most studies on DLDs classify them into six classes, making no distinction between RET and GGO. This work explicitly discriminates between the two, in a more precise and up-to-date manner.
- 3.
Strictly speaking, a DCNN performs a basis decomposition using its convolution kernels as the basis, literally the “kernels” of the convolution, rather than the Fourier bases defined in frequency space. However, because images have an isomorphic representation in Fourier space, this frequency analysis remains meaningful; a minimal sketch of such a measurement follows these notes.
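The frequency-response measurement referred to in the abstract and in note 3 can be sketched as follows. This is a hypothetical illustration, assuming NumPy and an ImageNet-pretrained torchvision model in place of the chapter's own trained network; the zero-padding size and the radial averaging are illustrative choices, not the chapter's exact procedure.

```python
import numpy as np
import torch
from torchvision import models

# First-layer convolution kernels of a pretrained network.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
kernels = model.conv1.weight.detach().numpy()  # shape (64, 3, 7, 7)

# Zero-pad each kernel and take the 2D FFT magnitude,
# averaged over output and input (color) channels.
pad = 64
spec = np.abs(np.fft.fftshift(
    np.fft.fft2(kernels, s=(pad, pad)), axes=(-2, -1))).mean(axis=(0, 1))

# Radially average the 2D spectrum into a 1D frequency-response curve.
cy, cx = pad // 2, pad // 2
yy, xx = np.indices(spec.shape)
r = np.hypot(yy - cy, xx - cx).astype(int)
response = np.bincount(r.ravel(), weights=spec.ravel()) / np.bincount(r.ravel())
print(response[: pad // 2])  # mean kernel magnitude per spatial frequency
```

Comparing such curves across networks pretrained with and without the textural stage is, in spirit, the quantitative analysis the chapter reports.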
References
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88 (2017)
Fukushima, K.: Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 36(4), 193–202 (1980)
Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160(1), 106–154 (1962)
Suzuki, A., Sakanashi, H., Kido, S., Shouno, H.: Feature representation analysis of deep convolutional neural network using two-stage feature transfer – an application for diffuse lung disease classification. IPSJ Trans. Math. Model. Its Appl. 100–110 (2018)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A.A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al.: Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. 114(13), 3521–3526 (2017)
Uchiyama, Y., Katsuragawa, S., Abe, H., Shiraishi, J., Li, F., Li, Q., Zhang, C.T., Suzuki, K., et al.: Quantitative computerized analysis of diffuse lung disease in high-resolution computed tomography. Med. Phys. 30(9), 2440–2454 (2003)
Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: visualising image classification models and saliency maps (2013). arXiv preprint arXiv:1312.6034
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV, pp. 618–626 (2017)
Mahendran, A., Vedaldi, A.: Salient deconvolutional networks. In: European Conference on Computer Vision, pp. 120–135. Springer (2016)
Julesz, B., Caelli, T.: On the limits of Fourier decompositions in visual texture perception. Perception 8(1), 69–73 (1979)