Deep Learning in Healthcare, pp. 111–126
Deep Learning in Textural Medical Image Analysis
Abstract
One characteristic of medical image analysis is that many medical images lie not in the structural domain, as natural images do, but in the textural domain. This chapter introduces a new transfer learning method, called “two-stage feature transfer,” for analyzing textural medical images with deep convolutional neural networks. In two-stage feature transfer, a model is pre-trained successively on a natural image dataset and on a textural image dataset, yielding a feature representation that cannot be derived from either dataset alone. Experimental results show that two-stage feature transfer improves the generalization performance of a convolutional neural network on textural lung CT pattern classification. To explain the mechanism of transfer learning in convolutional neural networks, this chapter also analyzes the learned feature representations, qualitatively through an activation visualization method and quantitatively by measuring the frequency response of the trained networks. These results demonstrate that such successive transfer learning enables networks to capture both structural and textural visual features and helps them extract good features from textural medical images.
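To make the successive pre-training concrete, the sketch below shows one way the two-stage feature transfer idea could be implemented in PyTorch: a backbone is initialized with natural image (ImageNet) weights, then pre-trained on a textural image dataset, and finally fine-tuned on the lung CT pattern task. The data loaders, class counts, ResNet-18 backbone, and training hyperparameters are illustrative assumptions, not the chapter's actual setup.

```python
# A minimal sketch of two-stage feature transfer, assuming PyTorch/torchvision.
# The loaders, class counts, and ResNet-18 backbone are placeholders for
# illustration; the chapter's architecture and data pipeline may differ.
import torch
import torch.nn as nn
from torchvision import models


def train(model, loader, epochs, lr=1e-4, device="cpu"):
    """Plain supervised training loop reused for every transfer stage."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model


def two_stage_feature_transfer(texture_loader, lung_ct_loader,
                               num_texture_classes, num_lung_pattern_classes,
                               device="cpu"):
    # Stage 1: start from weights pre-trained on a natural image dataset
    # (the ImageNet weights shipped with torchvision).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Stage 2: continue pre-training on a textural image dataset, replacing
    # the classifier head to match the number of texture classes.
    model.fc = nn.Linear(model.fc.in_features, num_texture_classes)
    model = train(model, texture_loader, epochs=10, device=device)

    # Final step: fine-tune on the target task, textural lung CT pattern
    # classification, again swapping the classification head.
    model.fc = nn.Linear(model.fc.in_features, num_lung_pattern_classes)
    return train(model, lung_ct_loader, epochs=10, device=device)
```

In this sketch only the classification head is reset between stages, so the convolutional features learned from natural images are carried into the textural pre-training and then into the medical fine-tuning, which is the essence of the successive transfer described in the abstract.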