A Combined Radio-Histological Approach for Classification of Low Grade Gliomas

  • Aditya Bagari
  • Ashish Kumar
  • Avinash Kori
  • Mahendra Khened
  • Ganapathy Krishnamurthi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11383)

Abstract

Deep learning-based techniques have been shown to be beneficial for automating various medical imaging tasks, such as lesion segmentation and disease diagnosis. In this work, we demonstrate the utility of deep learning and radiomics features for the classification of low-grade gliomas (LGG) into astrocytoma and oligodendroglioma. The objective is to use whole-slide H&E-stained images and magnetic resonance (MR) images of the brain to predict the class of the glioma. We analyze the pathology and radiology datasets separately, then combine the predictions of the individual models to obtain the final class label for each patient.

Pre-processing of the whole-slide images involved region-of-interest detection, stain normalization, and patch extraction. An autoencoder was trained to extract features from each patch, and these features were used to identify anomalous patches within the full set of patches for a single whole-slide image. The anomalous patches from all the training slides formed the dataset for training the classification model, a deep neural network that classifies individual patches into the two classes.

For the radiology-based analysis, each MRI scan was passed through a pre-processing pipeline comprising skull stripping, co-registration of the MR sequences to T1c, re-sampling of the MR volumes to isotropic voxels, and segmentation of the brain lesion. The lesions were segmented automatically using a fully convolutional neural network (CNN) trained on the BraTS 2018 segmentation challenge dataset. Guided by the segmentation maps, 64 × 64 × 64 cubic patches centered on the tumor were extracted from the T1 MR images, and high-level radiomic features were computed from them. These features were then used to train a logistic regression classifier.
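The stain-normalization step can be sketched in the style of Reinhard color transfer: match the per-channel mean and standard deviation of a source patch to those of a reference slide. This is a simplified illustration, not the paper's code; the original Reinhard method performs the matching in the Lab color space rather than directly on RGB channels, and the function name below is made up for the example.

```python
import numpy as np

def reinhard_normalize(source, target):
    """Match the per-channel mean/std of `source` to `target`.

    source, target: arrays of shape (H, W, 3).
    Simplified sketch of Reinhard color transfer; the original
    method applies this matching in the Lab color space.
    """
    src = np.asarray(source, dtype=np.float64)
    tgt = np.asarray(target, dtype=np.float64)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        # Guard against a constant channel (zero variance).
        scale = t_std / s_std if s_std > 0 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return out
```

After this transform, every extracted patch shares the reference slide's color statistics, which reduces the stain variability the downstream autoencoder and classifier would otherwise have to absorb.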
After developing the two models, we used a confidence-based prediction methodology to obtain the final class label for each patient. This combined approach achieved a classification accuracy of 90% on the challenge test set (n = 20). These results showcase the emerging role of deep learning and radiomics in analyzing whole-slide images and MR scans for lesion characterization.
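One plausible reading of the confidence-based fusion is: each model outputs class probabilities for a patient, and the final label is taken from whichever model is more confident, i.e., whose maximum class probability is higher. The abstract does not specify the exact rule, so the sketch below is an illustrative reconstruction under that assumption.

```python
import numpy as np

def fuse_predictions(path_probs, radio_probs):
    """Pick the label from the more confident of two models.

    path_probs, radio_probs: 1-D arrays of class probabilities
    (one entry per class) from the pathology and radiology models.
    Returns the index of the predicted class.
    Assumed fusion rule: trust the model whose top probability
    is higher (not confirmed by the paper).
    """
    path_probs = np.asarray(path_probs, dtype=np.float64)
    radio_probs = np.asarray(radio_probs, dtype=np.float64)
    # Confidence = the model's maximum class probability.
    if path_probs.max() >= radio_probs.max():
        return int(path_probs.argmax())
    return int(radio_probs.argmax())
```

For example, with pathology probabilities [0.6, 0.4] and radiology probabilities [0.2, 0.8], the radiology model is more confident and its label (class 1) is returned.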

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Indian Institute of Technology Madras, Chennai, India