Discriminative Feature Selection by Optimal Manifold Search for Neoplastic Image Recognition
An endocytoscope provides ultramagnified observation that enables physicians to make minimally invasive, real-time diagnoses during colonoscopy. However, such diagnosis requires extensive pathological knowledge and clinical experience. A computer-aided diagnosis (CAD) system is therefore needed to reduce the chances of overlooking neoplastic polyps in endocytoscopy. Towards the construction of a CAD system, we have developed texture-feature-based classification between neoplastic and non-neoplastic images of polyps. We propose a feature-selection method that selects discriminative features from texture features for this two-category classification by searching for an optimal manifold. On the optimal manifold, where the selected features are distributed, the distance between the two class-wise linear subspaces is maximised. We experimentally evaluated the proposed method by comparing the classification accuracy before and after feature selection for both texture features and deep-learning features. Furthermore, we clarified the characteristics of an optimal manifold by exploring the relation between the classification accuracy and the output probability of a support vector machine (SVM). Classification with our feature-selection method achieved 84.7% accuracy, which is 7.2% higher than the direct application of Haralick features and an SVM.
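As a rough illustration of the idea, the sketch below selects a feature subset by greedily maximising a projection-metric (principal-angle) distance between the two class-wise linear subspaces, and then trains an SVM with Platt-scaled probability outputs on the selected features. The greedy search, the Fisher-score seeding, the function names, and the synthetic data are assumptions made for this sketch only; they are not the paper's exact optimal-manifold search.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def class_subspace(X, dim):
    """Orthonormal basis of the top-`dim` principal directions of X (n_samples, n_features)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:dim].T  # shape: (n_features, <= dim)


def subspace_distance(X_pos, X_neg, dim=3):
    """Projection-metric distance between the two class subspaces.

    The singular values of U1^T U2 are the cosines of the principal angles
    between the subspaces; the distance grows as the subspaces separate.
    """
    u1 = class_subspace(X_pos, dim)
    u2 = class_subspace(X_neg, dim)
    cosines = np.clip(np.linalg.svd(u1.T @ u2, compute_uv=False), 0.0, 1.0)
    return float(np.sqrt(np.sum(1.0 - cosines ** 2)))


def fisher_scores(X_pos, X_neg):
    """Univariate Fisher ratio per feature, used only to seed the greedy search."""
    mean_diff = X_pos.mean(axis=0) - X_neg.mean(axis=0)
    pooled_var = X_pos.var(axis=0) + X_neg.var(axis=0) + 1e-12
    return mean_diff ** 2 / pooled_var


def greedy_feature_selection(X_pos, X_neg, n_select, dim=3):
    """Greedy forward selection of the features that maximise the subspace distance.

    Seeded with the top Fisher-scored features because the distance is degenerate
    (always ~0) until more than `dim` features are selected. This seeding/greedy
    strategy is an assumption of this sketch, not the paper's manifold search.
    """
    order = np.argsort(fisher_scores(X_pos, X_neg))[::-1]
    selected = list(order[: dim + 1])
    remaining = [f for f in range(X_pos.shape[1]) if f not in selected]
    while len(selected) < n_select:
        best = max(remaining,
                   key=lambda f: subspace_distance(X_pos[:, selected + [f]],
                                                   X_neg[:, selected + [f]], dim))
        selected.append(best)
        remaining.remove(best)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for Haralick-style texture feature vectors of each class.
    X_neo = rng.normal(0.5, 1.0, size=(60, 20))   # neoplastic
    X_non = rng.normal(0.0, 1.0, size=(60, 20))   # non-neoplastic
    feats = greedy_feature_selection(X_neo, X_non, n_select=8)

    X = np.vstack([X_neo, X_non])[:, feats]
    y = np.array([1] * 60 + [0] * 60)
    clf = make_pipeline(StandardScaler(), SVC(probability=True))  # Platt-scaled outputs
    clf.fit(X, y)
    print("selected feature indices:", feats)
    print("P(neoplastic) for the first sample:", clf.predict_proba(X[:1])[0, 1])
```

In this sketch the projection metric plays the role of the distance between the two linear subspaces; a real system would instead search over candidate manifolds as the paper describes and would compute Haralick texture features (or CNN features) from the endocytoscopic images rather than synthetic vectors.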
Keywords: Feature selection · Manifold learning · Texture feature · Convolutional neural network · Endocytoscopic images · Automated pathological diagnosis
Parts of this research were supported by the Research on Development of New Medical Devices from the Japan Agency for Medical Research and Development (No. 18hk0102034h0103), and MEXT KAKENHI (No. 26108006, No. 17H00867).