Discriminative locally linear mapping for medical diagnosis
Medical diagnosis based on machine learning has received growing interest in recent years. However, traditional classification algorithms often fail to deal appropriately with medical datasets because of their high dimensionality. Manifold learning is a branch of nonlinear dimensionality reduction algorithms that can map high-dimensional data into a low-dimensional space. In this paper, we propose a novel manifold-based medical diagnosis algorithm named Discriminative Locally Linear Mapping (DL2M). DL2M is built on the basis of the well-known manifold learning algorithm LLE (Locally Linear Embedding). It incorporates the discriminative information of the training data into the manifold transformation of LLE, and then propagates the discriminative mapping through an out-of-sample extension. DL2M is advantageous not only in preserving the local structure of the original manifold, but also in mapping the different classes of data as far apart as possible in the low-dimensional feature space. The time complexity of the DL2M algorithm is also discussed. Extensive experimental results demonstrate that our method exhibits promising classification performance on real-world medical datasets.
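To make the core idea concrete, the following is a minimal sketch of a supervised LLE-style embedding in the spirit described above: pairwise distances between differently labeled training points are inflated before the standard LLE steps (neighbor search, local reconstruction-weight fitting, eigen-embedding), so that classes are pushed apart in the low-dimensional space. This is an illustrative assumption-based sketch, not the authors' DL2M implementation; the function name `supervised_lle`, the penalty parameter `alpha`, and the regularizer `reg` are all hypothetical choices, and the out-of-sample extension is omitted.

```python
import numpy as np

def supervised_lle(X, y, n_neighbors=5, n_components=2, alpha=0.5, reg=1e-3):
    """Supervised LLE sketch: inflate inter-class distances, then run
    standard LLE (neighbor search, weight fitting, eigen-embedding)."""
    n = X.shape[0]
    # Pairwise Euclidean distances.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Supervision: penalize distances between differently labeled points
    # (alpha is a hypothetical trade-off parameter, not from the paper).
    D = D + alpha * D.max() * (y[:, None] != y[None, :])
    # k nearest neighbors under the adjusted distances (column 0 is self).
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    # Solve each local least-squares problem for reconstruction weights.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[idx[i]] - X[i]                          # neighbors centered at x_i
        G = Z @ Z.T                                   # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)  # regularization
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, idx[i]] = w / w.sum()                    # weights sum to 1
    # Embedding: bottom eigenvectors of (I - W)^T (I - W),
    # skipping the trivial constant eigenvector.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# Tiny demo on two noisy, well-separated classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 10)),
               rng.normal(3.0, 1.0, (20, 10))])
y = np.array([0] * 20 + [1] * 20)
Y = supervised_lle(X, y, n_neighbors=6, n_components=2)
print(Y.shape)  # (40, 2)
```

With `alpha = 0` the sketch reduces to plain unsupervised LLE; larger values make the neighborhood graph increasingly class-pure, which is the mechanism by which label information enters the embedding.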
Keywords: Medical diagnosis · Manifold learning · Classification · Locally linear embedding · Out-of-sample extension
Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.
This research was supported in part by the National Natural Science Foundation of China under Grant Nos. 61402395, 61472343, 61702441 and 61802336, the Natural Science Foundation of Jiangsu Province under Contracts BK20140492 and BK20151314, Jiangsu government scholarship funding, and the Jiangsu overseas research and training program for university prominent young and middle-aged teachers and presidents.