Parallel and Distributed Processing for Unsupervised Patient Phenotype Representation
The value of data-driven healthcare lies in the possibility of detecting new patterns for inpatient care, treatment, prevention, and understanding of disease, or of predicting the duration of hospitalization, its cost, or whether death is likely to occur during the hospital stay.
Modeling a precise patient phenotype representation from clinical data is challenging because high-dimensional, noisy data with missing values must be projected into a new low-dimensional space. Likewise, training unsupervised learning models on growing clinical data raises many issues of algorithmic complexity, such as the time to model convergence and the memory capacity required.
This paper presents the DiagnoseNET framework for automating patient phenotype extraction and applying it to predict different medical targets. It provides high-level features such as a full-workflow orchestration, organized as a stage pipeline, for mining clinical data and using unsupervised feature representations to initialize supervised models, as well as data and resource management for training parallel and distributed deep neural networks.
As a case study, we used a clinical dataset from admission and hospital services to build a general-purpose inpatient phenotype representation applicable to different medical targets; the first target is to classify the main purpose of inpatient care.
The research focuses on managing the data according to its dimensionality, the model complexity, the number of workers selected, and the memory capacity, for training unsupervised stacked denoising autoencoders over a Jetson TX2 mini-cluster.
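Greedy layer-wise pretraining of stacked denoising autoencoders (in the style of Vincent et al.) can be sketched as follows. This is an illustrative NumPy sketch under assumed choices (masking noise, sigmoid units, tied decoder weights, plain gradient descent), not DiagnoseNET's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_dae(X, n_hidden, noise=0.3, lr=0.1, epochs=100):
    """Train one denoising autoencoder: corrupt the input with masking
    noise, then learn to reconstruct the clean input (tied weights)."""
    n = len(X)
    W = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))
    b = np.zeros(n_hidden)            # encoder bias
    c = np.zeros(X.shape[1])          # decoder bias
    losses = []
    for _ in range(epochs):
        X_noisy = X * (rng.random(X.shape) > noise)   # masking noise
        H = sigmoid(X_noisy @ W + b)                  # encode
        X_rec = sigmoid(H @ W.T + c)                  # decode (tied W)
        err = X_rec - X
        losses.append(float(np.mean(err ** 2)))
        g_dec = err * X_rec * (1.0 - X_rec)           # decoder pre-activation grad
        g_enc = (g_dec @ W) * H * (1.0 - H)           # encoder pre-activation grad
        W -= lr * (X_noisy.T @ g_enc + g_dec.T @ H) / n
        b -= lr * g_enc.sum(axis=0) / n
        c -= lr * g_dec.sum(axis=0) / n
    return W, b, losses

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pretraining: each new layer learns to
    reconstruct the encoding produced by the layers below it."""
    weights, H = [], X
    for n_hidden in layer_sizes:
        W, b, _ = pretrain_dae(H, n_hidden)
        weights.append((W, b))
        H = sigmoid(H @ W + b)        # clean encoding feeds the next layer
    return weights                    # used to initialize a supervised net
```

The returned per-layer weights are what the pipeline would hand to a supervised model as its initialization before fine-tuning on a medical target.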
Therefore, mapping tasks so that they fit the available computational resources is a key factor in minimizing the number of epochs needed for model convergence, reducing the execution time, and maximizing energy efficiency.
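The idea of spreading training work over a fixed pool of workers can be illustrated with synchronous data parallelism: each worker computes a gradient on its shard of the mini-batch, and the shard gradients are averaged (an all-reduce) before every weight update. The sketch below is a generic NumPy illustration on a least-squares model, not DiagnoseNET's scheduler; the worker count and learning rate are assumptions:

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient computed on one worker's shard."""
    return 2.0 * X.T @ (X @ w - y) / len(X)

def data_parallel_sgd(X, y, n_workers=4, lr=0.05, steps=300):
    """Synchronous data parallelism: shard the batch across workers,
    compute local gradients, average them, then update the weights."""
    w = np.zeros(X.shape[1])
    shards = np.array_split(np.arange(len(X)), n_workers)
    for _ in range(steps):
        grads = [local_gradient(w, X[i], y[i]) for i in shards]
        w -= lr * np.mean(grads, axis=0)   # all-reduce, then update
    return w
```

With equally sized shards, the averaged gradient equals the full-batch gradient, so adding workers changes where the arithmetic happens, not the result; the practical gains come from fitting each shard into a worker's memory and compute budget.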
Keywords: Health care decision-making · Unsupervised representation learning · Distributed deep neural networks
This work is partly funded by the French government's PIA program under its IDEX UCAJEDI project (ANR-15-IDEX-0001). The PhD thesis of John Anderson García Henao is funded by the French government's PIA program under its LABEX UCN@Sophia project (ANR-11-LABX-0031-01).