
Parallel and Distributed Processing for Unsupervised Patient Phenotype Representation

  • John Anderson García Henao
  • Frédéric Precioso
  • Pascal Staccini
  • Michel Riveill
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 979)

Abstract

The value of data-driven healthcare lies in the possibility of detecting new patterns for inpatient care, treatment, prevention, and the comprehension of disease, or of predicting the duration of a hospital stay, its cost, or whether death is likely to occur during it.

Modeling a precise patient phenotype representation from clinical data is challenging: the data are high-dimensional, noisy, and incomplete, and must be projected into a new low-dimensional space. Likewise, applying unsupervised learning models to ever-growing clinical data raises many issues of algorithmic complexity, such as the time to model convergence and the memory capacity required.

This paper presents the DiagnoseNET framework, which automates patient phenotype extraction and applies it to predict different medical targets. It provides high-level features such as a full-workflow orchestration into stage pipelining for mining clinical data, the use of unsupervised feature representations to initialize supervised models, and data resource management for training parallel and distributed deep neural networks.
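The two-phase idea behind this orchestration (an unsupervised representation stage whose learned weights then initialize a supervised predictor) can be sketched in plain NumPy. Everything below is an illustrative assumption, not DiagnoseNET's actual API: the function names, the single denoising layer, and the synthetic binary "clinical code" matrix are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_denoising_layer(X, n_hidden=16, epochs=150, lr=0.05, p_drop=0.2):
    """Unsupervised stage: learn an encoder by reconstructing each
    patient vector from a randomly corrupted copy of itself."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # encoder weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # decoder weights
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > p_drop)        # masking noise
        H = 1.0 / (1.0 + np.exp(-(Xc @ W1 + b1)))      # low-dim code
        R = H @ W2 + b2                                # linear reconstruction
        dR = 2.0 * (R - X) / len(X)                    # d(MSE)/dR
        dH = (dR @ W2.T) * H * (1.0 - H)               # backprop through sigmoid
        W2 -= lr * H.T @ dR;  b2 -= lr * dR.sum(0)
        W1 -= lr * Xc.T @ dH; b1 -= lr * dH.sum(0)
    return W1, b1    # these would initialize the supervised model's first layer

def encode(X, W1, b1):
    return 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))

# Synthetic stand-in for mined clinical data: binary code indicators.
X = (rng.random((300, 40)) < 0.15).astype(float)
W1, b1 = pretrain_denoising_layer(X)
Z = encode(X, W1, b1)    # phenotype representation, 40 -> 16 dimensions
print(Z.shape)
```

A supervised target model (e.g. predicting the purpose of inpatient care) would then be built on top of `Z`, starting from the pretrained `W1, b1` rather than a random initialization.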

As a case study, we used a clinical dataset from admission and hospital services to build a general-purpose inpatient phenotype representation applicable to different medical targets; the first target is to classify the main purpose of inpatient care.

The research focuses on managing the data according to its dimensions, the model complexity, the number of workers selected, and the memory capacity, in order to train unsupervised stacked denoising auto-encoders over a mini-cluster of Jetson TX2 boards.
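The "stacked" part of a stacked denoising auto-encoder refers to greedy layer-wise training: once the first denoising layer is trained, a second layer is trained on its codes, and so on. A minimal self-contained sketch of that loop follows; the layer sizes, hyperparameters, and function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_dae_layer(X, n_hidden, epochs=100, lr=0.05, p_drop=0.2):
    """Train one denoising layer: reconstruct X from a corrupted copy."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden)); b = np.zeros(n_hidden)
    V = rng.normal(0.0, 0.1, (n_hidden, n_in)); c = np.zeros(n_in)
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > p_drop)      # masking noise
        H = 1.0 / (1.0 + np.exp(-(Xc @ W + b)))      # encode
        R = H @ V + c                                # decode
        dR = 2.0 * (R - X) / len(X)                  # d(MSE)/dR
        dH = (dR @ V.T) * H * (1.0 - H)
        V -= lr * H.T @ dR; c -= lr * dR.sum(0)
        W -= lr * Xc.T @ dH; b -= lr * dH.sum(0)
    return W, b

def stack_dae(X, layer_sizes):
    """Greedy layer-wise pretraining: each layer is trained on the
    codes produced by the layer below it."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_dae_layer(H, n_hidden)
        params.append((W, b))
        H = 1.0 / (1.0 + np.exp(-(H @ W + b)))       # feed codes upward
    return params, H

X = (rng.random((200, 60)) < 0.15).astype(float)
params, Z = stack_dae(X, [32, 8])    # 60 -> 32 -> 8 phenotype dimensions
print(Z.shape)
```

Each layer's decoder is discarded after pretraining; only the encoder parameters in `params` are kept to initialize the deep network.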

Therefore, mapping tasks so that they fit the available computational resources is a key factor in minimizing the number of epochs needed for model convergence, reducing execution time and maximizing energy efficiency.
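One concrete form of that mapping is sizing mini-batches from the memory actually available on each worker. The sketch below is back-of-the-envelope arithmetic under assumed figures (8 GB per Jetson TX2, 4-byte floats, a made-up encoder layout, a fixed memory reserve), not measurements or formulas from the paper:

```python
def max_batch_size(mem_bytes, layer_sizes, bytes_per_val=4, reserve=0.5):
    """Largest mini-batch whose activations fit in the memory left
    after model parameters and a fixed reserve fraction are set aside."""
    # Parameters: one weight matrix + bias vector per consecutive layer pair.
    n_params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))
    param_bytes = n_params * bytes_per_val
    # Activations per sample: every layer's output, doubled for gradients.
    act_bytes_per_sample = 2 * sum(layer_sizes) * bytes_per_val
    budget = mem_bytes * (1.0 - reserve) - param_bytes
    return max(0, int(budget // act_bytes_per_sample))

# Hypothetical setup: 8 GB per Jetson TX2, a 14000-2000-500-100 encoder.
layers = [14000, 2000, 500, 100]
bs = max_batch_size(8 * 1024**3, layers)
n_workers = 4
print(bs, bs * n_workers)   # per-worker batch and effective global batch
```

The same calculation run in reverse (given a target batch size, how many workers are needed) is one way to decide how tasks are distributed across the mini-cluster before training starts.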

Keywords

Health care decision-making · Unsupervised representation learning · Distributed deep neural networks

Notes

Acknowledgments

This work is partly funded by the French government labelled PIA program under its IDEX UCAJEDI project (ANR-15-IDEX-0001). The PhD thesis of John Anderson García Henao is funded by the French government labelled PIA program under its LABEX UCN@Sophia project (ANR-11-LABX-0031-01).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Université Côte d’Azur, CNRS, Laboratoire I3S, Sophia Antipolis, France
  2. Université Côte d’Azur, CHU Nice, Nice, France
