Interactive Distributed Deep Learning with Jupyter Notebooks
Abstract. Deep learning researchers are increasingly using Jupyter notebooks to implement interactive, reproducible workflows with embedded visualization, steering, and documentation. Such solutions are typically deployed on small-scale (e.g., single-server) computing systems. However, as the sizes and complexities of datasets and the associated neural network models grow, high-performance distributed systems become important for training and evaluating models in a feasible amount of time. In this paper we describe our vision for Jupyter notebook solutions that deploy deep learning workloads onto high-performance computing systems. We demonstrate the effectiveness of notebooks for distributed training and hyperparameter optimization of deep neural networks with efficient, scalable backends.
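For concreteness, the following minimal sketch shows the kind of data-parallel training step a notebook-driven workflow might launch on an HPC system, assuming a Keras model wrapped with Horovod's distributed optimizer. The model, synthetic data, and hyperparameters are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal sketch: data-parallel Keras training with Horovod.
# Typically launched with one process per worker, e.g. via mpirun/srun.
import numpy as np
import keras
import horovod.keras as hvd

hvd.init()  # initialize MPI-based communication across workers

# Toy model; a real workload would build the target network here.
model = keras.models.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate by the number of workers and wrap the
# optimizer so gradients are averaged across processes each step.
opt = keras.optimizers.SGD(lr=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(loss='categorical_crossentropy', optimizer=opt,
              metrics=['accuracy'])

callbacks = [
    # Broadcast initial weights from rank 0 so all workers start in sync.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Synthetic stand-in for each worker's shard of the training data.
x_train = np.random.rand(1024, 784).astype('float32')
y_train = keras.utils.to_categorical(
    np.random.randint(10, size=1024), 10)

model.fit(x_train, y_train, batch_size=64, epochs=1,
          callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```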
Keywords: Jupyter · Deep learning · Distributed training · Hyperparameter optimization · High-performance computing · Genetic algorithms
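On the hyperparameter-optimization side, the sketch below illustrates one way a notebook could drive a simple genetic-algorithm search, fanning trial evaluations out to workers through a dask.distributed client. The evaluate objective, mutation scheme, population sizes, and parameter ranges are all assumptions chosen for illustration, not the paper's method.

```python
# Minimal sketch: genetic-algorithm hyperparameter search over Dask.
import random
from dask.distributed import Client

def evaluate(params):
    """Train a model with `params` and return a validation score.
    Stubbed with a synthetic objective for illustration."""
    lr, hidden = params
    return -(lr - 0.01) ** 2 - (hidden - 128) ** 2 / 1e4

def mutate(params):
    """Randomly perturb a parameter set to produce an offspring."""
    lr, hidden = params
    return (lr * random.uniform(0.5, 2.0),
            max(16, int(hidden * random.uniform(0.5, 2.0))))

client = Client()  # connect to a local or HPC-deployed Dask cluster

# Initial random population of (learning rate, hidden units) pairs.
population = [(random.uniform(1e-4, 1e-1), random.randint(16, 512))
              for _ in range(16)]

for generation in range(5):
    futures = client.map(evaluate, population)  # trials run in parallel
    scores = client.gather(futures)
    ranked = [p for _, p in sorted(zip(scores, population), reverse=True)]
    parents = ranked[:4]                         # select the fittest
    population = parents + [mutate(random.choice(parents))
                            for _ in range(12)]  # refill via mutation

print('best params:', ranked[0])
```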
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was supported in part by the NERSC Big Data Center; we acknowledge Cray for their funding support.