Abstract
Neural machine translation, the use of neural networks to translate human language, is an area of active research exploring new neuron types and network topologies with the goal of dramatically improving machine translation performance. Current state-of-the-art approaches, such as the multi-head attention-based transformer, require very large translation corpora and many epochs to produce models of reasonable quality. Recent attempts to parallelize the official TensorFlow “Transformer” model across multiple nodes have hit roadblocks due to excessive memory use and the resulting out-of-memory errors when performing MPI collectives.
This paper describes modifications made to the Horovod MPI-based distributed training framework that reduce memory usage for transformer models by converting assumed-sparse tensors to dense tensors and subsequently replacing the sparse gradient gather with a dense gradient reduction. The result is a dramatic increase in scale-out capability: CPU-only scaling tests achieve 91% weak-scaling efficiency up to 1200 MPI processes (300 nodes) and up to 65% strong-scaling efficiency up to 400 MPI processes (200 nodes) on the Stampede2 supercomputer.
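To make the fix concrete, the following is a minimal sketch, assuming TensorFlow 1.x with Horovod, of the idea described above: gradients that arrive as tf.IndexedSlices (the "assumed-sparse" output of the embedding lookups in the transformer) are materialized as dense tensors so that they can be combined with a fixed-size allreduce instead of an allgather whose output grows with the number of ranks. The helper name densify_then_allreduce is hypothetical; this is not the authors' exact patch.

import tensorflow as tf
import horovod.tensorflow as hvd

def densify_then_allreduce(grads_and_vars):
    # Replace the sparse gradient gather path with a dense reduction:
    # each IndexedSlices gradient is first converted to a dense tensor.
    result = []
    for grad, var in grads_and_vars:
        if grad is None:
            result.append((None, var))
            continue
        if isinstance(grad, tf.IndexedSlices):
            # Transformer embedding gradients touch nearly every row of
            # the table each batch, so densifying costs little relative
            # to the allgather it replaces.
            grad = tf.convert_to_tensor(grad)
        # Dense allreduce: per-rank communication volume stays roughly
        # constant as the number of MPI processes grows.
        result.append((hvd.allreduce(grad), var))
    return result

For contrast, stock Horovod (see the compute_gradients() reference below) handles an IndexedSlices gradient by allgathering its values and indices across all ranks, so the gathered tensor grows with process count; that growth is the memory behavior behind the out-of-memory failures at scale.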
References
Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. Computing Research Repository (CoRR), abs/1409.0473v7, September 2014
Cho, K., van Merrienboer, B., Gulcehre, C., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. Computing Research Repository (CoRR), abs/1406.1078v3, September 2014
Collobert, R., Puhrsch, C., Synnaeve, G.: Wav2Letter: an End-to-End ConvNet-based speech recognition system. Computing Research Repository (CoRR), abs/1609.03193v2, September 2016
Gehring, J., et al.: Convolutional sequence to sequence learning. Computing Research Repository (CoRR), abs/1705.03122v3, July 2017
Horovod. compute_gradients() in horovod/tensorflow/__init__.py. https://github.com/uber/horovod/blob/085cb1b5f3b30734a34d047841b098c15a6e1bae/horovod/tensorflow/__init__.py#L195
Horovod. Release 0.15.2. https://github.com/uber/horovod/releases/tag/v0.15.2
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018)
Jia, Y., et al.: Caffe: convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678, November 2014
Johnson, M., et al.: Google’s multilingual neural machine translation system: enabling zero-shot translation. Computing Research Repository (CoRR), abs/1611.04558v2, August 2017
Koehn, P., Och, F.J., Marcu, D.: Statistical phrase-based translation. In: Proceedings of 2003 Human Language Technology Conference (HLT-NAACL), pp. 48–54, June 2003
Kuchaiev, O., Ginsburg, B., Gitman, I., Lavrukhin, V., Case, C., Micikevicius, P.: Mixed-precision training for NLP and speech recognition with OpenSeq2Seq. Computing Research Repository (CoRR), abs/1805.10387v2, November 2018
Ott, M., Edunov, S., Grangier, D., Auli, M.: Scaling neural machine translation. Computing Research Repository (CoRR), abs/1806.00187v3, September 2018
Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pp. 311–318. Association for Computational Linguistics (2002)
Paszke, A., et al.: Automatic differentiation in PyTorch. In: NIPS Autodiff Workshop, December 2017
Popel, M., Bojar, O.: Training tips for the transformer model. Computing Research Repository (CoRR), abs/1804.00247v2, May 2018
Rush, A.M., Chopra, S., Weston, J.: A neural attention model for abstractive sentence summarization. Computing Research Repository (CoRR), abs/1509.00685v2, September 2015
Schwan, P., et al.: Lustre: building a file system for 1000-node clusters. In: Proceedings of the 2003 Linux Symposium, vol. 2003, pp. 380–386 (2003)
Stanzione, D., et al.: Stampede 2: the evolution of an XSEDE supercomputer. In: Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact, PEARC 2017, pp. 15:1–15:8. ACM, New York (2017)
Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems 27 (NIPS 2014), pp. 3104–3112, December 2014
TensorFlow. _AggregatedGrads() in tensorflow/python/ops/gradients_impl.py. https://github.com/tensorflow/tensorflow/blob/c95ca05536144451ef78ca6e2c15f0f65ebaaf95/tensorflow/python/ops/gradients_impl.py#L1183
TensorFlow. Official Transformer Model. https://github.com/tensorflow/models/blob/cdcd3ec276bdccd77a9a35c38f5aaec39c15cc0b/official/transformer/README.md
Vaswani, A., et al.: Attention is all you need. Computing Research Repository (CoRR), abs/1706.03762v5, December 2017
Xie, S., Girshick, R., Dollár, P., Tu, Z., He, K.: Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492–1500 (2017)
Yamashita, T., Hirasawa, K., Hu, J.: Application of multi-branch neural networks to stock market prediction. In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN 2005), vol. 4, pp. 2544–2548. IEEE (2005)
Yamashita, T., Hirasawa, K., Hu, J., Murata, J.: Multi-branch structure of layered neural networks. In: Proceedings of the 9th International Conference on Neural Information Processing (ICONIP 2002), vol. 1, pp. 243–247. IEEE (2002)
Acknowledgement
The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (http://www.tacc.utexas.edu) for providing HPC resources that have contributed to the research results reported within this paper.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Cavdar, D., et al. (2019). Densifying Assumed-Sparse Tensors. In: Weiland, M., Juckeland, G., Trinitis, C., Sadayappan, P. (eds.) High Performance Computing. ISC High Performance 2019. Lecture Notes in Computer Science, vol. 11501. Springer, Cham. https://doi.org/10.1007/978-3-030-20656-7_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-20655-0
Online ISBN: 978-3-030-20656-7