Abstract
Learning methods based on Deep Neural Networks (DNNs) have brought revolutionary advances in computer vision and machine learning. However, training a DNN model often demands intensive computational resources, so edge incremental learning calls for more energy-efficient solutions. Heterogeneous computing is more power-efficient and has become increasingly popular for embedded platforms. How to deploy training workloads onto heterogeneous platforms to support edge learning is therefore a critical issue.
Due to the increasing size of DNNs, it is difficult to determine how to dispatch a large number of operations to suitable devices. One state-of-the-art approach uses reinforcement learning to address this device placement problem, but it is too costly to apply in an embedded setting. In this paper, our approach leverages the information available in the model's computational graph, together with dynamically profiled run times and communication times for each device, to deploy operations on heterogeneous systems more efficiently. We combine the Critical Earliest Finish Time (CEFT) algorithm with a Partitioned Boolean Quadratic Assignment Problem (PBQP) solver to find a cost-effective placement, and we dynamically adjust assignments during training, which makes our method adaptive and effective across different computational environments. On AlexNet, VGG, Inception, ResNet, RNNLM, and other well-known models, our approach significantly outperforms both traditional algorithms and reinforcement-learning-based methods.
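To make the placement idea concrete, the following is a minimal Python sketch of a greedy earliest-finish-time list scheduler over a computational graph, in the spirit of the CEFT-style scheduling described above. It is illustrative only, not the authors' implementation (which additionally employs a PBQP solver and dynamic re-profiling); the names eft_placement, run_time, and comm_time are hypothetical.

    # Illustrative sketch: greedy earliest-finish-time device placement.
    # Each op is assigned to the device that minimizes its finish time,
    # accounting for device availability and inter-device transfer costs.
    from collections import defaultdict

    def eft_placement(ops, deps, run_time, comm_time, devices):
        """ops: op ids in topological order.
        deps: dict op -> list of predecessor ops.
        run_time[op][dev]: profiled execution time of op on dev.
        comm_time[src][dst]: transfer cost between devices (0 if src == dst).
        Returns dict op -> chosen device."""
        placement = {}
        finish = {}                 # op -> finish time under this placement
        ready = defaultdict(float)  # device -> time the device becomes free
        for op in ops:
            best_eft, best_dev = None, None
            for dev in devices:
                # Earliest start: device is free and all inputs have arrived
                # (predecessor finish time plus any cross-device transfer).
                start = ready[dev]
                for pred in deps.get(op, ()):
                    start = max(start,
                                finish[pred] + comm_time[placement[pred]][dev])
                eft = start + run_time[op][dev]
                if best_eft is None or eft < best_eft:
                    best_eft, best_dev = eft, dev
            placement[op], finish[op] = best_dev, best_eft
            ready[best_dev] = best_eft
        return placement

    if __name__ == "__main__":
        devices = ["cpu", "gpu"]
        comm = {"cpu": {"cpu": 0.0, "gpu": 2.0},
                "gpu": {"cpu": 2.0, "gpu": 0.0}}
        run = {"a": {"cpu": 5.0, "gpu": 1.0},
               "b": {"cpu": 3.0, "gpu": 4.0}}
        # b consumes a's output; ops are listed in topological order.
        print(eft_placement(["a", "b"], {"b": ["a"]}, run, comm, devices))

In this example, op "b" stays on the GPU even though the CPU runs it faster, because the transfer cost of moving "a"'s output outweighs the compute saving. This compute-versus-communication trade-off is exactly what the profiled run times and communication times feed into the placement decision.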
Cite this paper
Huang, Z.X., Fu, S.Y., Hsu, W.C.: Efficient Dynamic Device Placement for Deep Neural Network Training on Heterogeneous Systems. In: Pnevmatikatos, D., Pelcat, M., Jung, M. (eds.) Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2019. Lecture Notes in Computer Science, vol. 11733. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-27562-4_4