Efficient Dynamic Device Placement for Deep Neural Network Training on Heterogeneous Systems

  • Conference paper
Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11733)

Abstract

Learning methods based on Deep Neural Networks (DNNs) have brought revolutionary advances in computer vision and machine learning. However, training a DNN model often requires very intensive computational resources, so incremental learning at the edge calls for more energy-efficient training solutions. Heterogeneous computing is more power-efficient and has become increasingly popular on embedded platforms. How to deploy training models on heterogeneous platforms to support edge learning is therefore a critical issue.

Due to the increasing size of DNNs, it is difficult to determine how to dispatch a large number of operations to the proper devices. One state-of-the-art approach uses reinforcement learning to address this device placement issue, but it is too costly to apply in an embedded setting. In this paper, our approach leverages the information available from the computational graph of the model, together with dynamic profiles of the run time and communication time of each device, to deploy operations on heterogeneous systems more efficiently. We use the Critical Earliest Finish Time (CEFT) algorithm together with a Partitioned Boolean Quadratic Assignment Problem (PBQP) solver to find a cost-effective placement, and we dynamically adjust assignments during the training process, which makes our method more adaptive and effective across different computational environments. On AlexNet, VGG, Inception, ResNet, RNNLM, and other well-known models, our approach significantly outperforms both traditional algorithms and reinforcement-learning-based methods.
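
To make the placement formulation concrete, the snippet below casts device placement as a small PBQP instance: each operation carries a vector of profiled run times (one entry per device), each data-flow edge carries a matrix of communication costs between the devices its endpoints may land on, and a solver picks the assignment minimizing total cost. This is a minimal sketch under assumed numbers, not the authors' implementation: the operation names, device set, cost values, and the exhaustive enumeration (a real PBQP solver applies graph-reduction rules instead) are all illustrative.

```python
# Minimal PBQP-style device-placement sketch. All names and costs are
# hypothetical; a production solver would use profiled measurements and
# PBQP reduction rules rather than brute-force enumeration.

import itertools

# Profiled run time of each operation on each device (seconds).
# Devices: index 0 = CPU, index 1 = GPU. Values are made up.
run_time = {
    "conv1": [0.90, 0.10],
    "conv2": [0.80, 0.12],
    "fc":    [0.20, 0.15],
    "loss":  [0.05, 0.06],
}

# Data-flow edges of the computational graph. Each edge has a matrix C
# where C[i][j] is the transfer cost when the producer runs on device i
# and the consumer on device j (zero when co-located).
edges = {
    ("conv1", "conv2"): [[0.0, 0.30], [0.30, 0.0]],
    ("conv2", "fc"):    [[0.0, 0.25], [0.25, 0.0]],
    ("fc", "loss"):     [[0.0, 0.05], [0.05, 0.0]],
}

def placement_cost(assignment):
    """PBQP objective: per-node run-time costs plus pairwise edge costs."""
    cost = sum(run_time[op][dev] for op, dev in assignment.items())
    for (u, v), comm in edges.items():
        cost += comm[assignment[u]][assignment[v]]
    return cost

def solve_exhaustively(ops, num_devices=2):
    """Tiny graphs only: enumerate every assignment and keep the cheapest."""
    best, best_cost = None, float("inf")
    for devs in itertools.product(range(num_devices), repeat=len(ops)):
        assignment = dict(zip(ops, devs))
        c = placement_cost(assignment)
        if c < best_cost:
            best, best_cost = assignment, c
    return best, best_cost

ops = list(run_time)
assignment, cost = solve_exhaustively(ops)
print(f"placement: {assignment}  estimated step cost: {cost:.2f}s")
```

On this toy graph the all-GPU assignment wins outright; as transfer costs shrink relative to compute, the minimizer starts splitting the graph across devices. That trade-off is exactly what the dynamically profiled run times and communication times are meant to capture, and re-solving with refreshed profiles during training corresponds to the dynamic adjustment described above.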

Author information

Correspondence to Shen Yu Fu.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Huang, Z.X., Fu, S.Y., Hsu, W.C. (2019). Efficient Dynamic Device Placement for Deep Neural Network Training on Heterogeneous Systems. In: Pnevmatikatos, D., Pelcat, M., Jung, M. (eds) Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2019. Lecture Notes in Computer Science, vol 11733. Springer, Cham. https://doi.org/10.1007/978-3-030-27562-4_4

  • DOI: https://doi.org/10.1007/978-3-030-27562-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-27561-7

  • Online ISBN: 978-3-030-27562-4

  • eBook Packages: Computer Science, Computer Science (R0)
