
Towards a Flexible Accuracy-Oriented Deep Learning Module Inference Latency Prediction Framework for Adaptive Optimization Algorithms

  • Conference paper
  • Published in: Intelligent Information Processing XII (IIP 2024)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIP AICT, volume 703)

Abstract

With the rapid development of deep learning, more and more applications on the cloud and edge utilize large DNN (Deep Neural Network) models to improve task execution efficiency and decision-making quality. Due to memory constraints, models are commonly optimized with compression, pruning, and partitioning algorithms so that they become deployable on resource-constrained devices. As conditions on the computational platform change dynamically, the deployed optimization algorithms should adapt their solutions accordingly. To evaluate these solutions frequently and in a timely fashion, RMs (Regression Models) are commonly trained to predict the relevant solution-quality metrics, such as the resulting DNN module inference latency, which is the focus of this paper. Existing prediction frameworks specify different RM training workflows, but none of them allows flexible configuration of the input parameters (e.g., batch size, device utilization rate) or of the RMs selected for different modules. In this paper, a deep learning module inference latency prediction framework is proposed, which i) hosts a set of customizable input parameters to train multiple different RMs per DNN module (e.g., convolutional layer) with self-generated datasets, and ii) automatically selects a set of trained RMs that yields the highest possible overall prediction accuracy while keeping the prediction time/space consumption as low as possible. Furthermore, a new RM, namely MEDN (Multi-task Encoder-Decoder Network), is proposed as an alternative solution. Comprehensive experimental results show that MEDN is fast and lightweight, and capable of achieving the highest overall prediction accuracy and R-squared value. The Time/Space-efficient Auto-selection algorithm also manages to improve the overall accuracy by 2.5% and R-squared by 0.39%, compared to the MEDN single-selection scheme.
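The accuracy-oriented auto-selection idea in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's algorithm or MEDN: the function names (`fit_linear`, `auto_select`), the candidate RMs, the per-RM cost scores, the ±10% accuracy criterion, and the synthetic per-module datasets are all illustrative assumptions, chosen only to show the shape of "train several RMs per module, then keep the most accurate one, tie-breaking by time/space cost".

```python
# Illustrative sketch only: candidate RMs, cost scores, the +/-10% accuracy
# bound, and the synthetic latency data are assumptions, not from the paper.

def fit_linear(xs, ys):
    """Closed-form least-squares fit of y = a*x + b (a simple candidate RM)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def fit_mean(xs, ys):
    """Trivial baseline RM: always predict the mean observed latency."""
    m = sum(ys) / len(ys)
    return lambda x: m

def accuracy(model, xs, ys, tol=0.10):
    """Fraction of predictions within +/-10% of the measured latency."""
    hits = sum(abs(model(x) - y) <= tol * y for x, y in zip(xs, ys))
    return hits / len(ys)

CANDIDATES = {"linear": fit_linear, "mean": fit_mean}
COST = {"linear": 2.0, "mean": 1.0}  # hypothetical time/space cost scores

def auto_select(datasets):
    """Per module, keep the most accurate RM; break ties by lower cost."""
    choice = {}
    for module, (xs, ys) in datasets.items():
        scored = []
        for name, fit in CANDIDATES.items():
            model = fit(xs, ys)
            scored.append((accuracy(model, xs, ys), -COST[name], name, model))
        scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
        acc, _, name, model = scored[0]
        choice[module] = (name, model, acc)
    return choice

# Self-generated toy datasets: (batch sizes, measured latencies in ms).
# Latency grows roughly linearly with batch size for the conv module,
# but is nearly flat for a small fully-connected module.
datasets = {
    "conv2d": ([1, 2, 4, 8, 16], [3.1, 5.9, 12.2, 24.1, 48.0]),
    "linear_fc": ([1, 2, 4, 8, 16], [1.0, 1.0, 1.1, 1.0, 1.1]),
}
selection = auto_select(datasets)
for module, (name, _, acc) in selection.items():
    print(module, name, round(acc, 2))
```

On the toy data, the linear RM wins for the conv module, while for the flat fully-connected module both RMs hit the accuracy bound everywhere, so the cheaper mean predictor is kept — mirroring the "highest accuracy at the lowest time/space cost" trade-off the framework automates.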



Acknowledgements

This research was supported by: Shenzhen Science and Technology Program, China (No. GJHZ20210705141807022); Guangdong Province Innovative and Entrepreneurial Team Programme, China (No. 2017ZT07X386); SUSTech Research Institute for Trustworthy Autonomous Systems, China. Corresponding author: Georgios Theodoropoulos.


Copyright information

© 2024 IFIP International Federation for Information Processing

About this paper


Cite this paper

Shen, J., Tziritas, N., Theodoropoulos, G. (2024). Towards a Flexible Accuracy-Oriented Deep Learning Module Inference Latency Prediction Framework for Adaptive Optimization Algorithms. In: Shi, Z., Torresen, J., Yang, S. (eds) Intelligent Information Processing XII. IIP 2024. IFIP Advances in Information and Communication Technology, vol 703. Springer, Cham. https://doi.org/10.1007/978-3-031-57808-3_3


  • DOI: https://doi.org/10.1007/978-3-031-57808-3_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-57807-6

  • Online ISBN: 978-3-031-57808-3

  • eBook Packages: Computer Science (R0)
