
Consensus Based Vertically Partitioned Multi-layer Perceptrons for Edge Computing

  • Conference paper, Discovery Science (DS 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12986)


Abstract

Storing large volumes of data on distributed devices has become commonplace in recent years. Applications involving sensors, for example, capture data in different modalities, including image, video, audio, and GPS. Novel distributed algorithms are required to learn from this rich, multi-modal data. In this paper, we present an algorithm for learning consensus-based multi-layer perceptrons on resource-constrained devices. Assuming the nodes (devices) in the distributed system are arranged in a graph and contain vertically partitioned data and labels, the goal is to learn a global function that minimizes the loss. Each node learns a feed-forward multi-layer perceptron and obtains a loss on locally stored data. It then gossips with a neighbor, chosen uniformly at random, and exchanges information about the loss. The updated loss is used to run a back-propagation step that adjusts the local weights appropriately. This method enables nodes to learn the global function without exchanging data over the network. Empirical results reveal that the consensus algorithm converges to the centralized model and performs comparably to centralized multi-layer perceptrons and tree-based algorithms, including random forests and gradient-boosted decision trees. Because it is completely decentralized, scales with network size, handles binary and multi-class problems, is unaffected by feature overlap, and has good empirical convergence properties, it is well suited to on-device machine learning.

This work was done when the author was a student at the State University of New York at Buffalo.
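The two ingredients the abstract describes — a local MLP per node trained on that node's vertical slice of the features, and pairwise gossip that drives the nodes' loss estimates to the network-wide average — can be sketched roughly as follows. This is an illustrative simplification, not the paper's algorithm: all names, sizes, and the fully connected gossip topology are assumptions, and the paper's actual update rule (using the gossiped loss to drive back-propagation on a graph of neighbors) may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(y, p, eps=1e-9):
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

class LocalMLP:
    """One-hidden-layer perceptron owned by a single node (toy sizes)."""
    def __init__(self, n_features, n_hidden=8, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_features, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)           # hidden activations
        return sigmoid(self.h @ self.W2).ravel()

    def backprop(self, X, y, p):
        # Gradient of cross-entropy w.r.t. the output logit is (p - y).
        d2 = (p - y).reshape(-1, 1) / len(y)
        d1 = (d2 @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * (self.h.T @ d2)
        self.W1 -= self.lr * (X.T @ d1)

def gossip_losses(losses, rounds=300):
    """Randomized pairwise gossip: two nodes picked uniformly at random
    replace their values with the pair's mean. On a connected topology
    this converges to the global average while preserving the sum."""
    v = np.array(losses, dtype=float)
    for _ in range(rounds):
        i, j = rng.choice(len(v), size=2, replace=False)
        v[i] = v[j] = 0.5 * (v[i] + v[j])
    return v

# Toy run: N = 200 tuples, n = 8 features split column-wise over 4 nodes.
N, n, k = 200, 8, 4
X = rng.normal(size=(N, n))
y = (X.sum(axis=1) > 0).astype(float)
slices = np.array_split(np.arange(n), k)   # vertical partition of columns

nodes = [LocalMLP(len(cols)) for cols in slices]
local_losses = []
for node, cols in zip(nodes, slices):
    p = node.forward(X[:, cols])
    node.backprop(X[:, cols], y, p)        # one local gradient step
    local_losses.append(cross_entropy(y, node.forward(X[:, cols])))

agreed = gossip_losses(local_losses)
print("local losses :", np.round(local_losses, 3))
print("after gossip :", np.round(agreed, 3))   # all (nearly) equal
```

The gossip step only exchanges a scalar loss, never raw data, which mirrors the privacy property claimed in the abstract: averaging preserves the network-wide mean, so every node ends up with the same global loss estimate to drive its next local update.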


Notes

  1. https://featurecloud.eu/about/our-vision/.

  2. https://cordis.europa.eu/project/id/831472.

  3. https://musketeer.eu/project/.

  4. This implies that all the nodes have access to all N tuples but have a limited number of features, i.e., \(n_i \le n\).

  5. We assume that the models have the same structure, i.e., the same number of input, hidden, and output layers and connections.

  6. The existence of this clock is of interest only for theoretical analysis.

  7. Cross-entropy loss was used in the empirical results.
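The vertical partitioning of footnote 4 — every node holds all N tuples but only \(n_i \le n\) of the feature columns — can be made concrete with a small sketch. The sizes and the even column split below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy sizes (hypothetical): N tuples, n features, k nodes.
N, n, k = 6, 8, 3
X = np.arange(N * n).reshape(N, n)

# Vertical partitioning: split the n feature columns into k disjoint
# groups, one per node. Every node keeps all N rows (tuples).
parts = np.array_split(np.arange(n), k)
node_views = [X[:, cols] for cols in parts]

for i, Xi in enumerate(node_views):
    print(f"node {i}: {Xi.shape[0]} tuples x {Xi.shape[1]} features")
```

Each node's view has the full row count but a strict subset of columns, so the column counts sum back to n — the setting under which the consensus algorithm operates.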



Author information

Corresponding author

Correspondence to Haimonti Dutta.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Dutta, H., Mahindre, S.A., Nataraj, N. (2021). Consensus Based Vertically Partitioned Multi-layer Perceptrons for Edge Computing. In: Soares, C., Torgo, L. (eds) Discovery Science. DS 2021. Lecture Notes in Computer Science(), vol 12986. Springer, Cham. https://doi.org/10.1007/978-3-030-88942-5_20

  • DOI: https://doi.org/10.1007/978-3-030-88942-5_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88941-8

  • Online ISBN: 978-3-030-88942-5

  • eBook Packages: Computer Science (R0)
