Asynchronous Distributed ADMM for Learning with Large-Scale and High-Dimensional Sparse Data Set

  • Dongxia Wang
  • Yongmei Lei
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 302)

Abstract

The distributed alternating direction method of multipliers (ADMM) is an effective method for solving large-scale machine learning problems. At present, most distributed ADMM algorithms transfer the entire model parameter vector during communication, which leads to high communication cost, especially when the model's feature dimension is very large. In this paper, an asynchronous distributed ADMM algorithm (GA-ADMM) based on general-form consensus is proposed. First, the GA-ADMM algorithm filters the information transmitted between nodes by exploiting the characteristics of high-dimensional sparse data sets: only the features associated with a worker's local data, rather than all features of the model, need to be transmitted between that worker and the master, which greatly reduces the communication cost. Second, a bounded asynchronous communication protocol is used to further improve the performance of the algorithm. The convergence of the algorithm is also analyzed theoretically for non-convex objective functions. Finally, the algorithm is tested on the cluster supercomputer "Ziqiang 4000". The experiments show that the GA-ADMM algorithm converges when appropriate parameters are selected, that it requires less system time than the AD-ADMM algorithm to reach convergence, and that the two algorithms attain approximately the same accuracy.
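The filtering idea described above can be illustrated with a small sketch. This is not the paper's implementation: the feature partition, the quadratic stand-in loss, and the gradient-step x-update are all assumptions made for illustration. It shows the general-form consensus structure in which each worker keeps and exchanges only the block of coordinates its sparse local data touches, while the master averages only the blocks it receives.

```python
import numpy as np

# Sketch of general-form consensus ADMM (scaled-dual form). Worker k's
# sparse data touches only the feature subset S_k, so it stores a local
# copy x_k of just those coordinates and exchanges only them with the
# master, which maintains the global variable z of dimension d.
d = 10                          # global feature dimension
workers = {                     # hypothetical worker -> associated features
    0: np.array([0, 1, 4]),
    1: np.array([2, 3, 4, 7]),
    2: np.array([5, 6, 8, 9]),
}

rho = 1.0
z = np.zeros(d)                                        # global consensus variable
x = {k: np.zeros(len(S)) for k, S in workers.items()}  # local blocks
u = {k: np.zeros(len(S)) for k, S in workers.items()}  # scaled dual blocks

def local_loss_grad(k, xk):
    # Stand-in for the worker's local loss gradient; here the gradient of
    # 0.5 * ||xk - 1||^2, so the consensus solution is all-ones.
    return xk - 1.0

for it in range(200):
    # x-update: each worker takes a gradient step on its own subproblem,
    # touching only its associated coordinates z[S].
    for k, S in workers.items():
        g = local_loss_grad(k, x[k]) + rho * (x[k] - z[S] + u[k])
        x[k] -= 0.1 * g
    # z-update: the master averages only the coordinate blocks it received,
    # weighting each feature by how many workers share it.
    num, den = np.zeros(d), np.zeros(d)
    for k, S in workers.items():
        num[S] += x[k] + u[k]
        den[S] += 1.0
    z = np.where(den > 0, num / np.maximum(den, 1.0), 0.0)
    # Dual update, again restricted to each worker's associated features.
    for k, S in workers.items():
        u[k] += x[k] - z[S]

print(np.round(z, 3))  # every covered coordinate approaches the minimizer 1.0
```

Note that each message carries only len(S_k) values instead of d; with high-dimensional sparse data (len(S_k) much smaller than d) this is the source of the communication savings the paper targets. The bounded asynchronous protocol would further let the master update z after hearing from only a subset of workers, subject to a maximum staleness.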

Keywords

GA-ADMM · General form consensus · Bounded asynchronous · Non-convex

Acknowledgements

This work is partially supported by the National Natural Science Foundation of China under grant No. U1811461.


Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2019

Authors and Affiliations

  1. School of Computer Engineering and Science, Shanghai University, Shanghai, China
