
Continuous-Time Multi-agent Network for Distributed Least Absolute Deviation

  • Qingshan Liu
  • Yan Zhao
  • Long Cheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9377)

Abstract

This paper presents a continuous-time multi-agent network for distributed least absolute deviation (DLAD). The objective of the DLAD problem is the sum of local least absolute deviation functions, one associated with each agent. In the multi-agent network, each agent communicates only with its neighbors, and the agents cooperate to reach consensus on an optimal solution. The proposed multi-agent network is a collective system in which each agent is modeled as a recurrent neural network. Simulation results on a numerical example illustrate the effectiveness and characteristics of the proposed distributed optimization method.
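For orientation, the DLAD problem described in the abstract is commonly written in the form sketched below. The notation here (N agents, agent i privately holding data (A_i, b_i)) is an assumption for illustration and may differ from the paper's own notation.

```latex
% A common formulation of distributed least absolute deviation (notation assumed,
% not taken from the paper): N agents, agent i privately holds data (A_i, b_i).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  \min_{x \in \mathbb{R}^n} \ \sum_{i=1}^{N} f_i(x),
  \qquad f_i(x) = \lVert A_i x - b_i \rVert_1 .
\end{align*}
In the consensus reformulation, agent $i$ keeps its own estimate $x_i$ and the
network must agree on a common minimizer:
\begin{align*}
  \min_{x_1,\dots,x_N} \ \sum_{i=1}^{N} f_i(x_i)
  \quad \text{s.t.} \quad x_i = x_j \ \text{whenever agents $i$ and $j$ are neighbors.}
\end{align*}
\end{document}
```

As a rough, illustrative sketch of how such a system can be simulated numerically, the Python code below integrates a generic consensus-plus-subgradient flow with forward-Euler steps. This is not the recurrent-neural-network model proposed in the paper; the ring topology, gains, step size, and data sizes are all assumptions made for a toy example.

```python
import numpy as np

# Illustrative sketch only: a generic consensus-plus-subgradient flow
#   dx_i/dt = -sum_{j in N_i} (x_i - x_j) - c * g_i(x_i),
# where g_i(x_i) = A_i^T sign(A_i x_i - b_i) is a subgradient of ||A_i x - b_i||_1,
# integrated with forward-Euler steps. Not the paper's proposed network.

rng = np.random.default_rng(0)
n_agents, n_vars, m_rows = 4, 3, 5
A = [rng.standard_normal((m_rows, n_vars)) for _ in range(n_agents)]
x_true = rng.standard_normal(n_vars)
b = [Ai @ x_true + 0.05 * rng.standard_normal(m_rows) for Ai in A]

# Ring communication topology: agent i exchanges states with agents i-1 and i+1.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

x = [np.zeros(n_vars) for _ in range(n_agents)]   # each agent's local estimate
dt, c, steps = 0.01, 1.0, 20000
for _ in range(steps):
    x_next = []
    for i in range(n_agents):
        coupling = sum(x[i] - x[j] for j in neighbors[i])   # drives consensus
        subgrad = A[i].T @ np.sign(A[i] @ x[i] - b[i])      # local L1 subgradient
        x_next.append(x[i] + dt * (-coupling - c * subgrad))
    x = x_next

print("agent estimates:\n", np.round(np.vstack(x), 3))
print("true parameter:   ", np.round(x_true, 3))
```

With a small step size the agents' estimates typically settle near, and chatter around, a common minimizer of the summed absolute deviations; the actual convergence behavior of the paper's continuous-time network is governed by its own dynamics and analysis.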

Keywords

Distributed least absolute deviation, multi-agent network, nonsmooth optimization, consensus



Copyright information

© Springer International Publishing Switzerland 2015

Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 International License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. School of Automation, Huazhong University of Science and Technology, Wuhan, China
  2. Department of Basic Courses, Wannan Medical College, Wuhu, China
  3. State Key Lab. of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
