Abstract
In recent years, researchers in the control community have paid increasing attention to distributed coordination because of its broad range of applications, and the consensus problem is well recognized as a fundamental problem in the cooperative control of multi-agent systems. In this paper, we discuss the noise problem of the discrete linear consensus protocol (DLCP) and point out that the noise of DLCP is uncontrollable. We then put forward a protocol that uses a noise suppression function (NS-DLCP) to control the noise, rigorously prove a theorem on the reasonable range of the noise suppression function, and further present sufficient conditions for the noise of NS-DLCP to be controllable.
1 Introduction
In recent years, researchers in the control community have paid increasing attention to the distributed coordination of multi-agent systems due to its broad applications in many fields, such as sensor networks, unmanned aerial vehicles (UAVs), mobile robot systems (MRS), and robotic teams.
In cooperative control, a key problem is to design distributed protocols such that a group of agents can achieve consensus through local communications. So far, numerous interesting results on the consensus problem have been obtained for both discrete-time and continuous-time multi-agent systems. Reynolds systematically studied and simulated the behavior of biological groups such as birds and fish, and proposed the Boid model [1], which still has a broad impact in the field of swarm intelligence. The Vicsek model [2] was proposed based on statistical mechanics; in it the speed of each agent on the two-dimensional plane remains unchanged, and the N agents determine their directions of motion according to the directions of their neighbor agents. One of the most promising tools is the family of linear consensus algorithms: simple distributed algorithms that require only minimal computation, communication and synchronization to compute averages of local quantities residing in each device. These algorithms have their roots in the analysis of Markov chains [3] and have been deeply studied within the computer science community for load balancing and within the linear algebra community for the asynchronous solution of linear systems [4, 5]. For the linear consensus problem, Olfati-Saber et al. established a relatively complete theoretical framework based on graph theory and kinetic theory, and systematically analyzed different types of consensus problems within this framework [6–9]. Building on this work, Yu et al. [10–12] discussed three necessary and sufficient conditions for the algorithm to converge to a consistent state when the states of the agents are independent of the data transferred, and analyzed its correctness, effectiveness and efficiency, with verification in several specific applications. For multi-agent consensus and synchronization problems on complex networks, Li et al. proposed, in a rather deep discussion, a multi-agent control architecture based on high-order linear systems, with a series of fruitful results [13–15]. In [16], average consensus is discussed with the consensus algorithm formulated as a matrix factorization problem, and machine learning methods are proposed to solve the matrix decomposition problem.
In most consensus results in the literature, it is usually assumed that each agent can obtain its neighbors' information precisely. Since real networks often operate in uncertain communication environments, it is necessary to consider consensus problems under measurement noises. Such consensus problems have been studied extensively. Several works [17–19] have addressed the consensus problem of multi-agent systems under multiplicative measurement noises, where the noise intensities are proportional to the relative states. In [20, 21], the authors studied consensus problems with noisy measurements of neighbors' states, and a stochastic approximation approach was applied to obtain mean square and almost sure convergence in models with fixed network topologies or with independent communication failures. Necessary and/or sufficient conditions for stochastic consensus of multi-agent systems were established for fixed and time-varying topologies in [22, 23]. Liu et al. studied the signal delay of the linear consensus protocol [24], presented the concepts of strong consensus and mean square consensus under fixed topology in the presence of noise and delay between agents, and gave necessary and sufficient conditions for strong consensus and mean square consensus in both leaderless and leader-follower modes. The distributed consensus problem for linear discrete-time multi-agent systems with delays and noises was investigated in [25] by introducing a novel technique to overcome the difficulties induced by the delays and noises. In [26], a novel kind of cluster consensus of multi-agent systems with several different subgroups was considered based on Markov chains and nonnegative matrix analysis.
In this paper, we discuss the noise problem of the linear consensus protocol and give a sufficient condition that ensures the noise of the linear consensus protocol is controllable. The remainder of this paper is organized as follows. Some preliminaries and definitions are given in Sect. 2. In Sect. 3, we point out that the noise of DLCP is uncontrollable. In Sect. 4, we propose the strategy of using a noise suppression function to control the noise, and put forward Theorem 1 on the reasonable range of the noise suppression function. Sect. 5 concludes the paper.
2 Preliminaries
Consider n agents distributed according to a directed graph \( \boldsymbol{\mathcal{G}} = \left( {\boldsymbol{\mathcal{V}},\boldsymbol{\mathcal{E}}} \right) \) consisting of a set of nodes \( \boldsymbol{\mathcal{V}} = \left\{ {1,2, \ldots ,n} \right\} \) and a set of edges \( \boldsymbol{\mathcal{E}} \subseteq \boldsymbol{\mathcal{V}} \times \boldsymbol{\mathcal{V}} \). In the digraph, an edge from node i to node j is denoted as an ordered pair (i, j) where i ≠ j (so there is no edge between a node and itself). A path (from \( i_{1} \) to \( i_{l} \)) consists of a sequence of nodes \( i_{1} ,i_{2} , \ldots ,i_{l} \), l ≥ 2, such that \( \left( {i_{k} ,i_{k + 1} } \right) \in \boldsymbol{\mathcal{E}} \) for k = 1, …, l − 1. We say node i is connected to node j (i ≠ j) if there exists a path from i to j. For convenience of exposition, the two names, agent and node, will be used interchangeably. The agent \( A_{k} \) (resp., node k) is a neighbor of \( A_{i} \) (resp., node i) if \( \left( {k,i} \right) \in \boldsymbol{\mathcal{E}} \), where k ≠ i. Denote the neighbors of node i by \( \boldsymbol{\mathcal{N}}_{i} = \left\{ {k|\left( {k,i} \right) \in \boldsymbol{\mathcal{E}}} \right\} \). For agent \( A_{i} \), we denote its state at time t by \( x_{i} \left( t \right) \in {\mathbb{R}} \), where t ∈ \( {\mathbb{Z}}_{ + } \), \( {\mathbb{Z}}_{ + } \) = {0, 1, 2, …}. For each i ∈ \( \boldsymbol{\mathcal{V}} \), agent \( A_{i} \) receives information from its neighbors.
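As a small illustration of this notation, the neighbor sets \( \boldsymbol{\mathcal{N}}_{i} \) can be computed directly from the edge set; the 4-node digraph used below is an assumed example, not one taken from the paper.

```python
# Sketch of the graph notation above: edges are ordered pairs (i, j), and the
# neighbours of node i are N_i = {k | (k, i) in E}, i.e. the nodes that agent
# i receives information from.  The 4-node digraph is an assumed example.
edges = {(1, 2), (2, 3), (3, 1), (1, 4)}   # the edge set E
nodes = {1, 2, 3, 4}                       # the node set V

def neighbors(i, edges):
    """N_i: all nodes k with an edge (k, i) into node i (self-loops excluded)."""
    return {k for (k, j) in edges if j == i and k != i}

# e.g. node 1 only receives from node 3, since (3, 1) is the only edge into 1
```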
Definition 1:
(Discrete Linear Consensus Protocol, DLCP) The so-called linear consensus protocol is given by the following (1):

\( x_{i} \left( {t + 1} \right) = x_{i} \left( t \right) + \sum\limits_{j \in \boldsymbol{\mathcal{N}}_{i} } {\alpha_{ij} \left( t \right)\left( {x_{j} \left( t \right) - x_{i} \left( t \right)} \right)} \)  (1)
where \( \alpha_{ij} \left( t \right) > 0 \) is a real-valued function of \( t \) with \( \sum\limits_{j = 1}^{n} {\alpha_{ij} \left( t \right)} \le 1 \); it characterizes the extent of the impact of agent \( j \) on agent \( i \) at time \( t \).
Definition 2:
(Weighted Laplacian Matrix) The matrix \( \boldsymbol{\mathcal{L}}\left( t \right) = \left[ {l_{ij} \left( t \right)} \right]_{n \times n} \) is called the weighted Laplacian matrix of graph \( \boldsymbol{\mathcal{G}} \), where

\( l_{ij} \left( t \right) = \left\{ {\begin{array}{ll} {\sum\nolimits_{k \in \boldsymbol{\mathcal{N}}_{i} } {\alpha_{ik} \left( t \right)} ,} & {j = i} \\ { - \alpha_{ij} \left( t \right),} & {j \in \boldsymbol{\mathcal{N}}_{i} } \\ {0,} & {\text{otherwise}} \\ \end{array} } \right. \)
Let \( {\mathbf{X}}\left( t \right) = \left[ {x_{1} \left( t \right), \ldots ,x_{n} \left( t \right)} \right]^{T} \) and let \( \boldsymbol{\mathcal{I}}_{n} \) denote the identity matrix of order \( n \); the matrix form of (1) is:

\( {\mathbf{X}}\left( {t + 1} \right) = \boldsymbol{\mathcal{A}}\left( t \right){\mathbf{X}}\left( t \right) \)  (2)
where \( \boldsymbol{\mathcal{A}}\left( t \right) = \boldsymbol{\mathcal{I}}_{n} - \boldsymbol{\mathcal{L}}\left( t \right) \).
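As a quick illustration, the update (1), equivalently \( {\mathbf{X}}\left( {t + 1} \right) = \boldsymbol{\mathcal{A}}\left( t \right){\mathbf{X}}\left( t \right) \), can be simulated in a few lines; the 3-node complete graph, the constant weights \( \alpha_{ij} = 0.25 \) (row sums 0.5 ≤ 1) and the initial states are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of DLCP: x_i(t+1) = x_i(t) + sum_j alpha_ij (x_j(t) - x_i(t)),
# i.e. X(t+1) = A(t) X(t) with A(t) = I - L(t).  The 3-node complete graph and
# the constant weights alpha_ij = 0.25 are illustrative assumptions.

def step(x, alpha):
    """One DLCP update of the whole state vector."""
    n = len(x)
    return [x[i] + sum(alpha[i][j] * (x[j] - x[i]) for j in range(n))
            for i in range(n)]

alpha = [[0.0, 0.25, 0.25],
         [0.25, 0.0, 0.25],
         [0.25, 0.25, 0.0]]   # row sums 0.5 <= 1, as required above

x = [1.0, 5.0, 9.0]
for _ in range(100):
    x = step(x, alpha)
# with these symmetric weights the states settle at the initial average, 5.0
```

Because the weights here are symmetric, \( \boldsymbol{\mathcal{A}} \) is doubly stochastic, so the consistent state reached is the average of the initial states.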
Suppose \( r \sim N\left( {\mu ,\sigma^{2} } \right) \) is a normally distributed random variable; let \( \text{var} \left( r \right) \) denote the variance \( \sigma^{2} \) of \( r \). If \( {\mathbf{R}} = \left[ {r_{1} , \ldots ,r_{n} } \right]^{T} \) is a random vector, then \( \text{var} \left( {\mathbf{R}} \right) \) denotes the covariance matrix of R, and \( \left[ {\text{var} \left( {\mathbf{R}} \right)} \right]_{i} \) denotes the variance of the ith component of R, i.e. \( \left[ {\text{var} \left( {\mathbf{R}} \right)} \right]_{i}\, = \text{var} \left( {r_{i} } \right) \) is the ith diagonal element of the matrix \( \text{var} \left( {\mathbf{R}} \right) \).
If the message received by agent \( i \) contains mutually independent and normally distributed noise, (1) can be rewritten as:

\( x_{i} \left( {t + 1} \right) = x_{i} \left( t \right) + \sum\limits_{j \in \boldsymbol{\mathcal{N}}_{i} } {\alpha_{ij} \left( t \right)\left( {x_{j} \left( t \right) + r_{j} \left( t \right) - x_{i} \left( t \right)} \right)} \)  (3)
where \( r_{j} \left( t \right) \sim N\left( {0,\sigma_{j}^{2} } \right) \) is a normally distributed random variable representing the noise carried by the state information \( x_{j} \left( t \right) \). Let \( {\mathbf{R}}\left( t \right) = \left[ {r_{1} \left( t \right), \ldots ,r_{n} \left( t \right)} \right]^{T} \); thus (3) can be rewritten in matrix form:

\( {\mathbf{X}}\left( {t + 1} \right) = \boldsymbol{\mathcal{A}}\left( t \right){\mathbf{X}}\left( t \right) + \boldsymbol{\mathcal{W}}\left( t \right){\mathbf{R}}\left( t \right) \)  (4)
In the above equation, \( \boldsymbol{\mathcal{W}}\left( t \right) = - \boldsymbol{\mathcal{L}}\left( t \right) - diag\left( { - \boldsymbol{\mathcal{L}}\left( t \right)} \right) \), where \( diag\left( { - \boldsymbol{\mathcal{L}}\left( t \right)} \right) \) is the diagonal matrix of \( - \boldsymbol{\mathcal{L}}\left( t \right) \), so we can further get:

\( {\mathbf{X}}\left( {t + 1} \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) + \sum\limits_{m = 1}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right){\mathbf{R}}\left( {t - m} \right)} + \boldsymbol{\mathcal{W}}\left( t \right){\mathbf{R}}\left( t \right) \)  (5)
To simplify the notation, let \( \left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right) = \boldsymbol{\mathcal{I}}_{n} \) when \( m = 0 \); then (5) can be written as:

\( {\mathbf{X}}\left( {t + 1} \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) + \sum\limits_{m = 0}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right){\mathbf{R}}\left( {t - m} \right)} \)
Let \( \boldsymbol{\mathcal{R}}\left( {t - m} \right) = \boldsymbol{\mathcal{W}}\left( {t - m} \right){\mathbf{R}}\left( {t - m} \right) \), \( {\mathbf{Y}}\left( t \right) = \sum\limits_{m = 0}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \) and \( {\mathbf{B}}\left( t \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) \); thus we get:

\( {\mathbf{X}}\left( {t + 1} \right) = {\mathbf{B}}\left( t \right) + {\mathbf{Y}}\left( t \right) \)  (6)
Analyzing the random part \( {\mathbf{Y}}\left( t \right) \) of (6), we find that it is a linear combination of several random vectors; therefore it is itself a normally distributed random vector.
Definition 3:
(Noise Controllable) Assuming the consensus protocol converges to a consistent state vector \( {\mathbf{X}}^{*} = \left[ {x^{*} , \ldots ,x^{*} } \right]^{T} \) under noise-free conditions, we say the consensus protocol described in (6) is noise controllable if and only if \( \mathop {\lim }\limits_{t \to \infty } {\mathbf{B}}\left( t \right) = {\mathbf{X}}^{*} \) and there is a constant M such that \( \lim\limits_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}( t )}\right)} \right]_{i} \le M \), \( i = 1, \ldots ,n \).
3 Noise Uncontrollability of the Discrete Linear Consensus Protocol
For any initial state \( {\mathbf{X}}\left( 0 \right) \), assume that the consensus protocol in (2) converges to a consistent state \( {\mathbf{X}}^{ *} \) associated with \( {\mathbf{X}}\left( 0 \right) \). Under this condition, we discuss the impact of noise on the protocol.
Lemma 1:
Suppose \( {\mathbf{Y}}\left( t \right) \) is the random part of consensus protocol (6); then \( \mathop {\lim\limits}_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}\left( t \right)} \right)} \right]_{i} = \infty \) for any initial state \( {\mathbf{X}}\left( 0 \right) \), \( i = 1, \ldots ,n \).
Proof:
Let \( y(t,m) = \left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\boldsymbol{\mathcal{R}}\left( {t - m} \right) \); then \( {\mathbf{Y}}\left( t \right) = \sum\limits_{m = 0}^{t} {y\left( {t,m} \right)} \). Since the noise vectors at different times are mutually independent, we have:

\( \text{var} \left( {{\mathbf{Y}}\left( t \right)} \right) = \sum\limits_{m = 0}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)\text{var} \left( {\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \right)\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}\left( j \right)} } \right)^{T} } \)
where \( \text{var} \left( {\boldsymbol{\mathcal{R}}\left( {t - m} \right)} \right) = \boldsymbol{\mathcal{W}}\left( {t - m} \right)\text{var} \left( {{\mathbf{R}}\left( {t - m} \right)} \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right)^{T} \), \( m = 1, \ldots ,t \). It is known that in the absence of noise, \( \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) \) converges as \( t \to \infty \); hence for a fixed constant \( m \), there always holds:
where \( {\mathbf{V}}_{i} \left( m \right) = \left( {v_{i} , \ldots ,v_{i} } \right)^{T} \) is a constant vector related to m, \( \left( {\zeta \left( m \right)} \right)_{n \times n} \) is a constant matrix, and \( \zeta \left( m \right) > 0 \). So that:
□
In fact, when \( t \to \infty \), \( {\mathbf{B}}\left( t \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}\left( k \right)} {\mathbf{X}}\left( 0 \right) \) eventually reaches a consistent state. Similarly, for a fixed constant \( m \), \( \text{var} \left( {y(t,m)} \right) \) tends to a stable constant as \( t \to \infty \), and \( \text{var} \left( {{\mathbf{Y}}\left( t \right)} \right) \) is exactly the infinite series accumulated from the terms \( \text{var} \left( {y(t,m)} \right) \), so it does not converge; i.e., the consensus protocol (6) is noise uncontrollable.
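Lemma 1 can also be checked numerically with a hedged Monte-Carlo sketch: running the noisy protocol (3) many times and estimating the variance of one agent's state shows the variance growing with \( t \). The graph, weights, noise level \( \sigma = 1 \) and run counts below are illustrative assumptions.

```python
# Monte-Carlo sketch of Lemma 1: with i.i.d. noise on the neighbours' states,
# the variance of each agent's state keeps growing with t.  Graph, weights,
# sigma = 1 and the run counts are illustrative assumptions.
import random

alpha = [[0.0, 0.25, 0.25],
         [0.25, 0.0, 0.25],
         [0.25, 0.25, 0.0]]

def noisy_step(x, rng, sigma=1.0):
    """One step of (3): neighbours' states arrive corrupted by N(0, sigma^2) noise."""
    n = len(x)
    r = [rng.gauss(0.0, sigma) for _ in range(n)]
    return [x[i] + sum(alpha[i][j] * (x[j] + r[j] - x[i]) for j in range(n))
            for i in range(n)]

def var_of_x0(T, runs=300, seed=0):
    """Monte-Carlo estimate of var(x_0(T)) over independent noise sequences."""
    rng = random.Random(seed)
    finals = []
    for _ in range(runs):
        x = [1.0, 5.0, 9.0]
        for _ in range(T):
            x = noisy_step(x, rng)
        finals.append(x[0])
    mean = sum(finals) / runs
    return sum((f - mean) ** 2 for f in finals) / runs

# the estimate grows roughly linearly in T, in line with Lemma 1
```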
4 Noise Suppression Discrete Linear Consensus Protocol (NS-DLCP)
We reconstruct the state transition matrix \( \boldsymbol{\mathcal{A}}\left( t \right) \). Let \( \boldsymbol{\mathcal{L}}_{\varepsilon } \left( t \right) = \varepsilon \left( t \right)\boldsymbol{\mathcal{L}}\left( t \right) \), where \( \varepsilon \left( t \right):{\mathbb{R}}_{ + } \to {\mathbb{R}}_{ + } \) is a function of \( t \) with \( \varepsilon \left( t \right) > 0 \) and \( \varepsilon \left( t \right) \to 0 \) as \( t \to \infty \); we call \( \varepsilon \left( t \right) \) the noise suppression function. Let \( \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right) = \boldsymbol{\mathcal{I}}_{n} - \varepsilon \left( t \right)\boldsymbol{\mathcal{L}}\left( t \right) \); replacing \( \boldsymbol{\mathcal{A}}\left( t \right) \) in (2) with \( \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right) \), we get:

\( {\mathbf{X}}\left( {t + 1} \right) = \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right){\mathbf{X}}\left( t \right) \)  (7)
We call (7) the Noise Suppression Consensus Protocol (NS-CP). Rewriting (7) as the relation between \( {\mathbf{X}}\left( {t + 1} \right) \) and the initial state \( {\mathbf{X}}\left( 0 \right) \), and taking into account the noise carried by each agent, (7) becomes:

\( {\mathbf{X}}\left( {t + 1} \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( k \right)} {\mathbf{X}}\left( 0 \right) + \sum\limits_{m = 0}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( j \right)} } \right)\boldsymbol{\mathcal{W}}_{\varepsilon } \left( {t - m} \right){\mathbf{R}}\left( {t - m} \right)} \)  (8)

where \( \boldsymbol{\mathcal{W}}_{\varepsilon } \left( t \right) = \varepsilon \left( t \right)\boldsymbol{\mathcal{W}}\left( t \right) \).
Similarly, let \( \boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right) = \boldsymbol{\mathcal{W}}_{\varepsilon } \left( {t - m} \right){\mathbf{R}}\left( {t - m} \right) \), \( {\mathbf{Y}}_{\varepsilon } \left( t \right) = \sum\limits_{m = 0}^{t} {\left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( j \right)} } \right)\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right)} \) and \( {\mathbf{B}}_{\varepsilon } \left( t \right) = \prod\limits_{k = 0}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( k \right)} {\mathbf{X}}\left( 0 \right) \); then (8) is simplified as:

\( {\mathbf{X}}\left( {t + 1} \right) = {\mathbf{B}}_{\varepsilon } \left( t \right) + {\mathbf{Y}}_{\varepsilon } \left( t \right) \)  (9)
Lemma 2:
Suppose consensus protocol (2) converges to a consistent state \( {\mathbf{X}}^{*} \) under noise-free conditions. If the noise suppression function \( \varepsilon \left( t \right) \) is an infinitesimal of lower order than \( t^{ - 1} \), then \( \mathop {\lim\limits}_{t \to \infty } {\mathbf{B}}_{\varepsilon } \left( t \right) = {\mathbf{X}}^{*} \).
Proof:
In the noise-free case, (9) gives \( {\mathbf{X}}\left( {t + 1} \right) = {\mathbf{B}}_{\varepsilon } \left( t \right) \). From the conclusion in [11] we know that \( \left\| {{\mathbf{X}}\left( {t + 1} \right) - {\mathbf{X}}^{*} } \right\| \le \mu_{\varepsilon 2} \left( t \right)\left\| {{\mathbf{X}}\left( t \right) - {\mathbf{X}}^{*} } \right\| \) for any fixed \( t \), where \( \mu_{\varepsilon 2} \left( t \right) \) is the second largest eigenvalue of the matrix \( \frac{1}{2}\left( {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right) + \boldsymbol{\mathcal{A}}_{\varepsilon } \left( t \right)^{T} } \right) \). Let \( \lambda_{2} \left( t \right) \) be the second smallest eigenvalue of \( \frac{1}{2}\left( {\boldsymbol{\mathcal{L}}\left( t \right) + \boldsymbol{\mathcal{L}}\left( t \right)^{T} } \right) \); obviously, \( \mu_{\varepsilon 2} \left( t \right) = 1 - \varepsilon \left( t \right)\lambda_{2} \left( t \right) \), thus:

\( \left\| {{\mathbf{X}}\left( {t + 1} \right) - {\mathbf{X}}^{*} } \right\| \le \prod\limits_{k = 1}^{t} {\left( {1 - \varepsilon \left( k \right)\lambda_{2} \left( k \right)} \right)} \left\| {{\mathbf{X}}\left( 1 \right) - {\mathbf{X}}^{*} } \right\| \)
Let \( \lambda_{2}^{*} \) be the smallest of the second smallest eigenvalues \( \lambda_{2} \left( t \right) \) of \( \frac{1}{2}\left( {\boldsymbol{\mathcal{L}}\left( t \right) + \boldsymbol{\mathcal{L}}\left( t \right)^{T} } \right) \). According to the condition \( O\left( {\varepsilon (t)} \right) < O\left( {t^{ - 1} } \right) \), we can deduce that \( \mathop {\lim\limits}_{t \to \infty } \left( {1 - \varepsilon \left( t \right)\lambda_{2}^{*} } \right)^{t} = 0 \); and because \( \varepsilon \left( k \right) > 0 \), for \( \forall t \), \( 0 \le \prod\limits_{k = 1}^{t} {\left( {1 - \varepsilon \left( k \right)\lambda_{2} \left( k \right)} \right)} \le \left( {1 - \varepsilon \left( t \right)\lambda_{2}^{*} } \right)^{t} \). Letting \( t \to \infty \), the squeeze theorem yields \( \mathop {\lim\limits}_{t \to \infty } \prod\limits_{k = 1}^{t} {\left( {1 - \varepsilon \left( k \right)\lambda_{2} \left( k \right)} \right)} = 0 \).
Then we have \( \mathop {\lim\limits}_{t \to \infty } \left\| {{\mathbf{B}}_{\varepsilon } \left( t \right) - {\mathbf{X}}^{*} } \right\| = 0 \), i.e. \( \mathop {\lim\limits}_{t \to \infty } {\mathbf{B}}_{\varepsilon } \left( t \right) = {\mathbf{X}}^{*} \). □
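The product bound in this proof can be checked numerically. Taking, as assumed example values, \( \varepsilon \left( t \right) = t^{ - 0.75} \) (an order strictly between \( t^{-1} \) and \( t^{-0.5} \)) and a constant second smallest eigenvalue \( \lambda_{2} = 0.75 \), the product \( \prod\nolimits_{k = 1}^{t} {\left( {1 - \varepsilon \left( k \right)\lambda_{2} } \right)} \) indeed tends to 0:

```python
# Numeric sketch of the product bound in Lemma 2: with eps(k) = k**-0.75 the
# product prod_{k=1..T} (1 - eps(k) * lam2) shrinks toward 0, so the
# noise-free part B_eps(t) still contracts to consensus.  lam2 = 0.75 is an
# assumed, constant second smallest Laplacian eigenvalue.

def contraction(T, lam2=0.75, order=0.75):
    """prod_{k=1..T} (1 - eps(k) * lam2) with eps(k) = k**-order."""
    prod = 1.0
    for k in range(1, T + 1):
        prod *= 1.0 - lam2 * k ** (-order)
    return prod

# contraction(T) stays in (0, 1) and decreases toward 0 as T grows
```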
Lemma 3:
Suppose consensus protocol (2) converges to a consistent state \( {\mathbf{X}}^{*} \) under noise-free conditions. If the noise suppression function \( \varepsilon \left( t \right) \) is an infinitesimal of higher order than \( t^{ - 0.5} \), then there is a constant M such that \( \lim_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}_{\varepsilon } \left( t \right)} \right)} \right]_{i} \le M \).
Proof:
Let \( \left\| \bullet \right\|_{\infty } \) denote the row-sum norm of a matrix. Investigating the row-sum norm of the variance matrix of \( \boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right) \), we have:

\( \left\| {\text{var} \left( {\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right)} \right)} \right\|_{\infty } = \varepsilon^{2} \left( {t - m} \right)\left\| {\boldsymbol{\mathcal{W}}\left( {t - m} \right)\text{var} \left( {{\mathbf{R}}\left( {t - m} \right)} \right)\boldsymbol{\mathcal{W}}\left( {t - m} \right)^{T} } \right\|_{\infty } \)
Let \( y_{\varepsilon } (t,m) = \left( {\prod\limits_{j = t - m + 1}^{t} {\boldsymbol{\mathcal{A}}_{\varepsilon } \left( j \right)} } \right)\boldsymbol{\mathcal{R}}_{\varepsilon } \left( {t - m} \right) \); then \( {\mathbf{Y}}_{\varepsilon } \left( t \right) = \sum\limits_{m = 0}^{t} {y_{\varepsilon } (t,m)} \). Studying the norm of the variance matrix of \( y_{\varepsilon } (t,m) \), we have:
In fact, \( \left[ {\text{var} \left( {y_{\varepsilon } \left( {t,m} \right)} \right)} \right]_{i} \) is exactly the ith diagonal element of the variance matrix \( \text{var} \left( {y_{\varepsilon } \left( {t,m} \right)} \right) \). Denote by \( \rho \) a constant such that \( \left[ {\text{var} \left( {y_{\varepsilon } (t,m)} \right)} \right]_{i} \le \varepsilon^{2} \left( {t - m} \right)\rho \) for all i, t and m; then

\( \left[ {\text{var} \left( {{\mathbf{Y}}_{\varepsilon } \left( t \right)} \right)} \right]_{i} \le \sum\limits_{m = 0}^{t} {\varepsilon^{2} \left( {t - m} \right)\rho } = \rho \sum\limits_{m = 0}^{t} {\varepsilon^{2} \left( m \right)} \)
According to the condition \( O\left( {\varepsilon (t)} \right) > O\left( {t^{ - 0.5} } \right) \), the series \( \rho \sum\limits_{m = 0}^{t} {\varepsilon^{2} \left( m \right)} \) converges as \( t \to \infty \). Let \( \mathop {\lim\limits}_{t \to \infty } \rho \sum\limits_{m = 0}^{t} {\varepsilon^{2} \left( m \right)} = M \); we obtain \( \lim_{t \to \infty } \left[ {\text{var} \left( {{\mathbf{Y}}_{\varepsilon } (t)} \right)} \right]_{i} \le M \). □
From Lemmas 2 and 3, it is easy to obtain:
Theorem 1:
Suppose consensus protocol (2) converges to a consistent state \( {\mathbf{X}}^{*} \) under noise-free conditions. If the order of \( \varepsilon \left( t \right) \) satisfies \( O\left( {t^{ - 0.5} } \right) < O\left( {\varepsilon (t)} \right) < O\left( {t^{ - 1} } \right) \), then NS-DLCP (9) is noise controllable.
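An end-to-end Monte-Carlo sketch of Theorem 1 runs the noisy protocol with the update scaled by \( \varepsilon \left( t \right) = \left( {t + 1} \right)^{ - 0.75} \), whose order lies strictly between \( t^{-0.5} \) and \( t^{-1} \). The graph, weights, noise level and time horizons are illustrative assumptions.

```python
# Sketch of Theorem 1: the noisy update of Section 3, now scaled by the noise
# suppression function eps(t) = (t+1)**-0.75.  Graph, weights, sigma = 1 and
# the horizons are illustrative assumptions.
import random

alpha = [[0.0, 0.25, 0.25],
         [0.25, 0.0, 0.25],
         [0.25, 0.25, 0.0]]

def ns_step(x, t, rng, sigma=1.0):
    """One NS-DLCP step: the whole update is scaled by eps(t) = (t+1)**-0.75."""
    eps = (t + 1) ** -0.75
    n = len(x)
    r = [rng.gauss(0.0, sigma) for _ in range(n)]
    return [x[i] + eps * sum(alpha[i][j] * (x[j] + r[j] - x[i])
                             for j in range(n))
            for i in range(n)]

rng = random.Random(1)
finals = []
for _ in range(100):
    x = [1.0, 5.0, 9.0]
    for t in range(1500):
        x = ns_step(x, t, rng)
    finals.append(x[0])
# finals cluster around the noise-free consensus value 5.0 with bounded
# spread, instead of the variance growing with t as in Section 3
```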
5 Conclusion
Based on the above theoretical results and discussion, Table 1 summarizes the main conclusions of this paper.
Our main conclusions are:
- I. If ε(t) = 1 (equivalent to \( \varepsilon \left( t \right) \) being useless) or O(ε(t)) ≤ O(t−0.5), the deterministic part Bε(t) of linear consensus protocol (9) can converge to the consistent state vector \( {\mathbf{X}}^{*} \), but the variance of its random part Yε(t) is unbounded. In this case, the linear consensus protocol is noise uncontrollable.
- II. When O(ε(t)) ≥ O(t−1), the variance of the random part Yε(t) is bounded, but the deterministic part Bε(t) cannot converge to the consistent state vector \( {\mathbf{X}}^{*} \); in this case, the linear consensus protocol is also noise uncontrollable.
- III. If O(t−0.5) < O(ε(t)) < O(t−1), Bε(t) converges to the consistent state vector \( {\mathbf{X}}^{*} \) and the variance of Yε(t) is bounded, so the linear consensus protocol is noise controllable. In this case, every agent's state follows a normal distribution centered at x*.
References
Reynolds, C.W.: Flocks, herds, and schools: a distributed behavioral model. In: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, pp. 25–34. ACM, New York (1987)
Vicsek, T., Czirok, A., Ben-Jacob, E., et al.: Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75(6), 1226–1229 (1995)
Seneta, E.: Non-negative Matrices and Markov Chains. Springer, New York (2006)
Frommer, A., Szyld, D.B.: On asynchronous iterations. J. Comput. Appl. Math. 123, 201–216 (2000)
Strikwerda, J.C.: A probabilistic analysis of asynchronous iteration. J. Linear Algebra Appl. 349, 125–154 (2002)
Olfati-Saber, R., Fax, J.A., Murray, R.M.: Consensus and cooperation in networked multi-agent systems. Proc. IEEE 95(1), 215–233 (2007)
Olfati-Saber, R.: Evolutionary dynamics of behavior in social networks. In: Proceedings of the 46th IEEE Conference on Decision and Control, 12–14 December 2007 at the Hilton New Orleans Riverside in New Orleans, Louisiana USA (2007)
Olfati-Saber, R., Jalalkamali, P.: Coupled distributed estimation and control for mobile sensor networks. IEEE Trans. Autom. Control 57(9), 2609–2614 (2012)
Olfati-Saber, R.: Flocking for multi-agent dynamic systems: algorithms and theory. IEEE Trans. Autom. Control 51(3), 401–420 (2006)
Yu, C.-H., Nagpal, R.: A self-adaptive framework for modular robots in dynamic environment: theory and applications. Int. J. Robot. Res. 30(8), 1015–1036 (2011)
Yu, C.-H.: Biologically-Inspired Control for Self-Adaptive Multiagent Systems. Doctoral Thesis, Harvard University (2010)
Yu, C.-H., Nagpal, R.: Biologically-inspired control for multi-agent self-adaptive tasks. In: Proceedings of the 24th AAAI Conference on Artifical Intelligence, 11–15 July, Atlanta, pp. 1702–1709 (2010)
Li, Z.K., Duan, Z.S., Chen, G.R., Huang, L.: Consensus of multi-agent systems and synchronization of complex networks: a unified viewpoint. IEEE Trans. Circuits Syst. I Regul. Pap. 57(1), 213–224 (2010)
Li, Z., Duan, Z., Chen, G.: Dynamic consensus of linear multi-agent systems. Control Theory Appl. IET 5(1), 19–28 (2011)
Li, Z., Liu, X., Ren, W., Xie, L.: Distributed tracking control for linear multiagent systems with a leader of bounded unknown input. IEEE Trans. Autom. Control 58(2), 518–523 (2013)
Tran, T.M.D., Kibangou, A.Y.: Distributed design of finite-time average consensus protocols. In: 4th IFAC Workshop on Distributed Estimation and Control in Networked Systems, vol.4, pp. 227–233. Rhine Moselle Hall, Koblenz, Germany, September 2013
Ni, Y.H., Li, X.: Consensus seeking in multi-agent systems with multiplicative measurement noises. Syst. Control Lett. 62(5), 430–437 (2013)
Djaidja, S., Wu, Q.H.: Leaderless consensus seeking in multi-agent systems under multiplicative measurement noises and switching topologies. In: Proceedings of the 33rd Chinese Control Conference, Nanjing, China, July 2014
Djaidja, S., Wu, Q.H.: Leader-following consensus for single-integrator multi-agent systems with multiplicative noises in directed topologies. Int. J. Syst. Sci. (2014)
Huang, M., Manton, J.H.: Coordination and consensus of networked agents with noisy measurements: stochastic algorithms and asymptotic behavior. SIAM J. Control Optim. 48(1), 134–161 (2009)
Huang, M., Manton, J.H.: Stochastic consensus seeking with noisy and directed inter-agent communication: fixed and randomly varying topologies. IEEE Trans. Autom. Control 55(1), 235–241 (2010)
Li, T., Zhang, J.-F.: Mean square average-consensus under measurement noises and fixed topologies: necessary and sufficient conditions. Automatica 45(8), 1929–1936 (2009)
Li, T., Zhang, J.-F.: Consensus conditions of multi-agent systems with time-varying topologies and stochastic communication noises. IEEE Trans. Autom. Control 55(9), 2043–2057 (2010)
Wen, G., Duan, Z., Li, Z., Chen, G.: Flocking of multi-agent dynamical systems with intermittent nonlinear velocity measurements. Int. J. Robust Nonlinear Control 22(16), 1790–1805 (2012)
Liu, S., Xie, L., Zhang, H.: Distributed consensus for multi-agent systems with delays and noises in transmission channels. Automatica 47(5), 920–934 (2011)
Chen, Y., Liu, J., Han, F., Yu, X.: On the cluster consensus of discrete-time multi-agent systems. Syst. Control Lett. 60(7), 517–523 (2011)
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (Nos. 61272244, 61175053, 61173173, 61035003, 61202212) and the National Key Basic Research Program of China (No. 2013CB329502).
© 2016 IFIP International Federation for Information Processing
Dou, Q., Shi, Z., Pan, Y. (2016). Noisy Control About Discrete Liner Consensus Protocol. In: Shi, Z., Vadera, S., Li, G. (eds) Intelligent Information Processing VIII. IIP 2016. IFIP Advances in Information and Communication Technology, vol 486. Springer, Cham. https://doi.org/10.1007/978-3-319-48390-0_24
Print ISBN: 978-3-319-48389-4
Online ISBN: 978-3-319-48390-0