Distributed Change Detection via Average Consensus over Networks

  • Conference paper

In: Stochastic Models, Statistics and Their Applications (SMSA 2019)

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 294)

Abstract

Distributed change-point detection is a fundamental problem in real-time monitoring with sensor networks. We propose a distributed detection algorithm in which each sensor exchanges only its local CUSUM statistic with its neighbors, following an average consensus scheme, and an alarm is raised whenever a local consensus statistic exceeds a prespecified global threshold. We provide theoretical performance bounds showing that, under mild conditions, the performance of the fully distributed scheme can match that of centralized algorithms. Numerical experiments demonstrate the good performance of the algorithm, especially in detecting asynchronous changes.



Acknowledgements

We would like to thank Professor Ansgar Steland for the opportunity to submit an invited paper. This work was partially supported by NSF grants CCF-1442635, CMMI-1538746, DMS-1830210, an NSF CAREER Award CCF-1650913, and an S.F. Express award.

Author information

Correspondence to Yao Xie.

Appendix

For simplicity, we first index the sensors from 1 to N. Use the vector \(\mathbf{L}^t\) to denote \(\left( L(\mathbf{x}_1^{t}),\ldots ,L(\mathbf{x}_N^{t})\right) ^\mathrm{T}\), the vector \(\mathbf{y}^t\) to denote \(\left( y_1^t,\ldots ,y_N^t\right) ^\mathrm{T}\), and the vector \(\mathbf{z}^t\) to denote \(\left( z_1^t,\ldots ,z_N^t\right) ^\mathrm{T}\). Our algorithm can now be rewritten as

$$\begin{aligned}&\mathbf{y}^{t+1} = ( \mathbf{y}^t + \mathbf{L}^{t+1})^+, \mathbf{z}^{t+1} = \mathbf{W}(\mathbf{z}^t + \mathbf{y}^{t+1}-\mathbf{y}^t),&T_s = \inf \big \{t>0: \Vert \mathbf{z}^t \Vert _{\infty } \ge b\big \}. \end{aligned}$$
(3)
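For concreteness, the following minimal NumPy sketch (ours, not from the paper) simulates recursion (3), assuming a Gaussian mean-shift at every sensor and a symmetric doubly stochastic weight matrix on a ring graph; the model, graph, and all parameter values are illustrative assumptions.

```python
import numpy as np

def ring_weights(N, w=0.3):
    # Symmetric, doubly stochastic weights on a ring graph: W 1 = 1, W^T = W.
    W = np.eye(N) * (1.0 - 2.0 * w)
    for i in range(N):
        W[i, (i - 1) % N] = w
        W[i, (i + 1) % N] = w
    return W

def consensus_cusum(W, b, mu=1.0, change_time=100, max_t=10_000, seed=0):
    # Recursion (3): y^{t+1} = (y^t + L^{t+1})^+ (elementwise CUSUM update),
    # z^{t+1} = W (z^t + y^{t+1} - y^t),  T_s = inf{t : ||z^t||_inf >= b}.
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    y, z = np.zeros(N), np.zeros(N)
    for t in range(1, max_t + 1):
        # Pre-change N(0,1); post-change N(mu,1) at every sensor.
        x = rng.normal(mu if t > change_time else 0.0, 1.0, size=N)
        L = mu * x - mu**2 / 2          # log-likelihood ratio of one sample
        y_new = np.maximum(y + L, 0.0)
        z = W @ (z + y_new - y)
        y = y_new
        if np.max(np.abs(z)) >= b:      # ||z^t||_inf >= b: raise the alarm
            return t
    return None

print(consensus_cusum(ring_weights(8), b=30.0))  # stops shortly after t = 100
```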

First, we prove some useful lemmas before reaching the main results.

Remark 1

Since \(\mathbf{W}\mathbf{1} = \mathbf{1}\), \(\mathbf{W}^\mathrm{T}=\mathbf{W}\) and \(z^0_v = y^0_v=0\), a simple proof by mathematical induction verifies that

$$\begin{aligned} \sum _{v\in \mathcal {V}} z_v^t = \sum _{v\in \mathcal {V}} y_v^t \text{ holds } \text{ for } \text{ all } t. \end{aligned}$$
(4)
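A quick numerical check (ours) of the induction behind (4), assuming the uniform averaging matrix \(\mathbf{W} = \frac{1}{N}\mathbf{1}\mathbf{1}^\mathrm{T}\):

```python
import numpy as np

# Induction step, checked numerically: if sum(z) == sum(y) before an update,
# then after y_new = (y + L)^+ and z <- W (z + y_new - y) we still have
# sum(z) == sum(y_new), because W is doubly stochastic (1^T W = 1^T).
rng = np.random.default_rng(1)
N = 5
W = np.full((N, N), 1.0 / N)            # a simple doubly stochastic choice
y, z = np.zeros(N), np.zeros(N)         # z^0 = y^0 = 0 is the base case
for t in range(100):
    L = rng.normal(-0.5, 1.0, size=N)   # pre-change increments, mu_1 < 0
    y_new = np.maximum(y + L, 0.0)
    z = W @ (z + y_new - y)
    y = y_new
    assert np.isclose(z.sum(), y.sum())  # identity (4) holds at every t
```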

Lemma 1

(Hoeffding Inequality) Let \(X_i\) be independent, mean-zero, \(\sigma _i^2\)-sub-Gaussian random variables. Then, for any \(K>0\), \( \mathbb {P}(\sum _{i=1}^{n} X_{i}\ge K)\le \exp \left( -\frac{K^2}{2\sum _{i=1}^{n}\sigma _i^2}\right) . \)
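As a sanity check (ours), one can compare the bound of Lemma 1 with the empirical tail of a Gaussian sum, since an N(0,1) variable is 1-sub-Gaussian:

```python
import numpy as np

# Monte Carlo check: P(X_1 + ... + X_n >= K) <= exp(-K^2 / (2 n)) for
# i.i.d. standard normals, which are 1-sub-Gaussian.
rng = np.random.default_rng(2)
n, K, reps = 20, 10.0, 200_000
sums = rng.normal(size=(reps, n)).sum(axis=1)
print((sums >= K).mean())            # empirical tail, roughly 0.013
print(np.exp(-K**2 / (2 * n)))       # Hoeffding bound, exp(-2.5) ~ 0.082
```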

Lemma 2

Consider a sequence of random variables \(X_k {\mathop {\sim }\limits ^{i.i.d.}}\mathcal {P}\), for \(k=1,2,\ldots ,t\), where \(\mathcal {P}\) is a sub-Gaussian distribution with mean \(\mu _1<0\) and variance \(\sigma _1^2\). For \(K>0\) large enough, we have

$$\begin{aligned} \sum _{k=1}^{t}\mathbb {P}\left( \sum _{q=1}^{k} X_{q} > K\right) < -\frac{2K}{\mu _1} \exp \left( \frac{2K\mu _1}{\sigma _1^{2}}\right) . \end{aligned}$$

Proof

Case 1. For \(0 < t \le [-\frac{2K}{\mu _1}]\), by the Hoeffding inequality (Lemma 1), we have

$$\begin{aligned} \sum _{k=1}^{t}\mathbb {P}\left( \sum _{q=1}^{k} X_{q} > K\right) < \sum _{k=1}^{t} \mathrm{exp}\left( -\frac{1}{2}(\frac{K-k\mu _1}{\sqrt{k}\sigma _1})^{2}\right) . \end{aligned}$$
(5)

Using \(\frac{K-k\mu _1}{\sqrt{k}} \ge 2\sqrt{-K\mu _1}\) and \(t \le -\frac{2K}{\mu _1}\), we obtain

$$\begin{aligned} \sum _{k=1}^{t}\mathbb {P}\left( \sum _{q=1}^{k} X_{q} > K\right) < -\frac{2K}{\mu _1} \mathrm{exp}\left( \frac{2K\mu _1}{\sigma _1^{2}}\right) . \end{aligned}$$
(6)

Case 2. For \(t \ge [-\frac{2K}{\mu _1}]+1\), by (6), we have

$$\begin{aligned} \sum _{k=1}^{t}\mathbb {P}\left( \sum _{q=1}^{k} X_{q}> K\right) < -\frac{2K}{\mu _1} \mathrm{exp}\left( \frac{2K\mu _1}{\sigma _1^{2}}\right) + \sum _{k=[-\frac{2K}{\mu _1}]+1}^{t}\mathbb {P}\left( \sum _{q=1}^{k} X_{q} > K\right) . \end{aligned}$$
(7)

Utilizing the Hoeffding inequality and \(k\ge [-\frac{2K}{\mu _1}]+1\), we obtain

$$\begin{aligned} \mathbb {P}\left( \sum _{q=1}^{k} X_{q} > K\right) <\mathrm{exp}\left( -\frac{1}{2}(\frac{K-k\mu _1}{\sqrt{k}\sigma _1})^{2}\right) \le \mathrm{exp}\left( \frac{9K\mu _1}{4\sigma _1^2}\right) . \end{aligned}$$
(8)

Besides, for \(k \ge [-\frac{2K}{\mu _1}]+1\), we have

$$\begin{aligned} \frac{ \mathrm{exp}\left( -\frac{1}{2}(\frac{K-(k+1)\mu _1}{\sqrt{k+1}\sigma _1})^{2}\right) }{ \mathrm{exp}\left( -\frac{1}{2}(\frac{K-k\mu _1}{\sqrt{k}\sigma _1})^{2}\right) } = \mathrm{exp}\left( -\frac{\mu _1^2}{2\sigma _1^2}+\frac{K^2}{2k(k+1)\sigma _1^2}\right) < \mathrm{exp}\left( -\frac{3\mu _1^2}{8\sigma _1^2}\right) . \end{aligned}$$
(9)

Then, from the Hoeffding inequality, (8) and (9), we derive

$$\begin{aligned}&\sum _{k=[-\frac{2K}{\mu _1}]+1}^{t}\mathbb {P}\left( \sum _{q=1}^{k} X_{q} > K\right)< \sum _{k=[-\frac{2K}{\mu _1}]+1}^{t}\mathrm{exp}\left( -\frac{1}{2}(\frac{K-k\mu _1}{\sqrt{k}\sigma _1})^{2}\right) \\<&\sum _{k=[-\frac{2K}{\mu _1}]+1}^{t} \mathrm{exp}\left( \frac{9K\mu _1}{4\sigma _1^2}\right) \times \mathrm{exp}\left( -\frac{3\mu _1^2}{8\sigma _1^2} \left( k-[-\frac{2K}{\mu _1}]-1\right) \right) <\frac{ \mathrm{exp}\left( \frac{9K\mu _1}{4\sigma _1^2}\right) }{1-\mathrm{exp}\left( -\frac{3\mu _1^2}{8\sigma _1^2}\right) }.\nonumber \end{aligned}$$
(10)

From (10), we know that the second term on the RHS of (7) is small compared with the first term provided K is large enough, so we can neglect it and obtain

$$\begin{aligned} \sum _{k=1}^{t}\mathbb {P}\left( \sum _{q=1}^{k} X_{q} > K\right) < -\frac{2K}{\mu _1} \mathrm{exp}\big (\frac{2K\mu _1}{\sigma _1^{2}}\big ). \end{aligned}$$
(11)
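A numerical illustration (ours) of Lemma 2 in the Gaussian case, where the left-hand side can be evaluated exactly:

```python
import math

# Check of Lemma 2 for X_q ~ N(mu_1, sigma_1^2) with mu_1 < 0:
# sum_k P(sum_{q<=k} X_q > K)  vs  (-2K/mu_1) exp(2 K mu_1 / sigma_1^2).
mu1, sigma1, K = -0.5, 1.0, 20.0

def tail(k):
    # Exact Gaussian tail: P(sum of k i.i.d. N(mu1, sigma1^2) terms > K).
    z = (K - k * mu1) / (math.sqrt(k) * sigma1)
    return 0.5 * math.erfc(z / math.sqrt(2))

lhs = sum(tail(k) for k in range(1, 5000))     # effectively the full series
bound = (-2 * K / mu1) * math.exp(2 * K * mu1 / sigma1**2)
print(f"{lhs:.3e} < {bound:.3e}")              # series sits well below the bound
```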

Note that \( \mathbb E_{f_1} [L(\mathbf{x}_j^t)] = \mu _1 < 0 \), \( \mathbb E_{f_2} [L(\mathbf{x}_j^t)] = \mu _2 > 0 \), \( \text{ Var }_{f_1} [L(\mathbf{x}_j^t)] = \sigma _1^2 \), and \( \text{ Var }_{f_2} [L(\mathbf{x}_j^t)] = \sigma _2^2 \).

Given \(\varepsilon >0\) and \(p>0\), define the event

$$B(\varepsilon ,p)= \{ |L(\mathbf{x}_i^t)|< \varepsilon b, \text{ for } i=1,\ldots ,N \text{ and } t=1,\ldots ,p \},$$

where b is the prespecified detection threshold. In addition, we use \(\{T_s = p\}\) to denote the event that our algorithm raises an alarm at \(t=p\). We have the following lemma.

Lemma 3

For any \(t \le p \), we have

$$\begin{aligned} \{T_s = t\}\wedge B(\varepsilon ,p) \subset \big \{ \frac{\sum _{j=1}^{N} y_{j}^{t}}{N}> (1- \frac{ \sqrt{N}\varepsilon \lambda _{2} }{1- \lambda _{2}})b \big \} \wedge B(\varepsilon ,p). \end{aligned}$$

Proof

Note that, by eigen-decomposition, \( \mathbf{W} = \frac{1}{N} \mathbf{1} \mathbf{1}^\intercal + \sum _{j=2}^N \lambda _j w_j w_j^\intercal . \) Throughout the proof, we work on the event \(B(\varepsilon ,p)\). First, by the recursive form of our algorithm in (3), the identity (4) and the definition of \(B(\varepsilon ,p)\), for any sensor j, we have

$$\begin{aligned}&|z_{j}^{t} - \frac{\sum _{i=1}^{N} y_{i}^{t}}{N} | = | z_{j}^{t} - \frac{\sum _{i=1}^{N} z_{i}^{t}}{N} | \le \Vert \mathbf{z}^t - \frac{\sum _{i=1}^{N} z_{i}^{t}}{N} \mathbf{1}\Vert _2 = \Vert \sum _{k=1}^{t} (\mathbf{W}^{t-k+1}-\frac{1}{N}\mathbf{1}\mathbf{1}^\mathrm{T})(\mathbf{y}^{k}-\mathbf{y}^{k-1}) \Vert _{2}\\ \le&\sum _{k=1}^{t} \lambda _{2}^{t-k+1} \Vert \mathbf{y}^{k}-\mathbf{y}^{k-1} \Vert _{2} \le \sum _{k=1}^{t} \lambda _{2}^{t-k+1} \Vert \mathbf{L}^{k} \Vert _{2} \le \sum _{k=1}^{t} \lambda _{2}^{t-k+1} \sqrt{N}\varepsilon b \le \frac{ \sqrt{N}\varepsilon \lambda _{2}b}{1- \lambda _{2}}, \end{aligned}$$

where \(\lambda _2\) is the second largest eigenvalue modulus of \(\mathbf{W}\). If \(\{T_s = t\}\) occurs, then \(z_{j}^{t}\ge b\) holds for some j, which, together with the inequality above, leads to \( \frac{\sum _{j=1}^{N} y_{j}^{t}}{N}> (1- \frac{ \sqrt{N}\varepsilon \lambda _{2}}{1- \lambda _{2}})b. \)
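The deviation bound derived in this proof can be checked numerically; the sketch below (ours) builds a ring-graph \(\mathbf{W}\), computes \(\lambda_2\), and verifies the bound along a run whose increments are clipped so that \(|L| \le \varepsilon b\):

```python
import numpy as np

# Check of the deviation bound: max_j |z_j^t - mean(y^t)| is at most
# sqrt(N) * eps * lam2 * b / (1 - lam2) when every increment obeys |L| <= eps*b.
rng = np.random.default_rng(3)
N, w = 8, 0.3
W = np.eye(N) * (1 - 2 * w)
for i in range(N):
    W[i, (i - 1) % N] = w
    W[i, (i + 1) % N] = w
lam2 = sorted(abs(np.linalg.eigvalsh(W)))[-2]  # second largest eigenvalue modulus
b, eps = 50.0, 0.02
bound = np.sqrt(N) * eps * lam2 * b / (1 - lam2)
y, z = np.zeros(N), np.zeros(N)
for t in range(500):
    L = np.clip(rng.normal(0.3, 1.0, size=N), -eps * b, eps * b)
    y_new = np.maximum(y + L, 0.0)
    z = W @ (z + y_new - y)
    y = y_new
    assert np.max(np.abs(z - y.mean())) <= bound + 1e-9
```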

Lemma 4

Let \(Y_1,\ldots ,Y_N\) be independent random variables. Take any integer \(M>N\) and let

$$\begin{aligned} C(M,N) =\big \{(i_1,\ldots ,i_N): i_j \in \mathbb {N} \ \text{ and }\ M-N \le \sum _{j=1}^{N}i_{j} \le M \big \}. \end{aligned}$$

Then we have

$$\begin{aligned} \mathbb {P}\left( \sum _{j=1}^{N} Y_j> K \right) \le \sum _{C(M,N)} \prod _{j=1}^{N}\mathbb {P}\left( Y_j > \frac{i_j K}{M} \right) . \end{aligned}$$

Proof

\(\forall (y_1,\ldots ,y_N) \in \{(Y_1,\ldots ,Y_N): \sum _{j=1}^{N} Y_{j}> K\}\), take \( i_j = \left[ \frac{y_j M}{ \sum _{j=1}^{N} y_{j}}\right] \) for \(j=1,\ldots ,N \), where \(\left[ x\right] \) denotes the largest integer smaller than x. We can verify that \(M-N \le \sum _{j=1}^{N} i_j \le M\) and \(y_{j} > i_j K/M\). To see \(M-N \le \sum _{j=1}^{N} i_j\), notice that \(i_j\ge \frac{y_jM}{\sum _{j=1}^N y_j}-1\). Therefore, we have

$$\begin{aligned} \big \{(Y_1,\ldots ,Y_N):\ \sum _{j=1}^{N} Y_{j}> K\big \} \subset \bigcup _{ C(M,N)} \left\{ (Y_1,\ldots ,Y_N): \ Y_{j} > \frac{i_j K}{M} \text{ for } j=1,\ldots ,N \right\} . \end{aligned}$$

Since the \(Y_{j}\)’s are mutually independent, we obtain

$$\begin{aligned} \mathbb {P}\left( \sum _{j=1}^{N} Y_{j}> K \right) \le \sum _{C(M,N)} \prod _{j=1}^{N}\mathbb {P}\left( Y_{j} > \frac{i_j K}{M} \right) . \end{aligned}$$
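A brute-force check (ours) of this covering argument for small N and M, using the floor in place of \([\cdot]\), which satisfies the same two properties used in the proof:

```python
import itertools
import numpy as np

# Whenever sum(y) > K, the tuple i_j = floor(y_j * M / sum(y)) lies in C(M,N)
# and satisfies y_j > i_j * K / M for every j.
N, M, K = 3, 6, 10.0
C = {c for c in itertools.product(range(M + 1), repeat=N)
     if M - N <= sum(c) <= M}
rng = np.random.default_rng(4)
for _ in range(10_000):
    y = rng.uniform(0.1, 10.0, size=N)
    if y.sum() > K:
        i = tuple(int(v * M / y.sum()) for v in y)
        assert i in C
        assert all(v > ij * K / M for v, ij in zip(y, i))
```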

5.1 Proof of Theorem 1

First, we calculate the probability that our algorithm stops within time p. The value of p is to be specified later.

$$\begin{aligned} \nonumber&\sum _{t=1}^{p} \mathbb {P}(T_s=t) = \sum _{t=1}^{p} \left( \mathbb {P}\left( \{T_s=t\} \wedge B(\varepsilon ,p)\right) + \mathbb {P}\left( \{T_s=t\} \wedge \bar{B}(\varepsilon ,p)\right) \right) \\ \nonumber \le&\sum _{t=1}^{p} \mathbb {P}\left( \{T_s=t\}\wedge B(\varepsilon ,p)\right) + \mathbb {P}\left( \bar{B}(\varepsilon ,p)\right) \\ \le&\sum _{t=1}^{p} \mathbb {P}\left( \{T_s=t\} \wedge B(\varepsilon ,p) \right) + 2Np\times \mathrm{exp}\left( -\frac{(\varepsilon b - \mu _1)^2}{2\sigma _1^2}\right) , \end{aligned}$$

where the last inequality follows from the Hoeffding inequality and the assumptions in Sect. 3. The value of \(\varepsilon \) is to be specified later.

Denote \(\bar{b} = N(1- \frac{ \sqrt{N}\varepsilon \lambda _{2}}{1- \lambda _{2}})b\); then \(\bar{b}\) also tends to infinity as b tends to infinity, provided \(\varepsilon \) is small enough. By Lemma 3, we have

$$\begin{aligned} \sum _{t=1}^{p} \mathbb {P}(T_s=t) \le&\sum _{t=1}^{p} \mathbb {P}\left( \left\{ \sum _{j=1}^{N} y_{j}^{t}> \bar{b}\right\} \wedge B(\varepsilon ,p)\ \right) + 2Np \times \mathrm{exp}\left( -\frac{(\varepsilon b - \mu _1)^2}{2\sigma _1^2}\right) . \end{aligned}$$
(12)

By Lemma 4, we have

$$\begin{aligned} \mathbb {P}\left( \left\{ \sum _{j=1}^{N} y_{j}^{t}> \bar{b}\right\} \wedge B(\varepsilon ,p)\ \right) \le \sum _{C(M,N)} \prod _{j=1}^{N}\mathbb {P}\left( \left\{ y_{j}^{t} > \frac{i_j \bar{b}}{M}\right\} \wedge B_j(\varepsilon ,p) \right) , \end{aligned}$$
(13)

where \(B_j(\varepsilon , p) = \{ |L(\mathbf{x}_j^t)|< \varepsilon b, \text{ for } t=1,\ldots ,p \}\) and the value of M is to be specified later. If \(y_{j}^{t}>i_j \bar{b}/M\), then there must exist \(1\le k\le t \) such that \(y_{j}^{t} = \sum _{q=k}^{t} L(\mathbf{x}_j^q) > i_j \bar{b}/M\). So, we have

$$\begin{aligned} \mathbb {P}\left( \left\{ y_{j}^{t}> \frac{i_j \bar{b}}{M}\right\} \wedge B_j(\varepsilon ,p) \right) \le \sum _{k=1}^{t} \mathbb {P}\left( \left\{ \sum _{q=k}^{t} L(\mathbf{x}_j^q)> \frac{i_j \bar{b}}{M}\right\} \wedge B_j(\varepsilon ,p) \right) . \end{aligned}$$
(14)

The effect of \(B_j(\varepsilon ,p)\) in (14) can be interpreted as truncating the original distribution of \(L(\cdot )\). The truncated distribution is still sub-Gaussian, and its mean and variance remain essentially unchanged provided \(\varepsilon b\) is large enough.

If \(i_j=0\), we simply bound the probability in (14) by 1. If \(i_j\ne 0\), by Lemma 2, we have

$$\begin{aligned} \sum _{k=1}^{t} \mathbb {P}\left( \left\{ \sum _{q=k}^{t} L(\mathbf{x}_j^q) > \frac{i_j \bar{b}}{M}\right\} \wedge B_j(\varepsilon ,p) \right) < -\frac{2i_j \bar{b}}{M\mu _1} \mathrm{exp}\left( \frac{2i_j \bar{b}\mu _1}{M\sigma _1^{2}}\right) . \end{aligned}$$
(15)

Plugging (15) into (13), we get

$$\begin{aligned}&\sum _{C(M,N)} \prod _{j=1}^{N}\mathbb {P}\left( \left\{ y_{j}^{t} > \frac{i_j \bar{b}}{M}\right\} \wedge B_j(\varepsilon ,p) \right) < \sum _{C(M,N)} \prod _{i_j\ne 0} \frac{2i_j \bar{b}}{-M\mu _1} \mathrm{exp}\left( \frac{2i_j \bar{b}\mu _1}{M\sigma _1^{2}}\right) \\ \nonumber \le&\sum _{C(M,N)} \left( \frac{2\bar{b}}{-\mu _1}\right) ^N \mathrm{exp}\left( \frac{2\bar{b}\mu _1}{\sigma _1^{2}}(1-\frac{N}{M})\right) = |C(M,N)| \left( \frac{2\bar{b}}{-\mu _1}\right) ^N \mathrm{exp}\left( \frac{2\bar{b}\mu _1}{\sigma _1^{2}}(1-\frac{N}{M})\right) . \end{aligned}$$
(16)

Plugging (16) and (13) into (12), we obtain

$$\begin{aligned} \sum _{t=1}^{p} \mathbb {P}(T_s=t) < \sum _{t=1}^{p} |C(M,N)| \left( \frac{2\bar{b}}{-\mu _1}\right) ^N \mathrm{exp}\left( \frac{2\bar{b}\mu _1}{\sigma _1^{2}}(1-\frac{N}{M})\right) + 2Np \times \mathrm{exp}\left( -\frac{(\varepsilon b - \mu _1)^2}{2\sigma _1^2}\right) \end{aligned}$$
(17)
$$\begin{aligned} = p\left| C(M,N)\right| \left( \frac{2\bar{b}}{-\mu _1}\right) ^N \mathrm{exp}\left( \frac{2\bar{b}\mu _1}{\sigma _1^{2}}(1-\frac{N}{M})\right) + 2Np\times \mathrm{exp}\left( -\frac{(\varepsilon b - \mu _1)^2}{2\sigma _1^2}\right) . \end{aligned}$$
(18)

Next, we show that, as b tends to infinity, the second term on the RHS of (17) is small compared with the first term if we choose the values of M and \(\varepsilon \) properly. Note that 2Np is small compared with \( p\left| C(M,N)\right| \left( -2\bar{b}/\mu _1\right) ^N\), so we only require \(\frac{2\bar{b}\mu _1}{\sigma _1^{2}}(1-\frac{N}{M}) \ge -\frac{(\varepsilon b - \mu _1)^2}{2\sigma _1^2}.\) Choose \(M=(N+1)^2\). Recalling that \(\bar{b} = N(1- \frac{ \sqrt{N}\varepsilon \lambda _{2}}{1- \lambda _{2}})b\), the inequality above can be rewritten as

$$\begin{aligned} \frac{(\varepsilon b - \mu _1)^2}{2\sigma _1^2} \ge -\frac{2(N^3+N^2+N)\mu _1 b}{(N+1)^2\sigma _1^2}\left( 1- \frac{ \sqrt{N}\varepsilon \lambda _{2}}{1- \lambda _{2}}\right) . \end{aligned}$$
(19)

To ensure that (19) holds as b tends to infinity, it suffices to take \(\varepsilon = 2\sqrt{-N\mu _1/b}\). Plugging these values of M and \(\varepsilon \) into (17) and neglecting the second term, we get

$$\begin{aligned}&\mathbb {P}\left( T_s \le p \right) \le |C((N+1)^2,N)| \left( \frac{2\bar{b}}{-\mu _1}\right) ^N \\&\cdot \mathrm{exp}\big (\frac{2(N^3+N^2+N)\mu _1 b}{(N+1)^2\sigma _1^{2}}-\frac{4N(N^3+N^2+N) \sqrt{-\mu _1}\mu _1\lambda _{2}\sqrt{b}}{(N+1)^2(1-\lambda _{2})\sigma _1^2}+\ln (p)\big ). \end{aligned}$$

So, for any \(l > -\frac{4N(N^3+N^2+N) \sqrt{-\mu _1}\mu _1\lambda _{2}}{(N+1)^2(1-\lambda _{2})\sigma _1^2}\), if we choose \( p = \mathrm{exp}\left( -\frac{2(N^3+N^2+N)\mu _1 b}{(N+1)^2\sigma _1^{2}} - l\sqrt{b}\right) \), then

$$\begin{aligned}&\lim _{b\rightarrow +\infty } \mathbb {P}\left( T_s \le p \right) \le \\&\lim _{b\rightarrow +\infty } |C((N+1)^2,N)| \left( \frac{2\bar{b}}{-\mu _1}\right) ^N \mathrm{exp}\left( -\left( l+\frac{4N(N^3+N^2+N) \sqrt{-\mu _1}\mu _1\lambda _{2}}{(N+1)^2(1-\lambda _{2})\sigma _1^2}\right) \sqrt{b}\right) =0, \end{aligned}$$

which, together with the definition of \(\mathrm{ARL}\), leads to \( \mathrm{ARL} \ge p\) for every \(l > -\frac{4N(N^3+N^2+N) \sqrt{-\mu _1}\mu _1\lambda _{2}}{(N+1)^2(1-\lambda _{2})\sigma _1^2}\). Letting b tend to infinity gives the desired result.
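For a feel of the scaling, the snippet below (ours) evaluates the exponent of the bound p for illustrative parameter values; none of the numbers come from the paper:

```python
import math

# Exponent of the ARL lower bound p from the proof of Theorem 1.
N, mu1, sigma1, lam2, b = 5, -0.5, 1.0, 0.2, 200.0
l_min = (-4 * N * (N**3 + N**2 + N) * math.sqrt(-mu1) * mu1 * lam2
         / ((N + 1)**2 * (1 - lam2) * sigma1**2))
l = l_min + 1.0                               # any l above the threshold works
log_p = (-2 * (N**3 + N**2 + N) * mu1 * b
         / ((N + 1)**2 * sigma1**2) - l * math.sqrt(b))
print(f"ARL >= p = exp({log_p:.1f})")         # the linear-in-b term dominates
```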

5.2 Proof of Theorem 2

First of all, note that \(\mathbb {P}\left( T_s =+\infty \right) =0\), so given \(\varepsilon >0\), we have

$$\begin{aligned} \mathrm{EDD} \le \frac{b(1+\varepsilon )}{\mu _2} + \sum _{t=[\frac{b(1+\varepsilon )}{\mu _2}]+1}^{+\infty }\mathbb {P}\left( T_s=t\right) t. \end{aligned}$$
(20)

If \(T_s=t\), then \(z_j^{t-1}<b\) for all j. Since \(\sum _{j=1}^{N} z_{j}^{t-1} = \sum _{j=1}^{N} y_{j}^{t-1}\), there must exist some j with \(y_j^{t-1}<b\). Therefore, we have

$$\begin{aligned} \sum _{t=[\frac{b(1+\varepsilon )}{\mu _2}]+1}^{+\infty }\mathbb {P}\left( T_s=t\right) t \le \sum _{t=[\frac{b(1+\varepsilon )}{\mu _2}]+1}^{+\infty } \sum _{j=1}^{N} \mathbb {P}\left( y_j^{t-1}<b \right) t. \end{aligned}$$
(21)

Noting that \(y_j^{t-1}\ge \sum _{q=1}^{t-1}L(\mathbf{x}_j^q) \) and applying the Hoeffding inequality, we get

$$\begin{aligned} \mathbb {P}\left( y_j^{t-1}<b \right) t \le&\mathbb {P}\left( \sum _{q=1}^{t-1}L(\mathbf{x}_j^q)<b \right) t \le \mathrm{exp}\left( -\frac{1}{2}\left( \frac{b-(t-1)\mu _2}{\sqrt{t-1}\sigma _2}\right) ^{2}\right) t. \end{aligned}$$
(22)

When b is large enough, for any \(t>[\frac{b(1+\varepsilon )}{\mu _2}]\), using a technique similar to that in (9), we get

$$\begin{aligned} \frac{\mathrm{exp}\left( -\frac{1}{2}(\frac{b-t\mu _2}{\sqrt{t}\sigma _2})^{2}\right) }{ \mathrm{exp}\left( -\frac{1}{2}(\frac{b-(t-1)\mu _2}{\sqrt{t-1}\sigma _2})^{2}\right) } \times \frac{t+1}{t} \le \mathrm{exp}\left( -\frac{b}{2}(1-\frac{1}{(1+\varepsilon )^2})\right) . \end{aligned}$$
(23)

Plugging (22) and (23) into (21) and using a technique similar to that in (10), we get

$$\begin{aligned}&\sum _{t=[\frac{b(1+\varepsilon )}{\mu _2}]+1}^{+\infty }\mathbb {P}\left( T_s=t\right) t \le \sum _{t=[\frac{b(1+\varepsilon )}{\mu _2}]+1}^{+\infty } N\times \mathrm{exp}\left( -\frac{1}{2}\left( \frac{b-(t-1)\mu _2}{\sqrt{t-1}\sigma _2}\right) ^{2}\right) t\end{aligned}$$
(24)
$$\begin{aligned} \le&N \left( \frac{b(1+\varepsilon )}{\mu _2}+1\right) \frac{\mathrm{exp}\left( -\frac{\varepsilon ^2 b}{2(1+\varepsilon )\sigma _2^2}\right) }{1-\mathrm{exp}\left( -\frac{b}{2}(1-\frac{1}{(1+\varepsilon )^2})\right) }. \end{aligned}$$
(25)

Note that, \(\forall \varepsilon >0\), the RHS of (25) converges to zero as b tends to infinity. Therefore, by (20), we get \( \mathrm{EDD}\le \frac{b\big (1+o(1)\big )}{\mu _2}. \)
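Finally, a small Monte Carlo experiment (ours) comparing the empirical detection delay of recursion (3) with the leading term \(b/\mu_2\), assuming the same illustrative Gaussian mean-shift model as in the sketch after (3):

```python
import numpy as np

# EDD check: run the consensus CUSUM with the change active from t = 1 at
# every sensor and compare the mean stopping time to b / mu_2.
rng = np.random.default_rng(5)
N, w, b, mu = 8, 0.3, 30.0, 1.0
W = np.eye(N) * (1 - 2 * w)
for i in range(N):
    W[i, (i - 1) % N] = w
    W[i, (i + 1) % N] = w
mu2 = mu**2 / 2        # post-change drift E_{f_2}[L] for the Gaussian shift
delays = []
for _ in range(200):
    y, z, t = np.zeros(N), np.zeros(N), 0
    while np.max(z) < b:
        t += 1
        L = mu * rng.normal(mu, 1.0, size=N) - mu**2 / 2
        y_new = np.maximum(y + L, 0.0)
        z = W @ (z + y_new - y)
        y = y_new
    delays.append(t)
print(np.mean(delays), "vs b / mu2 =", b / mu2)
```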


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, Q., Zhang, R., Xie, Y. (2019). Distributed Change Detection via Average Consensus over Networks. In: Steland, A., Rafajłowicz, E., Okhrin, O. (eds) Stochastic Models, Statistics and Their Applications. SMSA 2019. Springer Proceedings in Mathematics & Statistics, vol 294. Springer, Cham. https://doi.org/10.1007/978-3-030-28665-1_13

