
Strategies for Parallel Unaware Cleaners

Conference paper
Algorithms for Sensor Systems (ALGOSENSORS 2014)

Abstract

We investigate the parallel traversal of a graph with multiple robots unaware of each other. All robots traverse the graph in parallel forever and the goal is to minimize the time needed until the last node is visited (first visit time) and the time between revisits of a node (revisit time). We also want to minimize the visit time, i.e. the maximum of the first visit time and the time between revisits of a node. We present randomized algorithms for uncoordinated robots, which can compete with the optimal coordinated traversal by a small factor, the so-called competitive ratio.

For ring and path graphs, simple traversal strategies allow constant competitive factors even in the worst case. For grid and torus graphs with \(n\) nodes there is an \(\mathcal{O}(\log n)\)-competitive algorithm for both visit problems succeeding with high probability, i.e. with probability \(1-n^{-\mathcal{O}(1)}\). For general graphs we present an \(\mathcal{O}(\log ^2 n)\)-competitive algorithm for the first visit problem, while for the visit problem we show an \(\mathcal{O}(\log ^3 n)\)-competitive algorithm, both succeeding with high probability.




A Appendix

A.1 Canonical Cleaning

Theorem 1

Using the canonical cleaning it is possible to achieve a long-term visit time of \(\mathcal{O}((n/k) \log n)\) and a visit time of \(\hbox {diameter}(G)+\mathcal{O}((n/k) \log n)\) with high probability.

Proof

For each robot we choose a cycle-start-node independently and uniformly at random among the nodes of the cycle \(P\). The waiting-time is defined as \(\hbox {diameter}(G)-|s_r,v_s|\). So, all robots start the cycle traversal at the same time.

Let \(g\) be the length of a subpath of the cycle \(P\), where \(|P| \le 2n\). The probability that no robot starts in this subpath is \((1-\frac{g}{|P|})^k\). For \(k\) robots a subpath of length \(g \ge \frac{2 c n \ln n}{k}\) is empty with probability

$$ \left( 1-\frac{g}{|P|}\right) ^k \le \exp \left( - \frac{g k}{|P|} \right) \le \exp \left( - \frac{g k}{2n} \right) \le \exp \left( - c \ln n \right) \le n^{-c}\ . $$

Hence, with high probability the maximum gap between two neighboring cycle-start-nodes on the cycle \(P\) is at most \(\mathcal{O}((n/k)\log n)\).

So, the long-term visit time is bounded by this gap. Adding the waiting time yields the first visit time. Note that after the first visit, the revisit time matches the long-term visit time.
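The gap argument above can be checked with a small simulation. The sketch below is illustrative only: the cycle length \(|P| = 2n\), the constant \(c = 2\), and the helper `max_start_gap` are assumptions of this sketch, not part of the paper.

```python
import math
import random


def max_start_gap(cycle_len: int, k: int, rng: random.Random) -> int:
    """Place k cycle-start-nodes uniformly at random on a cycle of length
    cycle_len and return the largest gap between consecutive start nodes."""
    starts = sorted(rng.randrange(cycle_len) for _ in range(k))
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    gaps.append(cycle_len - starts[-1] + starts[0])  # wrap-around gap
    return max(gaps)


if __name__ == "__main__":
    rng = random.Random(0)
    n, k = 10_000, 100
    cycle_len = 2 * n                       # |P| <= 2n for the doubled tour
    bound = 2 * 2 * n * math.log(n) / k     # (2 c n ln n) / k with c = 2
    worst = max(max_start_gap(cycle_len, k, rng) for _ in range(50))
    print(f"largest observed gap: {worst}, bound (2cn ln n)/k: {bound:.0f}")
```

In a typical run the observed maximum gap stays well below the \((2cn\ln n)/k\) threshold, in line with the high-probability bound above.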

A.2 Canonical Algorithm First Visit

Lemma 2

Assume there exists a parallel unaware cleaner algorithm \(\mathcal{A}\) for \(k\) robots on a graph with \(n\) nodes, where for all nodes \(u\) the probability that the first visit time is at most \(t_f\) is at least \(p>0\). Furthermore, \(t_f\) and \(p\) are known. Then, this cleaning algorithm can be transformed into a canonical algorithm having visit time \(\mathcal{O}(\frac{1}{p} t_f \log n)\) with high probability.

Proof

Let \(P(r)\) with \(|P(r)| \le t_f\) be the path of robot \(r\) during the first \(t_f\) steps of algorithm \(\mathcal{A}\). Then, the cycle-start-node of the canonical algorithm is a node \(v_s\) chosen uniformly at random from \(P(r)\). We set waiting-time\((r)=0\).

We now show that this algorithm achieves the claimed time bounds.

  1. The bound on the first visit time can be proved as follows.

    Each node is visited with probability at least \(\frac{p}{t_f}\). However, there are dependencies between these events, since nodes might be visited by the same robot. We therefore consider the subpath of length \(\frac{2 c t_f \ln n}{p}\) before a node \(v\) on a cycle \(C\) of length \(2n\) with \(V(C) = V\). Then, at least \(c \ln n\) different robots have a positive probability of visiting this interval. Let \(1, \ldots , k\) be these robots and let \(p_i\) be the probability that robot \(i\) visits this interval. For these probabilities we have \(\sum _{i=1}^k p_i \ge \frac{p}{t_f} \frac{c t_f \ln n}{p} = c \ln n\), since otherwise a node would exist which is visited with probability smaller than \(\frac{p}{t_f}\).

    The probability that no robot visits this interval is therefore

    $$ \prod _{i=1}^k \left( 1 - p_i\right) \le \prod _{i=1}^k \exp \left( - p_i\right) \le \exp \left( - \sum _{i=1}^k p_i\right) \le \exp \left( - c \ln n\right) \le n^{-c}\ . $$

    Since with high probability a cycle-start-node is chosen at most \((2 c t_f \ln n)/p\) nodes before \(v\) on the cycle \(C\), the node \(v\) will be visited for the first time after at most \(t_f + 2 \frac{c}{p} t_f \ln n\) steps w.h.p. The claim then follows from the union bound over all nodes.

  2. The visit time follows from the following observation.

    From the argument above, with high probability the subpaths of length \(2 c t_f \ln n / p\) on \(C\) before and after any node each contain a cycle-start-node, and the corresponding robot starts its cycle traversal after at most \(t_f\) steps. Therefore the visit time of a node is at most \(4 \frac{c}{p} t_f \ln n + 2 t_f\).
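A minimal sketch of the transformation described in this lemma, treating algorithm \(\mathcal{A}\) as a black box that returns a robot's path of at most \(t_f\) nodes. The function `to_canonical_start`, its interface, and the toy stand-in for \(\mathcal{A}\) are illustrative assumptions, not the paper's code.

```python
import random
from typing import Callable, Hashable, List

Node = Hashable


def to_canonical_start(run_A: Callable[[int, int], List[Node]],
                       robot_id: int, t_f: int,
                       rng: random.Random) -> Node:
    """Run algorithm A for at most t_f steps, record the visited path P(r),
    and pick the cycle-start-node v_s uniformly at random from P(r).
    The waiting time of the canonical algorithm is set to 0."""
    path = run_A(robot_id, t_f)          # P(r) with |P(r)| <= t_f
    assert 1 <= len(path) <= t_f
    return rng.choice(path)              # uniform choice of v_s from P(r)


if __name__ == "__main__":
    rng = random.Random(1)

    # Toy stand-in for A: robot r simply walks clockwise on a 100-node cycle.
    def toy_A(r: int, t_f: int) -> List[Node]:
        start = 10 * r
        return [(start + i) % 100 for i in range(t_f)]

    print("cycle-start-node:", to_canonical_start(toy_A, robot_id=3, t_f=8, rng=rng))
```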

A.3 Analysis of Torus Algorithm

Theorem 2

Algorithm 2 is a high probability \(\mathcal{O}(\log n)\)-competitive visit cleaning algorithm for the \(m\times m\)-torus graph.

Proof

The following lemma shows that the torus algorithm distributes each robot with equal probability over the nodes of the diagonal it reaches after \(t\) rounds.

Lemma 3

For all \(t\in \{1, \ldots , \sqrt{n}\}\), \(i\in \{0, \ldots , t\}\) the probability that a robot starting at node \((s_{r.x},s_{r.y})\) is at node \((s_{r.x}+i, s_{r.y}+(t-i))\) after \(t\) rounds is \(1/(t+1)\).

Proof

This follows by induction on \(t\). For \(t=0\), the robot is at the start node \((s_{r.x},s_{r.y})\) with probability \(1\). Assume that the claim holds for round \(t-1\).

For the induction we have to consider three cases:

  • If \(x=s_{r.x}\) and \(y=s_{r.y}+t\), then the probability of reaching this node is the product of the probability of being at \((x,y-1)\) after round \(t-1\) and the probability of incrementing \(y\). By induction this is \(\frac{1}{t} \left( 1- \frac{1}{t+1} \right) = \frac{1}{t+1}\).

  • If \(y=s_{r.y}\) and \(x=s_{r.x}+t\), then the probability of reaching this node is the product of the probability of being at \((x-1,y)\) after round \(t-1\) and the probability of incrementing \(x\). By induction this is again \(\frac{1}{t} \left( 1- \frac{1}{t+1} \right) = \frac{1}{t+1}\).

  • For all other nodes on the diagonal we have to combine the probabilities of arriving via an \(x\)-increment from \((x-1,y)\) and via a \(y\)-increment from \((x,y-1)\); the relevant increment probabilities sum to \(\frac{t}{t+1}\). By induction we obtain the probability \(\frac{1}{t} \frac{t}{t+1} = \frac{1}{t+1}\), and the claim follows. (A short simulation sketch of this walk is given after this list.)
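The increment probabilities of Algorithm 2 are not restated in this appendix; one choice consistent with the case analysis above is that, in round \(t\), a robot at \(x\)-offset \(i\) increments \(x\) with probability \((i+1)/(t+1)\) and otherwise increments \(y\). The following sketch is a reconstruction under this assumption, not the algorithm itself; it simulates the walk and checks empirically that the \(x\)-offset is uniform on \(\{0,\ldots,t\}\).

```python
import random
from collections import Counter


def diagonal_walk(T: int, rng: random.Random) -> int:
    """Walk T rounds: in round t, a robot at x-offset i on diagonal t-1
    increments x with probability (i+1)/(t+1), otherwise it increments y.
    Returns the x-offset reached on diagonal T."""
    i = 0
    for t in range(1, T + 1):
        if rng.random() < (i + 1) / (t + 1):
            i += 1        # increment x
        # else: increment y, which leaves the x-offset unchanged
    return i


if __name__ == "__main__":
    rng = random.Random(0)
    T, trials = 20, 200_000
    counts = Counter(diagonal_walk(T, rng) for _ in range(trials))
    target = 1 / (T + 1)
    for i in range(T + 1):
        print(f"offset {i:2d}: empirical {counts[i] / trials:.4f}  target {target:.4f}")
```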

Assume that \(t_f\) is the first visit time for a robot placement in the torus. For the cleaning of a target node \((x,y)\) we choose a set \(S\) of \(t-4 t_f\) nodes on a diagonal at distance \(t\), see Fig. 6. \(A =N_{t_f}(S)\) is now the bait, i.e. the area which guarantees a minimum number of robots in the recruitment area \(N_{t_f}(A)\). Lemma 1 states that at least \(|A|/(t_f+1)\) robots must be in this recruitment area \(N_{t_f}(A)\). Now, the cleaning algorithm makes sure that each of these robots passes through the target node during the time interval \([t-2 t_f, t+2t_f]\) with probability at least \(1/(t+2t_f+1)\). Furthermore, \(|A|\) is at least \(2 t_f (t-4 t_f)\). So, the expected number of robots passing through the target node is at least

$$ \frac{|A|}{(t_f+1)(t+2t_f+1)} \ge \frac{2 t_f (t-4 t_f)}{(t_f+1)(t+2t_f+1)} \ge \frac{t-4t_f}{t+2t_f+1}\ . $$
Fig. 6. The robot recruitment area for robots exploring the target node.

Fig. 7. The robot recruitment area for robots on the cycle.

So for \(t \ge 10 t_f\) we expect at least \(\frac{1}{2}\) robots, i.e. a constant number, to pass through any node in a time interval of length \(3 t_f\). If we increase the time interval to \(c\, t_f \log n\) for an appropriately chosen constant \(c\), a Chernoff bound ensures that this node is visited by at least one robot with high probability.

This proves that in the first phase of the algorithm each node is visited (and revisited) in every time interval of length \(\mathcal{O}(t_f \log n)\).

It remains to show that in the second phase, where the algorithm enters the cycle, the distance between robots on the cycle is bounded by \(\mathcal{O}(t_f \log n)\). For this, we consider \(4t_f < \sqrt{n}\) consecutive nodes on the cycle, which lie on \(4t_f\) consecutive diagonals, see Fig. 7. So, all of the \(|A|/(t_f + 1)\) robots in the recruitment area have a target node, which can be reached after \(\sqrt{n}\) steps. For each of these target nodes, the probability of being reached by a robot on the corresponding diagonal is at least \(\frac{1}{\sqrt{n}}\). The minimum size of \(|A|\) is at least \(\sqrt{n}-2t_v \), which results in an expected number of at least

$$ \frac{2t_f (\sqrt{n}-2t_f)}{(2t_f +1)\sqrt{n}}\ge 1- \frac{t_f}{\sqrt{n}} $$

robots on the target nodes of the cycle. For \(t_f \le \frac{1}{2} {\sqrt{n}}\) this means that the expected number of robots in an interval of length \(4t_f\) is at least \(\frac{1}{2}\). So, the longest empty interval has length at most \(\mathcal{O}(t_f \log n)\), which follows by applying Chernoff bounds to \(\mathcal{O}(\log n)\) neighboring intervals.

For \(t_f \ge \frac{1}{2} \sqrt{n}\) we consider \(\sqrt{n}\) consecutive nodes on consecutive diagonals. Every robot ends the first phase and starts the cycle within this interval with probability \(\frac{1}{\sqrt{n}}\). The minimum number of robots needed to explore all \(n\) nodes is at least \(\frac{n}{t_f+1}\), which follows from Lemma 1 with \(A=V\). Now, for \(c \frac{t_f}{\sqrt{n}} \log n\) neighboring intervals on the cycle, each of length \(\sqrt{n}\), the probability that a single robot chooses a node in this union of intervals is at least

$$ \frac{t_f}{\sqrt{n}} \frac{c \log n}{\sqrt{n}} = c\frac{t_f}{n} \log n \ . $$

So, the expected number of robots is \(c \frac{n}{t_f}\frac{t_f}{n} \log n = c \log n\) for a time interval of length \(c \frac{t_f}{\sqrt{n}}\sqrt{n} \log n= c t_f \log n\). By Chernoff bounds, the probability that this interval is empty is at most \({n^{-c'}}\) for some constants \(c, c'\).

So, the maximum distance between two robots on the cycle in the first and second phase is at most \(\mathcal{O}(t_f \log n)\) with high probability. Since the visit time is at least the first visit time, the competitive ratio of \(\mathcal{O}(\log n)\) follows.
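As a quick numerical sanity check of the counting in the first phase, the lower bound \((t-4t_f)/(t+2t_f+1)\) on the expected number of robots can be tabulated for a few ratios \(t/t_f\); the concrete value of \(t_f\) below is an arbitrary illustration.

```python
def expected_robots_lower_bound(t: int, t_f: int) -> float:
    """Lower bound (t - 4*t_f) / (t + 2*t_f + 1) on the expected number of
    robots passing through the target node, as derived in the proof above."""
    return (t - 4 * t_f) / (t + 2 * t_f + 1)


if __name__ == "__main__":
    t_f = 100  # illustrative value only
    for factor in (10, 20, 50, 100):
        t = factor * t_f
        print(f"t = {factor:3d} * t_f: expected robots >= "
              f"{expected_robots_lower_bound(t, t_f):.3f}")
```

At \(t = 10 t_f\) the bound is roughly \(\frac{1}{2}\), and it approaches \(1\) as \(t\) grows, which is the constant expectation used before applying the Chernoff bound.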

A.4 Proof of Lemma 3

Lemma 3

For a graph \(G\), a node \(v\in V\), \(\beta \) chosen randomly from \([1,2]\), a random permutation \(\pi \) over \(\{1, \ldots , n\}\), and for \(\ell = 8 \beta t \log n\) the probability that \(v\in W\) is at least \(\frac{1}{4}\).

Proof

We will prove that \(P(v\in U) \ge \frac{1}{4}\), which implies the claim because \(U \subset W\).

Consider the first node \(w\) in the \((\ell +2t)\)-neighborhood of \(v\) according to the random permutation \(\pi \), i.e. \(w=u_{\pi (i^*)}\) where \(i^* = \min \{i\ | \ |v,u_{\pi (i)}| \le \ell +2t\}\). If \(w\) is closer than \(\ell -2t\) to \(v\), i.e. \(|v,w| \le \ell -2t\), then \(v\) is in the working area of \(w\), since no node with a smaller index can be closer than \(w\), i.e. \(v \in U_{i^*} \subseteq U\). If, on the other hand, this node lies in the critical distance range \(|v,w| \in (\ell -2t,\ell +2t]\), then \(v\) is excluded from \(U_{i^*}\), and since \(i^*\) has the smallest index in the vicinity, \(v\) is also not in any other working area, i.e. \(v \not \in U\). Since \(\pi \) is a random permutation, the probability of \(v \in U\) is given by the fraction of nodes in the closer vicinity:

$$ P_{\ell }(v\in U) = \frac{|N_{\ell -2t}(v)|}{|N_{\ell +2t}(v)|} $$

This implies

$$\begin{aligned} \prod _{i=0}^{2 \log n} {P_{\ell +4it}(v\in U)} = \frac{|N_{\ell -2t}(v)|}{|N_{\ell +8t \log n+2t}(v)|} \ge \frac{1}{n} \end{aligned}$$
(4)

Now, we choose \(\beta \) randomly from \(\{1, 1+\frac{1}{2 \log n}, 1+\frac{2}{2 \log n}, \ldots , 1+\frac{2 \log n-1}{2 \log n}\}\) and compute \(\ell = 8 \beta t \log n\). Hence,

$$ P(v\in U) = \frac{1}{2 \log n} \sum _{i=0}^{2 \log n-1} P_{8t \log n + 4it}(v\in U) $$

Assume that \(P(v\in U) < \frac{1}{4}\); then at least half of the values \((P_{8t \log n + 4it}(v\in U))_{i \in \{0,\ldots , 2 \log n-1\}}\) are smaller than \(\frac{1}{2}\). Then, we observe the following.

$$ \prod _{i=0}^{2 \log n} {P_{8t \log n+4it}(v\in U)} < \left( \frac{1}{2}\right) ^{\log n} = \frac{1}{n} \ , $$

which contradicts (4). Therefore \(P(v\in W) \ge P(v\in U) \ge \frac{1}{4}\).

The same argument holds if we choose \(\beta \) randomly from the real interval \([1,2]\).
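The telescoping step behind (4) only uses that consecutive factors share a neighborhood size. The sketch below checks this cancellation with an arbitrary monotone stand-in for \(|N_r(v)|\); the function `N` and the chosen parameters are placeholder assumptions, not derived from any particular graph.

```python
import math
from fractions import Fraction


def N(r: int) -> int:
    """Monotone stand-in for the neighborhood size |N_r(v)|; any
    non-decreasing positive function exhibits the same cancellation."""
    return 1 + max(0, r)


def P(ell: int, t: int) -> Fraction:
    """P_ell(v in U) = |N_{ell-2t}(v)| / |N_{ell+2t}(v)| as in the proof."""
    return Fraction(N(ell - 2 * t), N(ell + 2 * t))


if __name__ == "__main__":
    n, t = 1024, 3
    log_n = int(math.log2(n))
    ell = 8 * t * log_n                 # ell = 8*beta*t*log n with beta = 1
    m = 2 * log_n                       # the product runs over i = 0 .. 2 log n
    product = math.prod(P(ell + 4 * i * t, t) for i in range(m + 1))
    telescoped = Fraction(N(ell - 2 * t), N(ell + 4 * m * t + 2 * t))
    assert product == telescoped        # interior factors cancel pairwise
    print("product =", telescoped, " >= 1/n:", telescoped >= Fraction(1, n))
```

The final \(\ge 1/n\) step additionally uses that in the actual graph the numerator is at least \(1\) and the denominator is at most \(n\).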

A.5 Analysis of Algorithm 5

Theorem 3

Algorithm 5 is a high probability \(\mathcal{O}(\log ^2 n)\)-competitive first visit algorithm for every undirected graph.

Proof

Consider the round of the outer loop in which \(t=2^i \in [t_f,2t_f]\), where \(t_f\) is the first visit time of the optimal algorithm. We show that in this round all nodes are explored with high probability. Lemma 5 states that the number of robot moves of one-shot-cleaning is bounded by \(100 \cdot 2^i \log n\). So, the overall number of moves of each robot is bounded by \(800 (c+1) \log ^2 n\).

For any node \(u\), the probability that the one-shot-cleaning algorithm for \(\ell = 8 \beta t \log n\) chooses \(u \in W\) is at least \(\frac{1}{4}\) by Lemma 3. If \(u\) resides in \(W_i\), the number \(k_i\) of robots performing the cleaning is at least \(|W_i|/(2t)\), as implied by Lemma 6. These \(k_i\) robots have to explore a cycle of length at most twice the size of the connected Steiner tree computed in Algorithm 4; these are at most \(34 |W_i| \log n\) nodes. Now, Algorithm 3 starts at a random node and explores \(68 t \log n\) nodes. So, after one execution of the one-shot-cleaning algorithm the probability that a node is not explored is at most

$$ 1- \frac{1}{4} \frac{68 t \log n}{34 |W_i| \log n} = 1- \frac{t}{2|W_i|} $$

The cleaning is independently repeated \(k_i \ge \frac{|W_i|}{2t}\) times, which yields

$$ \left( 1- \frac{t}{2|W_i|}\right) ^{\frac{|W_i|}{2t}} \le e^{-\frac{1}{4}} $$

Hence, the probability that a node is not explored after \(4(c+1)\ln n\) repetitions is at most \(\frac{1}{n^c}\).
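A numerical check of this amplification step, with illustrative values for \(t\), \(|W_i|\), \(c\), and \(n\) (these numbers are assumptions, not from the paper): one execution misses a fixed node with probability at most \(1-\frac{t}{2|W_i|}\), the \(k_i \ge |W_i|/(2t)\) independent robots push this to \(e^{-1/4}\), and \(4(c+1)\ln n\) repetitions push it below \(n^{-(c+1)}\).

```python
import math


def miss_probability(t: int, w_i: int, c: int, n: int) -> float:
    """Probability bound that a fixed node stays unexplored after
    4(c+1) ln n repetitions, each with k_i = |W_i|/(2t) independent robots."""
    single = 1.0 - t / (2.0 * w_i)      # one one-shot-cleaning execution
    k_i = w_i / (2.0 * t)               # robots per execution
    per_round = single ** k_i           # <= e^(-1/4)
    rounds = 4 * (c + 1) * math.log(n)
    return per_round ** rounds          # <= n^(-(c+1))


if __name__ == "__main__":
    n, c = 10_000, 2
    p = miss_probability(t=64, w_i=4096, c=c, n=n)
    print(f"per-round bound e^(-1/4) = {math.exp(-0.25):.4f}")
    print(f"miss probability {p:.3e}  vs  n^-(c+1) = {n ** -(c + 1):.3e}")
```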


Copyright information

© 2015 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ortolf, C., Schindelhauer, C. (2015). Strategies for Parallel Unaware Cleaners. In: Gao, J., Efrat, A., Fekete, S., Zhang, Y. (eds) Algorithms for Sensor Systems. ALGOSENSORS 2014. Lecture Notes in Computer Science(), vol 8847. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-46018-4_3


  • DOI: https://doi.org/10.1007/978-3-662-46018-4_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-46017-7

  • Online ISBN: 978-3-662-46018-4
