Positional Encoding by Robots with Non-rigid Movements

  • Kaustav Bose
  • Ranendu Adhikary
  • Manash Kumar Kundu
  • Buddhadeb Sau
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11639)

Abstract

Consider a set of autonomous computational entities, called robots, operating inside a polygonal enclosure (possibly with holes), that have to perform some collaborative tasks. The boundary of the polygon obstructs both the visibility and the mobility of a robot. Since the polygon is initially unknown to the robots, the natural approach is to first explore and construct a map of the polygon. For this, the robots need an unlimited amount of persistent memory to store the snapshots taken from different points inside the polygon. However, it has been shown by Di Luna et al. [DISC 2017] that map construction can be done even by oblivious robots by employing a positional encoding strategy, where a robot carefully positions itself inside the polygon so as to encode information in the binary representation of its distance from the closest polygon vertex. Of course, to execute this strategy, it is crucial for the robots to make accurate movements. In this paper, we address the question of whether this technique can be implemented even when the movements of the robots are unpredictable, in the sense that a robot can be stopped by the adversary during its movement before reaching its destination. However, there exists a constant \(\delta > 0\), unknown to the robot, such that the robot always reaches its destination if it has to move by no more than \(\delta \). This model is known in the literature as non-rigid movement. We give a partial answer to the question in the affirmative by presenting a map construction algorithm for robots that have non-rigid movements, O(1) bits of persistent memory, and the ability to make circular moves.

Keywords

Autonomous robots · Map construction · Non-rigid movement · Polygon with holes · Look-Compute-Move cycle · Distributed algorithm

1 Introduction

Distributed coordination of autonomous mobile robots has been extensively studied in the literature in the last two decades. Fundamental problems like Gathering [1, 6, 8, 11], Pattern Formation [3, 5, 12, 13], etc., have been studied in the setting where the robots are deployed in a plane of infinite extent and without any obstacles. Recently in [10], Meeting, which is a simpler version of the Gathering problem, has been investigated for robots inside a polygonal enclosure containing polygonal obstacles, whose boundaries limit both the visibility and the mobility of a robot. This setting models many real-life scenarios, like mopping robots inside a room, robots employed in factories or an art gallery, etc. To solve the various distributed problems in this model, the robots may have to first explore and construct a map of the environment. For this, the robots need an unlimited amount of persistent memory. However, in [10], it has been shown that map construction can be done even by oblivious robots with rigid movements, i.e., where a robot can accurately move by any distance. Their strategy is based on a positional encoding technique, where the robot carefully moves within the polygonal enclosure in such a way that its memory is implicitly encoded in its distance from the closest polygon vertex. In this paper, we show that this technique can be adapted to the non-rigid setting (where the movements of the robots can be interrupted by the adversary) as well, provided that the robot has a constant number of persistent bits and the ability to make circular moves.

2 Model and Definitions

Polygon. A polygon P is a non-empty, connected, and compact region in \(\mathbb {R}^2\) whose boundary \(\partial (P)\) is a set of finitely many disjoint simple closed polygonal chains. There is one connected component of \(\partial (P)\), called the external boundary, which encloses all the others (if any), which are called holes. Vertices and edges of a polygon can be defined in the standard way. V(P) and E(P) will respectively denote the set of vertices and edges of the polygon. For any two points \(x, y \in P\), we say that x and y are visible to each other if the line segment joining them lies in P, i.e., \(\overline{xy} \subset P\). We shall assume that there is some global coordinate system with respect to which the coordinates of the polygon vertices are algebraic numbers.

Robot. By a robot, we mean an anonymous mobile computational entity modeled as a dimensionless point inside P. A robot positioned at \(x \in P\) can observe a point \(y \in P\) if and only if x and y are visible to each other. The robot is endowed with O(1) bits of persistent memory. This model is known in the literature as \(\textsc {FState}\) [9], where the internal state of the robot can assume a finite number of ‘colors’. \(\mathcal {S}\) will denote the set of all possible states of the robot. A robot, when active, operates according to the so-called LOOK-COMPUTE-MOVE cycle. In each cycle, a previously idle robot wakes up and executes the following steps. In the LOOK phase, the robot takes a snapshot of the region of P that it can currently see. The snapshot is expressed in the local coordinate system of the robot, having its origin at the robot's current position. In the COMPUTE phase, based on the snapshot and its internal state, the robot performs computations according to a deterministic algorithm to decide (1) a destination point \(y \in P\), (2) a trajectory to y from its current location \(x \in P\), which is either a straight line segment or a circular arc, and (3) a state \(s \in \mathcal {S}\). Then in the MOVE phase, the robot sets its internal state to s and moves towards the point y along the decided trajectory. When a robot transitions from one LCM cycle to the next, all of its local memory (past computations and snapshots) is completely erased, and only its internal state is retained. Depending on whether or not the adversary can stop a robot before it reaches its computed destination, there are two movement models in the literature, namely rigid and non-rigid, respectively. In the rigid model, a robot is always able to reach its desired destination without any interruption. In the case of non-rigid movements, there exists a constant \(\delta > 0\) such that if the robot decides to move by an amount (path length) smaller than \(\delta \), then it will reach its destination; otherwise, it will move by at least \(\delta \). The value of \(\delta \) is not known to the robot.
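
For concreteness, the adversarial MOVE phase can be sketched in a few lines of Python (an illustration of the definition above; the 1-D arc-length parametrization and the function name are ours, not part of the model):

```python
import random

rng = random.Random(0)  # stands in for the adversary

def non_rigid_move(t_dest: float, delta: float) -> float:
    """One MOVE phase in the non-rigid model, with the trajectory
    parametrized by arc length t (t = 0 is the current position,
    t_dest > 0 the decided destination). A move of length at most
    delta always completes; a longer one covers at least delta
    before the adversary may cut it short."""
    if t_dest <= delta:
        return t_dest                       # short moves are never interrupted
    return rng.uniform(delta, t_dest)       # otherwise: stopped anywhere past delta
```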

Geometric Definitions and Notations. Let v be any vertex of P, and u, w its two adjacent vertices. We shall say that u is the preceding vertex of v and w is the succeeding vertex of v if one can reach from \(\overline{vu}\) to \(\overline{vw}\) by moving around v (staying inside P) in the counterclockwise direction (according to the sense of handedness of the robot). For any vertex \(p_i \in V(P)\), unless mentioned otherwise, \(p_{i-1}\) and \(p_{i+1}\) will respectively denote the vertices preceding and succeeding \(p_i\).
Fig. 1.

The polygon vertex closest to x is \(p'\), but it is not visible from x. Its closest visible vertex is p.

For a set \(X = \{x_1,x_2,\ldots ,x_n\}\) of distinct points in \(\mathbb {R}^2\), \(n \ge 2\), the Voronoi region of any \(x_i \in X\), denoted by \(Vor_X(x_i)\) or simply \(Vor(x_i)\), is the set of all points in \(\mathbb {R}^2\) which are at least as close to \(x_i\) as to any other point in X, that is, \(Vor_X(x_i) = \{y \in \mathbb {R}^2 \mid d(y, x_i) \le d(y, x_j), \forall j \ne i\}.\) Points shared by two Voronoi regions \(Vor_X(x_i)\) and \(Vor_X(x_j)\) constitute the Voronoi edge defined by \(x_i\) and \(x_j\). Similarly, we can define Voronoi regions for a set \(L = \{l_1,l_2,\ldots ,l_n\}\), \(n \ge 2\), of straight line segments (any two of which can intersect only at their endpoints). We define the Voronoi region of \(l_i \in L\) as \(LVor_L(l_i) = \{y \in \mathbb {R}^2 \mid d(y, l_i) \le d(y, l_j), \forall j \ne i\}\), where \(d(y, l_k) = \inf \{d(y, z) \mid z \in l_k\}\). In the context of our problem, there is a minor technical issue that needs to be addressed. For a polygon P, the polygon edge closest to a point \(x \in P\) is of course visible from it. But the vertex closest to x may not be visible from x (see Fig. 1). In the remainder of the paper, unless mentioned otherwise, whenever we say ‘closest vertex’, it should be understood as ‘closest visible vertex’. We also define the polygon Voronoi region of a vertex \(p_i\), denoted by \(PVor_P(p_i)\), as the set of points \(x \in P\) such that \(p_i\) is visible from x and \(p_i\) is closer to x than any other vertex visible from x. \(Vor_{V(P)}(p_i)\) or \(Vor_P(p_i)\) will denote the usual Voronoi region of \(p_i\) for the set V(P).
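
As an aside, the ‘closest visible vertex’ computation is easy to prototype; the following sketch uses the shapely library (an assumption of ours, not part of the paper's machinery), where `covers` tests that the whole segment \(\overline{xp}\) lies in P, holes included:

```python
from shapely.geometry import Polygon, LineString, Point

def closest_visible_vertex(x, poly: Polygon):
    """Closest vertex of poly visible from x: among the vertices p with
    segment xp contained in poly, return the one minimizing d(x, p)."""
    verts = list(poly.exterior.coords[:-1])       # drop repeated closing point
    for hole in poly.interiors:
        verts.extend(hole.coords[:-1])
    visible = [p for p in verts if poly.covers(LineString([x, p]))]
    return min(visible, key=lambda p: Point(x).distance(Point(p)))

# Example: a square with a square hole; from (5, 1) the nearest visible
# vertex is a hole corner at distance sqrt(10).
P = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)],
            [[(4, 4), (6, 4), (6, 6), (4, 6)]])
print(closest_visible_vertex((5, 1), P))          # (4.0, 4.0)
```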

For any point x and any real number \(r > 0\), \(D(x, r)\) denotes the closed disc \(\{y \in \mathbb {R}^2 \mid d(y,x) \le r\}\). For any three points c, y, z such that \(d(y,c) = d(z,c)\), we shall denote by \(arc(y, z, c)\) the circular arc centered at c drawn from y to z in the counterclockwise direction. Also, \(arc(y,\theta ,c)\) will denote the circular arc \(arc(y, z, c)\) where \(\angle ycz = \theta \). A point \(x \in P\) is said to be properly close to \(p_i \in V(P)\) if for any point \(z \in arc(x,y,p_i)\), where \(y \in \overline{p_ip_{i+1}}\) with \(d(y,p_i) = d(x,p_i)\), the following holds: (1) \(z \in PVor_{V(P)}(p_i)\) and (2) \(p_{i+1}\) is visible from z. We can define a coordinate system by any ordered pair of distinct points in the polygon. The coordinate system defined by (u, v) will be the coordinate system with origin at u, \(\overrightarrow{uv}\) as the positive X-axis, d(u, v) as the unit distance, and the positive Y-axis according to the chirality or handedness of the robot.
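
Since trajectories may be circular arcs, the following small helper (our own illustration) computes the endpoint of \(arc(y,\theta ,c)\) by rotating y counterclockwise about c:

```python
import math

def arc_endpoint(y, theta, c):
    """Endpoint of arc(y, theta, c): y rotated counterclockwise by the
    angle theta about the center c (the trajectory itself being the
    circular arc between the two points)."""
    dx, dy = y[0] - c[0], y[1] - c[1]
    ct, st = math.cos(theta), math.sin(theta)
    return (c[0] + ct * dx - st * dy, c[1] + st * dx + ct * dy)

# A quarter turn about the origin maps (1, 0) to (0, 1).
print(arc_endpoint((1.0, 0.0), math.pi / 2, (0.0, 0.0)))
```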

3 A Brief Overview of the Positional Encoding Technique

Computational Model. We assume that each robot internally runs a Blum-Shub-Smale machine [2] extended with a square-root primitive. A Blum-Shub-Smale machine is a random-access machine whose registers can store arbitrary real numbers and can operate directly on them. Its computational primitives are the four basic arithmetic operations on real numbers, and it can test whether a real number is positive. Depending on the application, it is also customary to extend the basic model with additional primitives, such as root extractions, trigonometric functions, etc. In our case, we only require the square-root primitive, which is needed in geometric computations.

Encoding Algebraic Reals. Consider an algebraic real number \(\alpha \). The minimal polynomial of \(\alpha \) over \(\mathbb {Q}\) is the unique monic polynomial in \(\mathbb {Q}[x]\) of least degree which has \(\alpha \) as a root. Let \(\mathfrak {m}(x) = x^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0 \in \mathbb {Q}[x]\) be the minimal polynomial of \(\alpha \) over \(\mathbb {Q}\). Now \(\mathfrak {m}\) has n complex roots, and its real roots can be arranged in ascending order. So, let \(\alpha \) be the ith real root of \(\mathfrak {m}\). Then \(\alpha \) can be uniquely represented by \((n,i,a_{n-1},\dots ,a_0)\). Now any rational number \((-1)^s\frac{p}{q}\), with \(p, q > 0\), \(s \in \{0,1\}\), can be represented as a 3-tuple of non-negative integers \((s,p,q) \in \mathbb {Z}^3_{\ge 0}\). Thus \(\alpha \) can be represented by an array of \(3n+2\) non-negative integers. We can represent each non-negative integer m as the bit string \(0^m1\). Let us denote by \(\beta (\alpha )\) the bit string obtained by concatenating the bit strings of the \(3n+2\) non-negative integers. Now for any non-negative integer \(\lambda \), let \(r(\alpha , \lambda ) < 1\) be the real number whose (usual) binary representation is \(0.0^{\lambda }1\beta (\alpha )\). We shall say that \(r(\alpha , \lambda )\) encodes \(\alpha \).
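
A runnable transcription of this encoding (the function names are ours; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def unary(m: int) -> str:
    """A non-negative integer m as the bit string 0^m 1."""
    return "0" * m + "1"

def encode_rational(r: Fraction) -> str:
    """(-1)^s p/q as the concatenated codes of (s, p, q)."""
    s = 0 if r >= 0 else 1
    return unary(s) + unary(abs(r.numerator)) + unary(r.denominator)

def beta(n: int, i: int, coeffs) -> str:
    """beta(alpha) for alpha = the i-th real root of the monic minimal
    polynomial x^n + a_{n-1} x^{n-1} + ... + a_0 (coeffs = a_{n-1},...,a_0)."""
    return unary(n) + unary(i) + "".join(encode_rational(Fraction(a)) for a in coeffs)

def r_of(code: str, lam: int) -> Fraction:
    """The real r(alpha, lambda) with binary representation 0.0^lam 1 code."""
    bits = "0" * lam + "1" + code
    return sum(Fraction(int(b), 2 ** (k + 1)) for k, b in enumerate(bits))

# alpha = sqrt(2): minimal polynomial x^2 - 2, second real root (-sqrt(2) comes first).
code = beta(2, 2, [0, -2])
d = r_of(code, 0)
assert r_of(code, 1) == d / 2   # Lemma 1 below: halving d increments lambda
```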

Lemma 1

Let \(0< d < 1\) be a real number such that \(d = r(\alpha , \lambda )\) for some algebraic real \(\alpha \) and non-negative integer \(\lambda \). Then \(\frac{d}{2} = r(\alpha , \lambda + 1)\), and therefore \(\frac{d}{2^k} = r(\alpha , \lambda + k)\) for any integer \(k \ge 1\).

Computing the Code. Suppose a basic Blum-Shub-Smale machine has an algebraic number \(\alpha \) stored in its register and it has to construct its code \(\beta (\alpha )\). The machine will generate all finite sequences of bits in lexicographic order. For each sequence, it will check if it is a well-formed code of an algebraic number; if it is, it will extract the coefficients of a polynomial \(\mathfrak {q}\) from it. Then it computes \(\mathfrak {q}(\alpha )\). Since \(\alpha \) is algebraic, eventually a polynomial \(\mathfrak {q}\) is found such that \(\mathfrak {q}(\alpha ) = 0\). Since \(\mathfrak {q}\) must be a multiple of the minimal polynomial \(\mathfrak {m}\) of \(\alpha \), we can determine \(\mathfrak {m}\) by finding the irreducible factor of \(\mathfrak {q}\) that has \(\alpha \) as a root. Then Sturm’s theorem [7] can be applied to find out how many real roots of the minimal polynomial are smaller than \(\alpha \). Thus we have obtained all that is required to encode \(\alpha \).
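
For illustration only, replacing the Blum-Shub-Smale enumeration with an off-the-shelf computer-algebra call (an assumption of ours, not the paper's mechanism), sympy can recover the tuple \((n,i,a_{n-1},\dots ,a_0)\):

```python
import sympy as sp

x = sp.Symbol('x')
alpha = 1 + sp.sqrt(2)

m = sp.Poly(sp.minimal_polynomial(alpha, x), x)   # x**2 - 2*x - 1
n = m.degree()
coeffs = m.all_coeffs()[1:]                       # a_{n-1}, ..., a_0

# Count the real roots of m that are <= alpha (the role Sturm's theorem
# plays above); alpha is then the i-th real root in ascending order.
i = sum(1 for r in sp.real_roots(m) if sp.N(r - alpha) < 1e-9)
print(n, i, coeffs)                               # 2 2 [-2, -1]
```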

Computations on the Implicit Form. Once a number is encoded in this form, we cannot necessarily retrieve it in finite time. But we can approximate it arbitrarily well, for instance via Sturm’s theorem. However, we can do Turing-computable bit manipulations on this implicit form to compute all kinds of common functions (e.g. basic arithmetic operations, root extractions of any degree etc.) on the algebraic number without decoding its explicit form.

Encoding Snapshots. A snapshot taken by a robot contains the visible portion of the polygon P, which is basically a union of line segments, each of which is a sub-segment of an edge of P. So, a snapshot can be represented as an array of real numbers, say \(S = (x_1, y_1, x'_1, y'_1, x_2, y_2, x'_2, y'_2, \ldots )\), where \((x_i, y_i)\) and \((x'_i, y'_i)\) are the endpoints of the ith visible segment of \(\partial (P)\). Note that none of these points is necessarily a vertex of P. We have discussed how to compute the code of a single algebraic number. Now we describe how we can encode a snapshot of P with algebraic vertices taken from a point \(x \in P\). The vertices of P have algebraic coordinates with respect to some global coordinate system. Of course, the vertices may not have algebraic coordinates in the local coordinate system of the robot. Let \(\varPhi _x\) be the transformation from the global coordinate system to the local coordinate system of the robot. Note that x is not necessarily an algebraic point, and the parameters of \(\varPhi _x\) are not necessarily algebraic numbers either. Therefore, the coordinates of, and the distances between, vertices of \(\varPhi _x(P)\) may not be algebraic. However, all the ratios of the distances are algebraic, as \(\varPhi _x\), being a similarity transformation, preserves ratios between segment lengths. It follows that if the robot picks two visible vertices of \(\varPhi _x(P)\), say v and \(v'\), and transforms all the visible vertices of \(\varPhi _x(P)\) into the coordinate system \((v, v')\), then they will have algebraic coordinates. Then they can be encoded by a basic Blum-Shub-Smale machine as we discussed earlier. However, recall that a snapshot taken from x may not contain only vertices of \(\varPhi _x(P)\). We can identify the potentially non-vertex endpoints by a basic Blum-Shub-Smale machine, as a non-vertex point \((x_j, y_j) \in S\) is necessarily of the form \((x_j, y_j) = c(x_i,y_i)\), \(c > 1\), for some visible polygon vertex \((x_i,y_i)\): it is an occlusion point, lying on the ray from the robot (the local origin) through that vertex. These potentially non-vertex endpoints will simply be marked with an ‘undefined’ flag in the snapshot. The robot will pick two ‘defined’ points in the snapshot for the coordinate transformation. The coordinates of the ‘defined’ points of S will be transformed as discussed earlier, and each ‘undefined’ point will simply be replaced with (0, 0), or any algebraic point of our choice, along with the ‘undefined’ flag. Then these coordinates can be encoded into finite bit strings, which can be concatenated into a single code for the entire snapshot. We can similarly encode multiple snapshots into a single bit string. Along with the snapshots, we can also pack as many other finitely described elements as we want.
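
The transformation into the coordinate system \((v, v')\) is a plain similarity; a minimal sketch (our own helper, using numpy):

```python
import numpy as np

def to_frame(points, v, v2):
    """Coordinates of `points` in the system defined by (v, v2): origin v,
    positive X-axis along v -> v2, d(v, v2) as the unit distance, and the
    Y-axis counterclockwise (the assumed handedness of the robot). Ratios
    of lengths are preserved, so algebraic inputs stay algebraic."""
    v, v2 = np.asarray(v, float), np.asarray(v2, float)
    e1 = (v2 - v) / np.linalg.norm(v2 - v)    # unit X-axis
    e2 = np.array([-e1[1], e1[0]])            # unit Y-axis (CCW)
    q = (np.asarray(points, float) - v) / np.linalg.norm(v2 - v)
    return np.stack([q @ e1, q @ e2], axis=-1)
```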

Positional Encoding. Suppose that \(\beta \) is the code or bit string of the information that the robot wants to encode. Let d be a real number that encodes it, i.e., the binary representation of d is \(0.0^{\lambda }1\beta \) for some non-negative integer \(\lambda \). The robot will encode the information by positioning itself in the polygon in such a way that its distance from the closest polygon vertex is d (according to its local coordinate system). From Lemma 1, it follows that the robot can encode the same information by placing itself at a distance \(\frac{d}{2^k}\) from the vertex, for any integer \(k \ge 1\). This ‘scalability’ property allows the robot to get arbitrarily close to the vertex without losing information.
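
A sketch of how \(\lambda \) can be chosen so that the encoding distance fits under any positive bound (continuing the toy encoder above; `encode_at_most` is our name):

```python
from fractions import Fraction

def value_of(code: str, lam: int) -> Fraction:
    """The real with binary representation 0.0^lam 1 code."""
    bits = "0" * lam + "1" + code
    return sum(Fraction(int(b), 2 ** (k + 1)) for k, b in enumerate(bits))

def encode_at_most(code: str, bound: Fraction) -> Fraction:
    """Smallest-lambda distance d < bound encoding `code` (bound > 0);
    by Lemma 1, every further halving of d re-encodes the same string,
    which is exactly the scalability property."""
    lam = 0
    while value_of(code, lam) >= bound:
        lam += 1
    return value_of(code, lam)
```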

4 The Algorithm

In [10], the memory of a robot is encoded in the distance from its closest polygon vertex. Obviously, the robot needs rigid movements to accurately position itself at a point whose distance from the particular vertex correctly encodes the memory. In the non-rigid setting, we need some additional places where we can encode our memory. In particular, apart from the distance from some particular vertex, we shall also encode the memory in the tangent of the angle that the robot makes with an edge or a diagonal at some vertex. In the remainder of the paper, whenever we say that the memory is encoded in some angle \(\alpha \), it is to be understood that the memory is encoded by the real number \(\tan (\alpha )\). Notice that since \(\tan (\alpha )\) decreases monotonically to 0 as \(\alpha < \frac{\pi }{2}\) tends to 0, we can use the scalability property of the encoding scheme to encode the memory in an angle as small as we want. The persistent bits or the internal states are used so that each time a robot wakes up, it knows ‘where’ its memory is encoded and which coordinate system the snapshots in the memory are expressed in. In each case, the robot also sets a particular polygon vertex that is visible to it as its virtual vertex. A summary of this is provided in Table 1.
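
The re-encoding of a distance d as an angle can be sketched as follows (our illustration of the scalability argument; `alpha_max` stands for whatever geometric side conditions, such as B2 below, impose on the angle):

```python
import math

def matching_angle(d: float, alpha_max: float) -> float:
    """Smallest k >= 0 with alpha = atan(d / 2**k) <= alpha_max; since
    d / 2**k encodes the same data as d (Lemma 1), tan(alpha) = d / 2**k
    encodes it too, with alpha as small as the geometry requires."""
    k = 0
    while math.atan(d / 2 ** k) > alpha_max:
        k += 1
    return math.atan(d / 2 ** k)
```
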
Table 1.

The virtual vertex and encoded memory of the robot, corresponding to its internal state.

For any robot \(r\) at a point \(x\) inside the polygon \(P\):

| State | Virtual vertex | Memory encoded in | Coordinate system |
| --- | --- | --- | --- |
| \(s_1\) | \(p_i =\) the closest visible vertex | \(d(x,p_i)\) | \((p_i,p_{i+1})\) |
| \(s_2\) | \(p_i =\) the closest visible vertex | \(d(x,p_i)\) | \((p_{i-1},p_i)\) |
| \(s_3\) | \(p_a =\) the nearer endpoint of the closest boundary segment, say \(\overline{p_ip_{i+1}}\), \(a \in \{i,i+1\}\) | \(\tan (\angle xp_ap_b)\), where \(p_b\) is the other endpoint of \(\overline{p_ip_{i+1}}\) | \((p_i,p_{i+1})\) |
| \(s_4\) | \(p_i =\) the closest visible vertex | \(\tan (\angle xp_ip_{i-1} - \frac{\pi }{2})\) | \((p_{i-1},p_i)\) |
| \(s_5\) | \(p_i =\) the closest visible vertex | \(\tan (\pi - \angle xp_ip_{i+1})\) | \((p_i,p_{i+1})\) |
| \(s_6\) | \(p_i =\) the closest visible vertex | \(\tan (\angle xp_iO)\), where \(\overrightarrow{p_iO}\) is the angle bisector of \(\angle p_{i-1}p_ip_{i+1}\) | \((p_{i-1},p_i)\) |
| \(s_7\) | \(p_i =\) the closest visible vertex | \(\tan (\angle xp_ip_j)\), where either \(x\) lies in the interior of the Voronoi edge \(PVor(p_i)\cap PVor(p_j)\) or \(\overrightarrow{p_ix}\) intersects \(PVor(p_i)\cap PVor(p_j)\) first | \((p_i,p_{j})\) or \((p_j,p_{i})\) |

Our map construction algorithm is similar to the one presented in [10]. The robot will keep exploring new vertices (but never touching them), and near each vertex, it will take a new snapshot and encode it, merged with the old snapshots. As it explores, it keeps track of the vertices that it has seen but not yet visited. Whenever it reaches a new connected component of the boundary, it explores it entirely in the counterclockwise direction (i.e., by moving from a vertex to its succeeding vertex). After exploring a connected component for the first time, it will take a second tour of it, in the same direction. After completely exploring a previously unexplored connected component, it will choose an unvisited vertex of a different component and move to it via a suitable path. The robot repeats this until there are no unvisited vertices recorded in its encoded memory. The implementation of this strategy in the non-rigid setting is based on four basic techniques. A brief overview of these techniques is presented in Sect. 4.1. From these, the main result of the paper, stated in Theorem 1, follows. We refer the readers to the full version [4] of the paper for further details.

Theorem 1

In \(\textsc {FState}\), a robot inside a polygon P with non-rigid movements can correctly construct and encode a map of the polygon in finite time.

4.1 Four Basic Techniques

Moving from One Virtual Vertex to Another in the Same Connected Component of the Boundary

Suppose that \(p_{i}\) is the virtual vertex of the robot r with internal state \(s_1\) (i.e., \(p_i\) is the vertex closest to r), and it has to approach the succeeding vertex \(p_{i+1}\). If r had rigid movements, it could have simply moved to a point suitably close to \(p_{i+1}\) in one go, without any interruption. But since r has non-rigid movements, it can be stopped multiple times during its journey. Now consider the situation shown in Fig. 2a. To move towards \(p_{i+1}\) via any path, the robot has to pass through the Voronoi region of \(p_j\). Hence, if r is stopped by the adversary while it is in the interior of \(PVor_{V(P)}(p_j)\), it will set \(p_j\) as its virtual vertex. To resolve this, the robot will change its state to \(s_3\) before moving. When its state is \(s_3\), to set the virtual vertex, it considers the closest boundary segment instead of the closest vertex. The endpoint of its closest boundary segment that is closer to it is set as the virtual vertex. In case of a tie, either of the endpoints can be chosen as the virtual vertex. The robot will move along a path as shown in Fig. 2b. Such a path can be defined by a tuple \((p_i,p_{i+1},\alpha )\), where the path consists of two linear segments \(\overline{p_{i}q}\) and \(\overline{qp_{i+1}}\) of equal length with \(\angle qp_ip_{i+1} = \angle qp_{i+1}p_i = \alpha \) and q lying on the perpendicular bisector of \(\overline{p_ip_{i+1}}\). We shall denote the path by \(\mathcal {P}(p_i,p_{i+1},\alpha )\). The path should be chosen in such a way that any point on it is closer to the boundary segment \(\overline{p_{i}p_{i+1}}\) than to any other point of \(\partial (P)\). In other words, \(\mathcal {P}(p_i,p_{i+1},\alpha )\) should be inside \(LVor_{E(P)}(\overline{p_ip_{i+1}})\).
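
For illustration, the apex q of \(\mathcal {P}(p_i,p_{i+1},\alpha )\) can be computed as follows (our own helper; picking the side of \(\overline{p_ip_{i+1}}\) on which q must lie, i.e., the interior of P, is left to the caller):

```python
import numpy as np

def tent_path(p_i, p_j, alpha):
    """The path P(p_i, p_j, alpha): two equal-length segments meeting at
    the apex q on the perpendicular bisector of p_i p_j, with base angles
    alpha at both endpoints (apex height = |p_i p_j|/2 * tan(alpha))."""
    p_i, p_j = np.asarray(p_i, float), np.asarray(p_j, float)
    base = p_j - p_i
    n = np.array([-base[1], base[0]]) / np.linalg.norm(base)  # CCW normal
    q = (p_i + p_j) / 2 + (np.linalg.norm(base) / 2) * np.tan(alpha) * n
    return [p_i, q, p_j]
```
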
Fig. 2.

(a) If a robot moves from \(p_i\) towards \(p_{i+1}\), it has to pass through the Voronoi region of \(p_j\). (b) The robot will move along the path \(\mathcal {P}(p_i,p_{i+1},\alpha )\) drawn in green. (Color figure online)

Now let us describe our strategy more formally. Suppose that a robot r is at a point x inside the polygon P, such that the following are true: (A1) \(r.state = s_1\), (A2) x is properly close to the vertex \(p_i\). Since \(r.state = s_1\), \(p_i\) is the virtual vertex of r, and its memory is encoded in the distance \(d(x,p_i)\) and expressed in the coordinate system defined by \((p_{i},p_{i+1})\). Since r is properly close to \(p_i\), if r moves around \(p_i\) along a circular arc in the counterclockwise direction (i.e., keeping its distance from \(p_i\) fixed), \(p_i\) will remain its virtual vertex and also, all of \(\overline{p_{i}p_{i+1}}\) will remain visible to it. So, r will move around \(p_i\) in the counterclockwise direction to a point \(x'\) such that the following conditions are satisfied: (B1) the data encoded by \(\alpha = \angle x'p_ip_{i+1}\) is the same as the data encoded by \(d(x',p_i) = d(x,p_i)\), both expressed in the coordinate system \((p_{i},p_{i+1})\), (B2) the path \(\mathcal {P}(p_i,p_{i+1},\alpha )\) is inside \(LVor_{E(P)}(\overline{p_ip_{i+1}})\). After reaching such a point \(x'\), r will change its state to \(s_3\). It will then follow the path \(\mathcal {P}(p_i,p_{i+1},\alpha )\), where \(\alpha = \angle x'p_ip_{i+1}\), to move towards \(p_{i+1}\). However, we have not yet specified how close r should get to \(p_{i+1}\). Our objective is to get close to \(p_{i+1}\), take a new snapshot and encode the new snapshot (merged with the older ones) in its distance from \(p_{i+1}\). We want these snapshots to be expressed in the coordinate system defined by \((p_{i+1},p_{i+2})\), where \(p_{i+2}\) is the vertex succeeding \(p_{i+1}\). But in order to do that, \(p_{i+2}\) should be visible to the robot. Notice that if some portion of \(\overline{p_{i+1}p_{i+2}} \setminus \{p_{i+1}\}\) is visible to r, then it will be able to see all points of \(\overline{p_{i+1}p_{i+2}}\) if it goes close enough to \(p_{i+1}\). However, if \(\overline{p_{i+1}p_{i+2}} \setminus \{p_{i+1}\}\) is completely invisible to r, the segment \(\overline{p_{i+1}p_{i+2}}\) will never be completely visible to it, no matter how close it gets to \(p_{i+1}\). In this section, we will only discuss the first case. The latter case is more complex and will be discussed in the next section.

So, consider the case where some portion of \(\overline{p_{i+1}p_{i+2}} \setminus \{p_{i+1}\}\) is visible to r. In this case, r will move to a point \(x''\) that is close enough to \(p_{i+1}\) so that the following conditions are satisfied: (C1) \(x''\) is properly close to \(p_{i+1}\), (C2) \(d(x'',p_{i+1})\) encodes the old snapshots merged with its current view (newly discovered vertices), all expressed in the coordinate system defined by \((p_{i+1},p_{i+2})\). The robot will first move close enough to \(p_{i+1}\), say to \(x'''\), so that the first condition is satisfied (see Fig. 3), i.e., \(x'''\) is properly close to \(p_{i+1}\). Then the robot decides to move further towards \(p_{i+1}\), to a suitable point \(x''\), in order to fulfill the second condition. There are two ways it may fail to achieve this. First, if \(d(x'',x''') > \delta \), the adversary can stop it at some point \(x''''\) in between. However, the old snapshots are still available, since they are encoded in \(\angle x''''p_{i+1}p_i = \angle x'''p_{i+1}p_i\). So, r can identify that it has failed to reach its destination. It will then recompute the destination and move towards it. Secondly, even if it reaches \(x''\), a new vertex may be discovered which is not present in the data encoded in \(d(p_{i+1}, x'')\). Therefore, r will again recompute a destination so that the newly discovered vertices are encoded (along with the old data). From the existence of \(\delta > 0\) and the fact that the polygon has finitely many vertices, it follows that r can eventually reach a point \(x''\) where it finds that \(d(x'',p_{i+1})\) encodes precisely the data encoded by \(\angle x''p_{i+1}p_i\), merged with the new vertices of the polygon that are visible from \(x''\). Observe that the visibility of both \(\overline{p_{i+1}p_{i}}\) and \(\overline{p_{i+1}p_{i+2}}\) is crucial at every point during this process. This is because the robot has to transform the data encoded in \(\angle x''p_{i+1}p_i\) from the coordinate system \((p_{i},p_{i+1})\) to \((p_{i+1},p_{i+2})\). When all three of \(p_i,p_{i+1},p_{i+2}\) are visible, the robot knows their exact positions and hence, it can perform this conversion, which is computable by a rational function, on (the implicit form of) the old snapshots. When the conditions C1, C2 are achieved, r will change its state to \(s_1\). Clearly, we are back to the situation where A1, A2 hold (with \(p_i\) replaced by \(p_{i+1}\)), and hence r can now move towards \(p_{i+2}\) in the same manner.
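
The convergence argument can be phrased as a toy retry loop (a 1-D sketch under simplifying assumptions of ours: `target_for` returns, for the current distance to \(p_{i+1}\), the recomputed destination distance satisfying C1 and C2, it is always smaller than the current distance, and it changes value only finitely often as the view grows):

```python
import random

def settle(dist, target_for, delta, rng=random.Random(1)):
    """Retry loop: move toward the recomputed destination; a cut move
    still covers at least delta, and the view changes only finitely
    often, so the loop terminates at a distance encoding the view."""
    while dist != target_for(dist):
        goal = target_for(dist)                # recompute destination (C1, C2)
        step = dist - goal                     # the robot only moves inward
        if step <= delta:
            dist = goal                        # short moves always complete
        else:
            dist -= rng.uniform(delta, step)   # adversary cut after >= delta
    return dist
```
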
Fig. 3.

(a) The shaded circular sector of radius \(d = d(p_{i+1},y)\) intersects no vertex other than \(p_{i+1}\). Any point on \(\overrightarrow{p_{i+1}y}\) less than \(\frac{d}{2}\) distance away from \(p_{i+1}\) satisfies the first condition of proper closeness to \(p_{i+1}\). (b) Any point on the interior of \(\overline{p_{i+1}y}\) satisfies the second condition of proper closeness to \(p_{i+1}\).

Discovering the Succeeding Vertex and Encoding a New Snapshot

Now consider the case where \(\overline{p_{i+1}p_{i+2}} \setminus \{p_{i+1}\}\) is completely invisible to r (see Fig. 4). This is possible only if \(\angle p_ip_{i+1}p_{i+2} > \pi \). Then no matter how close r gets to \(p_{i+1}\), \(\overline{p_{i+1}p_{i+2}} \setminus \{p_{i+1}\}\) will remain completely invisible to it. In this case, r will move to a point \(x''\) that is close enough to \(p_{i+1}\), such that the following conditions are satisfied. (D1) \(d(x'',p_{i+1}) = d\) should encode all the old data encoded by \(\angle x''p_{i+1}p_i\), both expressed in the coordinate system \((p_{i},p_{i+1})\). (D2) Let S be the semicircular disc of radius 2d, centered at \(p_{i+1}\) and having its diameter along the line \(\overleftrightarrow {p_{i}p_{i+1}}\). Then S should not intersect any portion of \(\partial (P)\) except \(\overline{p_{i}p_{i+1}}\). (D3) Every point on \(\overline{p_{i}p_{i+1}}\) should be visible from every point on \(arc(u,\pi ,p_{i+1})\), where \(u \in \overline{p_{i}p_{i+1}}\) with \(d(u,p_{i+1}) = d\). When these conditions are satisfied, the robot will change its state to \(s_2\). Clearly, \(p_{i+1}\) is its virtual vertex. Let y be the point on the line through \(p_{i+1}\) perpendicular to \(\overline{p_{i}p_{i+1}}\), with \(d(y,p_{i+1}) = d(x'',p_{i+1}) = d\). The robot will then move to the point y along \(arc(x'',y,p_{i+1})\). It follows from condition D2 that as r traverses this arc (where it can be stopped several times by the adversary), \(p_{i+1}\) will remain its virtual vertex. Upon reaching the point y, \(\overline{p_{i+1}p_{i+2}}\setminus \{p_{i+1}\}\) may still be completely invisible. In that case, r will have to move further along a circular arc and place itself on the extension of the segment \(\overline{p_{i}p_{i+1}}\). But if r revolves with the same radius, its virtual vertex may change. Therefore, it has to first reduce its distance from \(p_{i+1}\). But recall that its distance from \(p_{i+1}\) encodes its memory, and hence the data would be lost if this distance were changed. Therefore, before changing its distance from \(p_{i+1}\), it will encode the data ‘somewhere else’, such that it is preserved while it moves towards \(p_{i+1}\). Notice that although moving around \(p_{i+1}\) with the same radius can change its virtual vertex, it can still move by a small enough angle without changing its virtual vertex. From its view from y, it can compute a point \(y''\) such that the following conditions are satisfied: (E1) \(d(p_{i+1}, y'') = d(p_{i+1}, y) = d\), (E2) \(D(y'',d) \cap \partial (P) = \{p_{i+1}\}\), (E3) \(\angle y''p_{i+1}y < \frac{\pi }{2}\) encodes the same data as d, both expressed in the coordinate system \((p_{i},p_{i+1})\). Now r will first move to \(y''\) along a circular arc and then change its state to \(s_4\). Then it will reduce its distance from \(p_{i+1}\) to \(d'\), so that \(d'\) satisfies the following conditions. (F1) \(d'\) encodes the same data as \(\angle y''p_{i+1}y\), both expressed in the coordinate system defined by \((p_{i},p_{i+1})\). (F2) Let z be the point on the extension of the segment \(\overline{p_{i}p_{i+1}}\) with \(d(z,p_{i+1})=d'\). Then \(D(z,d') \cap \partial (P) = \{p_{i+1}\}\). When these conditions are satisfied, it will change its state to \(s_2\). Now r will move to z by moving around \(p_{i+1}\) in the counterclockwise direction, maintaining the distance \(d'\) from it. Upon reaching z, it can see at least some portion of \(\overline{p_{i+1}p_{i+2}} \setminus \{p_{i+1}\}\).
Suppose that it still cannot see \(p_{i+2}\). Since it can see some portion of \(\overline{p_{i+1}p_{i+2}} \setminus \{p_{i+1}\}\), it can compute the point \(z'\) on the extension of the segment \(\overline{p_{i+2}p_{i+1}}\) with \(d(z',p_{i+1})=d'\). Now r will move around \(p_{i+1}\) in the clockwise direction towards \(z'\), but without touching it (say, by choosing the middle of \(arc(z',z,p_{i+1})\) as its destination, and so on, as sketched below). Eventually, it will be able to see \(p_{i+2}\). In fact, it can then see both \(\overline{p_{i+1}p_{i+2}}\) and \(\overline{p_{i}p_{i+1}}\) entirely. Now r has to encode a new snapshot (merged with the old ones) in its distance from \(p_{i+1}\). Before that, it will encode its memory in the angle that it makes with the extension of \(\overline{p_{i+2}p_{i+1}}\) at \(p_{i+1}\) by revolving further towards \(z'\), and then it will change its state to \(s_5\). Then it will move towards \(p_{i+1}\) so that conditions C1, C2 are satisfied. When they are achieved, r will change its state to \(s_1\).
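
The sweep towards \(z'\) can be sketched as repeated halving of the remaining arc (our illustration; `visible_at` is a hypothetical predicate telling whether \(p_{i+2}\) is visible from a given angular position around \(p_{i+1}\); adversarial cuts on each arc move are harmless here, since the robot simply recomputes the midpoint):

```python
def sweep_until_visible(theta, theta_zp, visible_at):
    """Clockwise sweep from z (angle theta) toward z' (angle theta_zp):
    target the midpoint of the remaining arc each time. Since p_{i+2}
    becomes visible strictly before z', finitely many halvings suffice,
    and the robot never reaches z' itself."""
    while not visible_at(theta):
        theta = (theta + theta_zp) / 2   # middle of arc(z', current position)
    return theta
```
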
Fig. 4.

The robot moving around \(p_{i+1}\) to discover the succeeding vertex and encode a new snapshot. The trail of the robot is shown in blue. (Color figure online)

Taking a Second Tour of a Connected Component of the Boundary

From the two techniques discussed, it is clear how a robot can ‘visit’ all the vertices of a previously unexplored connected component C of \(\partial (P)\). Also, whenever r encodes a new snapshot, it marks the position of its current virtual vertex with a ‘visited once’ flag. Upon completing its first tour of C, it will start a second tour of C in the same direction. In the second tour, the points from where the snapshots are taken should constitute an ‘approximation’ of C, say \(\overline{C}\), such that the closed polygonal curve \(\overline{C}\) (1) does not self-intersect, (2) does not intersect \(\partial (P)\), and (3) does not intersect any other previous approximation. This will ensure that eventually all polygon vertices are discovered (see [4]). Suppose that C is composed of m vertices \(p_1, \ldots , p_m\). Assume that r has started exploring C from (close to) \(p_1\). As described earlier, it will sequentially visit all the vertices and eventually arrive at a point close to \(p_m\), from where \(p_1\) is visible. It can clearly identify \(p_1\) as a previously visited vertex and will decide to start the second tour. Now r clearly has a full picture of C. So it can compute a distance d implicitly and include it in its memory, so that d has the following property. Let \(\tilde{C}(d) = \{{p_1'},\ldots ,{p_m'}\}\) denote the approximation of \({C} = \{{p_1},\ldots ,{p_m}\}\) such that each \(\overline{{p_i'}{p_{i+1}'}}\) is parallel to \(\overline{p_ip_{i+1}}\) (\(p_{m+1}\) is to be understood as \(p_1\)) and the separation between them is d (see Fig. 5 and the sketch below). Then d should be small enough that the approximation \(\tilde{C}(d)\) satisfies all three requirements. The points from where the robot will take snapshots during its second tour will constitute an approximation \(\overline{C} = \{p^1_1, p^2_1,\ldots p^1_m, p^2_m\}\) consisting of 2m points, with \(\overline{C}\) lying in the region between C and \(\tilde{C}(d)\).
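
One concrete way to realize the parallel approximation \(\tilde{C}(d)\) (an illustration using shapely's mitre-joined buffer, which keeps offset edges parallel to the originals; the paper itself only needs such a d to exist):

```python
from shapely.geometry import Polygon

def parallel_approx(component: Polygon, d: float) -> Polygon:
    """C~(d) for the external boundary: every edge shifted inward by d
    (join_style=2 is the mitre join, so each offset edge stays parallel
    to its original). d must be small enough that the result is still
    one simple polygon, which is exactly requirement (1) above; a hole
    component would be buffered outward (+d) instead."""
    return component.buffer(-d, join_style=2)
```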

We shall now discuss the procedure in detail. The robot will approach \(p_1\) (with state \(s_3\)) in the same manner as described previously, but with the extra requirement that the path it follows should lie in the region between C and \(\tilde{C}({\frac{d}{2}})\). Note that although d is computed in implicit form, r can obtain an explicit approximation of d that is smaller than the actual value. First consider the case where \(\angle p_mp_1p_2\) is not reflex. As in the first tour, r goes to a point x so that the conditions C1 and C2 are satisfied (with \(p_{i} = p_m, p_{i+1} = p_1, p_{i+2} = p_2\)). We shall refer to this in short by simply saying that ‘r takes a snapshot at x’. The extra requirement in this case is that \(d(x,p_1) < \frac{d}{2}\). After this, r will change its state to \(s_1\). Now r will move around \(p_1\) to reach a point \(x'\) so that condition B2 (with \(p_i = p_1, p_{i+1} = p_2\)) is satisfied, and moreover \(\angle x'p_1p_2\) encodes the view from \(x'\) merged with the older snapshots (encoded by \(d(x',p_1)\)), expressed in the coordinate system \((p_1,p_2)\). Using similar phrasing, we shall refer to this by saying ‘r takes a snapshot at \(x'\)’. Let us denote the points x and \(x'\) by \(p^1_1\) and \(p^2_1\). Note that our constructions ensure that the line segment \(\overline{p^1_1p^2_1}\) lies in the region between C and \(\tilde{C}({d})\). Now consider the case where \(\angle p_mp_1p_2\) is reflex. The robot will go to a point x that is close enough to \(p_1\), such that the following conditions are satisfied: (G1) \(d(x,p_1)\) encodes the old snapshots (encoded in \(\angle xp_1p_m\)), expressed in the coordinate system defined by \((p_m,p_1)\), (G2) \(d(x,p_1) < \frac{d}{2}\). After reaching such a point x, r will change its state to \(s_2\). Let \(\overrightarrow{p_1A}\) and \(\overrightarrow{p_1B}\) be the extensions of the segments \(\overline{p_2p_1}\) and \(\overline{p_mp_1}\) respectively. Let \(\overrightarrow{p_1O}\) be the angle bisector of \(\angle Ap_1B\). Now r can move around \(p_1\) to place itself at a point \(p^1_1\) between the rays \(\overrightarrow{p_1A}\) and \(\overrightarrow{p_1O}\) such that the angle \(\angle p^1_1p_1O\) encodes the view from \(p^1_1\), merged with the older snapshots, all expressed in the coordinate system defined by \((p_m,p_1)\). In other words, r takes a snapshot at \(p^1_1\). Then r will change its state to \(s_6\), move towards \(p_1\) to encode the data in its distance from \(p_1\), again change its state to \(s_2\), and move around \(p_1\) to take a snapshot at a point \(p^2_1\) between the rays \(\overrightarrow{p_1O}\) and \(\overrightarrow{p_1B}\), encoding the snapshot (merged with the old ones) in the angle \(\angle p^2_1p_1O\), expressed in the coordinate system defined by \((p_m,p_1)\). Then r will again change its state to \(s_6\) and move towards \(p_1\) to encode the data in its distance from \(p_1\), this time expressed in the coordinate system \((p_1,p_2)\). After this, it will change its state to \(s_1\). Continuing in this manner, the robot will revisit all the vertices of the component, taking snapshots at \(p^1_i\) and \(p^2_i\) near each vertex \(p_i\). The polygonal chain \(\overline{C}\) clearly satisfies all three desired properties.
Fig. 5.

The robot taking a second tour. The trail of the robot is shown in blue. The approximations \(\tilde{C}(d)\) and \(\tilde{C}({\frac{d}{2}})\) are shown in pink and grey dotted lines respectively. (Color figure online)

Moving from One Connected Component to Another

A robot will move from a virtual vertex \(p_i\) to a vertex \(p_j\) belonging to a different connected component of \(\partial (P)\) only if \(\overline{p_ip_j} \subset PVor_P(p_i) \cup PVor_P(p_j)\). The robot r with state \(s_3\) will approach \(p_i\) and encode its memory in its distance from \(p_i\), expressed in the coordinate system defined by \((p_{i-1},p_{i})\). The robot will then change its state to \(s_2\). Note that \(p_j\) may not even be visible from its current position if \(\angle p_{i-1}p_ip_{i+1}\) is reflex. If \(\angle p_{i-1}p_ip_{i+1}\) is reflex and \(p_j\) lies in the open half-plane delimited by \(\overleftrightarrow {p_{i-1}p_i}\) containing \(p_{i+1}\), it will have to re-encode its memory in the coordinate system \((p_{i},p_{i+1})\) by the previously discussed techniques. It will then change its state to \(s_1\). From its memory, it knows that the plan is to move to \(p_j\). It will then move around \(p_i\) to a point x so that the following conditions are satisfied. (H1) The ray \(\overrightarrow{p_ix}\) intersects the interior of the Voronoi edge \(Vor_{S(x)}(p_i) \cap Vor_{S(x)}(p_j)\), where S(x) denotes the set of polygon vertices visible from x. Suppose that the ray intersects this Voronoi edge at the point A. (H2) The angle \(\alpha = \angle xp_ip_j\) encodes its memory. All coordinates of the snapshots are expressed in the coordinate system defined by \((p_i,p_j)\). The encoding will also contain a rational approximation of \(\frac{1}{2}(p_i - p_j)\), expressed in the local coordinate system of r. The robot will then change its state to \(s_7\) and move along \(\mathcal {P}(p_i,p_j,\alpha )\) towards \(p_j\), i.e., it will first move to A and then to a point properly close to \(p_j\). Consider the situation when r stops at a point z on the path \(\mathcal {P}({p_i,p_j,\alpha })\). When r was at x, it verified H1 by checking from its snapshot that the disc D(A, d) contains no polygon vertex other than \(p_i, p_j\), where \(d = d(A,p_i) = d(A,p_j)\). It follows that \(\mathcal {P}({p_i,p_j,\alpha }) \subset PVor_P(p_i) \cup PVor_P(p_j)\). Hence, at z, its closest visible vertex, and hence its virtual vertex, is either \(p_i\) or \(p_j\). It computes the intersections between the ray from its virtual vertex passing through itself and the perpendicular bisectors of the segments joining its virtual vertex to the other visible vertices, and then checks if each intersection point lies on the corresponding Voronoi edge. It will find that the ray intersects the Voronoi edge defined by \(p_i\) and \(p_j\) first. However, r does not immediately know whether it is moving from \(p_i\) to \(p_j\), or from \(p_j\) to \(p_i\). It knows that its memory is encoded in the angle it makes with \(\overline{p_ip_j}\) at its virtual vertex \(\in \{p_i,p_j\}\), but it does not know whether the memory is encoded in the coordinate system \((p_i,p_j)\) or \((p_j,p_i)\). However, recall that the memory contains a rational approximation of \(\frac{1}{2}(p_i - p_j)\), call it w, expressed in its local coordinate system. Now r computes \(w + \frac{1}{2}(p_i + p_j)\), which gives an approximation of \(p_i\), from which r determines that it is moving away from \(p_i\), and also that its encoded memory is expressed in \((p_i,p_j)\).
So, eventually it will move to a point properly close to \(p_j\) so that its distance from \(p_j\) encodes its memory expressed in either \((p_j,p_{j+1})\) or \((p_{j-1},p_j)\), and then change its state to \(s_1\) or \(s_2\) accordingly.
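
The geometric primitive behind H1, and behind the checks performed at z, is the intersection of a ray with a perpendicular bisector; a sketch (our helper):

```python
import numpy as np

def ray_bisector_param(p_i, x, p_j):
    """t >= 0 such that p_i + t * (x - p_i) lies on the perpendicular
    bisector of p_i p_j, or None if the ray misses it. Comparing these
    parameters over all visible vertices p_j tells the robot which
    Voronoi edge the ray from its virtual vertex crosses first."""
    p_i, x, p_j = (np.asarray(p, float) for p in (p_i, x, p_j))
    u = x - p_i                      # ray direction
    n = p_j - p_i                    # normal of the bisector
    m = (p_i + p_j) / 2              # a point on the bisector
    denom = u @ n
    if abs(denom) < 1e-12:
        return None                  # ray parallel to the bisector
    t = ((m - p_i) @ n) / denom
    return t if t >= 0 else None
```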

5 Conclusion

In this work, we have shown how a finite-state robot with non-rigid movements can construct the map of a polygon by a positional encoding strategy. The techniques developed here give a general movement strategy for finite-state robots with non-rigid movements to move about in the polygon without losing their encoded memory. The map construction algorithm can be used as a subroutine in distributed algorithms for mobile robot systems under this model, wherever knowledge of the polygon is required. For instance, consider the Gathering problem, where a set of autonomous, anonymous, asynchronous, finite-state mobile robots with no agreement on coordinate systems and no communication capabilities have to meet at some point in the polygon. Assume that the polygon is asymmetric. Then each robot will first construct and encode the map of the polygon. Since the polygon is asymmetric, the robots can deterministically pick a polygon vertex as their meeting point. Then, using our techniques, the robots can move to that vertex. However, when the polygon is symmetric, Gathering appears to be challenging even for robots with unlimited memory. For symmetric polygons, we can consider the relaxed version of Gathering, called Meeting, where any two of the robots have to become mutually aware by seeing each other in their LOOK phases. Using our techniques, a patrolling strategy similar to that of [10] can be adapted to our setting to solve Meeting.

It would be very interesting to investigate whether map construction or Meeting can be solved by fully oblivious robots with non-rigid movements. Another direction would be to study the problems for oblivious robots with limited visibility. Also, our movement model allows the robots to make circular moves, as opposed to [10], where the robots can move only along a straight line. It would be interesting to see if the same result can be achieved without the ability to make circular moves.

Acknowledgements

The first three authors are supported by NBHM, DAE, Govt. of India, CSIR, Govt. of India and UGC, Govt. of India, respectively. We would like to thank the anonymous reviewers for their valuable comments which helped us improve the quality and presentation of the paper.

References

  1. Agathangelou, C., Georgiou, C., Mavronicolas, M.: A distributed algorithm for gathering many fat mobile robots in the plane. In: ACM Symposium on Principles of Distributed Computing, PODC 2013, Montreal, QC, Canada, 22–24 July 2013, pp. 250–259 (2013). https://doi.org/10.1145/2484239.2484266
  2. Blum, L., Cucker, F., Shub, M., Smale, S.: Complexity and Real Computation. Springer, Berlin (1998). https://doi.org/10.1007/978-1-4612-0701-6
  3. Bose, K., Adhikary, R., Kundu, M.K., Sau, B.: Arbitrary pattern formation on infinite grid by asynchronous oblivious robots. In: Das, G.K., Mandal, P.S., Mukhopadhyaya, K., Nakano, S. (eds.) WALCOM 2019. LNCS, vol. 11355, pp. 354–366. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-10564-8_28
  4. Bose, K., Adhikary, R., Kundu, M.K., Sau, B.: Positional encoding by robots with non-rigid movements. CoRR abs/1905.09786 (2019). http://arxiv.org/abs/1905.09786
  5. Cicerone, S., Di Stefano, G., Navarra, A.: Asynchronous arbitrary pattern formation: the effects of a rigorous approach. Distrib. Comput. 1–42 (2018). https://doi.org/10.1007/s00446-018-0325-7
  6. Cieliebak, M., Flocchini, P., Prencipe, G., Santoro, N.: Distributed computing by mobile robots: gathering. SIAM J. Comput. 41(4), 829–879 (2012). https://doi.org/10.1137/100796534
  7. Cohen, H.: A Course in Computational Algebraic Number Theory, vol. 138. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-662-02945-9
  8. Flocchini, P., Prencipe, G., Santoro, N., Widmayer, P.: Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theor. Comput. Sci. 407(1–3), 412–447 (2008). https://doi.org/10.1016/j.tcs.2008.07.026
  9. Flocchini, P., Santoro, N., Viglietta, G., Yamashita, M.: Rendezvous with constant memory. Theor. Comput. Sci. 621, 57–72 (2016). https://doi.org/10.1016/j.tcs.2016.01.025
  10. Di Luna, G.A., Flocchini, P., Santoro, N., Viglietta, G., Yamashita, M.: Meeting in a polygon by anonymous oblivious robots. In: 31st International Symposium on Distributed Computing, DISC 2017, Vienna, 16–20 October 2017, pp. 14:1–14:15. https://doi.org/10.4230/LIPIcs.DISC.2017.14
  11. Pagli, L., Prencipe, G., Viglietta, G.: Getting close without touching: near-gathering for autonomous mobile robots. Distrib. Comput. 28(5), 333–349 (2015). https://doi.org/10.1007/s00446-015-0248-5
  12. Suzuki, I., Yamashita, M.: Distributed anonymous mobile robots: formation of geometric patterns. SIAM J. Comput. 28(4), 1347–1363 (1999). https://doi.org/10.1137/S009753979628292X
  13. Yamauchi, Y., Yamashita, M.: Randomized pattern formation algorithm for asynchronous oblivious mobile robots. In: Kuhn, F. (ed.) DISC 2014. LNCS, vol. 8784, pp. 137–151. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-45174-8_10

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mathematics, Jadavpur University, Kolkata, India
