Abstract
We take a little example tour through many of the methods and ideas we are going to study in this book. This is a quick walk through the methodology relevant to designing swarm robot systems. We model a robot controller as a finite state machine for a collective-decision-making problem. We immediately face the typical challenge of distinguishing between microscopic information that is available to an individual robot and macroscopic information that is only available to an external observer. We continue with a simple macroscopic model of collective decision-making and discuss whether it represents a self-organizing system.
Keywords
- Simple Macroscopic Model
- Finite State Machine
- Collective Decision-making
- Fracture Swarms
- Micro-macro Problem
“And what happens to that incredibly complex memory bank that remembers the whole system during these periods of ‘swarming’?”
—Stanisław Lem, The Invincible
“We take off into the cosmos, ready for anything: for solitude, for hardship, for exhaustion, death. […] A single world, our own, suffices us; but we can’t accept it for what it is.”
—Stanisław Lem, Solaris
3.1 Finite State Machines as Robot Controllers
A robot controller can be represented by a finite state machine with states associated to actions and transitions triggered, for example, by sensor input or timers. A state represents constant actuation for the time spent in that state. An example of a collision avoidance behavior modeled by a finite state machine is given in Fig. 3.1. The robot has a sensor to the left \(s_l\) and a sensor to the right \(s_r\). Thresholds \(\theta_l\) and \(\theta_r\) determine when an object is too close (bigger values of \(s\) indicate closer objects). The turns are executed for a defined time until a timer is triggered.
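As an illustration, the finite state machine of Fig. 3.1 can be sketched in a few lines of Python. The state names, default thresholds, and the turn duration below are hypothetical choices for this sketch, not values taken from the figure:

```python
from enum import Enum, auto

class State(Enum):
    FORWARD = auto()
    TURN_LEFT = auto()   # turn away from an obstacle on the right
    TURN_RIGHT = auto()  # turn away from an obstacle on the left

TURN_DURATION = 10  # turn timer length in control steps (hypothetical value)

def step(state, timer, s_l, s_r, theta_l=0.5, theta_r=0.5):
    """One control step: return the next state and the new timer value.

    s_l, s_r: left/right proximity readings (larger = closer object).
    theta_l, theta_r: closeness thresholds.
    """
    if state == State.FORWARD:
        if s_l > theta_l:          # obstacle too close on the left
            return State.TURN_RIGHT, TURN_DURATION
        if s_r > theta_r:          # obstacle too close on the right
            return State.TURN_LEFT, TURN_DURATION
        return State.FORWARD, 0
    # in a turning state: keep turning until the timer runs out
    if timer > 1:
        return state, timer - 1
    return State.FORWARD, 0
```

Each call to `step` corresponds to one tick of the control loop; the constant actuation of a state is whatever the motor layer does while the state is unchanged.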
A minimal example is shown in Fig. 3.2. The condition of the transition T can depend on a certain sensor value and a threshold, on a timer, on a received message, etc.
3.2 State Transitions Based on Robot–Robot Interactions
In the following we focus on state transitions that depend on robot–robot interactions. In particular we focus on situations when the states of neighboring robots determine each robot’s state transitions and hence its behavior. Say there are only two possible states: A and B. Say the swarm size is N. Then we can define a variable a that counts robots currently in state A and a variable b that counts robots in state B. Both a and b reflect global knowledge about the swarm that is available to an observer but not to the robots themselves. An obvious condition is N = a + b. Furthermore, we can calculate the fraction of robots that are in state A by \(\alpha =\frac {a}{N}\).
We assume that an agent is able to determine the internal states of all its neighbors. This could be implemented by explicit messaging or, for example, each robot could switch on a colored LED that encodes its internal state and that can be detected by vision. Furthermore, we assume that all robots keep moving around, although we do not specify a particular purpose for doing so here. If the robots are in motion, then their neighborhoods are dynamic, too.
Based on the unit disc model we assume a sensor range r. For a robot \(R_0\) all robots within distance r are within its neighborhood \(\mathcal {N}\). For the situation shown in Fig. 3.3 we have \(\mathcal {N}=\{R_1,R_2,R_3,R_4\}\). We assume that \(R_0\) knows the states of all neighboring robots \(\mathcal {N}\). Analogously to the variables defined above, we can introduce variables \(\hat {a}\) and \(\hat {b}\) that count how many robots among \(R_0\) and its neighbors are in state A and B, respectively. We have \(|\mathcal {N}|+1=\hat {a}+\hat {b}\) and we can give the fraction of robots that are in state A as \(\hat {\alpha }=\frac {\hat {a}}{|\mathcal {N}|+1}\).
In the following we assume that a robot’s state transitions between A and B depend exclusively on \(\hat {\alpha }\). For example, we could say if the robot is currently in state A and it measures \(\hat {\alpha }<0.5\) (i.e., there are more neighbors in state B than state A) then it switches to state B.
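A minimal sketch of how \(\hat {\alpha }\) could be computed under the unit disc model. The data layout (position/state tuples) and the function name are assumptions made for illustration:

```python
import math

def local_fraction(robots, i, r):
    """Local sample of alpha by robot i: fraction of robots in state 'A'
    among robot i itself and all robots within sensor range r (unit disc).

    robots: list of (x, y, state) tuples; state is 'A' or 'B'.
    """
    x_i, y_i, _ = robots[i]
    group = [robots[i]]  # the considered robot counts itself, too
    for j, (x, y, s) in enumerate(robots):
        if j != i and math.hypot(x - x_i, y - y_i) <= r:
            group.append((x, y, s))
    a_hat = sum(1 for _, _, s in group if s == 'A')
    return a_hat / len(group)
```

For example, with two neighbors in range, the returned value can only be one of 0, 1∕3, 2∕3, or 1, which is exactly the coarseness of the local measurement discussed next.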
3.3 Early Micro-Macro Problems
We can interpret the swarm fraction \(\hat {\alpha }\) as a measurement by robot \(R_0\) of the actual current global situation, which is given by swarm fraction α. This is also called “local sampling” and will be discussed in detail later (see Sect. 5.2). Generally we have \(\alpha \ne \hat {\alpha }\). Under which conditions could we hope for \(\alpha \approx \hat {\alpha }\)? On average, and for a so-called well-mixed system (i.e., a system without a bias), we can assume \(\alpha \approx \hat {\alpha }\). However, the well-mixed assumption often does not hold, and the sampling error (i.e., variance in the robot’s measurements) can have systematic effects and hence introduce a bias as well. This complex of problems is already a small taste of the micro-macro problem that will be discussed later in detail. The main challenge of swarm robotics is to find connections between the microscopic level (here, local measurements of \(\hat {\alpha }\)) and the macroscopic level (here, the actual global situation α). We also speak of establishing a micro-macro link.
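The character of local sampling can be illustrated with a small Monte Carlo experiment. Under the well-mixed assumption each sampled state is an independent draw, so \(\hat {\alpha }\) is unbiased, yet any single sample is coarse. The neighborhood size, sample count, and seed below are arbitrary choices for this sketch:

```python
import random

def sample_alpha_hat(alpha, neighborhood_size, rng):
    """One local sample: the robot's own state plus its neighbors' states,
    each drawn independently as 'A' with probability alpha (well-mixed)."""
    states = [rng.random() < alpha for _ in range(neighborhood_size + 1)]
    return sum(states) / (neighborhood_size + 1)

rng = random.Random(42)
alpha = 0.6
samples = [sample_alpha_hat(alpha, 4, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
# The mean is close to alpha (unbiased), but each individual sample can
# only take one of the six values 0, 0.2, 0.4, 0.6, 0.8, 1.0.
```

In a swarm that is not well-mixed (e.g., spatially clustered states), the same experiment would show a systematic deviation of the mean from α.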
3.4 Minimal Example: Collective Decision-Making
Next, we investigate a minimal example of collective decision-making (more details later, see Chap. 6). The task in collective decision-making is typically to find a consensus, that is, 100% of the robots in the swarm agree on a decision which could, for example, be to switch to the same internal state. Based on the above defined result \(\hat {\alpha }\) of locally sampling the neighborhood and a threshold of 0.5 we can define a finite state machine for this little collective decision-making scenario (see Fig. 3.4).
These transition rules define what we call a majority rule because this approach tries to reinforce the current majority. If there are more close-by robots in state A, then the considered robot switches to A (otherwise it stays in B). If there are more close-by robots in state B, then the considered robot switches to B (otherwise it stays in A). Our hope is that on average the local measurement \(\hat {\alpha }\) gives a good approximation of the global state α (cf. micro-macro problem). As a consequence, the whole swarm should converge on a consensus of either α = 1 or α = 0.
3.5 Macroscopic Perspective
On the microscopic level the situation is relatively clear. The measurement of \(\hat {\alpha }\) is probabilistic but the state switching behavior for a given \(\hat {\alpha }\) is deterministic. If we want to determine the macroscopic effect of this microscopic behavior, then things get a bit more difficult. We have to look into the combinatorics of all possible neighborhoods. For simplicity we restrict ourselves to a small neighborhood of \(|\mathcal {N}|=2\) and we make use of the above defined true swarm fraction α.
The probability \(P_{B\rightarrow A}\) to switch from B to A is given by

\[ P_{B\rightarrow A}(\alpha) = (1-\alpha)\,\alpha^2, \qquad (3.1) \]

because the considered robot has to be in state B (probability 1 − α) and, assuming its neighborhood to be statistically independent, we multiply by the probability \(\alpha^2\) that both (\(|\mathcal {N}|=2\)) neighboring robots are in state A (otherwise the transition condition \(\hat {\alpha }>0.5\) is not satisfied). According to combinatorics there are \({3 \choose 1}=3\) ways of arranging two A and one B (BAA, ABA, and AAB), but only the first one results in a switch because only there the considered robot is in state B. A plot of Eq. (3.1) is shown in Fig. 3.5. For α > 2∕3 there are too few robots in state B that could potentially switch; that is why the probability decreases with increasing α. For α < 2∕3 there are too few robots in state A that could generate a local majority; that is why the probability decreases with decreasing α.
The probability to switch from A to B is fully symmetric and defined by

\[ P_{A\rightarrow B}(\alpha) = \alpha\,(1-\alpha)^2. \qquad (3.2) \]
In all other situations we observe no switch and we get

\[ P_{A\rightarrow A}(\alpha) = \alpha^3 + 2(1-\alpha)\,\alpha^2, \qquad (3.3) \]

where the term \(2(1-\alpha)\alpha^2\) accounts for the above-mentioned situations ABA and AAB, in which we have a majority of A but the considered robot is already in state A and hence does not switch. By symmetry we get

\[ P_{B\rightarrow B}(\alpha) = (1-\alpha)^3 + 2\alpha\,(1-\alpha)^2. \qquad (3.4) \]
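These four probabilities can be double-checked by exhaustively enumerating all \(2^3=8\) configurations of the considered robot and its two neighbors, weighting each configuration by its probability under statistical independence:

```python
from itertools import product

def transition_probs(alpha):
    """Accumulate the probability of each transition under the majority
    rule (threshold 0.5) by enumerating all 2^3 configurations of
    (considered robot, neighbor 1, neighbor 2)."""
    p = {'BA': 0.0, 'AB': 0.0, 'AA': 0.0, 'BB': 0.0}
    for config in product('AB', repeat=3):
        weight = 1.0
        for s in config:  # independent states: multiply the probabilities
            weight *= alpha if s == 'A' else 1 - alpha
        robot = config[0]
        alpha_hat = config.count('A') / 3  # local sample, robot included
        if robot == 'B' and alpha_hat > 0.5:
            p['BA'] += weight
        elif robot == 'A' and alpha_hat < 0.5:
            p['AB'] += weight
        elif robot == 'A':
            p['AA'] += weight
        else:
            p['BB'] += weight
    return p

a = 0.3
p = transition_probs(a)
assert abs(p['BA'] - (1 - a) * a**2) < 1e-12                      # Eq. (3.1)
assert abs(p['AB'] - a * (1 - a)**2) < 1e-12                      # Eq. (3.2)
assert abs(p['AA'] - (a**3 + 2 * (1 - a) * a**2)) < 1e-12         # Eq. (3.3)
assert abs(p['BB'] - ((1 - a)**3 + 2 * a * (1 - a)**2)) < 1e-12   # Eq. (3.4)
```

A useful sanity check: the four probabilities sum to 1, and \(P_{B\rightarrow A}+P_{B\rightarrow B}=1-\alpha\) while \(P_{A\rightarrow B}+P_{A\rightarrow A}=\alpha\), as they must.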
3.6 Expected Macroscopic Dynamics and Feedbacks
Now we would like to know the expected macroscopic dynamics of the system, that is, how the swarm fraction α develops over time. For that we need to introduce a representation of time. We define a time interval Δt that is long enough to observe state transitions but short enough not to observe too many of them. We define the expected change Δα of α by using the above defined state transition probabilities \(P_{B\rightarrow A}(\alpha)\) and \(P_{A\rightarrow B}(\alpha)\) and by weighting them according to the contribution of the state switches to Δα:

\[ \frac{\varDelta \alpha(\alpha)}{\varDelta t} = \frac{1}{N}\left(P_{B\rightarrow A}(\alpha) - P_{A\rightarrow B}(\alpha)\right). \qquad (3.5) \]
The first term represents transitions B → A, which contribute positively to α (i.e., generating more robots in state A). The second term represents transitions A → B, which contribute negatively to α (i.e., generating more robots in state B). Factor 1/N accounts for an assumed switching rate of one robot per time step. Equation (3.5) is plotted in Fig. 3.6.
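The sign structure of Eq. (3.5) can be checked numerically with a few lines; setting N = 1 below is an arbitrary choice for this sketch, as N only scales the curve and does not change its sign:

```python
def delta_alpha(alpha, N=1):
    """Expected change of alpha per time interval, Eq. (3.5)."""
    p_ba = (1 - alpha) * alpha**2   # Eq. (3.1): B -> A
    p_ab = alpha * (1 - alpha)**2   # Eq. (3.2): A -> B
    return (p_ba - p_ab) / N

# the sign structure of the positive feedback:
assert delta_alpha(0.3) < 0   # minority of A shrinks further
assert delta_alpha(0.7) > 0   # majority of A grows further
assert delta_alpha(0.5) == 0  # unstable equilibrium at alpha = 0.5
assert delta_alpha(0.0) == 0 and delta_alpha(1.0) == 0  # consensus fixed points
```

The three zeros at α = 0, α = 0.5, and α = 1 are the fixed points discussed next: the outer two are stable (consensus), the middle one is unstable.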
What is shown in Fig. 3.6 represents a feedback process. Whether the feedback is positive or negative is easily determined visually. The left half of the diagram represents minority situations in terms of state A because we have α < 0.5 there. In the left half we also have only negative values of Δα; that is, the minority of state A shrinks even further. Similarly, the right half of the diagram represents majority situations in terms of state A because α > 0.5. In the right half we have only positive values of Δα; that is, the majority of state A is reinforced even further. Hence, we have positive feedback.
Does this robot swarm qualify as a self-organizing system? It contains positive feedback. Does it also contain negative feedback? It is not obvious, but it does. All real-world systems have limited resources, so here, too, the positive feedback stops once the minority is used up and no robots remain that could switch to the majority (at α = 0 and α = 1). The system has initial fluctuations that determine whether we end up with 100% of the robots in state A or 100% in state B, and we have multiple interactions between many robots. The balance between exploitation and exploration is, however, biased towards exploitation. Once the swarm has converged on a consensus it will never leave it. This can be bad in certain situations, for example, in dynamic environments where the swarm should stay adaptive to new situations. Hence, this robot swarm is close to a self-organizing system but lacks exploration.
How could exploration be included? We could allow robots to spontaneously switch between states. Say in each time step they have a 5% chance of a spontaneous switch. The above given expected change Δα, Eq. (3.5), then changes to

\[ \frac{\varDelta \alpha(\alpha)}{\varDelta t} = \frac{1}{N}\left(P_{B\rightarrow A}(\alpha) - P_{A\rightarrow B}(\alpha)\right) + 0.05(1-\alpha) - 0.05\alpha. \qquad (3.6) \]
While α = 0 and α = 1 were fixed points before, they no longer are. For α = 0 we now have \(\frac {\varDelta \alpha (\alpha )}{\varDelta t} = 0.05\) and for α = 1 we have \(\frac {\varDelta \alpha (\alpha )}{\varDelta t} = -0.05\). Hence, we have regions of negative feedback at the two boundaries, for about α < 0.052 and α > 0.948. As a consequence there are always a few robots in the opposite state, which can serve as explorers to check whether the other option has improved in utility. With that, we have a self-organizing system.
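The boundary fixed points can be located numerically, for example by bisection on Eq. (3.6); taking N = 1 below is a simplifying assumption of this sketch:

```python
def delta_alpha_noisy(alpha, p_spont=0.05, N=1):
    """Expected change of alpha with spontaneous switching, Eq. (3.6)."""
    p_ba = (1 - alpha) * alpha**2   # Eq. (3.1)
    p_ab = alpha * (1 - alpha)**2   # Eq. (3.2)
    return (p_ba - p_ab) / N + p_spont * (1 - alpha) - p_spont * alpha

# bisection for the fixed point near alpha = 0: the expected change is
# positive at 0 (spontaneous switches create A-robots) and negative
# further right (the majority rule eliminates the A-minority)
lo, hi = 0.0, 0.4
for _ in range(60):
    mid = (lo + hi) / 2
    if delta_alpha_noisy(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 3))  # 0.053
```

By the symmetry of Eq. (3.6), the second fixed point sits at one minus this value, matching the region boundaries quoted above.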
3.7 Further Reading
This chapter is meant to give you a small taste of what we are going to discuss in this book. You can simply keep reading, or, if you want many more details about modeling collective-decision-making systems, you can continue with Valentini’s book on the subject [392]. Vigelius et al. [402] study micro-macro models of collective decision-making. Couzin et al. [76] explain how animals find good decisions. Biancalani et al. [41] describe a very simple decision-making model that, however, most likely does not scale. Reina et al. [324] give a hint of what a software engineering approach to collective decision-making in swarm robotics can look like.
3.8 Tasks
3.8.1 Task: Plot the Macroscopic Dynamic System Behavior
Plot the expected change \(\frac {\varDelta \alpha (\alpha )}{\varDelta t}\) with negative feedback as given in Eq. (3.6). How does it change when you choose different probabilities for spontaneous switching?
3.8.2 Task: Simulate Collective Decision-Making
Write a little simulation of non-embodied agents that move randomly on a torus and switch states according to the rules we have defined in this chapter. Monitor the current global state α and plot its trajectory, that is, how it changes over time. Again, check different probabilities for spontaneous switching. Does the swarm switch from a large majority in favor of option A to one in favor of option B and vice versa? How long does that take?
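As a starting point, the following sketch implements such a simulation with point agents on the unit torus. All parameter values (swarm size, step size, sensor range, spontaneous-switch probability) are arbitrary choices; extend it with plotting to complete the task:

```python
import math
import random

def torus_dist(x1, y1, x2, y2):
    """Shortest distance between two points on the unit torus."""
    dx = min(abs(x1 - x2), 1 - abs(x1 - x2))
    dy = min(abs(y1 - y2), 1 - abs(y1 - y2))
    return math.hypot(dx, dy)

def simulate(n=50, r=0.1, p_spont=0.05, steps=200, seed=1):
    """Random walk on the unit torus plus majority rule with spontaneous
    switching. Returns the trajectory of the global fraction alpha."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    state = [rng.choice('AB') for _ in range(n)]
    trajectory = []
    for _ in range(steps):
        # random walk with wrap-around (torus)
        pos = [((x + rng.uniform(-0.02, 0.02)) % 1.0,
                (y + rng.uniform(-0.02, 0.02)) % 1.0) for x, y in pos]
        new_state = []
        for i, (x_i, y_i) in enumerate(pos):
            # local sample: own state plus all neighbors within range r
            group = [state[j] for j, (x, y) in enumerate(pos)
                     if torus_dist(x, y, x_i, y_i) <= r]
            alpha_hat = group.count('A') / len(group)
            s = state[i]
            if s == 'A' and alpha_hat < 0.5:
                s = 'B'
            elif s == 'B' and alpha_hat > 0.5:
                s = 'A'
            if rng.random() < p_spont:  # spontaneous switch (exploration)
                s = 'B' if s == 'A' else 'A'
            new_state.append(s)
        state = new_state
        trajectory.append(state.count('A') / n)
    return trajectory
```

Applying the spontaneous switch after the majority rule in each step is one possible interpretation of the update order; other orderings are equally defensible.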
References
Biancalani, T., Dyson, L., & McKane, A. J. (2014). Noise-induced bistable states and their mean switching time in foraging colonies. Physical Review Letters, 112, 038101. http://link.aps.org/doi/10.1103/PhysRevLett.112.038101
Couzin, I. D., Ioannou, C. C., Demirel, G., Gross, T., Torney, C. J., Hartnett, A., et al. (2011). Uninformed individuals promote democratic consensus in animal groups. Science, 334(6062), 1578–1580. ISSN 0036-8075. https://doi.org/10.1126/science.1210280. http://science.sciencemag.org/content/334/6062/1578
Reina, A., Dorigo, M., & Trianni, V. (2014). Towards a cognitive design pattern for collective decision-making. In M. Dorigo, M. Birattari, S. Garnier, H. Hamann, M. M. de Oca, C. Solnon, & T. Stützle (Eds.), Swarm intelligence. Lecture notes in computer science (Vol. 8667, pp. 194–205). Berlin: Springer International Publishing. ISBN 978-3-319-09951-4. http://dx.doi.org/10.1007/978-3-319-09952-1_17
Valentini, G. (2017). Achieving consensus in robot swarms: Design and analysis of strategies for the best-of-n problem. Berlin: Springer. ISBN 978-3-319-53608-8. https://doi.org/10.1007/978-3-319-53609-5
Vigelius, M., Meyer, B., & Pascoe, G. (2014). Multiscale modelling and analysis of collective decision making in swarm robotics. PLoS One, 9(11), 1–19. https://doi.org/10.1371/journal.pone.0111542
© 2018 Springer International Publishing AG
Hamann, H. (2018). Short Journey Through Nearly Everything. In: Swarm Robotics: A Formal Approach. Springer, Cham. https://doi.org/10.1007/978-3-319-74528-2_3