
Identifiers in Registers

Describing Network Algorithms with Logic
  • Benedikt Bollig
  • Patricia Bouyer
  • Fabian Reiter
Open Access conference paper. Part of the Lecture Notes in Computer Science book series (LNCS, volume 11425).

Abstract

We propose a formal model of distributed computing based on register automata that captures a broad class of synchronous network algorithms. The local memory of each process is represented by a finite-state controller and a fixed number of registers, each of which can store the unique identifier of some process in the network. To underline the naturalness of our model, we show that it has the same expressive power as a certain extension of first-order logic on graphs whose nodes are equipped with a total order. Said extension lets us define new functions on the set of nodes by means of a so-called partial fixpoint operator. In spirit, our result bears close resemblance to a classical theorem of descriptive complexity theory that characterizes the complexity class \(\textsc {pspace}\) in terms of partial fixpoint logic (a proper superclass of the logic we consider here).

1 Introduction

This paper is part of an ongoing research project aiming to develop a descriptive complexity theory for distributed computing.

In classical sequential computing, descriptive complexity is a well-established field that connects computational complexity classes to equi-expressive classes of logical formulas. It began in the 1970s, when Fagin showed in [6] that the graph properties decidable by nondeterministic Turing machines in polynomial time are exactly those definable in existential second-order logic. This provided a logical—and thus machine-independent—characterization of the complexity class \(\textsc {np}\). Subsequently, many other popular classes, such as \(\textsc {p}\), \(\textsc {pspace}\), and \(\textsc {exptime}\), were characterized in a similar manner (see for instance the textbooks [8, 12, 15]).

Of particular interest to us is a result due to Abiteboul, Vianu [1], and Vardi [19], which states that on structures equipped with a total order relation, the properties decidable in \(\textsc {pspace}\) coincide with those definable in partial fixpoint logic. The latter is an extension of first-order logic with an operator that allows us to inductively define new relations of arbitrary arity. Basically, this means that new relations can occur as free (second-order) variables in the logical formulas that define them. Those variables are initially interpreted as empty relations and then iteratively updated, using the defining formulas as update rules. If the sequence of updates converges to a fixpoint, then the ultimate interpretations are the relations reached in the limit. Otherwise, the variables are simply interpreted as empty relations. Hence the term “partial fixpoint”.
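
To make the partial fixpoint mechanism concrete, the following is a minimal Python sketch of the iteration scheme just described, with a toy update rule (one step of transitive closure) standing in for a general defining formula; the function and variable names are ours, not from the literature.

def pfp_relation(update, universe):
    """Iterate a relation-update rule starting from the empty relation.

    Returns the limit relation if the sequence of stages converges,
    and the empty relation otherwise (partial fixpoint semantics).
    """
    max_stages = 2 ** (len(universe) ** 2)   # number of binary relations on the universe
    stage = frozenset()
    for _ in range(max_stages):
        next_stage = frozenset(update(stage))
        if next_stage == stage:
            return stage                     # converged: this is the fixpoint
        stage = next_stage
    return frozenset()                       # no convergence: fall back to the empty relation

# Toy update rule: add one step of transitive closure over a fixed edge relation.
edges = {(0, 1), (1, 2), (2, 3)}
def tc_step(rel):
    return edges | rel | {(a, d) for (a, b) in rel for (c, d) in rel if b == c}

print(sorted(pfp_relation(tc_step, universe=range(4))))
# [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]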

While well-developed in the classical case, descriptive complexity has so far not received much attention in the setting of distributed network computing. As far as the authors are aware, the first step in this direction was taken by Hella et al. in [10, 11], where they showed that basic modal logic evaluated on finite graphs has the same expressive power as a particular class of distributed automata operating in constant time. Those automata constitute a weak model of distributed computing in arbitrary network topologies, where all nodes synchronously execute the same finite-state machine and communicate with each other by broadcasting messages to their neighbors. Motivated by this result, several variants of distributed automata were investigated by Kuusisto and Reiter in [14, 18] and [17] to establish similar connections with standard logics such as the modal \(\mu \)-calculus and monadic second-order logic. However, since the models of computation investigated in those works are based on anonymous finite-state machines, they are much too weak to solve many of the problems typically considered in distributed computing, such as leader election or constructing a spanning tree. It would thus be desirable to also characterize stronger models.

A common assumption underlying many distributed algorithms is that each node of the considered network is given a unique identifier. This allows us, for instance, to elect a leader by making the nodes broadcast their identifiers and then choose the one with the smallest identifier as the leader. To formalize such algorithms, we need to go beyond finite-state machines because the number of bits required to encode a unique identifier grows logarithmically with the number of nodes in the network. Recently, in [2, 3], Aiswarya, Bollig and Gastin introduced a synchronous model where, in addition to a finite-state controller, nodes also have a fixed number of registers in which they can store the identifiers of other nodes. Access to those registers is rather limited in the sense that their contents can be compared with respect to a total order, but their numeric values are unknown to the nodes. (This restriction corresponds precisely to the notion of order-invariant distributed algorithms, which was introduced by Naor and Stockmeyer in [16].) Similarly, register contents can be copied, but no new values can be generated. Since the original motivation for the model was to automatically verify certain distributed algorithms running on ring networks, its formal definition is tailored to that particular setting. However, the underlying principle can be generalized to arbitrary networks of unbounded maximum degree, which was the starting point for the present work.

Contributions. While on an intuitive level, the idea of finite-state machines equipped with additional registers might seem very natural, it does not immediately yield a formal model for distributed algorithms in arbitrary networks. In particular, it is not clear what would be the canonical way for nodes to communicate with a non-constant number of peers, if we require that they all follow the same, finitely representable set of rules.

The model we propose here, dubbed distributed register automata, is an attempt at a solution. As in [2, 3], nodes proceed in synchronous rounds and have a fixed number of registers, which they can compare and update without having access to numeric values. The new key ingredient that allows us to formalize communication between nodes of unbounded degree is a local computing device we call transition maker. This is a special kind of register machine that the nodes can use to scan the states and register values of their entire neighborhood in a sequential manner. In every round, each node runs the transition maker to update its own local configuration (i.e., its state and register valuation) based on a snapshot of the local configurations of its neighbors in the previous round. A way of interpreting this is that the nodes communicate by broadcasting their local configurations as messages to their neighbors. Although the resulting model of computation is by no means universal, it allows formalizing algorithms for a wide range of problems, such as constructing a spanning tree (see Example 5) or testing whether a graph is Hamiltonian (see Example 6).

Nevertheless, our model is somewhat arbitrary, since it could be just one particular choice among many other similar definitions capturing different classes of distributed algorithms. What justifies our choice? This is where descriptive complexity comes into play. By identifying a logical formalism that has the same expressive power as distributed register automata, we provide substantial evidence for the naturalness of that model. Our formalism, referred to as functional fixpoint logic, is a fragment of the above-mentioned partial fixpoint logic. Like the latter, it also extends first-order logic with a partial fixpoint operator, but a weaker one that can only define unary functions instead of arbitrary relations. We show that on totally ordered graphs, this logic allows one to express precisely the properties that can be decided by distributed register automata. The connection is strongly reminiscent of Abiteboul, Vianu and Vardi’s characterization of \(\textsc {pspace}\), and thus contributes to the broader objective of extending classical descriptive complexity to the setting of distributed computing. Moreover, given that logical formulas are often more compact and easier to understand than abstract machines (compare Examples 6 and 8), logic could also become a useful tool in the formal specification of distributed algorithms.

The remainder of this paper is structured around our main result:

Theorem 1

When restricted to finite graphs whose nodes are equipped with a total order, distributed register automata are effectively equivalent to functional fixpoint logic.

After giving some preliminary definitions in Sect. 2, we formally introduce distributed register automata in Sect. 3 and functional fixpoint logic in Sect. 4. We then sketch the proof of Theorem 1 in Sect. 5, and conclude in Sect. 6.

2 Preliminaries

We denote the empty set by \(\emptyset \), the set of nonnegative integers by \(\mathbb {N}\), and the set of integers by \(\mathbb {Z}\). The cardinality of any set S is written as \(|S|\) and the power set as \(2^S\).

In analogy to the commonly used notation for real intervals, we define the notation \([m:n] = \{i \in \mathbb {Z} \mid m \le i \le n\}\) for any \(m, n \in \mathbb {Z}\) such that \(m \le n\). To indicate that an endpoint is excluded, we replace the corresponding square bracket with a parenthesis, e.g., \([m:n) = [m:n-1]\). Furthermore, if we omit the first endpoint, it defaults to 0. This gives us shorthand notations such as \([n] = [0:n]\) and \([n) = [0:n)\).
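
As a quick sanity check of this shorthand, a small Python rendering (the helper names are ours):

def closed_interval(m, n):        # [m:n] = {m, m+1, ..., n}
    return list(range(m, n + 1))

def half_open_interval(m, n):     # [m:n) = [m:n-1]
    return list(range(m, n))

print(closed_interval(2, 5))      # [2, 3, 4, 5]
print(half_open_interval(0, 4))   # [0, 1, 2, 3], i.e., the shorthand [4)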

All graphs we consider are finite, simple, undirected, and connected. For notational convenience, we identify their nodes with nonnegative integers, which also serve as unique identifiers. That is, when we talk about the identifier of a node, we mean its numerical representation. A graph is formally represented as a pair \(G = (V, E)\), where the set V of nodes is equal to \([n)\), for some integer \(n \ge 2\), and the set E consists of undirected edges of the form \(e = \{u, v\}\) such that \(u \ne v\). Additionally, E must satisfy that every pair of nodes is connected by a sequence of edges. The restriction to graphs of size at least two is for technical reasons; it ensures that we can always encode Boolean values as nodes.

We refer the reader to [5] for standard graph-theoretic terms such as neighbor, degree, diameter, and spanning tree.

Graphs are used to model computer networks, where nodes correspond to processes and edges to communication links. To represent the current configuration of a system as a graph, we equip each node with some additional information: the current state of the corresponding process, taken from a nonempty finite set Q, and some pointers to other processes, modeled by a finite set R of registers.

We call the pair \((Q, R)\) a signature and define a \((Q, R)\)-configuration as a tuple \(C = (G, q, r)\), where \(G = (V, E)\) is a graph, called the underlying graph of C, \(q\colon V \to Q\) is a state function that assigns to each node a state \(q(v) \in Q\), and r is a register valuation function that associates with each node \(v \in V\) a register valuation \(r(v)\colon R \to V\). The set of all \((Q, R)\)-configurations is denoted by \(\mathcal {C}(Q, R)\). Figure 1 illustrates part of such a configuration.
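
For concreteness, a \((Q, R)\)-configuration can be represented along the following lines in Python; the class and field names are our own choice and only illustrate the definition above.

from dataclasses import dataclass

@dataclass(frozen=True)
class Graph:
    n: int                      # the nodes are the identifiers 0, ..., n-1
    edges: frozenset            # undirected edges as frozenset({u, v}) with u != v

    def neighbors(self, v):
        return sorted(u for e in self.edges if v in e for u in e if u != v)

@dataclass
class Configuration:
    graph: Graph
    state: dict                 # node -> state in Q
    registers: dict             # node -> {register -> node identifier}

# A triangle on the nodes 0, 1, 2; every node is in state 'q0' and register 'r' holds 0.
g = Graph(3, frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})}))
c = Configuration(g, {v: 'q0' for v in range(3)}, {v: {'r': 0} for v in range(3)})
print(g.neighbors(1))           # [0, 2]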

If \(R = \emptyset \), then we are actually dealing with a tuple \((G, q)\), which we call a Q-labeled graph. Accordingly, the elements of Q may also be called labels. A set of labeled graphs will be referred to as a graph property. Moreover, if the labels are irrelevant, we set Q equal to the singleton \(\{\epsilon \}\), where \(\epsilon \) is our dummy label. In this case, we identify \((G, q)\) with G and call it an unlabeled graph.

3 Distributed Register Automata

Many distributed algorithms can be seen as transducers. A leader-election algorithm, for instance, takes as input a network and outputs the same network, but with every process storing the identifier of the unique leader in some dedicated register \(\mathit{lead}\). Thus, the algorithm transforms a \((\{\epsilon \}, \emptyset )\)-configuration into a \((\{\epsilon \}, \{\mathit{lead}\})\)-configuration. We say that it defines a \(((\{\epsilon \}, \emptyset ), (\{\epsilon \}, \{\mathit{lead}\}))\)-transduction. By the same token, if we consider distributed algorithms that decide graph properties (e.g., whether a graph is Hamiltonian), then we are dealing with a \(((\varLambda , \emptyset ), (\{\textsc {yes}, \textsc {no}\}, \emptyset ))\)-transduction, where \(\varLambda \) is some set of labels. The idea is that a graph will be accepted if and only if every process eventually outputs \(\textsc {yes}\).

Let us now formalize the notion of transduction. For any two signatures \((I, R_I)\) and \((O, R_O)\), an \(((I, R_I), (O, R_O))\)-transduction is a partial mapping \(T\colon \mathcal {C}(I, R_I) \rightharpoonup \mathcal {C}(O, R_O)\) such that, if defined, \(T(G, q, r) = (G, q', r')\) for some \(q'\) and \(r'\). That is, a transduction does not modify the underlying graph but only the states and register valuations. We denote the set of all \(((I, R_I), (O, R_O))\)-transductions by \(\mathrm {T}((I, R_I), (O, R_O))\) and refer to \((I, R_I)\) and \((O, R_O)\) as the input and output signatures of T. By extension, I and O are called the sets of input and output labels, and \(R_I\) and \(R_O\) the sets of input and output registers. Similarly, any \((I, R_I)\)-configuration C can be referred to as an input configuration of T and \(T(C)\) as an output configuration.

Next, we introduce our formal model of distributed algorithms.

Definition 2

(Distributed register automaton). Let \((I, R_I)\) and \((O, R_O)\) be two signatures. A distributed register automaton (or simply automaton) with input signature \((I, R_I)\) and output signature \((O, R_O)\) is a tuple \(A = (Q, R, \iota , \varDelta , H, o)\) consisting of a nonempty finite set Q of states, a finite set R of registers that includes both \(R_I\) and \(R_O\), an input function \(\iota \colon I \to Q\), a transition maker \(\varDelta \) whose specification will be given in Definition 3 below, a set \(H \subseteq Q\) of halting states, and an output function \(o\colon H \to O\). The registers in \(R \setminus R_I\) are called auxiliary registers.

Automaton A computes a transduction \(T_A \in \mathrm {T}((I, R_I), (O, R_O))\). To do so, it runs in a sequence of synchronous rounds on the input configuration's underlying graph G. After each round, the automaton's global configuration is a \((Q, R)\)-configuration \(C = (G, q, r)\), i.e., the underlying graph is always G. As mentioned before, for a node \(v \in V\), we interpret \(q(v)\) as the current state of v and \(r(v)\) as the current register valuation of v. Abusing notation, we let \(C(v) = (q(v), r(v))\) and say that \(C(v)\) is the local configuration of v. In Fig. 1, the local configuration of node 17 is shown together with those of its neighbors 2, 34, and 98.

For a given input configuration \(C = (G, q, r) \in \mathcal {C}(I, R_I)\), the automaton's initial configuration is \(C' = (G, \iota \circ q, r')\), where for all \(v \in V\), the valuation \(r'(v)\) agrees with \(r(v)\) on the input registers in \(R_I\) and assigns v to every other register. This means that every node v is initialized to state \(\iota (q(v))\), and v's initial register valuation \(r'(v)\) assigns v's own identifier (provided by G) to all non-input registers while keeping the given values assigned by \(r(v)\) to the input registers.
Fig. 1.

Part of a configuration, as seen by a single node. Assuming the identifiers of the nodes are the values shown in the black boxes, the automaton at node 17 will update its own local configuration by running the transition maker on the sequence consisting of the local configurations of nodes 17, 2, 34, and 98 (in that exact order).

Each subsequent configuration is obtained by running the transition maker \(\varDelta \) synchronously on all nodes. As we will see, \(\varDelta \) computes a function that maps nonempty sequences of local configurations to local configurations. This allows the automaton A to transition from a given configuration \(C = (G, q, r)\) to the next configuration \(C' = (G, q', r')\) as follows. For every node \(v \in V\) of degree d, we consider the list \(v_1< \dots < v_d\) of v's neighbors sorted in ascending (identifier) order. (See Fig. 1 for an example, where v corresponds to node 17.) If v is already in a halting state, i.e., if \(q(v) \in H\), then its local configuration does not change anymore, i.e., \(C'(v) = C(v)\). Otherwise, we define \(C'(v)\) to be the result of applying the function computed by \(\varDelta \) to the sequence \(C(v), C(v_1), \dots , C(v_d)\). Intuitively, node v updates its own local configuration by using \(\varDelta \) to scan a snapshot of its neighbors' local configurations. As the system is synchronous, this update procedure is performed simultaneously by all nodes.

A configuration \(C = (G, q, r)\) is called a halting configuration if all nodes are in a halting state, i.e., if \(q(v) \in H\) for all \(v \in V\). We say that A halts if it reaches a halting configuration.

The output configuration produced by a halting configuration \(C = (G, q, r)\) is the \((O, R_O)\)-configuration \(C' = (G, o \circ q, r')\), where for all \(v \in V\), the valuation \(r'(v)\) is the restriction of \(r(v)\) to the output registers in \(R_O\). In other words, each node v outputs the state \(o(q(v))\) and keeps in its output registers the values assigned by \(r(v)\).

It is now obvious that A defines a transduction \(T_A\colon \mathcal {C}(I, R_I) \rightharpoonup \mathcal {C}(O, R_O)\). If A receives the input configuration C and eventually halts and produces the output configuration \(C'\), then \(T_A(C) = C'\). Otherwise (if A does not halt), \(T_A(C)\) is undefined.
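
The run of an automaton, as just described, follows the loop sketched below in Python; transition_maker stands for the function computed by \(\varDelta \), and the data representation is our own simplification (each local configuration is a pair of a state and a register dictionary).

def run(adj, init_conf, transition_maker, halting_states, max_rounds=100_000):
    """adj: node -> list of neighbor identifiers in ascending order.
    init_conf: node -> (state, registers), the initial local configurations."""
    conf = dict(init_conf)
    for _ in range(max_rounds):
        if all(state in halting_states for state, _ in conf.values()):
            return conf                      # halting configuration reached
        new_conf = {}
        for v, local in conf.items():
            if local[0] in halting_states:
                new_conf[v] = local          # halted nodes no longer change
            else:
                snapshot = [local] + [conf[u] for u in adj[v]]
                new_conf[v] = transition_maker(snapshot)
        conf = new_conf                      # all nodes update simultaneously
    return None                              # did not halt within max_rounds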

Deciding graph properties. Our primary objective is to use distributed register automata as decision procedures for graph properties. Therefore, we will focus on automata A that halt in a finite number of rounds on every input configuration, and we often restrict to input signatures of the form \((\varLambda , \emptyset )\) and the output signature \((\{\textsc {yes}, \textsc {no}\}, \emptyset )\). For example, for a label set \(\varLambda \) containing a label a, we may be interested in the set of \(\varLambda \)-labeled graphs that have exactly one a-labeled node (the “leader”). We stipulate that A accepts an input configuration \(C \in \mathcal {C}(\varLambda , \emptyset )\) with underlying graph \(G = (V, E)\) if \(T_A(C) = (G, q)\) such that \(q(v) = \textsc {yes}\) for all \(v \in V\). Conversely, A rejects C if \(T_A(C) = (G, q)\) such that \(q(v) = \textsc {no}\) for some \(v \in V\). This corresponds to the usual definition chosen in the emerging field of distributed decision [7]. Accordingly, a graph property P is decided by A if the automaton accepts all input configurations that satisfy P and rejects all the others.

It remains to explain how the transition maker \(\varDelta \) works internally.

Basically, a transition maker \(\varDelta \) is a sequential register automaton (in the spirit of [13]) that reads a nonempty sequence of local configurations of A in order to produce a new local configuration. While reading this sequence, it traverses itself a sequence of inner configurations, each consisting of an inner state, taken from a finite set of inner states, and an inner register valuation over a finite set of inner registers, where the special symbol \(\bot \) represents an undefined value. In the initial inner configuration, the transition maker is in a designated initial inner state and all inner registers are undefined. Now, when \(\varDelta \) is in some inner configuration and reads the local configuration of the next node in the sequence, it can compare all values assigned to the inner registers and to that node's registers (with respect to the order relation on \(\mathbb {N}\)). In other words, it has access to the binary relation that tells it, for any two of those registers, whether the value of the first is smaller than, equal to, or greater than the value of the second; in particular, an inner register holding \(\bot \) is incomparable with every other register. Equipped with this relation, \(\varDelta \) moves to its next inner configuration by evaluating its transition function, which determines the next inner state and specifies, for each inner register, whether it keeps its current value or copies the value of one of the registers just read. Finally, after having read the entire input sequence and reached its last inner configuration, the transition maker outputs the local configuration consisting of the state and register values designated by that inner configuration. Here we assume without loss of generality that \(\varDelta \) guarantees that none of the output register values is \(\bot \).
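
Schematically, such a sequential scan can be phrased in Python as follows; the step and finish callbacks abstract the transition maker's finite control, and the cmp helper enforces that only order comparisons (never raw identifiers) are available. All names are ours.

BOT = None   # stands for the undefined register value

def scan(local_confs, initial_state, inner_registers, step, finish):
    """Run a transition maker over a nonempty sequence of local configurations.

    step(state, regs, local, cmp) -> (new_state, new_regs): one reading step,
    where cmp(a, b) only reveals '<', '=', '>' or 'incomparable'.
    finish(state, regs) -> the new local configuration (state, registers).
    """
    def cmp(a, b):
        if a is BOT or b is BOT:
            return 'incomparable'
        return '<' if a < b else ('=' if a == b else '>')

    state = initial_state
    regs = {r: BOT for r in inner_registers}   # initially all inner registers are undefined
    for local in local_confs:
        state, regs = step(state, regs, local, cmp)
    return finish(state, regs)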

Remark 4

Recall that \(V = [n)\) for any graph \(G = (V, E)\) with n nodes. However, as registers cannot be compared with constants, this actually represents an arbitrary assignment of unique, totally ordered identifiers. To determine the smallest identifier (i.e., 0), the nodes can run an algorithm such as the following.

Example 5

(Spanning tree). We present a simple automaton A with input signature \((\{\epsilon \}, \emptyset )\) and output signature \((\{\epsilon \}, \{\mathit{parent}, \mathit{root}\})\) that computes a (breadth-first) spanning tree of its input graph G, rooted at the node with the smallest identifier. More precisely, in the computed output configuration, every node will store the identifier of its tree parent in register \(\mathit{parent}\) and the identifier of the root (i.e., the smallest identifier) in register \(\mathit{root}\). Thus, as a side effect, A also solves the leader election problem by electing the root as the leader.

The automaton operates in three phases, which are represented by the set of states \(Q = \{1, 2, 3\}\). A node terminates as soon as it reaches the third phase, i.e., we set \(H = \{3\}\). Accordingly, the (trivial) input and output functions are \(\iota \colon \epsilon \mapsto 1\) and \(o\colon 3 \mapsto \epsilon \). In addition to the output registers, each node has an auxiliary register \(\mathit{self}\) that will always store its own identifier. Thus, we choose \(R = \{\mathit{self}, \mathit{parent}, \mathit{root}\}\). For the sake of simplicity, we describe the transition maker \(\varDelta \) in Algorithm 1 using pseudocode rules. However, it should be clear that these rules could be relatively easily implemented according to Definition 3.

All nodes start in state 1, which represents the tree-construction phase. By the first rule of Algorithm 1, whenever an active node (i.e., a node in state 1 or 2) sees a neighbor whose \(\mathit{root}\) register contains a smaller identifier than the node's own \(\mathit{root}\) register, it updates its \(\mathit{parent}\) and \(\mathit{root}\) registers accordingly and switches to state 1. To resolve the nondeterminism in this rule, we stipulate that the new parent is chosen to be the neighbor with the smallest identifier among those whose \(\mathit{root}\) register contains the smallest value seen so far.

As can be easily shown by induction on the number of communication rounds, the nodes have to apply the first rule at most as many times as the diameter of G in order for the pointers in register \(\mathit{parent}\) to represent a valid spanning tree (where the root points to itself). However, since the nodes do not know when that many rounds have elapsed, they must also check that the current configuration does indeed represent a single tree, as opposed to a forest. They do so by propagating a signal, in form of state 2, from the leaves up to the root.

By the second rule, if an active node whose neighbors all agree on the same root realizes that it is a leaf or that all of its children are in state 2, then it switches to state 2 itself. Assuming the \(\mathit{parent}\) pointers in the current configuration already represent a single tree, this rule ensures that the root will eventually be notified of this fact (when all of its children are in state 2). Otherwise, the \(\mathit{parent}\) pointers represent a forest, and every tree contains at least one node that has a neighbor outside of the tree (as we assume the underlying graph is connected).

Depending on the input graph, a node can switch arbitrarily often between states 1 and 2. Once the spanning tree has been constructed and every node is in state 2, the only node that knows this is the root. In order for the algorithm to terminate, the third rule then makes the root broadcast an acknowledgment message down the tree, which causes all nodes to switch to the halting state 3.    \(\square \)
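
The tree-construction phase (the repeated application of the first rule) can be mimicked centrally by the following Python sketch; it simulates only the parent/root updates, not the termination-detection phases, and the function name and data layout are ours.

def build_spanning_tree(adj):
    """adj: node -> neighbors in ascending order; nodes are 0, ..., n-1.
    Returns (parent, root) dictionaries after n synchronous rounds."""
    n = len(adj)
    root = {v: v for v in range(n)}     # register 'root': smallest identifier seen so far
    parent = {v: v for v in range(n)}   # register 'parent': tree parent (the root points to itself)
    for _ in range(n):                  # n rounds exceed the diameter, hence suffice
        new_root, new_parent = dict(root), dict(parent)
        for v in range(n):
            for u in adj[v]:            # ascending order realizes the tie-breaking rule
                if root[u] < new_root[v]:
                    new_root[v], new_parent[v] = root[u], u
        root, parent = new_root, new_parent
    return parent, root

# Path 0 - 1 - 2 - 3: every root value becomes 0 and the parent pointers form the path.
print(build_spanning_tree({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
# ({0: 0, 1: 0, 2: 1, 3: 2}, {0: 0, 1: 0, 2: 0, 3: 0})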

Building on the automaton from Example 5, we now give an example of a graph property that can be decided in our model of distributed computing. The following automaton should be compared to the logical formula presented later in Example 8, which is much more compact and much easier to specify.

Example 6

(Hamiltonian cycle). We describe an automaton with input signature \((\{\epsilon \}, \{\mathit{parent}, \mathit{root}\})\) and output signature \((\{\textsc {yes}, \textsc {no}\}, \emptyset )\) that decides if the underlying graph G of its input configuration C is Hamiltonian, i.e., whether G contains a cycle that goes through each node exactly once. The automaton works under the assumption that C encodes a valid spanning tree of G in the registers \(\mathit{parent}\) and \(\mathit{root}\), as constructed by the automaton from Example 5. Hence, by combining the two automata, we could easily construct a third one that decides the graph property of Hamiltonicity.

The automaton A presented here implements a simple backtracking algorithm that tries to traverse G along a Hamiltonian cycle. Its set of halting states is \(H = \{\textsc {yes}, \textsc {no}\}\), and each non-halting state consists of two components, the first one serving for the backtracking procedure and the second one for communicating in the spanning tree. The input function \(\iota \) initializes every node to a state in which it is not yet visited, while the output function simply returns the answers chosen by the nodes, i.e., \(o(\textsc {yes}) = \textsc {yes}\) and \(o(\textsc {no}) = \textsc {no}\). In addition to the input registers, each node has a register \(\mathit{self}\) storing its own identifier and a register \(\mathit{succ}\) to point to its successor in a (partially constructed) Hamiltonian path. That is, \(R = \{\mathit{parent}, \mathit{root}, \mathit{self}, \mathit{succ}\}\). We now describe the algorithm in an informal way. It is, in principle, easy to implement in the transition maker \(\varDelta \), but a thorough formalization would be rather cumbersome.

In the first round, the root marks itself as visited and updates its \(\mathit{succ}\) register to point towards its smallest neighbor (the one with the smallest identifier). Similarly, in each subsequent round, any unvisited node that is pointed to by one of its neighbors marks itself as visited and points towards its smallest unvisited neighbor. However, if all neighbors are already visited, the node instead sends a backtracking signal to its predecessor and switches back to unvisited (in the following round). Whenever a visited node receives the backtracking signal from its successor, it tries to update its \(\mathit{succ}\) pointer to the next-smallest unvisited neighbor. If no such neighbor exists, it resets its \(\mathit{succ}\) pointer to itself, propagates the backtracking signal to its predecessor, and becomes unvisited in the following round.

There is only one exception to the above rules: if a node that is adjacent to the root cannot find any unvisited neighbor, it chooses the root as its successor. This way, the constructed path becomes a cycle. In order to check whether that cycle is Hamiltonian, the root now broadcasts a query down the spanning tree. If the query reaches an unvisited node, that node replies by sending the message \(\textsc {no}\) towards the root. On the other hand, every visited leaf replies with the message \(\textsc {yes}\). While \(\textsc {no}\) is always forwarded up to the root, \(\textsc {yes}\) is only forwarded by nodes that receive this message from all of their children. If the root receives only \(\textsc {yes}\), then it knows that the current cycle is Hamiltonian and it switches to the halting state \(\textsc {yes}\). The information is then broadcast through the entire graph, so that all nodes eventually accept. Otherwise, the root sends the backtracking signal to its predecessor, and the search for a Hamiltonian cycle continues. In case there is none (in particular, if there is not even an arbitrary cycle), the root will eventually receive the backtracking signal from its greatest neighbor, which indicates that all possibilities have been exhausted. If this happens, the root switches to the halting state \(\textsc {no}\), and all other nodes eventually do the same.    \(\square \)
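
The search order that this automaton realizes in a distributed fashion, always extending the path to the smallest unvisited neighbor first and backtracking when stuck, corresponds to the following centralized Python sketch (our own sequential rendering, not the automaton itself).

def hamiltonian_cycle(adj, root=0):
    """adj: node -> neighbors in ascending order. Returns a cycle as a list of nodes, or None."""
    n = len(adj)

    def extend(path, visited):
        v = path[-1]
        if len(path) == n:
            return path if root in adj[v] else None   # try to close the cycle at the root
        for u in adj[v]:                              # smallest identifier first
            if u not in visited:
                result = extend(path + [u], visited | {u})
                if result is not None:
                    return result
        return None                                   # all choices exhausted: backtrack

    return extend([root], {root})

print(hamiltonian_cycle({0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}))
# [0, 1, 3, 2], i.e., the cycle 0-1-3-2-0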

4 Functional Fixpoint Logic

In order to introduce functional fixpoint logic, we first give a definition of first-order logic that suits our needs. Formulas will always be evaluated on ordered, undirected, connected, \(\varLambda \)-labeled graphs, where \(\varLambda \) is a fixed finite set of labels.

Throughout this paper, let there be an infinite supply of node variables and an infinite supply of function variables; we refer to them collectively as variables. The corresponding set of terms is generated by the grammar \(t \,{:}{:}{=}\, x \mid f(t)\), where x is a node variable and f is a function variable. With this, the set of formulas of first-order logic over \(\varLambda \) is given by the grammar \(\varphi \,{:}{:}{=}\, a(t) \mid s \le t \mid s \sim t \mid \lnot \varphi \mid (\varphi \vee \varphi ) \mid \exists x\, \varphi \), where s and t are terms, x is a node variable, and \(a \in \varLambda \). As usual, we may also use the additional operators \(\wedge \), \(\rightarrow \), \(\leftrightarrow \), \(\forall \) to make our formulas more readable, and we define notations such as \(s = t\), \(s < t\), and \(s \ge t\) as the obvious abbreviations in terms of \(\le \) and the Boolean connectives.

The sets of free variables of a term t and a formula \(\varphi \) are denoted by \(\mathrm {free}(t)\) and \(\mathrm {free}(\varphi )\), respectively. While node variables can be bound by the usual quantifiers \(\exists \) and \(\forall \), function variables can be bound by a partial fixpoint operator that we will introduce below.

To interpret a formula \(\varphi \) on a \(\varLambda \)-labeled graph \((G, \lambda )\) with \(G = (V, E)\), we are given a variable assignment \(\sigma \) for the variables that occur freely in \(\varphi \). This is a partial function such that \(\sigma (x) \in V\) if x is a free node variable and \(\sigma (f)\colon V \to V\) if f is a free function variable. We call \(\sigma (x)\) and \(\sigma (f)\) the interpretations of x and f under \(\sigma \), and denote them by \(x^\sigma \) and \(f^\sigma \), respectively. For a composite term \(f(t)\), the corresponding interpretation \((f(t))^\sigma = f^\sigma (t^\sigma )\) under \(\sigma \) is defined in the obvious way.

We write \((G, \lambda ), \sigma \models \varphi \) to denote that \((G, \lambda )\) satisfies \(\varphi \) under assignment \(\sigma \). If \(\varphi \) does not contain any free variables, we simply write \((G, \lambda ) \models \varphi \) and refer to the set of \(\varLambda \)-labeled graphs that satisfy \(\varphi \) as the graph property defined by \(\varphi \). Naturally enough, we say that two devices (i.e., automata or formulas) are equivalent if they specify (i.e., decide or define) the same graph property and that two classes of devices are equivalent if their members specify the same class of graph properties.

As we assume that the reader is familiar with first-order logic, we only define the semantics of the atomic formulas (whose syntax is not completely standard): \(a(t)\) holds if node \(t^\sigma \) carries the label a, i.e., \(\lambda (t^\sigma ) = a\); \(s \le t\) holds if the identifier of \(s^\sigma \) is smaller than or equal to that of \(t^\sigma \); and \(s \sim t\) holds if \(s^\sigma \) and \(t^\sigma \) are adjacent, i.e., \(\{s^\sigma , t^\sigma \} \in E\).

We now turn to functional fixpoint logic. Syntactically, it is defined as the extension of first-order logic that allows us to write formulas of the form

\(\mathrm {pfp}\, \bigl [\, f_1\colon \varphi _1(\mathit{in}, \mathit{out}),\; \dots ,\; f_\ell \colon \varphi _\ell (\mathit{in}, \mathit{out}) \,\bigr ]\; \psi \)    (\(*\))

where \(f_1, \dots , f_\ell \) are function variables, \(\mathit{in}\) and \(\mathit{out}\) are distinguished node variables, and \(\varphi _1, \dots , \varphi _\ell \) and \(\psi \) are formulas. We use the notation “\(\varphi _i(\mathit{in}, \mathit{out})\)” to emphasize that \(\mathit{in}\) and \(\mathit{out}\) may occur freely in \(\varphi _i\) (possibly among other variables). The free variables of formula (\(*\)) are given by \(\bigl (\bigcup _i \mathrm {free}(\varphi _i) \setminus \{\mathit{in}, \mathit{out}\} \cup \mathrm {free}(\psi )\bigr ) \setminus \{f_1, \dots , f_\ell \}\).

The idea is that the operator \(\mathrm {pfp}\) binds the function variables \(f_1, \dots , f_\ell \). The \(\ell \) lines in square brackets constitute a system of function definitions that provide an interpretation of \(f_1, \dots , f_\ell \), using the special node variables \(\mathit{in}\) and \(\mathit{out}\) as helpers to represent input and output values. This is why \(\mathrm {pfp}\) also binds any free occurrences of \(\mathit{in}\) and \(\mathit{out}\) in \(\varphi _1, \dots , \varphi _\ell \), but not in \(\psi \).

To specify the semantics of (\(*\)), we first need to make some preliminary observations. As before, we consider a fixed \(\varLambda \)-labeled graph \((G, \lambda )\) with \(G = (V, E)\) and assume that we are given a variable assignment \(\sigma \) for the free variables of (\(*\)). With respect to \((G, \lambda )\) and \(\sigma \), each formula \(\varphi _i\) induces an operator \(F_i\) that takes some interpretation of the function variables \(f_1, \dots , f_\ell \) and outputs a new interpretation of \(f_i\), corresponding to the function graph defined by \(\varphi _i\) via the node variables \(\mathit{in}\) and \(\mathit{out}\). For inputs on which \(\varphi _i\) does not define a functional relationship, the new interpretation of \(f_i\) behaves like the identity function. More formally, given a variable assignment \(\sigma '\) that extends \(\sigma \) with interpretations of \(f_1, \dots , f_\ell \), the operator \(F_i\) maps \(\sigma '\) to the function \(F_i(\sigma ')\colon V \to V\) such that, for all \(u \in V\), we have \(F_i(\sigma ')(u) = v\) if v is the unique node satisfying \((G, \lambda ), \sigma '[\mathit{in} \mapsto u,\, \mathit{out} \mapsto v] \models \varphi _i\), and \(F_i(\sigma ')(u) = u\) if no such unique node exists. Here, \(\sigma '[\mathit{in} \mapsto u,\, \mathit{out} \mapsto v]\) is the extension of \(\sigma '\) interpreting \(\mathit{in}\) as u and \(\mathit{out}\) as v.
In this way, the operators \(F_1, \dots , F_\ell \) give rise to an infinite sequence \((f_1^k, \dots , f_\ell ^k)_{k \ge 0}\) of tuples of functions, called stages, where the initial stage contains solely the identity function \(\mathrm {id}_V\colon V \to V\) and each subsequent stage is obtained from its predecessor by componentwise application of the operators. More formally, \(f_i^0 = \mathrm {id}_V\) and \(f_i^{k+1} = F_i\bigl (\sigma [f_1 \mapsto f_1^k, \dots , f_\ell \mapsto f_\ell ^k]\bigr )\) for \(i \in [1:\ell ]\) and \(k \ge 0\). Now, since we have not imposed any restrictions on the formulas \(\varphi _i\), this sequence might never stabilize, i.e., it is possible that \((f_1^k, \dots , f_\ell ^k) \ne (f_1^{k+1}, \dots , f_\ell ^{k+1})\) for all \(k \ge 0\). Otherwise, the sequence reaches a (simultaneous) fixpoint at some position k no greater than \(|V|^{\ell \cdot |V|}\) (the number of \(\ell \)-tuples of functions on V).
We define the partial fixpoint \((f_1^\infty , \dots , f_\ell ^\infty )\) of the operators \(F_1, \dots , F_\ell \) to be the reached fixpoint if it exists, and the tuple of identity functions otherwise. That is, for \(i \in [1:\ell ]\), we set \(f_i^\infty = f_i^k\) if there exists \(k \ge 0\) such that \(f_j^k = f_j^{k+1}\) for all \(j \in [1:\ell ]\), and \(f_i^\infty = \mathrm {id}_V\) otherwise. Having introduced the necessary background, we can finally provide the semantics of the formula presented in (\(*\)): it is satisfied by \((G, \lambda )\) under \(\sigma \) if and only if \((G, \lambda ), \sigma [f_1 \mapsto f_1^\infty , \dots , f_\ell \mapsto f_\ell ^\infty ] \models \psi \), where \(\sigma [f_1 \mapsto f_1^\infty , \dots , f_\ell \mapsto f_\ell ^\infty ]\) is the extension of \(\sigma \) that interprets \(f_i\) as \(f_i^\infty \), for \(i \in [1:\ell ]\). In other words, the formula (\(*\)) can intuitively be read as

“if \(f_1, \dots , f_\ell \) are interpreted as the partial fixpoint of \(\varphi _1, \dots , \varphi _\ell \), then \(\psi \) holds”.
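
The stages and the fallback to identity functions can be mirrored directly in Python; in the sketch below, each element of defines plays the role of an operator \(F_i\), returning the set of candidate output nodes for a given input node under the current interpretations (all names are ours).

def partial_fixpoint(defines, nodes):
    """defines[i](fs, u) yields the set of nodes v such that formula phi_i holds
    with in = u and out = v under the current interpretations fs (dicts node -> node).
    Returns the simultaneous fixpoint, or identity functions if the stages never converge."""
    identity = {u: u for u in nodes}
    stage = [dict(identity) for _ in defines]
    bound = len(nodes) ** (len(defines) * len(nodes))   # number of possible stage tuples
    for _ in range(bound):
        nxt = []
        for define in defines:
            f = {}
            for u in nodes:
                outs = define(stage, u)
                f[u] = next(iter(outs)) if len(outs) == 1 else u   # non-functional: keep u
            nxt.append(f)
        if nxt == stage:
            return stage                                 # fixpoint reached
        stage = nxt
    return [dict(identity) for _ in defines]             # no fixpoint: identity functions

# Toy definition on {0, 1, 2}: out = in + 1 whenever such a node exists.
nodes = [0, 1, 2]
succ = lambda fs, u: {u + 1} if u + 1 in nodes else set()
print(partial_fixpoint([succ], nodes))   # [{0: 1, 1: 2, 2: 2}]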

4.1 Syntactic Sugar

Before we consider a concrete formula (in Example 8), we first introduce some “syntactic sugar” to make using functional fixpoint logic more pleasant.

Set variables. According to our definition of functional fixpoint logic, the operator \(\mathrm {pfp}\) can bind only function variables. However, functions can be used to encode sets of nodes in a straightforward manner: any set \(S \subseteq V\) may be represented by a function that maps nodes outside of S to themselves and nodes inside S to nodes distinct from themselves. Therefore, we may fix an infinite supply of set variables, and extend the syntax of first-order logic to allow atomic formulas of the form \(t \in X\), where t is a term and X is a set variable. Naturally, the semantics is that “t is an element of X”. To bind set variables, we can then write partial fixpoint formulas of the same shape as (\(*\)), where the lines in square brackets may now also define set variables. The stages of the partial fixpoint induction are computed as before, but each set variable X is initialized to \(\emptyset \), and falls back to \(\emptyset \) in case the sequence of stages does not converge to a fixpoint.
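
The encoding of sets by functions described above is easy to spell out in Python (the helper names are ours); it requires at least two nodes, which is guaranteed by the restriction to graphs of size at least two.

def set_to_function(s, nodes):
    """Map nodes outside s to themselves and nodes inside s to some other node."""
    other = {u: next(v for v in nodes if v != u) for u in nodes}
    return {u: (other[u] if u in s else u) for u in nodes}

def function_to_set(f):
    return {u for u, v in f.items() if u != v}

nodes = [0, 1, 2, 3]
f = set_to_function({1, 3}, nodes)
print(f, function_to_set(f))   # {0: 0, 1: 0, 2: 2, 3: 0} {1, 3}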

Quantifiers over functions and sets. Partial fixpoint inductions allow us to iterate over various interpretations of function and set variables and thus provide a way of expressing (second-order) quantification over functions and sets. Since we restrict ourselves to graphs whose nodes are totally ordered, we can easily define a suitable order of iteration and a corresponding partial fixpoint induction that traverses all possible interpretations of a given function or set variable. To make this more convenient, we enrich the language of functional fixpoint logic with second-order quantifiers, allowing us to write formulas of the form \(\exists f\, \varphi \) and \(\exists X\, \varphi \), where f is a function variable, X is a set variable, and \(\varphi \) is a formula. Obviously, the semantics is that “there exists a function f, or a set X, respectively, such that \(\varphi \) holds”.

As a consequence, it is possible to express any graph property definable in monadic second-order logic, the extension of first-order logic with set quantifiers.

Corollary 7

When restricted to finite graphs equipped with a total order, functional fixpoint logic is strictly more expressive than monadic second-order logic.

The strictness of the inclusion in the above corollary follows from the fact that even on totally ordered graphs, Hamiltonicity cannot be defined in monadic second-order logic (see, e.g., the proof in [4, Prp. 5.13]). As the following example shows, this property is easy to express in functional fixpoint logic.

Example 8

(Hamiltonian cycle). Hamiltonicity can be defined by a formula of functional fixpoint logic. That is, there is a formula that an unlabeled graph G satisfies if and only if there exists a cycle in G that goes through each node exactly once; a sketch of such a formula is given after this example. Intuitively, we represent a given Hamiltonian cycle by a function f that tells us, for each node v, which of v's neighbors we should visit next in order to traverse the entire cycle. Thus, f actually represents a directed version of the cycle.

To ensure the existence of a Hamiltonian cycle, the formula states that there is a function f satisfying the following two conditions. First, each node v must have exactly one f-predecessor and one f-successor, both of which must be neighbors of v. Second, if we start at any node u and collect into a set X all the nodes reachable from u (by following the path specified by f), then X must contain all nodes.    \(\square \)
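
Using the syntactic sugar introduced above (a second-order function quantifier, set variables, and a set quantifier), one way to write down such a formula is sketched below in LaTeX; this is our own rendering of the two conditions, not necessarily the exact formula of Example 8. Here \(x \sim y\) denotes adjacency and \(\exists ^{=1}\) abbreviates “there is exactly one”.

\exists f\; \Bigl(
    \forall x \,\bigl( x \sim f(x) \;\wedge\; \exists^{=1} y\; f(y) = x \bigr)
    \;\wedge\;
    \forall x\, \forall X \,\bigl( \bigl( x \in X \;\wedge\; \forall y\,(y \in X \rightarrow f(y) \in X) \bigr)
        \rightarrow \forall z\; z \in X \bigr)
\Bigr)

The first conjunct forces the f-successor of every node to be one of its neighbors and every node to have exactly one f-predecessor; the second conjunct states that every set that contains some node and is closed under f already contains all nodes, i.e., following f from any node reaches the whole graph.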

5 Translating Between Automata and Logic

Having introduced both automata and logic, we can proceed to explain the first part of Theorem 1 (stated in Sect. 1), i.e., how distributed register automata can be translated into functional fixpoint logic.

Proposition 9

For every distributed register automaton that decides a graph property, we can construct an equivalent formula of functional fixpoint logic.

Proof

(sketch). Given a distributed register automaton \(A = (Q, R, \iota , \varDelta , H, o)\) deciding a graph property P over label set \(\varLambda \), we can construct a formula \(\varphi _A\) of functional fixpoint logic that defines P. For each state \(q \in Q\), our formula uses a set variable \(X_q\) to represent the set of nodes of the input graph that are in state q. Also, for each register \(\rho \in R\), it uses a function variable \(f_\rho \) to represent the function that maps each node u to the node v whose identifier is stored in u's register \(\rho \). By means of a partial fixpoint operator, we enforce that on any \(\varLambda \)-labeled graph \((G, \lambda )\), the final interpretations of the \(X_q\)'s and \(f_\rho \)'s represent the halting configuration reached by A on \((G, \lambda )\). The main formula is simply \(\forall x \bigvee _{q \in H,\, o(q) = \textsc {yes}}\; x \in X_q\), which states that all nodes end up in a halting state that outputs \(\textsc {yes}\).

Basically, the subformulas defining the \(X_q\)'s and \(f_\rho \)'s can be constructed in such a way that for all \(i \in \mathbb {N}\), the \((i + 1)\)-th stage of the partial fixpoint induction represents the configuration reached by A in the i-th round. To achieve this, each of the subformulas contains a nested partial fixpoint formula describing the result computed by the transition maker \(\varDelta \) between two consecutive synchronous rounds, using additional set and function variables to encode the inner configurations of \(\varDelta \) at each node. Thus, each stage of the nested partial fixpoint induction corresponds to a single step in the transition maker's sequential scanning process.    \(\square \)

Let us now consider the opposite direction and sketch how to go from functional fixpoint logic to distributed register automata.

Proposition 10

For every formula of functional fixpoint logic that defines a graph property, we can construct an equivalent distributed register automaton.

Proof

(sketch). We proceed by structural induction: each subformula \(\varphi \) will be evaluated by a dedicated automaton \(A_\varphi \), and several such automata can then be combined to build an automaton for a composite formula. For this purpose, it is convenient to design centralized automata, which operate on a given spanning tree (as computed in Example 5) and are coordinated by the root in a fairly sequential manner. In \(A_\varphi \), each free node variable x of \(\varphi \) is represented by a corresponding input register whose value at the root is the current interpretation of x. Similarly, to represent a function variable f, every node v has a register storing the value \(f(v)\). The nodes also possess some auxiliary registers whose purpose will be explained below. In the end, for any formula \(\varphi \) (potentially with free variables), we will have an automaton \(A_\varphi \) computing a transduction whose input registers include \(\mathit{parent}\) and \(\mathit{root}\), which are supposed to constitute a spanning tree. The computation is triggered by the root, which means that the other nodes are waiting for a signal to wake up. Essentially, the nodes involved in the evaluation of \(\varphi \) collect some information, send it towards the root, and go back to sleep. The root then returns \(\textsc {yes}\) or \(\textsc {no}\), depending on whether or not \(\varphi \) holds in the input graph under the variable assignment provided by the input registers. Centralizing \(A_\varphi \) in that way makes it very convenient (albeit not efficient) to evaluate composite formulas. For example, in \(A_{\varphi _1 \vee \varphi _2}\), the root will first run \(A_{\varphi _1}\), and then \(A_{\varphi _2}\) in case \(A_{\varphi _1}\) returns \(\textsc {no}\).
The evaluation of atomic formulas is straightforward. So let us focus on the most interesting case, namely when \(\varphi \) is a partial fixpoint formula of the form (\(*\)), binding the function variables \(f_1, \dots , f_\ell \). The root's program is outlined in Algorithm 2. Line 1 initializes a counter that ranges from 0 to \(n^{\ell \cdot n}\), where n is the number of nodes in the input graph. This counter is distributed in the sense that every node has some dedicated registers that together store the current counter value. Every execution of the increment subroutine will increment the counter by 1, or report an overflow if its maximum value has been exceeded. Now, in each iteration of the loop starting at Line 2, the registers representing the current and next stage of the partial fixpoint induction are updated accordingly. For the former, it suffices that every node copies, for each i, the contents of its next-stage register for \(f_i\) to the corresponding current-stage register (Line 3). To update the next-stage registers, Line 4 calls a subroutine whose effect is that they represent \(F_i\) applied to the current stage, for each i, where \(F_i\) is the operator defined in Sect. 4. Line 5 checks whether we have reached a fixpoint: the root asks every node to compare, for each i, its current-stage and next-stage registers. The corresponding truth value is propagated back to the root, where a negative answer is given preference over a positive one. If the result is positive, we exit the loop and proceed with calling \(A_\psi \) to evaluate \(\psi \) (Line 8). Otherwise, we try to increment the global counter (Line 6). If this results in an overflow, the fixpoint computation is aborted because we know that it has reached a cycle. In accordance with the partial fixpoint semantics, all nodes then write their own identifier to every register representing one of the \(f_i\) (Line 7) before \(\psi \) is evaluated (Line 8).    \(\square \)
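
Abstracting away from the distributed counter and the spanning-tree communication, the control flow of this evaluation can be summarized by the following Python sketch; update and evaluate_psi stand for the subroutines of Algorithm 2, and all names are ours.

def evaluate_pfp(nodes, num_funcs, update, evaluate_psi):
    """update(fs) returns the next stage of the function interpretations fs;
    evaluate_psi(fs) evaluates the subformula psi under the interpretations fs."""
    identity = {u: u for u in nodes}
    fs = [dict(identity) for _ in range(num_funcs)]
    limit = len(nodes) ** (num_funcs * len(nodes))   # number of distinct stages
    counter = 0
    while True:
        nxt = update(fs)
        if nxt == fs:                        # fixpoint reached
            return evaluate_psi(nxt)
        counter += 1
        if counter >= limit:                 # counter overflow: the stages cycle forever
            fallback = [dict(identity) for _ in range(num_funcs)]
            return evaluate_psi(fallback)    # every node points back to itself
        fs = nxt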

6 Conclusion

This paper makes some progress in the development of a descriptive distributed complexity theory by establishing a logical characterization of a wide class of network algorithms, modeled as distributed register automata.

In our translation from logic to automata, we did not pay much attention to algorithmic efficiency. In particular, we made extensive use of centralized subroutines that are triggered and controlled by a leader process. A natural question for future research is to identify cases where we can understand a distributed architecture as an opportunity that allows us to evaluate formulas faster. In other words, is there an expressive fragment of functional fixpoint logic that gives rise to efficient distributed algorithms in terms of running time? What about the required number of messages? We are then entering the field of automatic synthesis of practical distributed algorithms from logical specifications. This is a worthwhile task, as it is often much easier to declare what should be done than how it should be done (cf. Examples 6 and 8).

As far as the authors are aware, this area is still relatively unexplored. However, one noteworthy advance was made by Grumbach and Wu in [9], where they investigated distributed evaluation of first-order formulas on bounded-degree graphs and planar graphs. We hope to follow up on this in future work.


Acknowledgments

We thank Matthias Függer for helpful discussions. Work supported by ERC EQualIS (FP7-308087) (http://www.lsv.fr/~bouyer/equalis) and ANR FREDDA (17-CE40-0013) (https://www.irif.fr/anr/fredda/index).

References

  1. Abiteboul, S., Vianu, V.: Fixpoint extensions of first-order logic and datalog-like languages. In: Proceedings of the Fourth Annual Symposium on Logic in Computer Science (LICS 1989), Pacific Grove, California, USA, 5–8 June 1989, pp. 71–79. IEEE Computer Society (1989). https://doi.org/10.1109/LICS.1989.39160
  2. Aiswarya, C., Bollig, B., Gastin, P.: An automata-theoretic approach to the verification of distributed algorithms. In: Aceto, L., de Frutos-Escrig, D. (eds.) 26th International Conference on Concurrency Theory, CONCUR 2015, Madrid, Spain, 14 September 2015. LIPIcs, vol. 42, pp. 340–353. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2015). https://doi.org/10.4230/LIPIcs.CONCUR.2015.340
  3. Aiswarya, C., Bollig, B., Gastin, P.: An automata-theoretic approach to the verification of distributed algorithms. Inf. Comput. 259(Part 3), 305–327 (2018). https://doi.org/10.1016/j.ic.2017.05.006
  4. Courcelle, B., Engelfriet, J.: Graph Structure and Monadic Second-Order Logic: A Language-Theoretic Approach. Encyclopedia of Mathematics and Its Applications, vol. 138. Cambridge University Press, Cambridge (2012). https://hal.archives-ouvertes.fr/hal-00646514. https://doi.org/10.1017/CBO9780511977619
  5. Diestel, R.: Graph Theory. GTM, vol. 173. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-53622-3
  6. Fagin, R.: Generalized first-order spectra and polynomial-time recognizable sets. In: Karp, R.M. (ed.) Complexity of Computation. SIAM-AMS Proceedings, vol. 7, pp. 43–73 (1974). http://www.almaden.ibm.com/cs/people/fagin/genspec.pdf
  7. Feuilloley, L., Fraigniaud, P.: Survey of distributed decision. Bull. EATCS 119 (2016). http://eatcs.org/beatcs/index.php/beatcs/article/view/411
  8. Grädel, E., et al.: Finite Model Theory and Its Applications. Texts in Theoretical Computer Science. An EATCS Series, 1st edn. Springer, Heidelberg (2007). https://doi.org/10.1007/3-540-68804-8
  9. Grumbach, S., Wu, Z.: Logical locality entails frugal distributed computation over graphs (extended abstract). In: Paul, C., Habib, M. (eds.) WG 2009. LNCS, vol. 5911, pp. 154–165. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-11409-0_14
  10. Hella, L., et al.: Weak models of distributed computing, with connections to modal logic. In: Kowalski, D., Panconesi, A. (eds.) ACM Symposium on Principles of Distributed Computing, PODC 2012, Funchal, Madeira, Portugal, 16–18 July 2012, pp. 185–194. ACM (2012). https://doi.org/10.1145/2332432.2332466
  11. Hella, L., et al.: Weak models of distributed computing, with connections to modal logic. Distrib. Comput. 28(1), 31–53 (2015). https://arxiv.org/abs/1205.2051. https://doi.org/10.1007/s00446-013-0202-3
  12. Immerman, N.: Descriptive Complexity. Texts in Computer Science. Springer, New York (1999). https://doi.org/10.1007/978-1-4612-0539-5
  13. Kaminski, M., Francez, N.: Finite-memory automata. Theor. Comput. Sci. 134(2), 329–363 (1994). https://doi.org/10.1016/0304-3975(94)90242-9
  14. Kuusisto, A.: Modal logic and distributed message passing automata. In: Rocca, S.R.D. (ed.) Computer Science Logic 2013 (CSL 2013), Torino, Italy, 2–5 September 2013. LIPIcs, vol. 23, pp. 452–468. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2013). https://doi.org/10.4230/LIPIcs.CSL.2013.452
  15. Libkin, L.: Elements of Finite Model Theory. Texts in Theoretical Computer Science. An EATCS Series, 1st edn. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-662-07003-1
  16. Naor, M., Stockmeyer, L.J.: What can be computed locally? SIAM J. Comput. 24(6), 1259–1277 (1995). https://doi.org/10.1137/S0097539793254571
  17. Reiter, F.: Distributed graph automata. In: 30th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2015, Kyoto, Japan, 6–10 July 2015, pp. 192–201. IEEE Computer Society (2015). https://arxiv.org/abs/1408.3030. https://doi.org/10.1109/LICS.2015.27
  18. Reiter, F.: Asynchronous distributed automata: a characterization of the modal MU-fragment. In: Chatzigiannakis, I., Indyk, P., Kuhn, F., Muscholl, A. (eds.) 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, Warsaw, Poland, 10–14 July 2017. LIPIcs, vol. 80, pp. 100:1–100:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017). http://arxiv.org/abs/1611.08554. https://doi.org/10.4230/LIPIcs.ICALP.2017.100
  19. Vardi, M.Y.: The complexity of relational query languages (extended abstract). In: Lewis, H.R., Simons, B.B., Burkhard, W.A., Landweber, L.H. (eds.) Proceedings of the 14th Annual ACM Symposium on Theory of Computing, San Francisco, California, USA, 5–7 May 1982, pp. 137–146. ACM (1982). https://doi.org/10.1145/800070.802186

Copyright information

© The Author(s) 2019. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Authors and Affiliations

  • Benedikt Bollig
  • Patricia Bouyer
  • Fabian Reiter

  1. LSV, CNRS, ENS Paris-Saclay, Université Paris-Saclay, Cachan, France
