An \(O(m\log n)\) Algorithm for Stuttering Equivalence and Branching Bisimulation
Abstract
We provide a new algorithm to determine stuttering equivalence with time complexity \(O(m \log n)\), where n is the number of states and m is the number of transitions of a Kripke structure. This algorithm can also be used to determine branching bisimulation in \(O(m(\log | Act |+\log n))\) time. Theoretically, our algorithm substantially improves upon existing algorithms, which all have time complexity O(mn) [2, 3, 9]. Moreover, it has better or equal space complexity. Practical results confirm these findings, showing that our algorithm can outperform existing algorithms by orders of magnitude, especially when the Kripke structures are large.
1 Introduction
Stuttering equivalence [4] and branching bisimulation [8] were proposed as alternatives to Milner’s weak bisimulation [13]. They are very close to weak bisimulation, the essential difference being that all states in the mimicking sequence \(\tau ^*a\,\tau ^*\) must be related to either the state before or the state directly after the a of the first system. This means that branching bisimulation and stuttering equivalence are slightly stronger notions than weak bisimulation.
In [9] an O(mn) time algorithm was proposed for stuttering equivalence and branching bisimulation, where m is the number of transitions and n is the number of states in either a Kripke structure (for stuttering equivalence) or a labelled transition system (for branching bisimulation). We refer to this algorithm as GV. It is based upon the O(mn) algorithm for bisimulation equivalence in [11]. Both algorithms require \(O(m{+}n)\) space. They calculate for each state whether it is bisimilar to another state.
The basic idea of the algorithms of [9, 11] is to partition the set of states into blocks. States that are bisimilar always reside in the same block. Whenever there are some states in a block \(B'\) from which a transition is possible to some block B and there are other states in \(B'\) from which such a step is not possible, \(B'\) is split accordingly. Whenever no splitting is possible anymore, the partition is called stable, and two states are in the same block iff they are bisimilar.
There have been some attempts to come up with improvements of GV. The authors of [2] observed that GV only splits a block in two parts at a time. They proposed to split a block into as many parts as possible at once, reducing the number of states and transitions that must be moved to new blocks. Their worst-case time and space complexities are worse than those of GV, especially the space complexity O(mn), but in practice this algorithm can outperform GV. In [3], the space complexity is brought back to \(O(m{+}n)\). A technique based on both GV and [2, 3] to be performed on Graphics Processing Units is proposed in [19]. This improves the required runtime considerably by employing parallelism, but it does not imply any improvement to the single-threaded algorithm.
In [15] an \(O(m \log n)\) algorithm is proposed for strong bisimulation as an improvement upon the algorithm of [11]. The core idea of this improvement is described as “process the smaller half” [1]. Whenever a block is split in two parts, the amount of work can be attributed to the size of the smallest resulting block. In such a scheme a state is only involved in the process of splitting if it resides in a block at most half the size of the block it was in when it was previously involved in splitting. This means that a state can never be involved in more than \(\log _2 n\) splittings. As the time spent on each state is proportional to the number of incoming and outgoing transitions of that state, the total required time is \(O(m \log n)\).
In this paper we propose the first algorithm for stuttering equivalence and branching bisimulation in which the “process the smaller half” technique is used. By doing so, we can finally confirm the conjecture in [9] that such an improvement of GV is conceivable. Moreover, we achieve an even lower complexity than conjectured in [9], namely \(O(m \log n)\), by applying the technique twice, the second time to handle the presence of inert transitions. First we establish whether a block can be split by combining the approach regarding bottom states from GV with the detection approach in [15]. Subsequently, we use the “process the smaller half” technique again to split a block by traversing transitions only in time proportional to the size of the smallest subblock. As it is not known in advance which of the two subblocks is smallest, the transitions of the two subblocks are processed alternatingly, such that the total processing time can be attributed to the smallest block. For checking behavioural equivalences, applying such a technique is entirely new. We are only aware of a similar approach in an algorithm in which the smallest bottom strongly connected component of a graph needs to be found [5].
Compared to checking other equivalences, the existing algorithms for branching bisimulation/stuttering equivalence were already known to be practically very efficient. This is the reason that they are used in multiple explicit-state model checkers, such as Cadp [7], the mCRL2 toolset [10] and TVT [18]. In particular, they are used as preprocessing steps for other equivalences (weak bisimulation, trace-based equivalences) that are much harder to compute. For weak bisimulation an O(mn) algorithm has recently been devised [12, 16], but until then an expensive transitive closure operation of at best \(O(n^{2.373})\) was required. The improvements of our new algorithm are not restricted to stuttering equivalence and branching bisimulation alone, but they can also impact the computation time of these other behavioural equivalences.
Although our algorithm theoretically outperforms its predecessors substantially, we wanted to know whether it would also do so in practice. For dedicated examples our algorithm lives up to its theoretical improvement, outperforming the existing algorithms in accordance with the theory. For practical examples we see that our algorithm always matches the best running times of existing algorithms, and especially when the Kripke structures and transition systems get large, it tends to outperform existing algorithms by orders of magnitude.
2 Preliminaries
We introduce Kripke structures and (divergence-blind) stuttering equivalence. In Sect. 6 we explain branching bisimulation and its application to labelled transition systems.
Definition 1
A Kripke structure is a tuple \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,}, L)\), where:
 1.
S is a finite set of states.
 2.
\( AP \) is a finite set of atomic propositions.
 3.
\(\mathord {\, \xrightarrow {}_{} \,}\subseteq S\times S\) is a total transition relation, i.e., for each \(s\in S\) there is an \(s'\in S\) s.t. \(s\mathord {\, \xrightarrow {}_{} \,}s'\).
 4.
\(L:S\rightarrow 2^{ AP }\) is a state labelling.
We use \(n{=}|S|\) for the number of states and \(m{=}|{\mathord {\, \xrightarrow {}_{} \,}}|\) for the number of transitions. For a set of states \(B{\subseteq } S\), we write \(s\mathord {\, \xrightarrow {}_{B} \,}s'\) for \(s\mathord {\, \xrightarrow {}_{} \,}s'\) and \(s'\in B\).
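As an illustration, the structure of Definition 1 can be captured directly in code. The following Python sketch (class and field names are our own, not from the paper) also checks that the transition relation is total:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Kripke:
    states: frozenset        # S
    transitions: frozenset   # subset of S x S; must be total
    labelling: dict          # L : S -> set of atomic propositions

    def __post_init__(self):
        # totality: every state needs at least one outgoing transition
        for s in self.states:
            assert any(src == s for (src, _) in self.transitions), \
                f"transition relation not total: {s} has no successor"
```

In this rendering, `len(k.states)` and `len(k.transitions)` correspond to the n and m above.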
Definition 2
Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,}, L)\) be a Kripke structure. A symmetric relation \(R\subseteq S\times S\) is a divergence-blind stuttering equivalence iff for all \(s,t\in S\) with sRt:
 1.
\(L(s)=L(t)\).
 2.
for all \(s'\in S\), if \(s\mathord {\, \xrightarrow {}_{} \,}s'\), then there are \(t_0,\ldots , t_k\in S\) for some \(k\in \mathbb {N}\) such that \(t=t_0\), \(s'Rt_k\), and \(sRt_i\) and \(t_i\mathord {\, \xrightarrow {}_{} \,}t_{i+1}\) for all \(i<k\).
We say that two states \(s,t\in S\) are divergence-blind stuttering equivalent, notation \(s \leftrightarrow _{dbs} t\), iff there is a divergence-blind stuttering equivalence relation R such that sRt.
An important property of divergence-blind stuttering equivalence is that if all states on a loop have the same label, then all these states are divergence-blind stuttering equivalent. We define stuttering equivalence in terms of divergence-blind stuttering equivalence using the following Kripke structure.
Definition 3
Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,}, L)\) be a Kripke structure, \(s_d\notin S\) a fresh state and \(d\notin AP \) a fresh atomic proposition. The Kripke structure \(K_d=(S_d, AP \cup \{d\},\mathord {\, \xrightarrow {}_{d} \,}, L_d)\) with \(S_d=S\cup \{s_d\}\) is defined by:
 1.
\(\mathord {\, \xrightarrow {}_{d} \,}=\mathord {\, \xrightarrow {}_{} \,}\cup \{\langle s,s_d\rangle \mid s\) is on a cycle of states all labelled with L(s), or \(s=s_d\}\).
 2.
For all \(s\in S\) we define \(L_d(s)=L(s)\) and \(L_d(s_d)=\{d\}\).
States \(s,t\in S\) are stuttering equivalent, notation \(s \leftrightarrow _{s} t\), iff there is a divergence-blind stuttering equivalence relation R on \(S_d\) such that sRt.
Note that an algorithm for divergence-blind stuttering equivalence can also be used to determine stuttering equivalence by employing only a linear-time and linear-space transformation. Therefore, we concentrate on an algorithm for divergence-blind stuttering equivalence only.
3 Partitions and Splitters: A Simple Algorithm
Our algorithms perform partition refinement, starting from an initial partition of the set of states S. A partition \(\pi =\{ B_i\subseteq S\mid 1\le i\le k\}\) is a set of non-empty subsets such that \(B_i\cap B_j=\emptyset \) for all \(1\le i<j\le k\) and \(S =\bigcup _{1\le i\le k}B_i\). Each \(B_i\) is called a block.
We call a transition \(s \mathord {\, \xrightarrow {}_{} \,} s'\) inert w.r.t. \(\pi \) iff s and \(s'\) are in the same block \(B \in \pi \). We say that a partition \(\pi \) coincides with divergence-blind stuttering equivalence when for all \(s,t\in S\), \(s \leftrightarrow _{dbs} t\) iff there is a block \(B\in \pi \) such that \(s,t\in B\). We say that a partition respects divergence-blind stuttering equivalence iff for all \(s,t\in S\), if \(s \leftrightarrow _{dbs} t\) then there is some block \(B\in \pi \) such that \(s,t\in B\). The goal of the algorithm is to calculate a partition that coincides with divergence-blind stuttering equivalence. This is done starting with the initial partition \(\pi _0\) consisting of blocks B satisfying that if \(s,t\in B\) then \(L(s)=L(t)\). Note that this initial partition respects divergence-blind stuttering equivalence.
We say that a partition \(\pi \) is cycle-free iff for each block \(B\in \pi \) there is no state \(s\in B\) such that \(s\mathord {\, \xrightarrow {}_{B} \,}s_1\mathord {\, \xrightarrow {}_{B} \,}\cdots \mathord {\, \xrightarrow {}_{B} \,}s_k\mathord {\, \xrightarrow {}_{B} \,}s\) for some \(k\in \mathbb {N}\). It is easy to make the initial partition \(\pi _0\) cycle-free by merging all states on a cycle within a block into a single state. This preserves divergence-blind stuttering equivalence and can be performed in linear time employing a standard algorithm to find strongly connected components [1].
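The preprocessing step just described can be sketched with a standard strongly-connected-component computation. In the Python fragment below (function names `sccs` and `collapse_block_cycles` are ours, not from the paper), every SCC lying inside a single block is merged into one representative state; this is a sketch under the stated assumptions, not the paper's implementation.

```python
def sccs(nodes, succ):
    """Iterative Tarjan: returns the strongly connected components
    of the graph given by `nodes` and successor function `succ`."""
    index, low = {}, {}
    stack, on_stack, comps = [], set(), []
    counter = [0]
    for root in nodes:
        if root in index:
            continue
        index[root] = low[root] = counter[0]; counter[0] += 1
        stack.append(root); on_stack.add(root)
        work = [(root, iter(succ(root)))]
        while work:
            v, it = work[-1]
            pushed = False
            for w in it:
                if w not in index:
                    index[w] = low[w] = counter[0]; counter[0] += 1
                    stack.append(w); on_stack.add(w)
                    work.append((w, iter(succ(w))))
                    pushed = True
                    break
                if w in on_stack:
                    low[v] = min(low[v], index[w])
            if pushed:
                continue
            work.pop()
            if work:
                u = work[-1][0]
                low[u] = min(low[u], low[v])
            if low[v] == index[v]:
                comp = []
                while True:
                    w = stack.pop(); on_stack.discard(w)
                    comp.append(w)
                    if w == v:
                        break
                comps.append(comp)
    return comps


def collapse_block_cycles(states, transitions, block_of):
    """Merge all states on a same-block cycle into one representative."""
    def succ(s):
        # only block-internal (inert) transitions can lie on such a cycle
        return [t for (u, t) in transitions
                if u == s and block_of[t] == block_of[s]]
    rep = {}
    for comp in sccs(states, succ):
        for s in comp:
            rep[s] = comp[0]
    quotient = {(rep[u], rep[v]) for (u, v) in transitions}
    return rep, quotient
```

A quotient computed this way identifies exactly the states merged by the preprocessing step; how self-loops produced by the merge are treated afterwards is left out of this sketch.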
For a block \(B'\in \pi \) and a set of states \({\varvec{B}} \subseteq S\), we write \(\textit{split} (B',{\varvec{B}})\) for the set of states in \(B'\) from which a state in \({\varvec{B}} \) can be reached via a path of transitions of which all intermediate states are in \(B'\) (including the path of length zero if the state itself is in \({\varvec{B}} \)), and \(\textit{cosplit} (B',{\varvec{B}})=B'\setminus \textit{split} (B',{\varvec{B}})\). We say that a block \(B'\) is unstable under \({\varvec{B}} \) iff \(\textit{split} (B',{\varvec{B}})\not =\emptyset \) and \(\textit{cosplit} (B',{\varvec{B}})\not =\emptyset \). A partition \(\pi \) is unstable under \({\varvec{B}} \) iff there is at least one \(B' \in \pi \) which is unstable under \({\varvec{B}} \). If \(\pi \) is not unstable under \({\varvec{B}} \) then it is called stable under \({\varvec{B}} \). If \(\pi \) is stable under all \({\varvec{B}} \), then it is simply called stable.
A refinement of \(B' \in \pi \) under \({\varvec{B}} \) consists of two new blocks \(\textit{split} (B',{\varvec{B}})\) and \(\textit{cosplit} (B',{\varvec{B}})\). A partition \(\pi '\) is a refinement of \(\pi \) under \({\varvec{B}} \) iff all unstable blocks \(B' \in \pi \) have been replaced by new blocks \(\textit{split} (B',{\varvec{B}})\) and \(\textit{cosplit} (B',{\varvec{B}})\).
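A direct way to compute such a refinement is backward reachability. The sketch below (function names are ours) computes split(B', 𝐁) as the set of states in B' that reach 𝐁 via a path whose intermediate states stay in B', and cosplit(B', 𝐁) as its complement in B':

```python
from collections import deque

def split(block, splitter, transitions):
    """States in `block` that can reach `splitter` via transitions whose
    intermediate states stay inside `block` (backward BFS sketch)."""
    pred = {}  # predecessor map, restricted to sources inside `block`
    for (u, v) in transitions:
        if u in block:
            pred.setdefault(v, []).append(u)
    found, queue = set(), deque()
    # seed: states with a direct transition into the splitter
    for t in splitter:
        for u in pred.get(t, []):
            if u not in found:
                found.add(u); queue.append(u)
    # extend backwards through inert (block-internal) transitions
    while queue:
        v = queue.popleft()
        for u in pred.get(v, []):
            if u not in found:
                found.add(u); queue.append(u)
    return found

def cosplit(block, splitter, transitions):
    return set(block) - split(block, splitter, transitions)
```

The zero-step case (a state of B' lying in 𝐁 itself) is irrelevant here because block and splitter are disjoint in a refinement step.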
The following lemma expresses that if a partition is stable, then it coincides with divergence-blind stuttering equivalence. It also says that during refinement the encountered partitions respect divergence-blind stuttering equivalence and remain cycle-free.
Lemma 1
Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,}, L)\) be a Kripke structure and \(\pi \) a partition of S.
 1.
For all states \(s,t\in S\), if \(s,t\in B\) with B a block of the partition \(\pi \), and \(\pi \) is stable and a refinement of the initial partition \(\pi _0\), then \(s \leftrightarrow _{dbs} t\).
 2.
If \(\pi \) respects divergence-blind stuttering equivalence, then any refinement of \(\pi \) under the union of some of the blocks in \(\pi \) also respects it.
 3.
If \(\pi \) is a cycle-free partition, then any refinement of \(\pi \) is also cycle-free.
Proof
 1.
We show that if \(\pi \) is a stable partition, the relation \(R=\{\langle s,t\rangle \mid s,t\in B,~B\in \pi \}\) is a divergence-blind stuttering equivalence. It is clear that R is symmetric. Assume sRt. Obviously, \(L(s)=L(t)\) because \(s,t\in B\) and \(\pi \) refines the initial partition. For the second requirement of divergence-blind stuttering equivalence, suppose \(s\mathord {\, \xrightarrow {}_{} \,} s'\). There is a block \(B'\) such that \(s'\in B'\). As \(\pi \) is stable, it holds for t that \(t=t_0\mathord {\, \xrightarrow {}_{} \,}t_1\mathord {\, \xrightarrow {}_{} \,}\cdots \mathord {\, \xrightarrow {}_{} \,} t_k\) for some \(k\in \mathbb {N}\), \(t_0,\ldots ,t_{k-1}\in B\) and \(t_k\in B'\). This clearly shows that \(sRt_i\) for all \(i<k\), and \(s'Rt_k\). So, R is a divergence-blind stuttering equivalence, and therefore it holds for all states \(s,t\in S\) that reside in the same block of \(\pi \) that \(s \leftrightarrow _{dbs} t\).
 2.
The second part can be proven by reasoning towards a contradiction. Assume that a partition \(\pi '\) that is a refinement of \(\pi \) under \({\varvec{B}} \) does not respect divergence-blind stuttering equivalence, although \(\pi \) does. Hence, there are states \(s, t\in S\) with \(s \leftrightarrow _{dbs} t\) and a block \(B' \in \pi \) with \(s, t\in B'\) such that s and t are in different blocks in \(\pi '\). Given that \(\pi '\) is a refinement of \(\pi \) under \({\varvec{B}} \), \(s \in \textit{split} (B',{\varvec{B}})\) and \(t \in \textit{cosplit} (B', {\varvec{B}})\) (or vice versa, which can be proven similarly). By definition of \(\textit{split} \), there are \(s_1, \ldots , s_{k-1} \in B'\) (\(k \in \mathbb {N}\)) and \(s_k \in {\varvec{B}} \) such that \(s \mathord {\, \xrightarrow {}_{} \,} s_1 \mathord {\, \xrightarrow {}_{} \,} \cdots \mathord {\, \xrightarrow {}_{} \,} s_k\). Either \(k=0\) and \(B'\subseteq {\varvec{B}} \), but then \(t\notin \textit{cosplit} (B',{\varvec{B}})\). Or \(k>0\), and since \(s \leftrightarrow _{dbs} t\), there are \(t_1, \ldots , t_{l-1} \in B'\) (\(l \in \mathbb {N}\)) and \(t_l \in {\varvec{B}} \) such that \(t \mathord {\, \xrightarrow {}_{} \,} t_1 \mathord {\, \xrightarrow {}_{} \,} \cdots \mathord {\, \xrightarrow {}_{} \,} t_l \) with \(s_i R t_j\) for all \(1 \le i < k\), \(1 \le j < l\) and \(s_k R t_l\). This means that \(t \in \textit{split} (B', {\varvec{B}})\), again contradicting that \(t \in \textit{cosplit} (B', {\varvec{B}})\).
 3.
If \(\pi \) is cycle-free, this property is straightforward, since splitting any block of \(\pi \) cannot introduce cycles. \(\square \)
Lemma 2
Let \(\pi \) be a cycle-free partition and \({\varvec{B}} \) a union of blocks of \(\pi \). Then a block \(B'\in \pi \) is unstable under \({\varvec{B}} \) iff \(\textit{split} (B',{\varvec{B}})\not =\emptyset \) and \( bottom (B')\cap \textit{split} (B',{\varvec{B}})\subset bottom (B')\), where \( bottom (B')\) is the set of states in \(B'\) without an outgoing inert transition.
Proof

\(\Rightarrow \) If \(B'\) is unstable, then \(\textit{split} (B',{\varvec{B}})\not =\emptyset \) and \(\textit{split} (B',{\varvec{B}})\not = B'\). The first conjunct corresponds to the first condition. If \(\textit{split} (B',{\varvec{B}})\not = B'\), there are states \(s {\notin } \textit{split} (B',{\varvec{B}})\). As the blocks \(B' {\in } \pi \) do not have cycles, consider such an \(s \notin \textit{split} (B', {\varvec{B}})\) with smallest distance to a state \(s_k {\in } bottom (B')\), i.e., \(s\mathord {\, \xrightarrow {}_{} \,}s_1\mathord {\, \xrightarrow {}_{} \,}\cdots \) \(\mathord {\, \xrightarrow {}_{} \,}s_k\) with all \(s_i\in B'\). If s itself is an element of \( bottom (B')\), the second part of the right-hand side of the lemma follows. Assume \(s{\notin } bottom (B')\). Then there is some state \(s'{\in } B'\) closer to \( bottom (B')\) such that \(s\mathord {\, \xrightarrow {}_{} \,}s'\). Clearly, \(s'{\notin } \textit{split} (B',{\varvec{B}})\) either, as otherwise \(s\in \textit{split} (B',{\varvec{B}})\). But as \(s'\) is closer to \( bottom (B')\), the state s was not a state with smallest distance to a state in \( bottom (B')\), which is a contradiction.

\(\Leftarrow \) It follows from the right-hand side that \(\textit{split} (B',{\varvec{B}})\not =\emptyset \) and \(\textit{split} (B',{\varvec{B}})\not =B'\). \(\square \)
This lemma can be used as follows to find a block to be split. Consider each \(B {\in } \pi \). Traverse its incoming transitions and mark the states that can reach B in zero or one step. If a block \(B'\) has marked states, but not all of its bottom states are marked, the condition of the lemma applies, and it needs to be split. At most all transitions need to be traversed to carry this out, so the complexity is O(m).
If B is equal to \(B'\), no splitting is possible. We implement this by marking all states in B, as each state in B can reach itself in zero steps. In this case the condition \( bottom (B')\cap \textit{split} (B',{\varvec{B}})\subset bottom (B')\) does not hold. This is different from [9], where a block is never considered as a splitter of itself, but we require this in the algorithm in the next sections.
If a block \(B'\) is unstable, and all states from which a state in B can be reached in one step are marked, then a straightforward recursive procedure is required to extend the marking to all states in \(\textit{split} (B',B)\), and those states need to be moved to a new block. This takes time proportional to the number of transitions in \(B'\), i.e., O(m).
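Putting the pieces of this section together yields the following naive refinement loop, written in Python (all names are ours). It recomputes split by plain reachability and restarts after every split, so it only illustrates the scheme and carries none of the complexity guarantees of GV or of the algorithm in the next sections; the initial partition is assumed to be prepared as described above.

```python
def dbs_partition(states, transitions, label):
    """Naive partition refinement for divergence-blind stuttering
    equivalence: refine until stable under every block (sketch)."""
    # initial partition pi_0: group the states by their label
    by_label = {}
    for s in states:
        by_label.setdefault(label[s], set()).add(s)
    partition = list(by_label.values())

    def reach_split(block, splitter):
        # states of `block` reaching `splitter` through block-internal steps
        found = {u for (u, v) in transitions if u in block and v in splitter}
        grew = True
        while grew:
            grew = False
            for (u, v) in transitions:
                if u in block and v in found and u not in found:
                    found.add(u)
                    grew = True
        return found

    stable = False
    while not stable:
        stable = True
        for splitter in partition:
            for block in partition:
                if block is splitter:
                    continue  # split(B, B) = B: no block splits itself here
                spl = reach_split(block, splitter)
                if spl and spl != block:
                    partition.remove(block)
                    partition.extend([spl, block - spl])
                    stable = False
                    break
            if not stable:
                break
    return partition
```

On termination the partition is stable, so by Lemma 1 two states are divergence-blind stuttering equivalent iff they end up in the same block.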
4 Constellations: An \(O(m\log n)\) Algorithm
The crucial idea to transform the algorithm from the previous section into an \(O(m\log n)\) algorithm stems from [15]. By grouping the blocks in the current partition \(\pi \) into constellations such that \(\pi \) is stable under the union of the blocks in such a constellation, we can determine whether a block exists under which \(\pi \) is unstable by only looking at blocks that are at most half the size of the constellation, i.e., \(|B| \le \frac{1}{2}|{\varvec{B}} |\), where \(|{\varvec{B}} | = \varSigma _{B'{\in {\varvec{B}}}} |B'|\), for a block B in a constellation \({\varvec{B}} \). If a block \(B'\in \pi \) is unstable under B, then we use a remarkable technique, consisting of two procedures running alternatingly, to identify the smallest block resulting from the split. The whole operation runs in time proportional to the smallest block resulting from the split. We involve the blocks in \({\varvec{B}} \setminus B\) in the splitting without explicitly analysing the states contained therein (for convenience, we write \({\varvec{B}} \setminus B\) instead of \({\varvec{B}} \setminus \{ B \}\)).
Working with constellations in this way ensures for each state that whenever it is involved in splitting, i.e., if it is part of a block that is used to split or that is being split, this block is at most half the size of the previous block in which the state resided when it was involved in splitting. That ensures that each state can be involved in splitting at most \(\log _2(n)\) times. When involving a state, we only analyse its incoming and outgoing transitions, resulting in an algorithm with complexity \(O(m\log n)\). Although we require quite a number of auxiliary data structures, these are either proportional to the number of states or to the number of transitions. So, the memory requirement is \(O(m{+}n)\).
In the following, the set of constellations also forms a partition, which we denote by \(\mathcal{C}\). A constellation is a set of one or more blocks from the current partition \(\pi \). If a constellation contains only one block, it is called trivial. The current partition \(\pi \) is stable with respect to each constellation in \(\mathcal{C}\).
If a constellation \({\varvec{B}} {\in }\mathcal{C}\) contains more than one block, we select one block \(B {\in } {\varvec{B}} \) which is at most half the size of \(\varvec{B}\), and move it to a new trivial constellation \({\varvec{B'}} \). We check whether the current partition is stable under B and \({\varvec{B}} \setminus B\) according to Lemma 2 by traversing the incoming transitions of states in B and marking the encountered states that can reach B in zero or one step. For all blocks \(B'\) that are unstable according to Lemma 2, we calculate \(\textit{split} (B',B)\) and \(\textit{cosplit} (B',B)\), as indicated below.
As noted in [15], \(\textit{cosplit} (B',B)\) is stable under \({\varvec{B}} \setminus B\). Therefore, only further splitting of \(\textit{split} (B',B)\) under \({\varvec{B}} \setminus B\) must be investigated. If \(B'\) is stable under B because all bottom states of \(B'\) are marked, it can be that \(B'\) is not stable under \({\varvec{B}} \setminus B\), which we do not address here explicitly, as it proceeds along the same line.
There is a special list data structure to recall for any \(B'\) and \({\varvec{B}} \) which transitions go from \(B'\) to \({\varvec{B}} \). When investigating whether \(\textit{split} (B',B)\) is stable under B we adapt this list to determine the transitions from \(\textit{split} (B',B)\) to \({\varvec{B}} \setminus B\) and we simultaneously tag the states in \(B'\) that have a transition to \({\varvec{B}} \setminus B\). Therefore, we know whether there are transitions from \(\textit{split} (B',B)\) to \({\varvec{B}} \setminus B\) and we can traverse the bottom states of \(\textit{split} (B',B)\) to inspect whether there is a bottom state without a transition to B. Following Lemma 2, this allows us to determine whether \(\textit{split} (B',B)\) must be split under \({\varvec{B}} \setminus B\) in time proportional to the size of B. How splitting is carried out is indicated below.
Lemma 3
Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,},L)\) be a Kripke structure with a cycle-free partition \(\pi \) and refinement \(\pi '\). If \(\pi \) is stable under a constellation \({\varvec{B}} \), and \(B'\in \pi \) is refined into \(B_1',\ldots ,B_k'\in \pi '\), then each \(B_i'\) whose bottom states are also bottom states in \(B'\) is also stable under \({\varvec{B}} \).
Proof
Assume \(B_i'\) is not stable under \({\varvec{B}} \). This means that \(B_i'\) is not an element of \({\varvec{B}} \). Hence, there is a state \(s\in B_i'\) such that \(s\mathord {\, \xrightarrow {}_{} \,} s'\) with \(s' \in {\varvec{B}} \) and there is a bottom state \(t\in B_i'\) with no outgoing transition to a state in \({\varvec{B}} \). But as \(B'\) was stable under \({\varvec{B}} \), and s has an outgoing transition to a state in \({\varvec{B}} \), all bottom states in \(B'\) must have at least one transition to a state in \({\varvec{B}} \). Therefore, t cannot be a bottom state of \(B'\), and must have become a bottom state after splitting \(B'\). \(\square \)
This means that if a block \(B'\) is the result of a refinement, and some of its states became bottom states, it must be ensured that \(B'\) is stable under the constellations. Typically, only a smaller number of blocks in a constellation can be reached from the new bottom states. For each block we maintain a list of constellations that can be reached from states in this block. We match the outgoing transitions of the new bottom states with this list, and if there is a block \(B''\) reachable from states in \(B'\), but not from the new bottom states, \(B'\) must be split by \(B''\).
The complexity of checking for additional splittings to regain stability when states become bottom states is only O(m). Each state only becomes a bottom state once, and when that happens we perform calculations proportional to the number of outgoing transitions of this state to determine whether a split must be carried out.
It remains to show that splitting can be performed in time proportional to the size of the smallest block resulting from the splitting. Consider splitting \(B'\) under \(B{\in }{\varvec{B}} \). While marking \(B'\), four lists of marked and non-marked, bottom and non-bottom states have been constructed. We simultaneously mark states in \(B'\) either red or blue. Red means that there is a path from a state in \(B'\) to a state in B. Blue means that there is no such path. Initially, marked states are red, and non-marked bottom states are blue.
This colouring is simultaneously extended to all states in \(B'\), spending equal time on both colours. The procedure is stopped when one of the colourings cannot be enlarged anymore. We colour states red that can reach other red states via inert transitions, using a simple recursive procedure. We colour states blue for which it is determined that all outgoing inert transitions go to a blue state (for this we need to recall for each state the number of outgoing inert transitions) and that have no direct transition to B. The colouring procedure that terminates first, provided that its number of coloured states does not exceed \(\frac{1}{2}|B'|\), has identified the smallest block that must be split off. Now that we know the smallest block, we move its states to a newly created block.
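The alternating red/blue colouring can be sketched with two generators advanced in lockstep, so the work spent stays proportional to the smaller sub-block. The Python fragment below (all names are ours) assumes the block is cycle-free, that `marked` contains exactly the states with a direct transition to the splitter, and that `inert_succ` maps each state to its inert successors:

```python
def smaller_half(block, marked, inert_succ):
    """Colour `block` red (reaches `marked` via inert steps) and blue
    (does not), alternating the work, and return the smaller colour."""
    inert_pred = {}
    for u, vs in inert_succ.items():
        for v in vs:
            inert_pred.setdefault(v, []).append(u)

    def red_steps():  # backward closure of the marked states
        red, todo = set(marked), list(marked)
        while todo:
            v = todo.pop()
            for u in inert_pred.get(v, []):
                if u not in red:
                    red.add(u); todo.append(u)
            yield None
        yield ("red", red)

    def blue_steps():  # states whose inert successors are all blue
        pending = {s: len(inert_succ.get(s, [])) for s in block}
        blue = {s for s in block if pending[s] == 0 and s not in marked}
        todo = list(blue)
        while todo:
            v = todo.pop()
            for u in inert_pred.get(v, []):
                pending[u] -= 1
                if pending[u] == 0 and u not in marked:
                    blue.add(u); todo.append(u)
            yield None
        yield ("blue", blue)

    # advance both colourings in lockstep; the first to terminate with at
    # most half the block identifies the smaller sub-block
    for r, b in zip(red_steps(), blue_steps()):
        done = r if r is not None else b
        if done is not None:
            colour, part = done
            if 2 * len(part) > len(block):
                other = set(block) - part
                return ("blue" if colour == "red" else "red", other)
            return colour, part
```

The generators yield after each processed state, so neither colouring can run ahead of the other by more than one step.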
Splitting regarding \({\varvec{B}} \setminus B\) only has to be applied to \(\textit{split} (B', B)\), or, if all bottom states of \(B'\) were marked, to \(B'\). As noted before \(\textit{cosplit} (B', B)\) is stable under \({\varvec{B}} \setminus B\). Define \(C :=\textit{split} (B',B)\) or \(C:=B'\) depending on the situation. We can traverse all bottom states of C and check whether they have outgoing transitions to \({\varvec{B}} \setminus B\). This provides us with the blue states. The red states are obtained as we explicitly maintained the list of all transitions from C to \({\varvec{B}} \setminus B\). By simultaneously extending this colouring the smallest subblock of either red or blue states is obtained and splitting can commence.
5 Detailed Algorithm
This section presents the data structures and the algorithm in more detail.
5.1 Data Structures
As a basic data structure, we use (singly-linked) lists. For a list L of elements, we assume that for each element e, a reference to the position in L preceding the position of e is maintained, such that checking membership and removal can be done in constant time. In some cases we add extra information to the elements in the list. Moreover, for each list L, we maintain its size \(|L|\) and pointers to its first and last element.
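Such a list can be sketched as follows. The Python class below (ours, purely illustrative) uses a doubly-linked list plus a handle map, giving the same constant-time membership test and removal as the predecessor references described above:

```python
class DList:
    """Linked list with O(1) append, membership test, and removal."""

    class _Node:
        __slots__ = ("val", "prev", "next")
        def __init__(self, val):
            self.val = val
            self.prev = self.next = None

    def __init__(self):
        self.first = self.last = None
        self.size = 0
        self._node = {}  # element -> node, for O(1) membership

    def append(self, x):
        n = DList._Node(x)
        if self.last is None:
            self.first = self.last = n
        else:
            n.prev = self.last
            self.last.next = n
            self.last = n
        self._node[x] = n
        self.size += 1

    def __contains__(self, x):
        return x in self._node

    def remove(self, x):
        n = self._node.pop(x)  # raises KeyError if x is not present
        if n.prev: n.prev.next = n.next
        else:      self.first = n.next
        if n.next: n.next.prev = n.prev
        else:      self.last = n.prev
        self.size -= 1
```

The handle map plays the role of the per-element position references of the text; in a pointer-based implementation the handle would simply be stored with the element itself.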
 1.
The current partition \(\pi \) consists of a list of blocks. Initially, it corresponds to \(\pi _0\). All blocks are part of a single, initial constellation \({\varvec{C}} _0\).
 2.For each block \({B} \), we maintain the following:
 (a)
A reference to the constellation containing \({B} \).
 (b)
A list \(B\).btmsts of the bottom states and a list \(B\).nonbtmsts of the other states.
 (c)A list \(B.\textit{toconstlns} \) of structures associated with constellations reachable via a transition from some \(s {\in } B\). Initially, it contains one element associated with \({\varvec{C}} _0\). Each element associated with some constellation \({\varvec{C}} \) in this list also contains the following:

A reference \(\textit{translist} \) to a list of all transitions from states in B to states in \({\varvec{C}} \setminus B\) (note that transitions between states in B, i.e., inert transitions, are not in this list).

When splitting the block B into B and \(B'\) there is a reference in each list element to the corresponding list element in \(B'.\textit{toconstlns} \) (which in turn refers back to the element in \(B.\textit{toconstlns} \)).

In order to check for stability when splitting produces new bottom states, each element contains a list to keep track of which new bottom states can reach the associated constellation.

 (d)
A reference \(B.\textit{inconstlnref} \) is used to refer to the element in \(B.\textit{toconstlns} \) associated with the constellation of B. It is used when a noninert transition becomes inert and needs to be added to the \(\textit{translist} \) of the element associated with that constellation.
Furthermore, when splitting a block \(B'\) in constellation \({\varvec{B}} '\) under a constellation \(\varvec{B}\) and block \({B} {\in } {\varvec{B}} \), the following temporary structures are used, with \({\varvec{C}} \) the new constellation to which \({B} \) is moved:
 (a)
A list \(B'.\textit{mrkdbtmsts} \) contains the marked bottom states in \(B'\), i.e., bottom states with a transition to \({B} \).
 (b)
A list \(B'.\textit{mrkdnonbtmsts} \) contains states that are marked, but are not bottom states.
 (c)
A reference \(B'.\textit{constlnref} \) refers to the (new) element in \(B'.\textit{toconstlns} \) associated with constellation \({\varvec{C}} \), i.e., the new constellation of B.
 (d)
A reference \(B'.\textit{coconstlnref} \) is used to refer to the element in \(B'.\textit{toconstlns} \) associated with constellation \({\varvec{B}} \), i.e., the old constellation of B.
 (e)
A list \(B'.\textit{newbtmsts} \) to keep track of the states that have become bottom states when \(B'\) was split. This is required to determine whether \(B'\) is stable under all constellations after a split.
 3.
Constellations are stored in two lists trivialconstlns and nontrivialconstlns. The first contains constellations consisting of exactly one block, while the latter contains the other constellations. Initially, if \(\pi _0\) consists of one block, \({\varvec{C}} _0\) is added to trivialconstlns and nothing needs to be done, because the initial partition is already stable. Otherwise \({\varvec{C}} _0\) is added to nontrivialconstlns.
 4.
For each constellation, we maintain its list of blocks and its size (number of states).
 5.
Each transition \(s\mathord {\, \xrightarrow {}_{} \,}s'\) refers with \(\textit{toconstlncnt} \) to the number of transitions from s to the constellation in which \(s'\) resides. For each state and constellation, there is one such variable, provided there is a transition from this state to this constellation.
Each transition \(s\mathord {\, \xrightarrow {}_{} \,}s'\) has a reference to the element associated with \({\varvec{B}} \) in the list \(B.\textit{toconstlns} \) where \(s {\in } B\) and \(s'{\in }{\varvec{B}} \). This is denoted as \((s\mathord {\, \xrightarrow {}_{} \,}s').\textit{toconstlnref} \). Initially, it refers to the single element in \(B.\textit{toconstlns} \), unless the transition is inert, i.e., both \(s{\in }B\) and \(s'{\in }B\).
Furthermore, each transition \(s\mathord {\, \xrightarrow {}_{} \,}s'\) is stored in the list of transitions from B to \({\varvec{B}} \). Initially, there is such a list for each block in the initial partition \(\pi _0\). From a transition \(s \mathord {\, \xrightarrow {}_{} \,} s'\), the list can be accessed via \((s\mathord {\, \xrightarrow {}_{} \,}s').\textit{toconstlnref}.\textit{translist} \).
 6.For each state \(s {\in } {B} \) we maintain the following information:
 (a)
A reference to the block containing s.
 (b)
A static list \(s.T_{ tgt }\) of transitions of the form \(s \mathord {\, \xrightarrow {}_{} \,} s'\) containing precisely all the transitions from s.
 (c)
A static list \(s.T_{ src }\) of transitions \(s' \mathord {\, \xrightarrow {}_{} \,} s\) containing all the transitions to s. We write such transitions as \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s'\), to stress that these move into s.
 (d)
A counter \(s.\textit{inertcnt} \) containing the number of outgoing transitions to a state in the same block as s. For any bottom state s, we have \(s.\textit{inertcnt} = 0\).
 (e)
Furthermore, when splitting a block \(B'\) under \(\varvec{B}\) and \({B} {\in } {\varvec{B}} \), there are references \(s.\textit{constlncnt} \) and \(s.\textit{coconstlncnt} \) to the variables that are used to count how many transitions there are from s to B and from s to \({\varvec{B}} \setminus {B} \).
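The per-block and per-state bookkeeping of items 2 and 6 can be summarised as plain records. The Python sketch below mirrors the field names used above, but it is only an illustration of the layout, not of the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    constellation: object = None          # item 2(a)
    btmsts: list = field(default_factory=list)     # bottom states, 2(b)
    nonbtmsts: list = field(default_factory=list)  # other states, 2(b)
    toconstlns: list = field(default_factory=list) # reachable constellations, 2(c)
    inconstlnref: object = None           # item 2(d)

@dataclass
class State:
    block: Block = None                   # item 6(a)
    T_tgt: list = field(default_factory=list)  # outgoing transitions, 6(b)
    T_src: list = field(default_factory=list)  # incoming transitions, 6(c)
    inertcnt: int = 0   # outgoing inert transitions; 0 iff bottom state, 6(d)
    constlncnt: object = None     # temporary splitting counters, 6(e)
    coconstlncnt: object = None
```

The temporary per-block references of items (a)-(e) above would be added to `Block` in the same way when a split is in progress.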
Figure 2 illustrates some of the structures used. A block \(B_1\) in constellation \({\varvec{B}} \) contains bottom states \(s_1\), \(s_2'\) and non-bottom state \(s_2\). For \(s_1\), we have transitions \(s_1 \mathord {\, \xrightarrow {}_{} \,} s_1'\), \(s_1 \mathord {\, \xrightarrow {}_{} \,} s_1''\) to constellation \({\varvec{C}} \). Both have the following references:
 (a)
\(\textit{toconstlncnt} \) to the number of outgoing transitions from \(s_1\) to \({\varvec{C}} \).
 (b)
\(\textit{toconstlnref} \) to the element \(({\varvec{C}}, \bullet , \bullet )\) in \(B_1.\textit{toconstlns} \), where the \(\bullet \)’s are the (now uninitialized) references that are used when splitting.
 (c)
Via \(({\varvec{C}}, \bullet , \bullet )\), a reference \(\textit{translist} \) to the list of transitions from \(B_1\) to \({\varvec{C}} \).
Note that for the inert transition \(s_2 \mathord {\, \xrightarrow {}_{} \,} s_2'\), we only have a reference to the number of outgoing transitions from \(s_2\) to \({\varvec{B}} \).
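The bookkeeping described above can be pictured in code. The following is a minimal Python sketch, not the paper's implementation: the field names mirror the text (\(\textit{toconstlns} \), \(\textit{toconstlnref} \), \(T_{ tgt }\), \(T_{ src }\), \(\textit{inertcnt} \)), but the concrete layout and the helper `add_transition` are our assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Constellation:
    blocks: list = field(default_factory=list)

@dataclass
class ToConstlnEntry:
    # one element of B.toconstlns per constellation reachable from B
    constellation: Constellation
    translist: list = field(default_factory=list)  # transitions from B into it

@dataclass
class Block:
    constellation: Optional[Constellation] = None
    states: list = field(default_factory=list)
    toconstlns: list = field(default_factory=list)  # list of ToConstlnEntry

@dataclass
class State:
    block: Optional[Block] = None
    T_tgt: list = field(default_factory=list)  # static list of outgoing transitions
    T_src: list = field(default_factory=list)  # static list of incoming transitions
    inertcnt: int = 0  # number of outgoing transitions staying inside the own block

@dataclass
class Transition:
    src: State
    tgt: State
    toconstlnref: Optional[ToConstlnEntry] = None  # unset while the transition is inert

def add_transition(s: State, t: State) -> Transition:
    """Register s -> t, maintaining the invariants sketched in the text."""
    tr = Transition(src=s, tgt=t)
    s.T_tgt.append(tr)
    t.T_src.append(tr)
    if s.block is t.block:
        s.inertcnt += 1  # inert transition: carries no toconstlnref
    else:
        target = t.block.constellation
        entry = next((e for e in s.block.toconstlns
                      if e.constellation is target), None)
        if entry is None:
            entry = ToConstlnEntry(constellation=target)
            s.block.toconstlns.append(entry)
        entry.translist.append(tr)
        tr.toconstlnref = entry
    return tr
```

The point of the indirection via \(\textit{toconstlnref} \) is that a whole group of transitions can be redirected to another constellation by updating one shared entry, rather than each transition individually.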
5.2 Finding the Blocks that Must Be Split
While nontrivialconstlns is not empty, we perform the algorithm listed in the following sections. To determine whether the current partition \(\pi \) is unstable, we select a constellation \({\varvec{B}} \) in nontrivialconstlns, and we select a block B from \({\varvec{B}} \) such that \(|B|\le \frac{1}{2}|{\varvec{B}}|\). We first check which blocks are unstable for \({B} \) and \({\varvec{B}} \setminus B\).
 1.
Move \({B} \) to a new trivial constellation \(\varvec{C}\). If \(|{\varvec{B}}.\textit{blocks}| {=} 1\), make \({\varvec{B}} \) trivial.
 2.
For each state \(s {\in } {B} \), do the steps below for each \(s'{\in } B'\) such that \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s' \in s.T_{ src }\), and \(B \ne B'\).
 (a)
If \(B'\) has no marked states, put it in a list splittableblks, let \(B'.\textit{coconstlnref} \) refer to \((s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s') .\textit{toconstlnref} \), and let \(B'.\textit{constlnref} \) refer to a new element in \(B'.\textit{toconstlns} \).
 (b)
Mark \(s'\).
 (c)
Let \(s'.\textit{constlncnt} \) be the number of transitions to B and \(s'.\textit{coconstlncnt} \) the number of remaining outgoing transitions. All outgoing transitions of \(s'\) must refer to the appropriate counter.
 (d)
Move all visited transitions to \(B'.\textit{constlnref}.\textit{translist} \).
 3.
Next, check whether B itself can be split. Mark all states, add B to \(\textit{splittableblks} \) and reset \(B.\textit{constlnref} \) and \(B.\textit{coconstlnref} \). For each state \(s {\in }B\), do the steps below for each \(s'{\in } B' {\in } {\varvec{B'}} \) such that \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s' \in s.T_{ src }\), and either \({\varvec{B'}} {=} {\varvec{B}} \) or \({\varvec{B'}} {=} {\varvec{C}} \).
 (a)
If \({\varvec{B'}} {=} {\varvec{B}} \), let \(B.\textit{coconstlnref} \) refer to \((s \mathord {\, \xrightarrow {}_{} \,} s').\textit{toconstlnref} \) and \(B.\textit{constlnref} \) and \(B.\textit{inconstlnref} \) to a new element for \({\varvec{C}} \) in \(B.\textit{toconstlns} \).
 (b)
Update \(s.\textit{constlncnt} \) and \(s.\textit{coconstlncnt} \) as in step 2(c).
 4.
For each \(B'{\in } \textit{splittableblks} \), if all its bottom states are marked and either there is no marked bottom state s with \(s.\textit{coconstlncnt} {=} 0\) or \(B'.\textit{coconstlnref}.\textit{translist} \) is empty, remove \(B'\) from \(\textit{splittableblks} \) and remove its temporary markings, i.e. unmark all states, reset the counters and references.
 5.
If \(\textit{splittableblks} \) is not empty, start splitting (Sect. 5.3). Else, select another nontrivial constellation \({\varvec{B}} \) and block \(B{\in } {\varvec{B}} \), and continue with step 1. If there are no nontrivial constellations left, the algorithm terminates.
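The requirement \(|B|\le \frac{1}{2}|{\varvec{B}}|\) in the selection above is the classic "process the smaller half" idea of Paige and Tarjan [15]: every state can be moved into a split-off constellation at most \(\log _2 n\) times. A hypothetical sketch of steps 1 and 5, with a constellation modelled as a plain list of blocks and a block as a list of states:

```python
def pick_splitter(constellation):
    """Return a block B with |B| <= 1/2 |constellation|.
    Such a block always exists when the constellation is
    non-trivial, i.e., contains at least two blocks."""
    total = sum(len(b) for b in constellation)
    for b in constellation:
        if 2 * len(b) <= total:
            return b
    raise ValueError("constellation is trivial")

def split_off(constellation):
    """Step 1: move the chosen block into a fresh trivial constellation C."""
    b = pick_splitter(constellation)
    constellation.remove(b)
    c = [b]  # the new trivial constellation C
    return b, c
```

Because a state only changes constellation when its block is at most half of the old constellation, the total number of constellation changes per state is logarithmic, which is where the \(\log n\) factor in the complexity comes from.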
5.3 Splitting the Blocks
We walk through the blocks \(B'{\in }{\varvec{B'}} \) in splittableblks, which must be split into two or three blocks under constellation \(\varvec{B}\) and block \({B} \). If all bottom states are marked, then we have \(\textit{split} ({B} ', {B}) = {B} '\), and can start with step 3 below.
 1.
Launch a lockstep search with \(D_1\) the list of marked states in \(B'\), \(D_2\) the list \(B'.\textit{btmsts} \), \(K=B'\), and \({ nested} = \mathbf{false}\).
 2.Depending on whether detect1 or detect2 terminated in the previous step, one of the lists L or \(L'\) contains the states to be moved to a new block \(B''\). Below we refer to this list as N. For each \(s {\in } N\), move s to \(B''\), and do the following:
 (a)
For each \(s \mathord {\, \xrightarrow {}_{} \,} s' {\in } T_{ tgt }\), do the following steps.
 i.
If \((s \mathord {\, \xrightarrow {}_{} \,} s').\textit{toconstlnref} \) is initialized, check whether it refers to a new element in \(B''.\textit{toconstlns} \). If not, create it. If appropriate, set references \(B''.\textit{inconstlnref} \), \(B''.\textit{constlnref} \) and \(B''.\textit{coconstlnref} \). Move \(s \mathord {\, \xrightarrow {}_{} \,} s'\) to the \(\textit{translist} \) of the new element. If the related element in \(B'.\textit{toconstlns} \) no longer holds transitions, remove it.
 ii.
Else, if \(s'{\in } B' \setminus N\) (a transition becomes noninert), decrement \(s.\textit{inertcnt} \). If \(s.\textit{inertcnt} {=} 0\), make s bottom, add \(s \mathord {\, \xrightarrow {}_{} \,} s'\) to \(B''.\textit{inconstlnref}.\textit{translist} \) (if \(B''.\textit{inconstlnref} \) does not exist, create it first).
 (b)
For each \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s' {\in } T_{ src }\), \(s' {\in } B' {\setminus } N\) (an inert transition becomes noninert), perform steps similar to 2(a).ii.
 3.
Next, we split \(\textit{split} (B', B)\) under \({\varvec{B}} {\setminus } B\). Define \(C{=}\textit{split} (B', B)\). C is stable under \({\varvec{B}} {\setminus } B\) if \(C.\textit{coconstlnref} \) is uninitialized or holds an empty \(\textit{translist} \), or for all \(s{\in } C.\textit{mrkdbtmsts} \) it holds that \(s.\textit{coconstlncnt} >0\). If this is not the case, then we launch a lockstep search with \(D_1\) the list of states s occurring in some \(s\mathord {\, \xrightarrow {}_{} \,}s'\) in \(\textit{split} (B', B).\textit{coconstlnref}.\textit{translist} \), \(D_2\) the list of states s with \(s.\textit{coconstlncnt} = 0\) in \(C.\textit{mrkdbtmsts} \), \(K=C\), and \({ nested}=\mathbf{true}\). Finally, we split C by moving the states in either L or \(L'\) to a new block \(B'''\), depending on which list is smaller.
 4.
Remove the temporary markings of each block C resulting from the splitting of \(B'\).
 5.
If the splitting of \(B'\) resulted in new bottom states, check for those states whether further splitting is required, i.e., whether some of them cannot reach all constellations that can be reached from the block. For each \(\hat{B}{\in }\{B',B'',B'''\}\), each new bottom state \(s\in \hat{B}\), and each transition \(s \mathord {\, \xrightarrow {}_{} \,} s' {\in } s.T_{ tgt }\) with \(s'{\in } {\varvec{\bar{B}}} \), add s to the states list of the element associated with \({\varvec{\bar{B}}} \) in \(\hat{B}.\textit{toconstlns} \), and move that element to the front of the list.
 6.
Perform the following steps for each block \(\hat{B}\) with new bottom states, as long as there are such blocks.
 (a)Walk through the elements in \(\hat{B}.\textit{toconstlns} \). If the states list of an element associated with a constellation \({\varvec{B}} \) does not contain all new bottom states, further splitting is required under \({\varvec{B}} \):
 i.
Launch a lockstep search with \(D_1\) the list of states s occurring in some \(s\mathord {\, \xrightarrow {}_{} \,}s'\) with \(s'{\in }{\varvec{B}} \) in the list \(\textit{translist} \) associated with \({\varvec{B}} {\in } \hat{B}.\textit{toconstlns} \), \(D_2\) the list of states \(s{\in } \hat{B}.\textit{newbtmsts} \) minus the new bottom states that can reach \({\varvec{B}} \), \(K = \hat{B}\), and \({ nested} = \mathbf{true}\).
 ii.
Split \(\hat{B}\) by performing step 2 to produce a new block \(\hat{B}'\). Move all states in \(\hat{B}.\textit{newbtmsts} \) that have moved to \(\hat{B}'\) to \(\hat{B}'.\textit{newbtmsts} \), and also move them from the states lists in the elements of \(\hat{B}.\textit{toconstlns} \) to the corresponding elements of \(\hat{B}'.\textit{toconstlns} \) (those elements refer to each other). If a states list becomes empty, move that element to the back of its list.
 iii.
Perform step 5 for \(\hat{B}\) and \(\hat{B}'\).
 (b)
If no further splitting was required for \(\hat{B}\), empty \(\hat{B}.\textit{newbtmsts} \) and clear the remaining states lists in \(\hat{B}.\textit{toconstlns} \).
 7.
If \({\varvec{B'}} {\in } \textit{trivialconstlns} \), move it to nontrivialconstlns.
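The lockstep searches used throughout Sect. 5.3 are the second ingredient of the complexity bound: two exhaustive searches are interleaved, one step each per round, and everything stops as soon as either side finishes, so only the smaller of the two discovered sets is ever paid for. A sketch under our own modelling assumptions (searches as generators yielding one visited state per step, nothing of the actual detect1/detect2 machinery over blocks):

```python
def lockstep(search1, search2):
    """Run two searches in lockstep and report which one terminated first.
    The returned list (L or L' in the text) holds the states visited by the
    side that finished; its size bounds the work done by both sides."""
    L, Lp = [], []  # the lists L and L' of the text
    it1, it2 = iter(search1), iter(search2)
    while True:
        nxt = next(it1, None)
        if nxt is None:
            return "detect1", L   # detect1 finished first: L is the smaller part
        L.append(nxt)
        nxt = next(it2, None)
        if nxt is None:
            return "detect2", Lp  # detect2 finished first: L' is the smaller part
        Lp.append(nxt)
```

Combined with the "smaller half" selection of Sect. 5.2, this is what makes each split cost time proportional to the size of the smaller resulting block.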
6 Application to Branching Bisimulation
We show that the algorithm can also be used to determine branching bisimulation, using the transformation from [14, 17], with complexity \(O(m(\log  Act +\log n))\). Branching bisimulation is typically applied to labelled transition systems (LTSs). An LTS is a triple \(A = (S, { Act}, \mathord {\, \xrightarrow {}_{} \,})\), with S a finite set of states, \({ Act}\) a finite set of actions including the internal action \(\tau \), and \(\mathord {\, \xrightarrow {}_{} \,} \subseteq S \times { Act} \times S\) a transition relation.
Definition 4
Our new algorithm can be applied to an LTS by translating it to a Kripke structure.
Definition 5
Let \(A = (S, { Act}, \mathord {\, \xrightarrow {}_{} \,})\) be an LTS. The embedding of A is the Kripke structure \(K_A = (S_A, AP , \rightarrow , L)\), where:
 1.
\(S_A=S\cup \{\langle a,t\rangle \mid s\mathord {\, \xrightarrow {a}_{} \,}t\) for some \(t\in S \}\).
 2.
\( AP = Act \cup \{\bot \}\).
 3.
\(\rightarrow \) is the least relation satisfying (\(s,t{\in } S\), \(a{\in } Act {\setminus }\{\tau \}\)): \(\frac{s\mathord {\, \xrightarrow {a}_{} \,}t}{s\mathord {\, \xrightarrow {}_{} \,}\langle a,t\rangle }\), \(\frac{}{\langle a,t\rangle \mathord {\, \xrightarrow {}_{} \,}t}\) and \(\frac{s\mathord {\, \xrightarrow {\tau }_{} \,}t}{s\mathord {\, \xrightarrow {}_{} \,}t}\).
 4.
\(L(s)=\{\bot \}\) for \(s\in S\) and \(L(\langle a,t\rangle )=\{a\}\).
The following theorem stems from [14].
Theorem 1
Let A be an LTS and \(K_A\) its embedding. Then two states are branching bisimilar in A iff they are divergence-blind stuttering equivalent in \(K_A\).
If we start out with an LTS with n states and m transitions, then its embedding has at most \(m+n\) states and 2m transitions. Hence, the algorithm requires \(O(m\log (n{+}m))\) time. As m is at most \(| Act |\,n^2\), this is also equal to \(O(m(\log  Act {+}\log n))\).
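Definition 5 translates directly into code. A sketch (the function name `embed`, the use of pairs for the fresh \(\langle a,t\rangle \) states, and the string "bot" standing in for \(\bot \) are our choices; \(\langle a,t\rangle \) states are only created for \(a \ne \tau \), matching the transition rules):

```python
def embed(S, Act, trans, tau="tau"):
    """Embed an LTS (S, Act, trans) into a Kripke structure.
    trans is a set of (s, a, t) triples; returns (S_A, AP, R, L)."""
    S = set(S)
    # 1. S_A: original states plus one fresh state <a,t> per visible a-step into t
    S_A = S | {(a, t) for (_, a, t) in trans if a != tau}
    # 2. AP: the actions plus the fresh proposition "bot"
    AP = set(Act) | {"bot"}
    # 3. the transition relation, following the three rules
    R = set()
    for (s, a, t) in trans:
        if a == tau:
            R.add((s, t))           # s -tau-> t  becomes  s -> t
        else:
            R.add((s, (a, t)))      # s -> <a,t>
            R.add(((a, t), t))      # <a,t> -> t
    # 4. labelling: original states get {bot}, <a,t> states get {a}
    L = {x: ({"bot"} if x in S else {x[0]}) for x in S_A}
    return S_A, AP, R, L
```

The state and transition bounds quoted above are visible here: each of the m transitions contributes at most one fresh state and at most two Kripke transitions.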
As a final note, the algorithm can also be adapted to determine divergence-sensitive branching bisimulation [8], by adding a \(\tau \)-self-loop to those states on a \(\tau \)-loop.
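One way to realise that preprocessing step, assuming the \(\tau \)-graph is given as a successor map (the names, the per-state reachability search, and the SCC remark are ours, not the paper's):

```python
def on_tau_loop(s, tau_succ):
    """True iff s can reach itself via one or more tau-steps (DFS from s)."""
    stack, seen = list(tau_succ.get(s, ())), set()
    while stack:
        t = stack.pop()
        if t == s:
            return True
        if t not in seen:
            seen.add(t)
            stack.extend(tau_succ.get(t, ()))
    return False

def add_divergence_loops(states, tau_succ):
    """Give every state on a tau-loop a tau-self-loop.
    Quadratic sketch: a single SCC pass (e.g. Tarjan) achieves the same in
    linear time, since the states on a tau-loop are exactly the states in
    non-trivial SCCs of the tau-graph."""
    for s in states:
        if on_tau_loop(s, tau_succ):
            tau_succ.setdefault(s, set()).add(s)
    return tau_succ
```

After this transformation the divergence-blind algorithm can be run unchanged, since a divergent state now has a visible witness for its divergence.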
7 Experiments
The new algorithm has been implemented as part of the mCRL2 toolset [6], which offers implementations of GV and the algorithm by Blom and Orzan [2] that distinguishes states by their connection to blocks via their outgoing transitions. We refer to the latter as BO. The performance of GV and BO can be very different on concrete examples. We have extensively tested the new algorithm by applying it to thousands of randomly generated LTSs and comparing the results with those of the other algorithms.
Table 1. Runtime (in sec.) and memory use (in MB) results for GV, BO, and GW
Practically all experiments have been performed on machines running CentOS Linux, with an Intel E5-2620 2.0 GHz CPU and 64 GB RAM. Exceptions to this are the final two entries in Table 1, which were obtained by using a machine running Fedora 12, with an Intel Xeon E5520 2.27 GHz CPU and 1 TB RAM.
Figure 3 presents the runtime results for two sets of experiments to demonstrate that GW has the expected scalability. At the left are the results of analysing single sequences of the shape \((a{\cdot } \tau )^n\). As the length 2n of such a sequence is increased, the results show that the runtimes of both BO and GV increase at least quadratically, while the runtime of GW grows linearly. All algorithms require n iterations, in which BO and GV walk over all the states in the sequence, but GW only moves two states into a new block. At the right of Fig. 3, the results are displayed of analysing trees of depth n that up to level \(n{-}1\) correspond with a binary tree of \(\tau \)-transitions. Each state at level \(n{-}1\) has a uniquely labelled outgoing transition to a state in level n. BO only needs one iteration to obtain the stable partition. Still, GW beats BO by repeatedly splitting off small blocks of size \(2(k{-}1)\) if a state at level k is the splitter.
Table 1 contains results for minimising LTSs from the VLTS benchmark set^{1} and the mCRL2 toolset^{2}. These experiments demonstrate that also when applied to actual state spaces of real models, GW generally outperforms the best of the other algorithms, often by a factor of 10 and sometimes by a factor of 100. This difference tends to grow as the LTSs get larger. GW's memory usage is only sometimes substantially higher than GV's and BO's, which surprised us given the amount of required bookkeeping.
References
 1. Aho, A., Hopcroft, J., Ullman, J.: The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading (1974)
 2. Blom, S.C., Orzan, S.: Distributed branching bisimulation reduction of state spaces. In: FMICS 2003. ENTCS, vol. 80, pp. 109–123. Elsevier (2003)
 3. Blom, S.C., van de Pol, J.C.: Distributed branching bisimulation minimization by inductive signatures. In: PDMC 2009. EPTCS, vol. 14, pp. 32–46. Open Publ. Association (2009)
 4. Browne, M.C., Clarke, E.M., Grumberg, O.: Characterizing finite Kripke structures in propositional temporal logic. Theoret. Comput. Sci. 59(1–2), 115–131 (1988)
 5. Chatterjee, K., Henzinger, M.: Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In: SODA 2011, pp. 1318–1336. SIAM (2011)
 6. Cranen, S., Groote, J.F., Keiren, J.J.A., Stappers, F.P.M., de Vink, E.P., Wesselink, W., Willemse, T.A.C.: An overview of the mCRL2 toolset and its recent advances. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013 (ETAPS 2013). LNCS, vol. 7795, pp. 199–213. Springer, Heidelberg (2013)
 7. Garavel, H., Lang, F., Mateescu, R., Serwe, W.: CADP 2011: a toolbox for the construction and analysis of distributed processes. Softw. Tools Technol. Transf. 15(2), 98–107 (2013)
 8. van Glabbeek, R.J., Weijland, W.P.: Branching time and abstraction in bisimulation semantics. J. ACM 43(3), 555–600 (1996)
 9. Groote, J.F., Vaandrager, F.W.: An efficient algorithm for branching bisimulation and stuttering equivalence. In: Paterson, M. (ed.) ICALP 1990. LNCS, vol. 443, pp. 626–638. Springer, Heidelberg (1990)
 10. Groote, J.F., Mousavi, M.R.: Modeling and Analysis of Communicating Systems. The MIT Press, Cambridge (2014)
 11. Kanellakis, P., Smolka, S.: CCS expressions, finite state processes and three problems of equivalence. Inf. Comput. 86, 43–68 (1990)
 12. Li, W.: Algorithms for computing weak bisimulation equivalence. In: TASE 2009, pp. 241–248. IEEE (2009)
 13. Milner, R.: A Calculus of Communicating Systems. LNCS, vol. 92. Springer, Heidelberg (1980)
 14. De Nicola, R., Vaandrager, F.W.: Three logics for branching bisimulation. J. ACM 42, 458–487 (1995)
 15. Paige, R., Tarjan, R.E.: Three partition refinement algorithms. SIAM J. Comput. 16(6), 973–989 (1987)
 16. Ranzato, F., Tapparo, F.: Generalizing the Paige-Tarjan algorithm by abstract interpretation. Inf. Comput. 206(5), 620–651 (2008)
 17. Reniers, M.A., Schoren, R., Willemse, T.A.C.: Results on embeddings between state-based and event-based systems. Comput. J. 57(1), 73–92 (2014)
 18. Virtanen, H., Hansen, H., Valmari, A., Nieminen, J., Erkkilä, T.: Tampere verification tool. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988, pp. 153–157. Springer, Heidelberg (2004)
 19. Wijs, A.: GPU accelerated strong and branching bisimilarity checking. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 368–383. Springer, Heidelberg (2015)