An \(O(m\log n)\) Algorithm for Stuttering Equivalence and Branching Bisimulation

  • Jan Friso Groote
  • Anton Wijs
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9636)


We provide a new algorithm to determine stuttering equivalence with time complexity \(O(m \log n)\), where n is the number of states and m is the number of transitions of a Kripke structure. This algorithm can also be used to determine branching bisimulation in \(O(m(\log | Act |+\log n))\) time. Theoretically, our algorithm substantially improves upon existing algorithms, which all have time complexity O(mn) [2, 3, 9]. Moreover, it has better or equal space complexity. Practical results confirm these findings, showing that our algorithm can outperform existing algorithms by orders of magnitude, especially when the Kripke structures are large.

1 Introduction

Stuttering equivalence [4] and branching bisimulation [8] were proposed as alternatives to Milner’s weak bisimulation [13]. They are very close to weak bisimulation; the essential difference is that all states in the mimicking sequence \(\tau ^*a\,\tau ^*\) must be related to either the state before or the state directly after the a in the first system. This means that branching bisimulation and stuttering equivalence are slightly stronger notions than weak bisimulation.

In [9] an O(mn) time algorithm was proposed for stuttering equivalence and branching bisimulation, where m is the number of transitions and n is the number of states in either a Kripke structure (for stuttering equivalence) or a labelled transition system (for branching bisimulation). We refer to this algorithm as GV. It is based upon the O(mn) algorithm for bisimulation equivalence in [11]. Both algorithms require \(O(m{+}n)\) space. They calculate for each state whether it is bisimilar to another state.

The basic idea of the algorithms of [9, 11] is to partition the set of states into blocks. States that are bisimilar always reside in the same block. Whenever there are some states in a block \(B'\) from which a transition is possible to some block B and there are other states in \(B'\) from which such a step is not possible, \(B'\) is split accordingly. Whenever no splitting is possible anymore, the partition is called stable, and two states are in the same block iff they are bisimilar.

There have been some attempts to come up with improvements of GV. The authors of [2] observed that GV only splits a block into two parts at a time. They proposed to split a block into as many parts as possible at once, reducing the number of times states and transitions are moved to new blocks. Their worst-case time and space complexities are worse than those of GV, especially the space complexity O(mn), but in practice this algorithm can outperform GV. In [3], the space complexity is brought back to \(O(m{+}n)\). A technique for Graphics Processing Units based on both GV and [2, 3] is proposed in [19]. This improves the required runtime considerably by employing parallelism, but it does not imply any improvement to the single-threaded algorithm.

In [15] an \(O(m \log n)\) algorithm is proposed for strong bisimulation as an improvement upon the algorithm of [11]. The core idea for this improvement is described as “process the smaller half” [1]. Whenever a block is split into two parts, the work can be attributed to the smaller resulting block. A state is then only involved in the process of splitting if it resides in a block at most half the size of the block it was in the previous time it was involved in splitting. This means that a state can never be involved in more than \(\log _2 n\) splittings. As the time spent on each state is proportional to its number of incoming and outgoing transitions, the total required time is \(O(m \log n)\).

In this paper we propose the first algorithm for stuttering equivalence and branching bisimulation in which the “process the smaller half” technique is used. By doing so, we can finally confirm the conjecture in [9] that such an improvement of GV is possible. Moreover, we achieve an even lower complexity, namely \(O(m \log n)\), than conjectured in [9] by applying the technique twice, the second time for handling the presence of inert transitions. First we establish whether a block can be split by combining the approach regarding bottom states from GV with the detection approach in [15]. Subsequently, we use the “process the smaller half” technique again to split a block by only traversing transitions in a time proportional to the size of the smallest subblock. As it is not known which of the two subblocks is smallest, the transitions of the two subblocks are processed in alternation, such that the total processing time can be attributed to the smallest subblock. For checking behavioural equivalences, applying such a technique is entirely new. We are only aware of a similar approach for an algorithm in which the smallest bottom strongly connected component of a graph needs to be found [5].

Compared to checking other equivalences, the existing algorithms for branching bisimulation/stuttering equivalence were already known to be very efficient in practice. This is the reason that they are being used in multiple explicit-state model checkers, such as Cadp [7], the mCRL2 toolset [10] and TVT [18]. In particular, they are being used as preprocessing steps for other equivalences (weak bisimulation, trace-based equivalences) that are much harder to compute. For weak bisimulation, an O(mn) algorithm has recently been devised [12, 16]; before that, an expensive transitive-closure operation of at best \(O(n^{2.373})\) was required. The improvements of our new algorithm are not restricted to stuttering equivalence and branching bisimulation alone, but can also impact the computation time of all other behavioural equivalences.

Although our algorithm theoretically outperforms its predecessors substantially, we wanted to know whether it would also do so in practice. We find that for dedicated examples our algorithm lives up to its theoretical improvement, outperforming the existing algorithms in accordance with the theory. For practical examples we see that our algorithm can always match the best running times of existing algorithms, but especially when the Kripke structures and transition systems get large, our algorithm tends to outperform existing algorithms by orders of magnitude.

2 Preliminaries

We introduce Kripke structures and (divergence-blind) stuttering equivalence. In Sect. 6 we explain branching bisimulation and its application to labelled transition systems.

Definition 1

A Kripke structure is a four-tuple \(K=( S, AP ,\mathord {\, \xrightarrow {}_{} \,}, L)\), where
  1. \(S\) is a finite set of states.

  2. \( AP \) is a finite set of atomic propositions.

  3. \(\mathord {\, \xrightarrow {}_{} \,}\subseteq S\times S\) is a total transition relation, i.e., for each \(s\in S\) there is an \(s'\in S\) s.t. \(s\mathord {\, \xrightarrow {}_{} \,}s'\).

  4. \(L:S\rightarrow 2^{ AP }\) is a state labelling.


We use \(n{=}|S|\) for the number of states and \(m{=}|{\mathord {\, \xrightarrow {}_{} \,}}|\) for the number of transitions. For a set of states \(B{\subseteq } S\), we write \(s\mathord {\, \xrightarrow {}_{B} \,}s'\) for \(s\mathord {\, \xrightarrow {}_{} \,}s'\) and \(s'\in B\).
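For concreteness, such a structure can be represented as below. This sketch is ours, not from the paper (the class and field names are our own); states are numbered \(0,\ldots,n{-}1\).

```python
from dataclasses import dataclass

@dataclass
class Kripke:
    """A Kripke structure K = (S, AP, ->, L) with S = {0, ..., n-1}."""
    n: int        # |S|, the number of states
    edges: list   # the transition relation -> as a list of (s, s') pairs
    label: list   # L : S -> 2^AP, one frozenset of atomic propositions per state

    def succ(self, s):
        """Successors of s; non-empty for every s when -> is total."""
        return [t for (u, t) in self.edges if u == s]

# A three-state example with m = 3 transitions; -> is total as Def. 1 requires.
K = Kripke(n=3,
           edges=[(0, 1), (1, 2), (2, 2)],
           label=[frozenset({"p"}), frozenset({"p"}), frozenset({"q"})])
assert all(K.succ(s) for s in range(K.n))   # totality check
```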

Definition 2

Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,},L)\) be a Kripke structure. A symmetric relation \(R\subseteq S\times S\) is a divergence-blind stuttering bisimulation iff for all \(s,t\in S\) such that sRt:
  1. \(L(s)=L(t)\).

  2. for all \(s'\in S\), if \(s\mathord {\, \xrightarrow {}_{} \,}s'\), then there are \(t_0,\ldots , t_k\in S\) for some \(k\in \mathbb {N}\) such that \(t=t_0\), \(t_i\mathord {\, \xrightarrow {}_{} \,}t_{i+1}\) and \(sRt_i\) for all \(i<k\), and \(s'Rt_k\).

We say that two states \(s,t\in S\) are divergence-blind stuttering equivalent, notation \(s\simeq _{dbs}t\), iff there is a divergence-blind stuttering bisimulation relation R such that sRt.

An important property of divergence-blind stuttering equivalence is that if states on a loop all have the same label then all these states are divergence-blind stuttering equivalent. We define stuttering equivalence in terms of divergence-blind stuttering equivalence using the following Kripke structure.

Definition 3

Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,},L)\) be a Kripke structure. Define the Kripke structure \(K_d=(S\cup \{s_d\}, AP \cup \{d\},\mathord {\, \xrightarrow {}_{d} \,},L_d)\) where d is an atomic proposition not occurring in \( AP \) and \(s_d\) is a fresh state not occurring in S. Furthermore,
  1. \(\mathord {\, \xrightarrow {}_{d} \,}=\mathord {\, \xrightarrow {}_{} \,}\cup \{\langle s,s_d\rangle ~|~s\) is on a cycle of states all labelled with L(s), or \(s=s_d\}\).

  2. For all \(s\in S\) we define \(L_d(s)=L(s)\) and \(L_d(s_d)=\{d\}\).

States \(s,t\in S\) are stuttering equivalent, notation \(s\simeq _{s}t\), iff there is a divergence-blind stuttering bisimulation relation R on the states of \(K_d\) such that sRt.

Note that an algorithm for divergence-blind stuttering equivalence can also be used to determine stuttering equivalence by employing only a linear time and space transformation. Therefore, we only concentrate on an algorithm for divergence-blind stuttering equivalence.
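The transformation from K to \(K_d\) runs in linear time using a strongly connected component computation on the subgraph of transitions between equally-labelled states. The following sketch illustrates one way to do this; the code, function name and representation are ours, not the paper's.

```python
def to_Kd(n, edges, label, d="d"):
    """Build K_d (Definition 3): add a fresh state s_d labelled {d}, a
    transition s -> s_d for every s on a cycle of states all labelled L(s),
    and the transition s_d -> s_d."""
    # Keep only transitions between equally-labelled states; s lies on a
    # monochromatic cycle iff its SCC in this subgraph has size > 1, or s
    # has a same-label self-loop.
    same = [[] for _ in range(n)]
    for (u, v) in edges:
        if label[u] == label[v]:
            same[u].append(v)
    # Kosaraju's SCC algorithm, first pass: record DFS finishing order.
    order, seen = [], [False] * n
    for r in range(n):
        if seen[r]:
            continue
        seen[r] = True
        stack = [(r, iter(same[r]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(same[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    # Second pass: DFS on the reversed subgraph in reverse finishing order.
    rev = [[] for _ in range(n)]
    for u in range(n):
        for v in same[u]:
            rev[v].append(u)
    comp = [-1] * n
    for c, r in enumerate(reversed(order)):
        if comp[r] == -1:
            comp[r], stack = c, [r]
            while stack:
                u = stack.pop()
                for v in rev[u]:
                    if comp[v] == -1:
                        comp[v] = c
                        stack.append(v)
    size = {}
    for u in range(n):
        size[comp[u]] = size.get(comp[u], 0) + 1
    on_cycle = [size[comp[u]] > 1 or u in same[u] for u in range(n)]
    s_d = n
    new_edges = list(edges) + [(s_d, s_d)] + [(u, s_d) for u in range(n) if on_cycle[u]]
    return n + 1, new_edges, list(label) + [frozenset({d})]
```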

3 Partitions and Splitters: A Simple Algorithm

Our algorithms perform partition refinement, starting from an initial partition of the set of states S. A partition \(\pi =\{ B_i\subseteq S~|~1\le i\le k\}\) is a set of non-empty subsets such that \(B_i\cap B_j=\emptyset \) for all \(1\le i<j\le k\) and \(S =\bigcup _{1\le i\le k}B_i\). Each \(B_i\) is called a block.

We call a transition \(s \mathord {\, \xrightarrow {}_{} \,} s'\) inert w.r.t. \(\pi \) iff s and \(s'\) are in the same block \(B \in \pi \). We say that a partition \(\pi \) coincides with divergence-blind stuttering equivalence when for all \(s,t\in S\), s and t are divergence-blind stuttering equivalent iff there is a block \(B\in \pi \) such that \(s,t\in B\). We say that a partition respects divergence-blind stuttering equivalence iff for all \(s,t\in S\), if s and t are divergence-blind stuttering equivalent, then there is some block \(B\in \pi \) such that \(s,t\in B\). The goal of the algorithm is to calculate a partition that coincides with divergence-blind stuttering equivalence. This is done starting from the initial partition \(\pi _0\), which groups states by label: states \(s,t\) are in the same block of \(\pi _0\) iff \(L(s)=L(t)\). Note that this initial partition respects divergence-blind stuttering equivalence.
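Computing the initial partition is a simple grouping of states by label, for example as follows (an illustrative sketch with our own names, not code from the paper):

```python
def initial_partition(labels):
    """Group states 0..n-1 by label: the coarsest partition in which any
    two states in the same block carry the same label."""
    blocks = {}
    for s, lab in enumerate(labels):
        blocks.setdefault(lab, set()).add(s)
    return list(blocks.values())

# States 0 and 2 share label {p}, state 1 has label {q}.
pi0 = initial_partition([frozenset({"p"}), frozenset({"q"}), frozenset({"p"})])
```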

We say that a partition \(\pi \) is cycle-free iff for each block \(B\in \pi \) there is no state \(s\in B\) such that \(s\mathord {\, \xrightarrow {}_{B} \,}s_1\) \(\mathord {\, \xrightarrow {}_{B} \,}\cdots \mathord {\, \xrightarrow {}_{B} \,}s_k\mathord {\, \xrightarrow {}_{B} \,}s\) for some \(k\in \mathbb {N}\). It is easy to make the initial partition \(\pi _0\) cycle-free by merging all states on a cycle in each block into a single state. This preserves divergence-blind stuttering equivalence and can be performed in linear time employing a standard algorithm to find strongly connected components [1].

The initial partition is refined until it coincides with divergence-blind stuttering equivalence. Given a block \(B'\) of the current partition and the union \({\varvec{B}} \) of some of the blocks in the partition, we define
$$\begin{aligned} \begin{array}{l} \textit{split} (B',{\varvec{B}})=\{ s_0{\in } B' \mid \exists k {\in } \mathbb {N}, s_1,\ldots , s_k {\in } S.\ s_i\mathord {\, \xrightarrow {}_{} \,}s_{i+1} \wedge s_i{\in } B'\text { for all }i<k,\text { and } s_k {\in } {\varvec{B}} \}\\ \textit{cosplit} (B',{\varvec{B}})=B'\setminus \textit{split} (B',{\varvec{B}}). \end{array} \end{aligned}$$
Note that if \(B'\subseteq {\varvec{B}} \), then \(\textit{split} (B',{\varvec{B}})=B'\). It is common to split blocks under single blocks, i.e., \({\varvec{B}} \) corresponding to a single block \(B\in \pi \) [9, 11]. However, as indicated in [15], it is required to split under the union of some of the blocks in \(\pi \) to obtain an \(O(m \log n)\) algorithm. We refer to such groups of blocks as constellations. In Sect. 4, we use constellations consisting of more than one block when splitting.
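The functions \(\textit{split} \) and \(\textit{cosplit} \) can be computed directly by backward reachability, as in the following sketch (our own illustrative code, not the paper's implementation; the efficient algorithm of Sect. 4 avoids such full recomputations):

```python
def split(Bp, Bs, edges):
    """Compute split(B', **B**): states of B' that reach **B** via a path of
    transitions staying in B' until the final step. Seed with the states of
    B' that are in **B** or have a direct transition into **B**, then close
    under predecessors inside B'. An O(m) sketch."""
    pred = {}
    for (u, v) in edges:
        pred.setdefault(v, set()).add(u)
    result = (Bp & Bs) | {u for v in Bs for u in pred.get(v, ()) if u in Bp}
    work = list(result)
    while work:
        v = work.pop()
        for u in pred.get(v, ()):
            if u in Bp and u not in result:
                result.add(u)
                work.append(u)
    return result

def cosplit(Bp, Bs, edges):
    return Bp - split(Bp, Bs, edges)

# B' = {0, 1, 3}, **B** = {2}: states 0 and 1 reach 2, state 3 does not.
assert split({0, 1, 3}, {2}, [(0, 1), (1, 2), (3, 4)]) == {0, 1}
```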

We say that a block \(B'\) is unstable under \({\varvec{B}} \) iff \(\textit{split} (B',{\varvec{B}})\not =\emptyset \) and \(\textit{cosplit} (B',{\varvec{B}})\) \(\not =\emptyset \). A partition \(\pi \) is unstable under \({\varvec{B}} \) iff there is at least one \(B' \in \pi \) which is unstable under \({\varvec{B}} \). If \(\pi \) is not unstable under \({\varvec{B}} \) then it is called stable under \({\varvec{B}} \). If \(\pi \) is stable under all \({\varvec{B}} \), then it is simply called stable.

A refinement of \(B' \in \pi \) under \({\varvec{B}} \) consists of two new blocks \(\textit{split} (B',{\varvec{B}})\) and \(\textit{cosplit} (B',{\varvec{B}})\). A partition \(\pi '\) is a refinement of \(\pi \) under \({\varvec{B}} \) iff all unstable blocks \(B' \in \pi \) have been replaced by new blocks \(\textit{split} (B',{\varvec{B}})\) and \(\textit{cosplit} (B',{\varvec{B}})\).

The following lemma expresses that if a partition is stable then it coincides with divergence-blind stuttering equivalence. It also says that during refinement, the encountered partitions respect divergence-blind stuttering equivalence and remain cycle-free.

Lemma 1

Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,}, L)\) be a Kripke structure and \(\pi \) a partition of S.

  1. For all states \(s,t\in S\): if \(s,t\in B\) with B a block of the partition \(\pi \), and \(\pi \) is stable and a refinement of the initial partition \(\pi _0\), then s and t are divergence-blind stuttering equivalent.

  2. If \(\pi \) respects divergence-blind stuttering equivalence, then any refinement of \(\pi \) under the union of some of the blocks in \(\pi \) also respects it.

  3. If \(\pi \) is a cycle-free partition, then any refinement of \(\pi \) is also cycle-free.



Proof

  1. We show that if \(\pi \) is a stable partition, the relation \(R=\{\langle s,t\rangle ~|~s,t\in B,~B\in \pi \}\) is a divergence-blind stuttering bisimulation. It is clear that R is symmetric. Assume sRt. Obviously, \(L(s)=L(t)\) because \(s,t\in B\) and B refines the initial partition. For the second requirement of a divergence-blind stuttering bisimulation, suppose \(s\mathord {\, \xrightarrow {}_{} \,} s'\). There is a block \(B'\) such that \(s'\in B'\). As \(\pi \) is stable, it holds for t that \(t=t_0\mathord {\, \xrightarrow {}_{} \,}t_1\mathord {\, \xrightarrow {}_{} \,}\cdots \mathord {\, \xrightarrow {}_{} \,} t_k\) for some \(k\in \mathbb {N}\), \(t_0,\ldots ,t_{k-1}\in B\) and \(t_k\in B'\). This clearly shows that \(sRt_i\) for all \(i<k\), and \(s'Rt_k\). So, R is a divergence-blind stuttering bisimulation, and therefore all states \(s,t\in S\) that reside in the same block of \(\pi \) are divergence-blind stuttering equivalent.

  2. The second part can be proven by reasoning towards a contradiction. Assume that a partition \(\pi '\) that is a refinement of \(\pi \) under \({\varvec{B}} \) does not respect divergence-blind stuttering equivalence, although \(\pi \) does. Hence, there are divergence-blind stuttering equivalent states \(s, t\in S\) and a block \(B' \in \pi \) with \(s, t\in B'\) such that s and t are in different blocks in \(\pi '\). Given that \(\pi '\) is a refinement of \(\pi \) under \({\varvec{B}} \), \(s \in \textit{split} (B',{\varvec{B}})\) and \(t \in \textit{cosplit} (B', {\varvec{B}})\) (or vice versa, which can be proven similarly). By definition of \(\textit{split} \), there are \(s_1, \ldots , s_{k-1} \in B'\) (\(k \in \mathbb {N}\)) and \(s_k \in {\varvec{B}} \) such that \(s \mathord {\, \xrightarrow {}_{} \,} s_1 \mathord {\, \xrightarrow {}_{} \,} \cdots \mathord {\, \xrightarrow {}_{} \,} s_k\). Either \(k=0\) and \(B'\subseteq {\varvec{B}} \), but then \(t\notin \textit{cosplit} (B',{\varvec{B}})\). Or \(k>0\), and since s and t are divergence-blind stuttering equivalent, there are \(t_1, \ldots , t_{l-1} \in B'\) (\(l \in \mathbb {N}\)) and \(t_l \in {\varvec{B}} \) such that \(t \mathord {\, \xrightarrow {}_{} \,} t_1 \mathord {\, \xrightarrow {}_{} \,} \cdots \mathord {\, \xrightarrow {}_{} \,} t_l \) with \(s_i R t_j\) for all \(1 \le i < k\), \(1 \le j < l\) and \(s_k R t_l\). This means that \(t \in \textit{split} (B', {\varvec{B}})\), again contradicting that \(t \in \textit{cosplit} (B', {\varvec{B}})\).

  3. If \(\pi \) is cycle-free, this property is straightforward, since splitting any block of \(\pi \) will not introduce cycles.    \(\square \)

This suggests the following simple algorithm, which has time complexity O(mn) and space complexity \(O(m{+}n)\), and which was essentially presented in [9]: starting from the initial partition \(\pi _0\), repeatedly replace \(\pi \) by a refinement of \(\pi \) under some block \(B\in \pi \) under which it is unstable, until \(\pi \) is stable.
It is an invariant of this algorithm that \(\pi \) respects divergence-blind stuttering equivalence and \(\pi \) is cycle-free. In particular, \(\pi =\pi _0\) satisfies this invariant initially. If \(\pi \) is not stable, a refinement under some block B exists, splitting at least one block. Therefore, this algorithm finishes in at most \(n{-}1\) steps as during each iteration of the algorithm the number of blocks increases by one, and the number of blocks can never exceed the number of states. When the algorithm terminates, \(\pi \) is stable and therefore, two states are divergence-blind stuttering equivalent iff they are part of the same block in the final partition. This end result is independent of the order in which splitting took place.
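The loop can be sketched as follows. This is purely illustrative code with our own names, not the paper's implementation; it finds splitters by brute-force recomputation of \(\textit{split} \), whereas the actual algorithm uses the bottom-state criterion of Lemma 2 below.

```python
def stuttering_blocks(n, edges, labels):
    """Naive partition refinement for divergence-blind stuttering equivalence
    (illustrative sketch; input assumed to satisfy the paper's preconditions,
    in particular cycle-freeness of blocks)."""
    pred = {}
    for (u, v) in edges:
        pred.setdefault(v, set()).add(u)

    def split(Bp, Bs):
        # States of B' that reach Bs via a path staying in B' until the end.
        res = (Bp & Bs) | {u for v in Bs for u in pred.get(v, ()) if u in Bp}
        work = list(res)
        while work:
            v = work.pop()
            for u in pred.get(v, ()):
                if u in Bp and u not in res:
                    res.add(u)
                    work.append(u)
        return res

    # pi_0: group states by label.
    by_label = {}
    for s in range(n):
        by_label.setdefault(labels[s], set()).add(s)
    pi = list(by_label.values())

    # Refine under single blocks until the partition is stable.
    changed = True
    while changed:
        changed = False
        for B in list(pi):
            for Bp in list(pi):
                sp = split(Bp, B)
                if sp and sp != Bp:          # Bp is unstable under B
                    pi.remove(Bp)
                    pi.extend([sp, Bp - sp])
                    changed = True
                    break
            if changed:
                break
    return pi
```

Each iteration of the outer loop either finds an unstable block and splits it, increasing the number of blocks by one, or terminates; this mirrors the \(n{-}1\)-step bound argued above.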
In order to see that the time complexity of this algorithm is O(mn), we must show that we can detect that \(\pi \) is unstable and carry out splitting in time O(m). The crucial observation for efficiently determining whether a partition is stable stems from [9], where it was shown that it suffices to look at the bottom states of a block; these always exist for each block because the partition is cycle-free. The bottom states of a block are those states that do not have an outgoing inert transition, i.e., a transition to a state in the same block. They are defined by
$$\begin{aligned} bottom (B)=\{s\in B~|~\text {there is no state }s'\in B\text { such that }s\mathord {\, \xrightarrow {}_{} \,}s'\}. \end{aligned}$$
The following lemma presents the crucial observation concerning bottom states.

Lemma 2

Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,},L)\) be a Kripke structure and \(\pi \) be a cycle-free partition of its states. Partition \(\pi \) is unstable under union \({\varvec{B}} \) of some of the blocks in \(\pi \) iff there is a block \(B'\in \pi \) such that
$$\begin{aligned} \textit{split} (B',{\varvec{B}}) \ne \emptyset \text { and } bottom (B')\cap \textit{split} (B',{\varvec{B}})\subset bottom (B'). \end{aligned}$$
Here \(\subset \) denotes strict inclusion.


Proof

  • \(\Rightarrow \) If \(\pi \) is unstable, then \(\textit{split} (B',{\varvec{B}})\not =\emptyset \) and \(\textit{split} (B',{\varvec{B}})\not = B'\) for some block \(B'\). The first conjunct corresponds with the first condition. If \(\textit{split} (B',{\varvec{B}})\not = B'\), there are states \(s {\notin } \textit{split} (B',{\varvec{B}})\). As the blocks \(B' {\in } \pi \) do not contain cycles, consider such an \(s \notin \textit{split} (B', {\varvec{B}})\) with a smallest distance to a state \(s_k {\in } bottom (B')\), i.e., \(s\mathord {\, \xrightarrow {}_{} \,}s_1\mathord {\, \xrightarrow {}_{} \,}\cdots \) \(\mathord {\, \xrightarrow {}_{} \,}s_k\) with all \(s_i\in B'\). If s itself is an element of \( bottom (B')\), the second condition of the lemma follows. Otherwise, \(s{\notin } bottom (B')\), and there is some state \(s'{\in } B'\) closer to \( bottom (B')\) such that \(s\mathord {\, \xrightarrow {}_{} \,}s'\). Clearly, \(s'{\notin } \textit{split} (B',{\varvec{B}})\) either, as otherwise \(s\in \textit{split} (B',{\varvec{B}})\). But as \(s'\) is closer to \( bottom (B')\), the state s was not a state with the smallest distance to a state in \( bottom (B')\), which is a contradiction.

  • \(\Leftarrow \) It follows from the right-hand side that \(\textit{split} (B',{\varvec{B}})\not =\emptyset \) and \(\textit{split} (B',{\varvec{B}})\not =B'\), i.e., \(B'\) is unstable under \({\varvec{B}} \).    \(\square \)

This lemma can be used as follows to find a block to be split. Consider each \(B {\in } \pi \). Traverse its incoming transitions and mark the states that can reach B in zero or one step. If a block \(B'\) has marked states, but not all of its bottom states are marked, the condition of the lemma applies and \(B'\) needs to be split. At most all transitions need to be traversed to carry this out, so the complexity is O(m).

If B is equal to \(B'\), no splitting is possible. We implement this by marking all states in B, as each state in B can reach itself in zero steps. In this case the condition \( bottom (B')\cap \textit{split} (B',{\varvec{B}})\subset bottom (B')\) does not hold. This differs from [9], where a block is never considered as a splitter of itself, but we require this in the algorithm in the next sections.

If a block \(B'\) is unstable, all states in \(B'\) from which a state in B can be reached in one step have been marked, and a straightforward recursive procedure extends the marking to all states in \(\textit{split} (B',B)\); those states are then moved to a new block. This takes time proportional to the number of transitions in \(B'\), i.e., O(m).
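The detection step based on Lemma 2 can be sketched as follows (illustrative code with our own names; the real implementation marks states while traversing only the incoming transitions of B):

```python
def unstable_under(Bp, Bs, edges):
    """Instability check of Lemma 2 (sketch): mark the states of B' = Bp that
    reach **B** = Bs in zero or one step; B' must be split iff some state is
    marked but not all bottom states of B' are marked. Assumes Bp contains
    no cycles of inert transitions."""
    marked = {s for s in Bp
              if s in Bs or any(u == s and v in Bs for (u, v) in edges)}
    bottom = {s for s in Bp
              if not any(u == s and v in Bp for (u, v) in edges)}
    return bool(marked) and not bottom <= marked

# B' = {0, 1, 3} with bottom states 1 and 3; only 1 reaches **B** = {2}.
assert unstable_under({0, 1, 3}, {2}, [(0, 1), (1, 2), (3, 4)]) is True
```

Note that when \(B=B'\), every state of B counts as marked (it reaches B in zero steps), so the check correctly reports stability, matching the remark above.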

4 Constellations: An \(O(m\log n)\) Algorithm

The crucial idea to transform the algorithm from the previous section into an \(O(m\log n)\) algorithm stems from [15]. By grouping the blocks in the current partition \(\pi \) into constellations such that \(\pi \) is stable under the union of the blocks in such a constellation, we can determine whether a block exists under which \(\pi \) is unstable by only looking at blocks that are at most half the size of the constellation, i.e., \(|B| \le \frac{1}{2}|{\varvec{B}} |\), where \(|{\varvec{B}} | = \varSigma _{B'{\in {\varvec{B}}}} |B'|\), for a block B in a constellation \({\varvec{B}} \). If a block \(B'\in \pi \) is unstable under B, then we use a remarkable technique, consisting of two procedures that run in alternation, to identify the smallest block resulting from the split. The whole operation runs in time proportional to the size of the smallest block resulting from the split. We involve the blocks in \({\varvec{B}} \setminus B\) in the splitting without explicitly analysing the states contained therein (for convenience, we write \({\varvec{B}} \setminus B\) instead of \({\varvec{B}} \setminus \{ B \}\)).

Working with constellations in this way ensures for each state that whenever it is involved in splitting, i.e., if it is part of a block that is used to split or that is being split, this block is at most half the size of the previous block in which the state resided when it was involved in splitting. That ensures that each state can be involved in at most \(\log _2 n\) splittings. When involving a state, we only analyse its incoming and outgoing transitions, resulting in an algorithm with complexity \(O(m\log n)\). Although we require quite a number of auxiliary data structures, these are either proportional to the number of states or to the number of transitions. So, the memory requirement is \(O(m{+}n)\).

In the following, the set of constellations also forms a partition, which we denote by \(\mathcal{C}\). A constellation is a set of one or more blocks from the current partition \(\pi \). If a constellation contains only one block, it is called trivial. The current partition \(\pi \) is stable with respect to each constellation in \(\mathcal{C}\).

If a constellation \({\varvec{B}} {\in }\mathcal{C}\) contains more than one block, we select one block \(B {\in } {\varvec{B}} \) which is at most half the size of \(\varvec{B}\), and move it to a new trivial constellation \({\varvec{B'}} \). We check whether the current partition is stable under B and \({\varvec{B}} \setminus B\) according to Lemma 2 by traversing the incoming transitions of states in B and marking the encountered states that can reach B in zero or one step. For all blocks \(B'\) that are unstable according to Lemma 2, we calculate \(\textit{split} (B',B)\) and \(\textit{cosplit} (B',B)\), as indicated below.

As noted in [15], \(\textit{cosplit} (B',B)\) is stable under \({\varvec{B}} \setminus B\). Therefore, only further splitting of \(\textit{split} (B',B)\) under \({\varvec{B}} \setminus B\) must be investigated. If \(B'\) is stable under B because all bottom states of \(B'\) are marked, it can be that \(B'\) is not stable under \({\varvec{B}} \setminus B\), which we do not address here explicitly, as it proceeds along the same line.

There is a special list data structure to recall for any \(B'\) and \({\varvec{B}} \) which transitions go from \(B'\) to \({\varvec{B}} \). When investigating whether \(\textit{split} (B',B)\) is stable under B we adapt this list to determine the transitions from \(\textit{split} (B',B)\) to \({\varvec{B}} \setminus B\) and we simultaneously tag the states in \(B'\) that have a transition to \({\varvec{B}} \setminus B\). Therefore, we know whether there are transitions from \(\textit{split} (B',B)\) to \({\varvec{B}} \setminus B\) and we can traverse the bottom states of \(\textit{split} (B',B)\) to inspect whether there is a bottom state without a transition to B. Following Lemma 2, this allows us to determine whether \(\textit{split} (B',B)\) must be split under \({\varvec{B}} \setminus B\) in a time proportional to the size of B. How splitting is carried out is indicated below.

There is one aspect that complicates matters. If blocks are split, the new partition is not automatically stable under all constellations. This is contrary to the situation in [15] and was already observed in [9]. Figure 1 indicates the situation. Block \(B'\) is stable under constellation \({\varvec{B}} \). But if \(B'\) is split under block C into \(B_1\) and \(B_2\), block \(B_1\) is not stable under \({\varvec{B}} \). The reason is, as exemplified by the following lemma, that some states that were non-bottom states in \(B'\) became bottom states in \(B_1\).
Fig. 1. After splitting \(B'\) under C, \(B_1\) is not stable under \({\varvec{B}} \).

Lemma 3

Let \(K=(S, AP ,\mathord {\, \xrightarrow {}_{} \,},L)\) be a Kripke structure, \(\pi \) a cycle-free partition and \(\pi '\) a refinement of \(\pi \). If \(\pi \) is stable under a constellation \({\varvec{B}} \), and \(B'\in \pi \) is refined into \(B_1',\ldots ,B_k'\in \pi '\), then for each \(B_i'\) whose bottom states are also bottom states in \(B'\), it holds that \(B_i'\) is also stable under \({\varvec{B}} \).


Proof

Assume \(B_i'\) is not stable under \({\varvec{B}} \). This means that \(B_i'\) is not contained in \({\varvec{B}} \). Hence, there is a state \(s\in B_i'\) such that \(s\mathord {\, \xrightarrow {}_{} \,} s'\) with \(s' \in {\varvec{B}} \), and there is a bottom state \(t\in B_i'\) with no outgoing transition to a state in \({\varvec{B}} \). But as \(B'\) was stable under \({\varvec{B}} \) and s has an outgoing transition to a state in \({\varvec{B}} \), all bottom states in \(B'\) must have at least one transition to a state in \({\varvec{B}} \). Therefore, t cannot be a bottom state of \(B'\), and must have become a bottom state after splitting \(B'\).    \(\square \)

This means that if a block \(B'\) is the result of a refinement and some of its states became bottom states, it must be ensured that \(B'\) is stable under all constellations. Typically, fewer constellations can be reached from the new bottom states than from the block as a whole. For each block we maintain a list of the constellations that can be reached from states in this block. We match the outgoing transitions of the new bottom states against this list, and if some constellation in the list is reachable from states in \(B'\) but not from all of its new bottom states, \(B'\) must be split under that constellation.

The complexity of checking for additional splittings to regain stability when states become bottom states is only O(m). Each state only becomes a bottom state once, and when that happens we perform calculations proportional to the number of outgoing transitions of this state to determine whether a split must be carried out.

It remains to show that splitting can be performed in time proportional to the size of the smallest block resulting from the split. Consider splitting \(B'\) under \(B{\in }{\varvec{B}} \). While marking \(B'\), four lists have been constructed, containing the marked bottom, marked non-bottom, unmarked bottom and unmarked non-bottom states. We simultaneously colour the states in \(B'\) either red or blue. Red means that there is a path from the state to a state in B; blue means that there is no such path. Initially, marked states are red, and unmarked bottom states are blue.

This colouring is extended to all states in \(B'\) by two procedures that run in alternation, spending equal time on each colour. The process stops as soon as one of the two colourings cannot be extended further. We colour a state red when it can reach a red state via inert transitions, using a simple recursive procedure. We colour a state blue when all its outgoing inert transitions go to blue states (for this we record for each state the number of outgoing inert transitions) and it has no direct transition to B. The colouring that terminates first, provided the number of states it coloured does not exceed \(\frac{1}{2}|B'|\), identifies the smallest subblock resulting from the split. Now that we know the smallest subblock, we move its states to a newly created block.

Splitting with respect to \({\varvec{B}} \setminus B\) only has to be applied to \(\textit{split} (B', B)\), or, if all bottom states of \(B'\) were marked, to \(B'\). As noted before, \(\textit{cosplit} (B', B)\) is stable under \({\varvec{B}} \setminus B\). Define \(C :=\textit{split} (B',B)\) or \(C:=B'\), depending on the situation. We can traverse all bottom states of C and check whether they have outgoing transitions to \({\varvec{B}} \setminus B\); the bottom states without such transitions provide the initial blue states. The initial red states are obtained from the explicitly maintained list of all transitions from C to \({\varvec{B}} \setminus B\). By extending this colouring in alternation, the smallest subblock of either red or blue states is obtained and splitting can commence.

The algorithm is presented in full detail in Sect. 5, as the bookkeeping details of the algorithm are far from trivial.

5 Detailed Algorithm

This section presents the data structures and the algorithm in more detail.

5.1 Data Structures

As a basic data structure, we use (singly-linked) lists. For a list L of elements, we assume that for each element e, a reference to the position in L preceding the position of e is maintained, such that checking membership and removal can be done in constant time. In some cases we add extra information to the elements in the list. Moreover, for each list L, we maintain the size |L| and pointers to its first and last element.

  1. The current partition \(\pi \) consists of a list of blocks. Initially, it corresponds to \(\pi _0\). All blocks are part of a single, initial constellation \({\varvec{C}} _0\).

  2. For each block \({B} \), we maintain the following:

     (a) A reference to the constellation containing \({B} \).

     (b) A list \(B\).btm-sts of the bottom states and a list \(B\).non-btm-sts of the other states.

     (c) A list \(B.\textit{to-constlns} \) of structures associated with constellations reachable via a transition from some \(s {\in } B\). Initially, it contains one element associated with \({\varvec{C}} _0\). Each element associated with some constellation \({\varvec{C}} \) in this list also contains the following:
       • A reference \(\textit{trans-list} \) to a list of all transitions from states in B to states in \({\varvec{C}} \setminus B\) (note that transitions between states in B, i.e., inert transitions, are not in this list).
       • When splitting the block B into B and \(B'\), there is a reference in each list element to the corresponding list element in \(B'.\textit{to-constlns} \) (which in turn refers back to the element in \(B.\textit{to-constlns} \)).
       • In order to check for stability when splitting produces new bottom states, each element contains a list to keep track of which new bottom states can reach the associated constellation.

     (d) A reference \(B.\textit{inconstln-ref} \) is used to refer to the element in \(B.\textit{to-constlns} \) associated with the constellation of B. It is used when a non-inert transition becomes inert and needs to be added to the \(\textit{trans-list} \) of the element associated with that constellation.

     Furthermore, when splitting a block \(B'\) in constellation \({\varvec{B}} '\) under a constellation \(\varvec{B}\) and block \({B} {\in } {\varvec{B}} \), the following temporary structures are used, with \({\varvec{C}} \) the new constellation to which \({B} \) is moved:

     (a) A list \(B'.\textit{mrkd-btm-sts} \) of marked states in \(B'\) with a transition to \({B} \).

     (b) A list \(B'.\textit{mrkd-non-btm-sts} \) of states that are marked, but are not bottom states.

     (c) A reference \(B'.\textit{constln-ref} \) to the (new) element in \(B'.\textit{to-constlns} \) associated with constellation \({\varvec{C}} \), i.e., the new constellation of B.

     (d) A reference \(B'.\textit{coconstln-ref} \) to the element in \(B'.\textit{to-constlns} \) associated with constellation \({\varvec{B}} \), i.e., the old constellation of B.

     (e) A list \(B'.\textit{new-btm-sts} \) to keep track of the states that have become bottom states when \(B'\) was split. This is required to determine whether \(B'\) is stable under all constellations after a split.

  3. Constellations are stored in two lists trivial-constlns and non-trivial-constlns. The first contains constellations consisting of exactly one block, while the latter contains the other constellations. Initially, if \(\pi _0\) consists of one block, \({\varvec{C}} _0\) is added to trivial-constlns and nothing needs to be done, because the initial partition is already stable. Otherwise, \({\varvec{C}} _0\) is added to non-trivial-constlns.

  4. For each constellation, we maintain its list of blocks and its size (number of states).

  5. Each transition \(s\mathord {\, \xrightarrow {}_{} \,}s'\) refers with \(\textit{to-constln-cnt} \) to the number of transitions from s to the constellation in which \(s'\) resides. For each state and constellation, there is one such variable, provided there is a transition from this state to this constellation.

     Each transition \(s\mathord {\, \xrightarrow {}_{} \,}s'\) has a reference to the element associated with \({\varvec{B}} \) in the list \(B.\textit{to-constlns} \), where \(s {\in } B\) and \(s'{\in }{\varvec{B}} \). This is denoted as \((s\mathord {\, \xrightarrow {}_{} \,}s').\textit{to-constln-ref} \). Initially, it refers to the single element in \(B.\textit{to-constlns} \), unless the transition is inert, i.e., both \(s{\in }B\) and \(s'{\in }B\).

     Furthermore, each transition \(s\mathord {\, \xrightarrow {}_{} \,}s'\) is stored in the list of transitions from B to \({\varvec{B}} \). Initially, there is such a list for each block in the initial partition \(\pi _0\). From a transition \(s \mathord {\, \xrightarrow {}_{} \,} s'\), the list can be accessed via \((s\mathord {\, \xrightarrow {}_{} \,}s').\textit{to-constln-ref}.\textit{trans-list} \).

  6. For each state \(s {\in } {B} \), we maintain the following information:

     (a) A reference to the block containing s.

     (b) A static list \(s.T_{ tgt }\) of transitions of the form \(s \mathord {\, \xrightarrow {}_{} \,} s'\) containing precisely all the transitions from s.

     (c) A static list \(s.T_{ src }\) of transitions \(s' \mathord {\, \xrightarrow {}_{} \,} s\) containing all the transitions to s. We write such transitions as \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s'\) to stress that they move into s.

     (d) A counter \(s.\textit{inert-cnt} \) containing the number of outgoing transitions to a state in the same block as s. For any bottom state s, we have \(s.\textit{inert-cnt} = 0\).

     (e) Furthermore, when splitting a block \(B'\) under \(\varvec{B}\) and \({B} {\in } {\varvec{B}} \), there are references \(s.\textit{constln-cnt} \) and \(s.\textit{coconstln-cnt} \) to the variables that are used to count how many transitions there are from s to B and from s to \({\varvec{B}} \setminus {B} \).

  Figure 2 illustrates some of the structures used. A block \(B_1\) in constellation \({\varvec{B}} \) contains bottom states \(s_1\), \(s_2'\) and non-bottom state \(s_2\). For \(s_1\), we have transitions \(s_1 \mathord {\, \xrightarrow {}_{} \,} s_1'\), \(s_1 \mathord {\, \xrightarrow {}_{} \,} s_1''\) to constellation \({\varvec{C}} \). Both have the following references:

  (a) \(\textit{to-constln-cnt} \) to the number of outgoing transitions from \(s_1\) to \({\varvec{C}} \).

  (b) \(\textit{to-constln-ref} \) to the element \(({\varvec{C}}, \bullet , \bullet )\) in \(B_1.\textit{to-constlns} \), where the \(\bullet \)'s are the (currently uninitialized) references that are used when splitting.

  (c) Via \(({\varvec{C}}, \bullet , \bullet )\), a reference \(\textit{trans-list} \) to the list of transitions from \(B_1\) to \({\varvec{C}} \).

  Note that for the inert transition \(s_2 \mathord {\, \xrightarrow {}_{} \,} s_2'\), we only have a reference to the number of outgoing transitions from \(s_2\) to \({\varvec{B}} \).

Fig. 2.

An example showing some of the data structures used in the detailed algorithm.
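The per-block bookkeeping of items 2-4 above might be sketched with record types along the following lines. Field names follow the paper's identifiers where possible, but the concrete layout is our assumption, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ToConstlnEntry:
    """One element of B.to-constlns, for a constellation reachable from B."""
    constln: "Constellation"
    trans_list: list = field(default_factory=list)   # transitions from B into constln (minus B)
    counterpart: "ToConstlnEntry" = None             # matching element in B'.to-constlns after a split
    new_btm_states: list = field(default_factory=list)  # new bottom states that can reach constln

@dataclass
class Block:
    constln: "Constellation" = None                  # constellation containing the block
    btm_sts: list = field(default_factory=list)      # bottom states
    non_btm_sts: list = field(default_factory=list)  # remaining states
    to_constlns: list = field(default_factory=list)  # list of ToConstlnEntry
    inconstln_ref: ToConstlnEntry = None             # entry for the block's own constellation

@dataclass
class Constellation:
    blocks: list = field(default_factory=list)
    size: int = 0                                    # number of states in the constellation
```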

5.2 Finding the Blocks that Must Be Split

While non-trivial-constlns is not empty, we perform the algorithm listed in the following sections. To determine whether the current partition \(\pi \) is unstable, we select a constellation \({\varvec{B}} \) in non-trivial-constlns, and we select a block B from \({\varvec{B}} \) such that \(|B|\le \frac{1}{2}|{\varvec{B}} |\). We first check which blocks are unstable for \({B} \) and \({\varvec{B}} \setminus B\).

  1. Move \({B} \) to a new trivial constellation \(\varvec{C}\). If \(|{\varvec{B}}.\textit{blocks} | {=}1\), make \({\varvec{B}} \) trivial.

  2. For each state \(s {\in } {B} \), do the steps below for each \(s'{\in } B'\) such that \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s' \in s.T_{ src }\) and \(B \ne B'\).

     (a) If \(B'\) has no marked states, put it in a list splittable-blks, let \(B'.\textit{coconstln-ref} \) refer to \((s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s') .\textit{to-constln-ref} \), and \(B'.\textit{constln-ref} \) to a new element in \(B'.\textit{to-constlns} \).

     (b) Mark \(s'\).

     (c) Let \(s'.\textit{constln-cnt} \) be the number of transitions to B and \(s'.\textit{coconstln-cnt} \) the number of remaining outgoing transitions. All outgoing transitions of \(s'\) must refer to the appropriate counter.

     (d) Move all visited transitions to \(B'.\textit{constln-ref}.\textit{trans-list} \).

  3. Next, check whether B itself can be split. Mark all states, add B to \(\textit{splittable-blks} \) and reset \(B.\textit{constln-ref} \) and \(B.\textit{coconstln-ref} \). For each state \(s {\in }B\), do the steps below for each \(s'{\in } B' {\in } {\varvec{B'}} \) such that \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s' \in s.T_{ src }\), and either \({\varvec{B'}} {=} {\varvec{B}} \) or \({\varvec{B'}} {=} {\varvec{C}} \).

     (a) If \({\varvec{B'}} {=} {\varvec{B}} \), let \(B.\textit{coconstln-ref} \) refer to \((s \mathord {\, \xrightarrow {}_{} \,} s').\textit{to-constln-ref} \), and \(B.\textit{constln-ref} \) and \(B.\textit{inconstln-ref} \) to a new element for \({\varvec{C}} \) in \(B.\textit{to-constlns} \).

     (b) Update \(s.\textit{constln-cnt} \) and \(s.\textit{coconstln-cnt} \) as in step 2(c).

  4. For each \(B'{\in } \textit{splittable-blks} \), if all its bottom states are marked and either there is no marked bottom state s with \(s.\textit{coconstln-cnt} {=} 0\) or \(B'.\textit{coconstln-ref}.\textit{trans-list} \) is empty, remove \(B'\) from \(\textit{splittable-blks} \) and remove its temporary markings, i.e., unmark all states and reset the counters and references.

  5. If \(\textit{splittable-blks} \) is not empty, start splitting (Sect. 5.3). Otherwise, select another non-trivial constellation \({\varvec{B}} \) and block \(B{\in } {\varvec{B}} \), and continue with step 1. If there are no non-trivial constellations left, the algorithm terminates.
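The selection of a splitter at the start of this loop can be sketched as follows. In a constellation containing at least two blocks, a block holding at most half of the constellation's states always exists, since otherwise any two blocks together would exceed the total. The classes below are minimal stand-ins, not the paper's data structures:

```python
class Block:
    """Minimal stand-in for a block: just its list of states."""
    def __init__(self, states):
        self.states = states

class Constellation:
    """Minimal stand-in for a constellation: its blocks and total size."""
    def __init__(self, blocks):
        self.blocks = blocks
        self.size = sum(len(b.states) for b in blocks)

def select_splitter(non_trivial_constlns):
    """Pick a non-trivial constellation and a block in it containing at
    most half of the constellation's states (hypothetical names)."""
    constln = non_trivial_constlns[0]
    for block in constln.blocks:
        if 2 * len(block.states) <= constln.size:
            return constln, block
    # Unreachable: a non-trivial constellation always has such a block.
    raise AssertionError("non-trivial constellation without a small block")
```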


5.3 Splitting the Blocks

Splitting the splittable blocks is performed using the following steps, in which the procedures used to simultaneously mark states when splitting a block are crucial for the performance. We refer to the whole operation as the lockstep search and call the two procedures detect1 and detect2. In the lockstep search, these procedures alternately process a transition. The entire operation terminates when one of the procedures terminates. If one procedure acquires more than half the number of states in the block it works on, it is stopped and the other is allowed to terminate. We present detect1 and detect2 below; both get a list of states, \(D_1\) and \(D_2\), respectively, and a block K to work on as input. In addition, detect2 takes a Boolean parameter indicating whether the splitting is a nested one, i.e., whether it directly follows an earlier split of the same block.
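The lockstep idea can be illustrated as follows: two searches are advanced alternately, a search that acquires more than half the block is abandoned, and the search that terminates determines the smaller subblock. This is a simplified sketch with hypothetical names; the paper's detect1 and detect2 additionally interleave the processing of individual transitions:

```python
def lockstep(search1, search2, half):
    """Advance two state-collecting searches in lockstep.

    search1 and search2 are generators yielding the states they acquire,
    one per step; 'half' is half the size of the block being split.
    Returns the set of states found by the search that terminates first
    without exceeding 'half' states.
    """
    found1, found2 = set(), set()
    done1 = done2 = False
    while True:
        if not done1:
            try:
                found1.add(next(search1))
            except StopIteration:
                return found1          # search 1 finished: smaller side
            if len(found1) > half:
                done1 = True           # too large: abandon this search
        if not done2:
            try:
                found2.add(next(search2))
            except StopIteration:
                return found2          # search 2 finished: smaller side
            if len(found2) > half:
                done2 = True
        # In the algorithm at most one side can exceed half the block.
        assert not (done1 and done2)
```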

We walk through the blocks \(B'{\in }{\varvec{B'}} \) in splittable-blks, which must be split into two or three blocks under constellation \(\varvec{B}\) and block \({B} \). If all bottom states are marked, then we have \(\textit{split} ({B} ', {B}) = {B} '\), and can start with step 3 below.

  1. Launch a lockstep search with \(D_1\) the list of marked states in \(B'\), \(D_2\) the list \(B'.\textit{btm-sts} \), \(K=B'\), and \({ nested} = \mathbf{false}\).

  2. Depending on whether detect1 or detect2 terminated in the previous step, one of the lists L or \(L'\) contains the states to be moved to a new block \(B''\). Below we refer to this list as N. For each \(s {\in } N\), move s to \(B''\), and do the following:

     (a) For each \(s \mathord {\, \xrightarrow {}_{} \,} s' {\in } T_{ tgt }\), do the following steps.

       i. If \((s \mathord {\, \xrightarrow {}_{} \,} s').\textit{to-constln-ref} \) is initialized, check whether it refers to a new element in \(B''.\textit{to-constlns} \). If not, create it. If appropriate, set the references \(B''.\textit{inconstln-ref} \), \(B''.\textit{constln-ref} \) and \(B''.\textit{coconstln-ref} \). Move \(s \mathord {\, \xrightarrow {}_{} \,} s'\) to the \(\textit{trans-list} \) of the new element. If the related element in \(B'.\textit{to-constlns} \) no longer holds transitions, remove it.

       ii. Else, if \(s'{\in } B' \setminus N\) (the transition becomes non-inert), decrement \(s.\textit{inert-cnt} \). If \(s.\textit{inert-cnt} {=} 0\), make s a bottom state and add \(s \mathord {\, \xrightarrow {}_{} \,} s'\) to \(B''.\textit{inconstln-ref}.\textit{trans-list} \) (if \(B''.\textit{inconstln-ref} \) does not exist, create it first).

     (b) For each \(s \mathord {\, \mathop {\leftarrow }\limits ^{} \,} s' {\in } T_{ src }\) with \(s' {\in } B' {\setminus } N\) (an inert transition becomes non-inert), perform steps similar to 2(a).ii.

  3. Next, we split \(\textit{split} (B', B)\) under \({\varvec{B}} {\setminus } B\). Define \(C{=}\textit{split} (B', B)\). C is stable under \({\varvec{B}} {\setminus } B\) if \(C.\textit{coconstln-ref} \) is uninitialized or holds an empty \(\textit{trans-list} \), or for all \(s{\in } C.\textit{mrkd-btm-sts} \) it holds that \(s.\textit{coconstln-cnt} >0\). If this is not the case, then we launch a lockstep search with \(D_1\) the list of states s occurring in some \(s\mathord {\, \xrightarrow {}_{} \,}s'\) in \(\textit{split} (B', B).\textit{coconstln-ref}.\textit{trans-list} \), \(D_2\) the list of states s with \(s.\textit{coconstln-cnt} = 0\) in \(C.\textit{mrkd-btm-sts} \), \(K=C\), and \({ nested}=\mathbf{true}\). Finally, we split C by moving the states in either L or \(L'\) to a new block \(B'''\), depending on which list is the smaller.

  4. Remove the temporary markings of each block C resulting from the splitting of \(B'\).

  5. If the splitting of \(B'\) resulted in new bottom states, check for those states whether further splitting is required, i.e., whether from some of them, not all constellations can be reached that can be reached from the block. For all \(\hat{B}{\in }\{B',B'',B'''\}\), new bottom states s, and \(s \mathord {\, \xrightarrow {}_{} \,} s' {\in } s.T_{ tgt }\), add s to the states list of the element associated with \({\varvec{\bar{B}}} \) in \(\hat{B}.\textit{to-constlns} \), where \(s'{\in } {\varvec{\bar{B}}} \), and move the element to the front of the list.

  6. Perform the following steps for each block \(\hat{B}\) with new bottom states, as long as there are such blocks.

     (a) Walk through the elements in \(\hat{B}.\textit{to-constlns} \). If the states list of an element associated with a constellation \({\varvec{B}} \) does not contain all new bottom states, further splitting is required under \({\varvec{B}} \):

       i. Launch a lockstep search with \(D_1\) the list of states s occurring in some \(s\mathord {\, \xrightarrow {}_{} \,}s'\) with \(s'{\in }{\varvec{B}} \) in the list \(\textit{trans-list} \) associated with \({\varvec{B}} {\in } \hat{B}.\textit{to-constlns} \), \(D_2\) the list of states \(s{\in } \hat{B}.\textit{new-btm-sts} \) minus the new bottom states that can reach \({\varvec{B}} \), \(K = \hat{B}\), and \({ nested} = \mathbf{true}\).

       ii. Split \(\hat{B}\) by performing step 2 to produce a new block \(\hat{B}'\). Move all states in \(\hat{B}.\textit{new-btm-sts} \) that have moved to \(\hat{B}'\) to \(\hat{B}'.\textit{new-btm-sts} \), and also move them from the states lists in the elements of \(\hat{B}.\textit{to-constlns} \) to the corresponding elements of \(\hat{B}'.\textit{to-constlns} \) (those elements refer to each other). If a states list becomes empty, move that element to the back of its list.

       iii. Perform step 5 for \(\hat{B}\) and \(\hat{B}'\).

     (b) If no further splitting was required for \(\hat{B}\), empty \(\hat{B}.\textit{new-btm-sts} \) and clear the remaining states lists in \(\hat{B}.\textit{to-constlns} \).

  7. If \({\varvec{B'}} {\in } \textit{trivial-constlns} \), move it to non-trivial-constlns.


6 Application to Branching Bisimulation

We show that the algorithm can also be used to determine branching bisimulation, using the transformation from [14, 17], with complexity \(O(m(\log | Act |+\log n))\). Branching bisimulation is typically applied to labelled transition systems (LTSs). An LTS is a triple \(A = (S, { Act}, \mathord {\, \xrightarrow {}_{} \,})\), with S a finite set of states, \({ Act}\) a finite set of actions including the internal action \(\tau \), and \(\mathord {\, \xrightarrow {}_{} \,} \subseteq S \times { Act} \times S\) a transition relation.

Definition 4

Consider the LTS \(A=(S,{ Act},\mathord {\, \xrightarrow {}_{} \,})\). We call a symmetric relation \(R \subseteq S\times S\) a branching bisimulation relation iff for all \(s,t\in S\) with \(s\,R\,t\), whenever \(s\mathord {\, \xrightarrow {a}_{} \,}s'\), then either \(a=\tau \) and \(s'\,R\,t\), or there are states \(t',t''\) such that \(t \mathord {\, \twoheadrightarrow \,} t' \mathord {\, \xrightarrow {a}_{} \,} t''\), \(s\,R\,t'\) and \(s'\,R\,t''\), where \(\mathord {\, \twoheadrightarrow \,}\) is the transitive, reflexive closure of \(\mathord {\, \xrightarrow {\tau }_{} \,}\).

States are branching bisimilar iff there is a branching bisimulation relation R relating them.
Fig. 3.

Runtime results for \((a{\cdot } \tau )^ size \) sequences (left) and trees of depth \( size \) (right)

Our new algorithm can be applied to an LTS by translating it to a Kripke structure.

Definition 5

Let \(A=(S,{ Act},\mathord {\, \xrightarrow {}_{} \,})\) be an LTS. We construct the embedding of A to be the Kripke structure \(K_A=(S_A, AP ,\mathord {\, \xrightarrow {}_{} \,},L)\) as follows:
  1. \(S_A=S\cup \{\langle a,t\rangle \mid s\mathord {\, \xrightarrow {a}_{} \,}t\) for some \(s\in S \}\).

  2. \( AP = Act \cup \{\bot \}\).

  3. \(\mathord {\, \xrightarrow {}_{} \,}\) is the least relation satisfying (for \(s,t{\in } S\), \(a{\in } Act {\setminus }\{\tau \}\)): \(\frac{s\mathord {\, \xrightarrow {a}_{} \,}t}{s\mathord {\, \xrightarrow {}_{} \,}\langle a,t\rangle }\), \(\frac{}{\langle a,t\rangle \mathord {\, \xrightarrow {}_{} \,}t}\) and \(\frac{s\mathord {\, \xrightarrow {\tau }_{} \,}t}{s\mathord {\, \xrightarrow {}_{} \,}t}\).

  4. \(L(s)=\{\bot \}\) for \(s\in S\) and \(L(\langle a,t\rangle )=\{a\}\).
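Assuming the LTS is given as a set of (source, action, target) triples, the embedding of Definition 5 might be computed as follows (the function and variable names are ours; \(\bot \) is rendered as the string "bot"):

```python
def embed(states, transitions, tau="tau"):
    """Embed an LTS into a Kripke structure as in Definition 5.

    Returns the new state set, the transition relation (as pairs), and
    the labelling. Sketch only; real implementations would share the
    intermediate states <a, t> across transitions, as done here via the
    tuple (a, t).
    """
    new_states = set(states)
    arrows = set()
    label = {s: frozenset({"bot"}) for s in states}   # L(s) = {bot}
    for (s, a, t) in transitions:
        if a == tau:
            arrows.add((s, t))              # tau-steps are kept directly
        else:
            mid = (a, t)                    # the fresh state <a, t>
            new_states.add(mid)
            label[mid] = frozenset({a})     # L(<a, t>) = {a}
            arrows.add((s, mid))            # s --> <a, t>
            arrows.add((mid, t))            # <a, t> --> t
    return new_states, arrows, label
```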


The following theorem stems from [14].

Theorem 1

Let A be an LTS and \(K_A\) its embedding. Then two states are branching bisimilar in A iff they are divergence-blind stuttering equivalent in \(K_A\).

If we start out with an LTS with n states and m transitions, then its embedding has at most \(m+n\) states and 2m transitions. Hence, the algorithm requires \(O(m\log (n{+}m))\) time. As m is at most \(| Act |n^2\), this is bounded by \(O(m(\log | Act |{+}\log n))\).
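The final bound can be checked directly (logarithms base 2; the second inequality uses \(n \le | Act |\,n^2\), which holds since \(n, | Act | \ge 1\)):

```latex
\begin{align*}
m\log (n+m) &\le m\log \bigl(n + | Act |\,n^2\bigr) && \text{since } m \le | Act |\,n^2\\
            &\le m\log \bigl(2\,| Act |\,n^2\bigr)  && \text{since } n \le | Act |\,n^2\\
            &=   m\bigl(1 + \log | Act | + 2\log n\bigr)\\
            &\in O\bigl(m(\log | Act | + \log n)\bigr).
\end{align*}
```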

As a final note, the algorithm can also be adapted to determine divergence-sensitive branching bisimulation [8], by adding a \(\tau \)-self loop to those states on a \(\tau \)-loop.

7 Experiments

The new algorithm has been implemented as part of the mCRL2 toolset [6], which offers implementations of GV and the algorithm by Blom and Orzan [2] that distinguishes states by their connection to blocks via their outgoing transitions. We refer to the latter as BO. The performance of GV and BO can be very different on concrete examples. We have extensively tested the new algorithm by applying it to thousands of randomly generated LTSs and comparing the results with those of the other algorithms.

We experimentally compared the performance of GV, BO, and the implementation of the new algorithm (GW). All experiments involve the analysis of LTSs, which for GW are first transformed to Kripke structures using the translation of Sect. 6. The reported runtimes do not include the time to read the input LTS and write the output, but the time it takes to translate the LTS to a Kripke structure and to reduce strongly connected components is included.
Table 1.

Runtime (in sec.) and memory use (in MB) results for GV, BO, and GW

Practically all experiments have been performed on machines running CentOS Linux, with an Intel E5-2620 2.0 GHz CPU and 64 GB RAM. Exceptions to this are the final two entries in Table 1, which were obtained by using a machine running Fedora 12, with an Intel Xeon E5520 2.27 GHz CPU and 1 TB RAM.

Figure 3 presents the runtime results for two sets of experiments to demonstrate that GW has the expected scalability. At the left are the results of analysing single sequences of the shape \((a{\cdot } \tau )^n\). As the length 2n of such a sequence is increased, the results show that the runtimes of both BO and GV increase at least quadratically, while the runtime of GW grows linearly. All algorithms require n iterations, in which BO and GV walk over all the states in the sequence, but GW only moves two states into a new block. The right of Fig. 3 displays the results for trees of depth n that up to level \(n{-}1\) correspond with a binary tree of \(\tau \)-transitions. Each state at level \(n{-}1\) has a uniquely labelled outgoing transition to a state in level n. BO only needs one iteration to obtain the stable partition. Still, GW beats BO by repeatedly splitting off small blocks of size \(2(k-1)\) if a state at level k is the splitter.

Table 1 contains results for minimising LTSs from the VLTS benchmark set and the mCRL2 toolset. These experiments demonstrate that also when applied to actual state spaces of real models, GW generally outperforms the best of the other algorithms, often by a factor of 10 and sometimes by a factor of 100. This difference tends to grow as the LTSs get larger. GW's memory usage is only sometimes substantially higher than GV's and BO's, which surprised us given the amount of required bookkeeping.



References

  1. Aho, A., Hopcroft, J., Ullman, J.: The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading (1974)
  2. Blom, S.C., Orzan, S.: Distributed branching bisimulation reduction of state spaces. In: FMICS 2003. ENTCS, vol. 80, pp. 109–123. Elsevier (2003)
  3. Blom, S.C., van de Pol, J.C.: Distributed branching bisimulation minimization by inductive signatures. In: PDMC 2009. EPTCS, vol. 14, pp. 32–46. Open Publ. Association (2009)
  4. Browne, M.C., Clarke, E.M., Grumberg, O.: Characterizing finite Kripke structures in propositional temporal logic. Theoret. Comput. Sci. 59(1,2), 115–131 (1988)
  5. Chatterjee, K., Henzinger, M.: Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In: SODA 2011, pp. 1318–1336. SIAM (2011)
  6. Cranen, S., Groote, J.F., Keiren, J.J.A., Stappers, F.P.M., de Vink, E.P., Wesselink, W., Willemse, T.A.C.: An overview of the mCRL2 toolset and its recent advances. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013 (ETAPS 2013). LNCS, vol. 7795, pp. 199–213. Springer, Heidelberg (2013)
  7. Garavel, H., Lang, F., Mateescu, R., Serwe, W.: CADP 2011: a toolbox for the construction and analysis of distributed processes. Softw. Tools Technol. Transfer 15(2), 98–107 (2013)
  8. van Glabbeek, R.J., Weijland, W.P.: Branching time and abstraction in bisimulation semantics. J. ACM 43(3), 555–600 (1996)
  9. Groote, J.F., Vaandrager, F.W.: An efficient algorithm for branching bisimulation and stuttering equivalence. In: Paterson, M. (ed.) ICALP 1990. LNCS, vol. 443, pp. 626–638. Springer, Heidelberg (1990)
  10. Groote, J.F., Mousavi, M.R.: Modeling and Analysis of Communicating Systems. The MIT Press, Cambridge (2014)
  11. Kanellakis, P., Smolka, S.: CCS expressions, finite state processes and three problems of equivalence. Inf. Comput. 86, 43–68 (1990)
  12. Li, W.: Algorithms for computing weak bisimulation equivalence. In: TASE 2009, pp. 241–248. IEEE (2009)
  13. Milner, R.: A Calculus of Communicating Systems. LNCS, vol. 92. Springer, Heidelberg (1980)
  14. De Nicola, R., Vaandrager, F.W.: Three logics for branching bisimulation. J. ACM 42, 458–487 (1995)
  15. Paige, R., Tarjan, R.E.: Three partition refinement algorithms. SIAM J. Comput. 16(6), 973–989 (1987)
  16. Ranzato, F., Tapparo, F.: Generalizing the Paige-Tarjan algorithm by abstract interpretation. Inf. Comput. 206(5), 620–651 (2008)
  17. Reniers, M.A., Schoren, R., Willemse, T.A.C.: Results on embeddings between state-based and event-based systems. Comput. J. 57(1), 73–92 (2014)
  18. Virtanen, H., Hansen, H., Valmari, A., Nieminen, J., Erkkilä, T.: Tampere verification tool. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988, pp. 153–157. Springer, Heidelberg (2004)
  19. Wijs, A.: GPU accelerated strong and branching bisimilarity checking. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 368–383. Springer, Heidelberg (2015)

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
