1 Introduction

According to the CAP theorem [4, 5], distributed systems that are prone to partitions can only guarantee availability or consistency. This leads to a spectrum of distributed systems that ranges from highly available systems (AP) to strongly consistent systems (CP), with hybrid systems, which are partly AP and partly CP, in the middle. A substantial body of research has focused on techniques or protocols to propagate updates [7, 16, 19, 20]. In this paper, we focus on language abstractions that ease the development of highly available and partition-tolerant systems, the so-called AP systems.

A state-of-the-art approach towards high availability is conflict-free replicated data types (CRDTs) [19]. CRDTs rely on commutative operations to guarantee strong eventual consistency (SEC), a variation on eventual consistency that provides an additional strong convergence guarantee. This avoids the need for synchronisation, yielding high availability and low latency.

The literature has proposed a portfolio of basic conflict-free data structures such as counters, sets, and linked lists [17, 18, 22]. However, advanced distributed systems require replicated data types that are tailored to the needs of the application. Consider, for example, a real-world collaborative text editor that represents documents as a balanced tree of characters, allowing for logarithmic-time lookups, insertions, and deletions. To the best of our knowledge, the only tree CRDT is the one proposed in [12]. In that approach, balancing the tree requires synchronising the replicas. However, this is not possible in AP systems as it implies giving up on availability.

When the current portfolio of CRDTs falls short, programmers can resort to two solutions. One is to manually engineer the data structure as a CRDT. This requires rethinking the data structure completely such that all operations commute. If the operations cannot be made commutative, programmers need to manually implement conflict resolution. This has been shown to be error-prone and results in brittle systems [1, 9, 19]. Alternatively, programmers can use JSON CRDTs [9] or Lasp [14] to design custom CRDTs. JSON CRDTs let programmers arbitrarily nest linked lists and maps into new CRDTs, whereas Lasp supports functional transformations over existing CRDTs. However, these constructs are not general enough. Consider again the case of a collaborative text editor. Using lists and maps one cannot implement a balanced tree CRDT, nor can one derive a balanced tree from existing CRDTs.

In this paper, we explore a new direction which consists in devising a general-purpose language abstraction for high availability. We design a novel replicated data type called strong eventually consistent replicated object (SECRO). SECROs guarantee SEC by reordering conflicting operations in a way that resolves the conflict. To find a conflict-free ordering of the operations, SECROs rely on application-specific information provided by the programmer through concurrent pre and postconditions defined over the operations of the SECRO. Our approach is based on the idea that conflict detection and resolution naturally depend on the semantics of the application [21].

We evaluate our approach by implementing a real-time collaborative text editor using SECROs and comparing it to a JSON CRDT implementation of the text editor, as proposed in [9]. We present various experiments that quantify the memory usage, execution time, and throughput of both implementations.

2 Strong Eventually Consistent Replicated Objects

In this section, we describe strong eventually consistent replicated objects from a programmer’s perspective. All code snippets are in CScript, a JavaScript extension embodying our implementation of SECROs. We introduce the necessary syntax and features of CScript along with our explanation of SECROs.

2.1 SECRO Data Type

A SECRO is an object that implements an abstract data type and can be replicated to a group of devices. Like regular objects, SECROs contain state in the form of fields, and behaviour in the form of methods. It is not possible to directly access a SECRO’s internal state; instead, all access goes through the methods defined by the SECRO. These methods form the SECRO’s public interface. Methods can be further categorised into accessors (i.e. methods querying the internal state) and mutators (i.e. methods updating the internal state).

As an example, consider the case of a collaborative text editor which organises documents as a balanced tree of characters [15, 22]. Listing 1.1 shows the structure of the Document SECRO. In order to create a new SECRO, programmers extend the SECRO abstract data type. Instead of implementing our own balanced tree data structure, we re-use an existing AVL tree data structure provided by the Closure library.

Listing 1.1 (code not shown). The structure of the Document SECRO.

The Document SECRO defines three accessors (containsId, generateId and indexOf) and two mutators (insertAfter and delete). containsId returns a boolean that indicates the presence or absence of a certain identifier in the document tree. generateId uses a boundary allocation strategy [15] to compute stable identifiers based on the reference identifiers. Finally, indexOf returns the index of a character in the document tree. Note that side-effect-free methods are annotated with @accessor; otherwise, CScript treats them as mutators.

The Document SECRO also defines methods to serialise and deserialise the document, since it will be replicated over the network. Note that deserialisation creates a new replica of the Document SECRO. In order for the receiver to know the Document class, programmers must register their SECRO at the CScript factory (Line 31).

Finally, the Document SECRO forwards insertAfter and delete operations on the text to the underlying AVL tree (as we describe later in Sect. 2.2). Besides the methods defined in the SECRO’s public interface, programmers can also enforce application-specific invariants by associating concurrent preconditions and postconditions to the mutators (Lines 27 to 29). We call these pre and postconditions state validators. State validators are used by the SECRO to order concurrent operations such that they do not violate any invariant. The next section describes them further.
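
To make this structure concrete, the following plain-JavaScript sketch mirrors the Document interface described above. It is not the paper’s Listing 1.1: the Closure AVL tree is replaced by a simple sorted array of {id, char} entries, identifiers are plain numbers, and CScript-specific elements (extending the SECRO type, the @accessor annotation, factory registration) are omitted.

```javascript
// A hypothetical, simplified stand-in for the Document SECRO described above.
class DocumentSketch {
  constructor() {
    this.chars = [];  // entries {id, char}, kept sorted by their stable numeric id
  }

  // accessor: does a character with this identifier exist in the document?
  containsId(id) {
    return this.chars.some(c => c.id === id);
  }

  // accessor: position of the character with the given identifier (-1 if absent)
  indexOf(id) {
    return this.chars.findIndex(c => c.id === id);
  }

  // accessor: a fresh identifier that sorts right after the reference character
  // (a crude stand-in for the boundary allocation strategy of the paper)
  generateId(refId) {
    const i = this.indexOf(refId);
    const prev = i >= 0 ? this.chars[i].id : 0;
    const next = this.chars[i + 1];
    return next !== undefined ? (prev + next.id) / 2 : prev + 1;
  }

  // mutator: insert a character after the character identified by id
  insertAfter(id, char) {
    const newChar = { id: this.generateId(id), char };
    this.chars.push(newChar);
    this.chars.sort((a, b) => a.id - b.id);
    return newChar;
  }

  // mutator: remove the character with the given identifier
  delete(id) {
    this.chars = this.chars.filter(c => c.id !== id);
  }
}
```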

2.2 State Validators

State validators let programmers define the data type’s behaviour in the face of concurrency. State validators are declarative rules that are associated with mutators. Those rules express invariants over the state of the object which must be upheld in the presence of concurrent operations. Behind the scenes, the replication protocol may interleave concurrent operations. From the programmer’s perspective the only guarantee is that these invariants are upheld. State validators come in two forms:

Preconditions. Specify invariants that must hold prior to the execution of their associated operation. As such, preconditions approve or reject the state before applying the actual update. In case of a rejection, the operation is aborted and a different ordering of the operations is tried.

Postconditions. Specify invariants that must hold after the execution of their associated operation. In contrast to preconditions, an operation’s associated postcondition does not execute immediately. Instead, the postcondition executes after all concurrent operations complete. As such, postconditions approve or reject the state that results from a group of concurrent, potentially conflicting operations. In case of a rejection, a different ordering is tried.

In CScript, state validators are methods which are prefixed with the pre or post keyword, defining a pre or postcondition, respectively. To illustrate state validators we again consider the example of a collaborative text editor and present the implementation of the insertAfter and delete methods and their associated preconditions and postconditions. Listing 1.2 contains the insertAfter operation. The id argument on Line 1 is the identifier of the reference character. On Line 2 the method generates a new stable identifier for the character it is inserting. Using this identifier the method creates a new character on Line 3. Finally, Lines 4 and 5 insert the character in the tree and return the newly added character. Lines 7 to 10 define a precondition on insertAfter. The precondition is a method which has the same name as its associated operation and takes as parameters the object’s current state followed by an array containing the arguments that are passed to its associated operation, in this case the id and char arguments passed to insertAfter. The precondition checks that the reference character exists (Line 9).

Listing 1.2 (code not shown). The insertAfter operation with its precondition and postcondition.

Lines 11 to 16 define a postcondition for the insertAfter operation. Similar to preconditions, postconditions are defined as a method which has the same name as its associated operation (insertAfter in this case). However, they take four arguments: (1) the SECRO’s initial state, (2) the state that results from applying the operation (insertAfter), (3) an array with the operation’s arguments, and (4) the operation’s return value (newChar in this case). This postcondition checks that the newly added character occurs at the correct position in the resulting tree, i.e. after the reference character that is identified by id. According to this postcondition any interleaving of concurrent character insertions is valid, e.g. two users may concurrently write “foo” and “bar”, resulting in one of “foobar”, “fboaor”, etc. If the programmer only wants to allow the interleavings “foobar” and “barfoo”, the SECRO must operate at the granularity of words instead of single characters.

Listing 1.3 contains the implementation of the delete method and its associated postcondition. Lines 1 to 3 show that characters are deleted by removing them from the underlying AVL tree. Recall that the character’s stable identifier uniquely identifies the character in the tree. The postcondition on Lines 4 to 7 then ensures that the character no longer occurs in the tree.

Listing 1.3 (code not shown). The delete operation and its postcondition.

Notice that preconditions are less expressive than postconditions, but they avoid unnecessary computations by rejecting invalid states prior to the execution of the operation. Preconditions are also useful to prevent operations from running on a corrupted state, thus improving the system’s robustness.
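
The validator signatures described above can be illustrated with ordinary JavaScript predicates. The sketch below is not the CScript code of Listings 1.2 and 1.3; it only mirrors the described signatures, using the hypothetical DocumentSketch class from the earlier sketch.

```javascript
// precondition(state, args): the reference character must exist before inserting
function preInsertAfter(doc, [id, _char]) {
  return doc.containsId(id);
}

// postcondition(initialState, resultingState, args, returnValue):
// the newly inserted character must occur after its reference character
function postInsertAfter(_initialDoc, resultDoc, [id, _char], newChar) {
  return resultDoc.indexOf(newChar.id) > resultDoc.indexOf(id);
}

// postcondition for delete: the character must no longer occur in the document
function postDelete(_initialDoc, resultDoc, [id], _returnValue) {
  return !resultDoc.containsId(id);
}
```

These are the predicates that the replication protocol of Sect. 3 evaluates when searching for a valid ordering of concurrent operations.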

3 SECRO’s Replication Protocol

A SECRO is a general-purpose language abstraction that guarantees SEC, i.e. eventual consistency and strong convergence. To provide this guarantee, SECROs implement a dedicated optimistic replication protocol. For the purpose of this paper, we describe the protocol in pseudocode.

The SECRO replication protocol propagates update operations to all replicas. In contrast to CRDTs, the operations of a SECRO do not necessarily commute. Therefore, the replication protocol totally orders the operations at all replicas. This order must not violate any of the operations’ pre or postconditions.

For the sake of simplicity, and without loss of generality, we assume a causal order broadcasting mechanism, i.e. a communication medium in which messages arrive in an order that is consistent with the happened-before relation [10]. Note that even though we rely on causal order broadcasting, concurrent operations arrive in arbitrary orders at the replicas.

Intuitively, replicas maintain their initial state and a sequence of operations called the operation history. Each time a replica receives an operation, it is added to the replica’s history, which may require reordering parts of the history. Reordering the history boils down to finding an ordering of the operations that fulfils two requirements. First, the order must respect the causality of the operations. Second, applying all the operations in the given order must not violate any of the concurrent pre or postconditions. An ordering which adheres to these requirements is called a valid execution. As soon as a valid execution is found, each replica resets its state to the initial one and executes the operations in order. Reordering the history is a deterministic process; hence, replicas that received the same operations find the same valid execution.
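
The following sketch captures this intuition under simplifying assumptions: operations are plain objects carrying their mutator, validators, and causal metadata; happenedBefore and cloneState are assumed helpers; and, unlike the actual protocol (Sect. 3.1), each postcondition is checked immediately after its own operation rather than after the whole group of concurrent operations.

```javascript
// Sketch: search for a valid execution of `ops`, i.e. a causality-respecting
// ordering whose replay from the initial state violates no validator.
// `ops` should be passed in a deterministic order (e.g. pre-sorted as in
// Algorithm 1) so that all replicas try the same permutations first.
function findValidExecution(initialState, ops, happenedBefore, cloneState) {
  // candidate orderings = permutations of `ops` that respect causality
  function* linearExtensions(remaining, prefix) {
    if (remaining.length === 0) { yield prefix; return; }
    for (const op of remaining) {
      // op may come next only if none of its causal predecessors is still pending
      const blocked = remaining.some(o => o !== op && happenedBefore(o, op));
      if (blocked) continue;
      yield* linearExtensions(remaining.filter(o => o !== op), [...prefix, op]);
    }
  }

  for (const ordering of linearExtensions(ops, [])) {
    const state = cloneState(initialState);   // reset to the initial state
    let valid = true;
    for (const op of ordering) {
      if (op.pre && !op.pre(state, op.args)) { valid = false; break; }
      const before = cloneState(state);
      const ret = op.apply(state, op.args);   // apply mutates `state`
      if (op.post && !op.post(before, state, op.args, ret)) { valid = false; break; }
    }
    if (valid) return { state, ordering };    // first valid execution wins
  }
  throw new Error('no valid execution exists');
}
```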

The existence of a valid execution cannot be guaranteed if pre and postconditions contradict each other. It is the programmer’s responsibility to provide correct pre and postconditions.

The replication protocol provides the following guarantees:

  1. Eventually, all replicas converge towards the same valid execution (i.e. eventual consistency).

  2. Replicas that received the same updates have identical operation histories (i.e. strong convergence).

  3. Replicas eventually perform the operations of a valid execution if one exists, or issue an error if none exists.

The operation histories of replicas may grow unboundedly as they perform operations. In order to alleviate this issue, we allow replicas to periodically commit their state. Concretely, replicas maintain a version number. Whenever a replica commits, it clears its operation history and increments its version number. The replication protocol then notifies all other replicas of this commit, which adopt the committed state and also empty their operation history. All subsequent operations received by a replica which apply to a previous version number are ignored. As we explain in Sect. 3.1, the commit operation does not require synchronising the replicas and thus does not affect the system’s availability. However, commits come at the price of certain operations being dropped for the sake of a bounded operation history.

3.1 Algorithm

We now detail our replication protocol which makes the following assumptions:

  • Each node in the network may contain any number of replicas of a SECRO.

  • Nodes maintain vector clocks to timestamp the operations of a replica.

  • Nodes are able to generate globally unique identifiers using Lamport clocks.

  • Reading the state of a replica is side-effect free, and mutators solely affect the replica’s state (i.e. their side effects are confined to the replica itself).

  • All messages eventually arrive, i.e. communication is reliable: no message loss nor duplication (e.g. TCP/IP).

  • There are no Byzantine failures, i.e. no malicious nodes.

A replica r is a tuple \( r=(v_{i},s_{0},s_{i},h,id_{c}) \) consisting of the replica’s version number \( v_{i} \), its initial state \( s_{0} \), its current state \( s_{i} \), its operation history h, and the id of the latest commit operation \( id_{c} \). A mutator m is represented as a tuple \( m=(o,p,a) \) consisting of the update operation o, precondition p, and postcondition a. We denote that a mutation \( m_{1} \) happened before \( m_{2} \) using \( m_{1} \prec m_{2} \). Similarly, we denote that two mutations happened concurrently using \( m_{1} \, \Vert \, m_{2} \). Both relations are based on the clocks carried by the mutators [8].
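
For concreteness, the replica and mutator tuples, together with the happened-before and concurrency tests on clocks, could be represented as follows. The field names mirror the tuples above; encoding clocks as plain {nodeId: counter} objects is an assumption of this sketch.

```javascript
// Sketch of the replica tuple (v_i, s_0, s_i, h, id_c)
function makeReplica(initialState) {
  return {
    version: 0,          // v_i
    initialState,        // s_0
    state: initialState, // s_i
    history: [],         // h, the operation history
    lastCommitId: null   // id_c
  };
}

// Sketch of a mutator (o, p, a), extended with (c, id) when broadcast
function makeMutator(op, pre, post, clock, id) {
  return { op, pre, post, clock, id };
}

// c1 ≺ c2: every entry of c1 is ≤ the corresponding entry of c2, at least one is <
function happenedBefore(c1, c2) {
  const nodes = new Set([...Object.keys(c1), ...Object.keys(c2)]);
  let strictlySmaller = false;
  for (const n of nodes) {
    const a = c1[n] || 0, b = c2[n] || 0;
    if (a > b) return false;
    if (a < b) strictlySmaller = true;
  }
  return strictlySmaller;
}

// c1 ∥ c2: neither clock happened before the other
function concurrent(c1, c2) {
  return !happenedBefore(c1, c2) && !happenedBefore(c2, c1);
}
```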

We now discuss in detail the three kinds of operations that are possible on replicas: reading, mutating, and committing state.

 

Reading Replicas. Reading the value of a replica \( (v_{i},s_{0},s_{i},h,id_{c}) \) simply returns its latest local state \( s_{i} \).

Mutating Replicas. When a mutator \( m = (o,p,a) \) is applied to a replica, a mutate message is broadcast to all replicas. Such a message is an extension of the mutator, \( (o, p, a, c, id) \), which additionally contains the node’s logical clock time c and a unique identifier id.

Algorithm 1 (pseudocode not shown). Handling of mutate messages at a replica.

As mentioned before, operations on SECROs do not need to commute by design. Since operations are timestamped with logical clocks, they form a partial order. Algorithm 1 governs the replicas’ behaviour to guarantee SEC by ensuring that all replicas execute the same valid ordering of their operation history.

Algorithm 1 starts when a replica receives a mutate message. The algorithm consists of two parts. First, it adds the mutate message to the operation history, sorts the history according to the \( >> \) total order, and generates all linear extensions of the replica’s sorted history (see Lines 1 and 2). We say that \( m_{1} >> m_{2} \), for \( m_{1} = (o_{1},p_{1},a_{1},c_{1},id_{1}) \) and \( m_{2} = (o_{2},p_{2},a_{2},c_{2},id_{2}) \), iff \( c_{1} \succ c_{2} \, \vee \, (c_{1} \, \Vert \, c_{2} \, \wedge \, id_{1} > id_{2}) \). The generated linear extensions are all the permutations of \( h' \) that respect the partial order defined by the operations’ causal relations. Since replicas deterministically compute linear extensions and start from the same sorted operation history, all replicas generate the same sequence of permutations.
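
A possible JavaScript rendering of this total order is a comparator over mutate messages, reusing the vector-clock tests from the earlier sketch; sorting the history with it gives the deterministic starting point from which the linear extensions are generated.

```javascript
// Sketch of the >> order: the comparator returns a positive number exactly when
// m1 >> m2, i.e. m1's clock happened after m2's, or the clocks are concurrent
// and m1 has the larger unique id (tie-break). With Array.prototype.sort this
// places m1 after m2. `happenedBefore` is the vector-clock test sketched earlier.
function compareMutators(m1, m2) {
  if (happenedBefore(m1.clock, m2.clock)) return -1;   // m1 ≺ m2: m1 first
  if (happenedBefore(m2.clock, m1.clock)) return 1;    // m2 ≺ m1: m2 first
  // concurrent clocks: fall back to the unique ids for a deterministic order
  return m1.id < m2.id ? -1 : m1.id > m2.id ? 1 : 0;
}

// Deterministic sorted history, as in Line 1 of Algorithm 1:
// const sortedHistory = history.slice().sort(compareMutators);
```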

Second, the algorithm searches for the first valid permutation. In other words, for each operation within such a permutation the algorithm checks that the preconditions (Lines 8 to 13) and postconditions (Lines 14 to 18) hold. Remember that postconditions are checked only after all concurrent operations have executed, since those operations happened independently (e.g. during a network partition) and may thus conflict. For this reason, Line 7 computes the transitive closure of concurrent operations for every operation in the linear extension.

Since the “is concurrent” relation is not transitive, one might wonder why we consider operations that are not directly concurrent. To illustrate this, consider a replica \(r_1\) that executes operation \(o_1\) followed by \(o_2\) (\(o_1 \prec o_2\)) while, concurrently, replica \(r_2\) executes operation \(o_3\) (\(o_3 \, \Vert \, o_1 \wedge o_3 \, \Vert \, o_2\)). Since \(o_3\) may affect both \(o_1\) and \(o_2\), we take all three operations into account. This corresponds to the transitive closure \(\{o_1, o_2, o_3\}\). We refer the reader to Appendix A for a proof that no operation can break this transitive closure of concurrent operations.
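
A small sketch of how this closure could be computed: start from the operation itself and keep adding history entries that are concurrent with anything already in the set, as in the \(o_1\)/\(o_2\)/\(o_3\) example above. `concurrent` is the vector-clock test from the earlier sketch.

```javascript
// Transitive closure of the "is concurrent" relation around `op`.
function concurrencyClosure(op, history) {
  const closure = new Set([op]);
  let grew = true;
  while (grew) {
    grew = false;
    for (const other of history) {
      if (closure.has(other)) continue;
      // add `other` if it is concurrent with anything already in the closure
      if ([...closure].some(m => concurrent(m.clock, other.clock))) {
        closure.add(other);
        grew = true;
      }
    }
  }
  return closure;   // e.g. {o1, o2, o3} in the example above
}
```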

Finally, the algorithm returns the replica’s updated state as soon as a valid execution is found; otherwise, it throws an exception.

Committing Replicas. In a nutshell, commit clears a replica’s operation history h, increments the replica’s version, and updates the initial state \( s_{0} \) with the replica’s current state \( s_{i} \). This avoids unbounded growth of operation histories, but operations concurrent with the commit will be discarded.

When a replica is committed, a commit message is broadcast to all replicas (including the committed one). This message is a quadruple \( (s_{i},v_{i},clock,id) \) containing the committed state, the replica’s version number, the current logical clock time, and a unique id.

Algorithm 2 (pseudocode not shown). Handling of commit messages at a replica.

To ensure that replicas converge in the face of concurrent commits, we design commit operations to commute. As a result, commit does not compromise availability. Algorithm 2 dictates how replicas handle commit messages. The algorithm distinguishes between two cases. First, the commit operation commits the current state (see Line 1). The replica’s version is incremented, its initial and current state are set to the committed state, the operation history is cleared, and the id of the last performed commit is updated. Second, the commit operation commits the previous state (see Line 4). This means that the commit operation applies to the previous version \(v_{i-1}\). As a result, the newly received commit operation is concurrent with the last performed commit operation (i.e. the one that caused the replica to update its version from \( v_{i-1} \) to \( v_{i} \)). To ensure convergence, replicas perform the commit operation with the smallest id. This ensures that the order in which commits are received is immaterial, and hence that commit operations commute. Note that the algorithm does not need to tackle the case of committing an older state, since this cannot happen under the assumption of causal order broadcasting.
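
The commit handling could be sketched as follows. It follows the two cases described above but simplifies the bookkeeping for operations already in the history (the full behaviour is given by Algorithm 2, which is not reproduced here); field names follow the earlier replica sketch, and comparing commit ids with < is assumed to be well defined for the globally unique ids.

```javascript
// Sketch of commit handling: concurrent commits commute because the commit with
// the smallest id always wins, regardless of arrival order.
function handleCommit(replica, commit) {
  // Case 1: the commit applies to the replica's current version.
  const commitsCurrentState = commit.version === replica.version;
  // Case 2: the commit applies to the previous version, i.e. it is concurrent
  // with the last performed commit; only a smaller id may supersede it.
  const winsOverLastCommit =
    commit.version === replica.version - 1 && commit.id < replica.lastCommitId;

  if (commitsCurrentState || winsOverLastCommit) {
    replica.version = commit.version + 1;  // bump (case 1) or keep (case 2) the version
    replica.initialState = commit.state;   // the committed state becomes s_0
    replica.state = commit.state;          // ... and the current state s_i
    replica.history = [];                  // clear the operation history
    replica.lastCommitId = commit.id;      // remember which commit was applied
  }
  // Commits for older versions cannot arrive under causal order broadcast;
  // concurrent commits that lose the tie-break (larger id) are ignored.
}
```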

4 Evaluation

We now compare our novel replicated data type to JSON CRDTs, a state-of-the-art approach providing custom CRDTs built atop lists and maps. We perform a number of experiments which quantify the memory usage, execution time, and throughput of the collaborative text editor. We implemented it twice in JavaScript, once using SECROs and once using JSON CRDTs. The JSON CRDT implementation uses a list to represent text documents. The SECRO implementation comes in two variants: one that uses a list and one that uses a balanced tree (described in Sect. 2).

Note that SECROs are designed to ease the development of custom replicated data types guaranteeing SEC. Hence, our goal is not to outperform JSON CRDTs, but rather to evaluate the practical feasibility of SECROs.

4.1 Methodology

All experiments are performed on a cluster consisting of 10 worker nodes which are interconnected through a 10 Gbit twinax connection. Each worker node has an Intel Xeon E3-1240 processor at 3.50 GHz and 32 GB of RAM. Depending on the experiment, the benchmark is either run on a single worker node or on all ten nodes. We specify this for each benchmark.

To get statistically sound results we repeat each benchmark at least 30 times, yielding a minimum of 30 samples per measurement. Each benchmark starts with a number of warmup rounds to minimise the effects of program initialisation. Furthermore, we disable NodeJS’ just-in-time compiler optimisations to obtain more stable execution times.

We perform statistical analysis over our measurements as follows. First, depending on the benchmark, we discard samples that are affected by garbage collection (e.g. in the execution time benchmarks). Then, for each measurement comprising at least 30 samples, we compute the average value and the corresponding 95% confidence interval.
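
As an illustration, the summary statistics described above could be computed as follows, assuming a normal approximation (z ≈ 1.96) for the 95% confidence interval; the paper does not spell out the exact formula, so this is an assumption of the sketch.

```javascript
// Mean and 95% confidence interval for a set of at least 30 samples.
function summarise(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  const halfWidth = 1.96 * Math.sqrt(variance / n);   // 95% CI half-width
  return { mean, ci: [mean - halfWidth, mean + halfWidth] };
}
```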

4.2 Memory Usage

To compare the memory usage of the SECRO and JSON CRDT text editors, we perform an experiment in which 1000 operations are executed on each text editor. We continuously alternate between 100 character insertions and deletions of those 100 characters. We force garbage collection after each operation and measure the heap usage. The resulting measurements are shown in Fig. 1. Green and red columns indicate character insertions and deletions, respectively.

Fig. 1. Memory usage of the collaborative text editors. Error bars represent the 95% confidence interval for the average taken from 30 samples. These experiments are performed on a single worker node of the cluster.

Figure 1a confirms our expectation that the SECRO implementations are more memory efficient than the JSON CRDT one. The memory usage of the JSON CRDT text editor grows without bound since CRDTs cannot delete characters, but merely mark them as deleted using tombstones. Conversely, SECROs support true deletions by reorganising concurrent operations in a non-conflicting order. Hence, all 100 inserted characters are deleted by the following 100 deletions. This results in lower memory usage.

Figure 1b compares the memory usage of the list and tree-based implementations using SECROs. We conclude that the tree-based implementation consumes more memory than the list implementation. The reason is that nodes of a tree maintain pointers to their children, whereas nodes of a singly linked list only maintain a single pointer to the next node. Interestingly, we observe a staircase pattern. This pattern indicates that memory usage grows when characters are inserted (green columns) and shrinks when characters are deleted (red columns). Overall, memory usage increases linearly with the number of executed operations, even though we delete the inserted characters and commit the replica after each operation. Hence, SECROs cause a small memory overhead for each executed operation. This linear increase is shown by the dashed regression lines.

4.3 Execution Time

We now benchmark the time it takes to append characters to a text document. Although this is not a realistic editing pattern, it showcases the worst-case performance. From Fig. 2a we notice that the SECRO versions exhibit quadratic performance, whereas the JSON CRDT version exhibits linear performance. The reason is that reordering the SECRO’s history (see Algorithm 1 in Sect. 3.1) induces a linear overhead on top of the operations themselves. Since insert is also a linear operation, the overall performance of the text editor’s insert operation is quadratic. To address this performance overhead the replica needs to be committed. The effect of commit on the execution time of insert operations is depicted in Appendix B.

Fig. 2. Execution time of character insertions in the collaborative text editors. Replicas are never committed. Error bars represent the 95% confidence interval for the average taken from a minimum of 30 samples. Samples affected by garbage collection are discarded.

Figure 2a also shows that the SECRO implementation that uses a linked list is faster than its tree-based counterpart. To determine the cause of this counterintuitive observation, we measure the different parts that make up the total execution time:

Execution time of operations. Total time spent on append operations.

Execution time of preconditions. Total time spent on preconditions.

Execution time of postconditions. Total time spent on postconditions.

Copy time. Due to the mutability of JavaScript objects, our prototype implementation in CScript needs to copy the state before validating the potential history. The total time spent on copying objects (i.e. the document state) is the copy time.

Figures 5a and b in Appendix C depict the detailed execution time for the list and tree implementations, respectively. The results show that the total execution time is dominated by the copy time. We observe that the tree implementation spends more time on copying the document than the list implementation. The reason is that copying a tree entails a higher overhead than copying a linked list, as more pointers need to be copied. Furthermore, the tree implementation spends considerably less time executing operations, preconditions, and postconditions than the list implementation. This results from the fact that the balanced tree provides logarithmic-time operations.

Unfortunately, the time overhead incurred by copying the document outweighs the speedup we gain from organising the document as a tree. This is because each insertion inserts only a single character but requires the entire document to be copied. To validate this hypothesis, we re-execute the benchmark shown in Fig. 2a but insert 100 characters per operation. Figure 2b shows the resulting execution times. As expected, the tree implementation now outperforms the list implementation. This means that the speedup obtained from 100 logarithmic insertions exceeds the copying overhead induced by the tree. In practice, this means that single-character manipulations are too fine-grained; manipulating entire words, sentences, or even paragraphs is more beneficial for performance.

Overall, the execution time benchmarks show that deep copying the document induces a considerable overhead. We believe that this overhead is not inherent to SECROs, but to their implementation on top of a mutable language such as JavaScript.

4.4 Throughput

The experiments presented so far focused on the execution time of sequential operations on a single replica. To measure the throughput of the text editors under high computational loads, we also perform distributed benchmarks. To this end, we use 10 replicas (one on each node of the cluster) and let them simultaneously perform operations on the text editor. The operations are spread equally over the replicas. We measure the time to convergence, i.e. the time needed for all replicas to process all operations and reach a consistent state. Note that replicas reorder operations locally; hence, the throughput depends on the number of operations and is independent of the number of replicas.

Fig. 3. Throughput of the list-based SECRO and JSON CRDT text editors, as a function of the number of concurrent operations. The SECRO version committed the document replica at a commit interval of 100. Error bars represent the 95% confidence interval for the average of 30 samples.

Figure 3 depicts how the throughput of the list-based text editor varies as a function of the load. We observe that the SECRO text editor scales up to 50 concurrent operations, at which point it reaches its maximal throughput. Afterwards, the throughput quickly degrades. The JSON CRDT implementation, on the other hand, achieves a higher throughput than the SECRO version under high loads (100 concurrent operations and more). Hence, the JSON CRDT text editor scales better than the SECRO text editor, but SECROs are general-purpose, which allowed us to organise documents as balanced trees of characters.

5 Related Work

We now discuss work that is closely related to the ideas presented in this paper. Central to SECROs is the idea of employing application-specific information to reorder conflicting operations. Bayou [21] was the first system to use application-level semantics for conflict resolution by means of merge procedures provided by users. Our work, however, does not require manual resolution of conflicts. Instead, programmers only need to specify the invariants the application should uphold in the face of concurrent updates, and the underlying update algorithm deterministically orders operations.

Within the CRDT literature, the research on JSON CRDTs [9] is the most closely related to our work. JSON CRDTs aim to ease the construction of CRDTs by hiding the commutativity restriction that traditionally applies to the operations. Programmers can build new CRDTs by nesting lists and maps in arbitrary ways. The major shortcoming is that nesting lists and maps does not suffice to implement arbitrary replicated data types. Hence, unlike SECROs, JSON CRDTs are not truly general-purpose.

Lasp [14] is the first distributed programming language where CRDTs are first-class citizens. New CRDTs are defined through functional transformations over existing ones. In contrast, SECROs are not limited to a portfolio of existing data types that can be extended. Any existing data structure can be turned into a SECRO by associating state validators to the operations.

Besides CRDTs, cloud types [6] are high-level data types that can be replicated over the network. Similar to SECROs, cloud types do not impose restrictions on the operations of the replicated data type. However, cloud types hardcode how to merge updates coming from different replicas of the same type. As such, programmers have no means to customise the merge process of cloud types to fit the application’s semantics. Instead, they must implement a new cloud type and the accompanying merge procedure that fits the application. Hence, conflict resolution still needs to be handled manually.

Some work has considered a hybrid approach offering SEC for commutative operations, and requiring strong consistency for non-commutative ones [2, 3]. These approaches bear some similarity to SECROs, as they employ application-specific invariants to classify operations as safe or unsafe under concurrent execution. However, they synchronise unsafe operations, whereas SECROs reorder unsafe operations so as to avoid conflicts without giving up on availability. Partial Order-Restrictions (PoR) consistency [13] uses application-specific restrictions over operations but guarantees neither convergence nor invariant preservation, since these properties depend on the restrictions over the operations specified by the programmer.

6 Conclusion

In this work, we propose strong eventually consistent replicated objects (SECROs), a data type that guarantees SEC without imposing restrictions on the operations. SECROs do not avoid conflicts by design, but instead compute a global total order of the operations that is conflict-free, without synchronising the replicas. To this end, SECROs use state validators: application-specific invariants that determine the object’s behaviour in the face of concurrency.

To the best of our knowledge, SECROs are the first approach to support truly general-purpose replicated data types while still guaranteeing SEC. By specifying state validators, arbitrary data types can be turned into highly available replicated data types. This means that replicated data types can be implemented similarly to their sequential local counterparts, with the addition of preconditions and postconditions to define the concurrent semantics. We showcase the flexibility of SECROs through the implementation of a collaborative text editor that stores documents as a tree of characters. The implementation re-uses a third-party AVL tree and turns it into a replicated data type using SECROs.

We compared our SECRO-based collaborative text editor to a state-of-the-art implementation that uses JSON CRDTs. The benchmarks reveal that SECROs manage memory efficiently, whereas the memory usage of JSON CRDTs grows without bound. Time complexity benchmarks reveal that SECROs induce a linear time overhead which is proportional to the size of the operation history. Performance-wise, SECROs can be competitive with state-of-the-art solutions if replicas are committed regularly.