The Rewrite Engines Competitions: A RECtrospective

  • Francisco Durán
  • Hubert Garavel
Open Access
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11429)

Abstract

Term rewriting is a simple, yet expressive model of computation, which finds direct applications in specification and programming languages (many of which embody rewrite rules, pattern matching, and abstract data types), but also indirect applications, e.g., to express the semantics of data types or concurrent processes, to specify program transformations, to perform computer-aided verification, etc. The Rewrite Engines Competition (REC) was created under the aegis of the Workshop on Rewriting Logic and its Applications (WRLA) to serve three main goals: (i) being a forum in which tool developers and potential users of term rewrite engines can share experience; (ii) bringing together the various language features and implementation techniques used for term rewriting; and (iii) comparing the available term rewriting languages and tools in their common features. The present article provides a retrospective overview of the four editions of the Rewrite Engines Competition (2006, 2008, 2010, and 2018) and traces their evolution over time.

1 Introduction

When searching Google for “rewrite engine”, most of the results are about Apache web servers and rewrite engines for URLs. Such engines perform string rewriting, which is a particular case of term rewriting [1, 3], a very general model of computation based on the repeated application of simplification rules. Despite its simplicity, term rewriting has proven to be a suitable paradigm for expressing fundamental concepts of logic, mathematics, and computer science (e.g., concurrency, communication, and interaction).
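
To make this concrete, consider Peano addition defined by two rewrite rules: plus(0, y) -> y and plus(s(x), y) -> s(plus(x, y)). The following Haskell sketch (Haskell being one of the languages later assessed in REC4) encodes these rules and normalizes terms by applying them repeatedly, innermost first; it illustrates the model of computation only, not how any competing engine is implemented:

    -- Peano naturals with addition, viewed as a term rewrite system:
    --   plus(0, y)    -> y
    --   plus(s(x), y) -> s(plus(x, y))
    data Term = Zero | Succ Term | Plus Term Term
      deriving Show

    -- Innermost normalization: rewrite the arguments first, then apply
    -- whichever rule matches at the root, until no rule applies.
    normalize :: Term -> Term
    normalize Zero       = Zero
    normalize (Succ t)   = Succ (normalize t)
    normalize (Plus a b) = plus (normalize a) (normalize b)
      where
        plus Zero     y = y                -- rule: plus(0, y)    -> y
        plus (Succ x) y = Succ (plus x y)  -- rule: plus(s(x), y) -> s(plus(x, y))
        plus x        y = Plus x y         -- no rule applies: normal form

For example, normalize (Plus (Succ Zero) (Succ Zero)) evaluates to Succ (Succ Zero), i.e., 1 + 1 = 2.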

Beyond such theoretical aspects, the ideas of term rewriting influenced the design of specification and programming languages, many of which incorporate algebraic terms and rewrite rules. Software implementations of term rewriting have been developed including, of course, rewrite engines, but also a large variety of tools for compiler construction, program transformation, and formal verification by theorem proving or model checking.

In order to evaluate and compare the various rewrite engines available, a software competition named REC (Rewrite Engines Competition) was created in 2006. Organized together with WRLA (Workshop on Rewriting Logic and its Applications), REC provides a forum for sharing experiences among tool developers and potential users. Four editions of this competition have taken place so far: REC1 (2006), REC2 (2008), REC3 (2010), and REC4 (2018).

The present article, which is part of the TOOLympics project to celebrate the 25th anniversary of the TACAS conference by gathering numerous software competitions at ETAPS, provides a retrospective overview of past editions of the REC competition. Section 2 summarizes the evolution of the competition; Sect. 3 lists all tools that have been assessed; Sect. 4 presents the collection of benchmarks accumulated during the successive editions; finally, Sect. 5 draws perspectives for future editions of the REC competition.

2 Evolution of REC Competitions

In the mid-2000s, it became manifest that the term-rewriting community lacked a comparative study of the different rewrite engines available. The following excerpt, quoted from [11], articulates the motivation for such a study:

“The idea of organizing a rewrite competition arose from noticing various applications of rewriting in different areas and by different categories of researchers, many of them manifesting a genuine and explicit interest in term rewriting. We believe that many of us can benefit from such rewrite engine competitions, provided that they are fair and explicitly state what was tested in each case. For example, users of rewrite engine can more informatively select the right rewrite engine for their particular application. On the other hand, for rewrite engine developers, such events give them ideas on how to improve their tools and what to prioritize, as well as a clearer idea of how their engine compares to others.”

It was not clear, however, how to conduct such a study. The abstract and general nature of term rewriting has given birth to a great diversity in software implementations. General-purpose rewrite engines differ in the various forms of rewriting they support (conditional, nondeterministic, context-sensitive, etc.). Many other rewrite engines are specialized for particular problems and embedded into programming languages, theorem provers, environments for compiler construction and program transformation, etc. (see Sect. 3 for examples).

REC1 [11] faced such doubts about the right approach to follow and decided to focus on efficiency, measured in terms of CPU time and memory use. Only two tools participated in this first edition of the competition, organized together with WRLA 2006. A collection of benchmarks, namely term rewrite systems sorted in four categories (see Sect. 4), was produced. Each benchmark was translated by hand into the input language of each participating tool, and revised by tool developers to make sure this code was optimal for their tools.

REC2 [15] expanded on the ideas of REC1, with a double goal: (i) broadening the comparison by assessing the efficiency of a larger number of rewrite engines (indeed, five tools participated in REC2), and (ii) serving as a showcase for the term-rewriting community, with a dedicated session at WRLA 2008 in which all participating tools were presented by their developers, who exposed the features and strengths of each tool and discussed the outcomes of the competition. Tool developers actively participated in the whole process of REC2, not merely adapting the competition benchmarks to their tools, but also exchanging views on how to organize the competition and present its results. As a result of these fruitful discussions, several changes were implemented, such as the design of a common language for expressing the benchmarks (see Sect. 4 below).

REC3 [14] followed the same approach as REC2, with a greater emphasis on automation and a larger set of term-rewriting benchmarks—including problems related to program transformation, a key application area of term rewriting. The developers of all the participating tools were involved in this competition, organized together with WRLA 2010. The reported results indicate the computation time spent by each tool on each benchmark.

REC4 [17] was the result of a long-term effort undertaken in 2015 and presented at WRLA 2018. The competition’s scope was broadened beyond traditional rewrite engines to include functional and object-oriented languages. As a consequence, REC4 did not consider particular features implemented only in some tools, but focused instead on basic features common to all tools, namely term rewrite systems that are confluent and terminating, with free constructors and conditional rules. Tool execution and the comparison of results were fully automated, making it unnecessary to involve tool developers directly in the competition, although they were contacted by email, in case of problems, before the presentation of the results. A Top-5 podium was produced to indicate which tools could tackle the most problems within a given amount of time and memory.
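
As an illustration of this common core (a sketch under the restrictions just listed, not the official REC4 encoding), consider list insertion defined by the conditional rules insert(x, nil) -> cons(x, nil), insert(x, cons(y, l)) -> cons(x, cons(y, l)) if x <= y, and insert(x, cons(y, l)) -> cons(y, insert(x, l)) if x > y. This system is confluent and terminating, its constructors (0, s, nil, cons) are free, and its conditional rules map directly onto guarded equations in a functional language such as Haskell:

    -- Peano naturals and lists of naturals, with free constructors.
    data Nat  = Z | S Nat            deriving (Eq, Ord, Show)
    data List = Nil | Cons Nat List  deriving Show

    -- Conditional rewrite rules become guarded equations; the guards
    -- correspond to the Boolean premises of the rules.
    insert :: Nat -> List -> List
    insert x Nil = Cons x Nil
    insert x (Cons y l)
      | x <= y    = Cons x (Cons y l)    -- premise: x <= y
      | otherwise = Cons y (insert x l)  -- premise: x > y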

3 Tools Assessed

So far, no fewer than 18 tools have been assessed during the REC competitions, as shown in Table 1, which lists the tools that participated in each edition of the competition. Not all tools have been assessed in all editions; for instance, the development of prominent tools such as ELAN [4] and ASF+SDF [6] halted before or shortly after REC1.
Table 1. Languages and tools considered in the Rewrite Engines Competitions

    language (tool)           web site                               REC1   REC2   REC3   REC4
    ASF+SDF [6]               http://www.meta-environment.org         ×      ×      ×
    CafeOBJ [12]              http://cafeobj.org                                            ×
    Clean [26]                http://clean.cs.ru.nl                                         ×
    Haskell (GHC) [22]        http://www.haskell.org                                        ×
    LNT (CADP) [8, 16]        http://cadp.inria.fr                                          ×
    LOTOS (CADP) [16, 19]     http://cadp.inria.fr                                          ×
    Maude [9]                 http://maude.cs.illinois.edu            ×      ×      ×      ×
    mCRL2 [18]                http://www.mcrl2.org                                          ×
    OCaml [21]                http://www.ocaml.org                                          ×
    Opal (OCS) [25]           http://github.com/TU-Berlin/opal                              ×
    Rascal [5]                http://www.rascal-mpl.org                                     ×
    Scala [24]                http://www.scala-lang.org                                     ×
    SML (MLton) [23]          http://www.mlton.org                                          ×
    SML (SML/NJ) [23]         http://www.smlnj.org                                          ×
    Stratego/XT [7]           http://www.metaborg.org                        ×      ×      ×
    TermWare [13]             http://gradsoft.ua/index_eng.html              ×
    Tom [2]                   http://tom.loria.fr                            ×      ×      ×
    TXL [10]                  http://txl.ca                                         ×

It is worth pointing out the versatility of term rewriting and the diversity of its implementations. It is used in both specification and programming languages. These languages can be algebraic (e.g., CafeOBJ, LOTOS, Maude, mCRL2, Stratego/XT), functional (e.g., Clean, Haskell, LNT, OCaml, SML), or object-oriented (e.g., Rascal, Scala, Tom), and certain languages combine several of these traits, such as Opal, which is both algebraic and functional, or OCaml, which is both functional and object-oriented. Some languages also support higher-order programming (e.g., Haskell, OCaml), while others have built-in support for concurrency (e.g., LOTOS, LNT, Maude, mCRL2). Implementations encompass compilers and interpreters, and certain languages (e.g., OCaml and Rascal) offer both, while other approaches (e.g., Tom) enable term rewrite systems to be embedded in general-purpose languages such as C or Java. Finally, some implementations (e.g., ASF+SDF and Stratego/XT) provide rich environments for language design, including support for lexical/syntactic analysis, construction and traversal of abstract syntax trees, and program transformations.

4 REC Benchmarks

As a byproduct of the efforts made in organizing the four REC competitions, a collection of benchmarks has been progressively accumulated¹.

REC1 [11] set up the foundations of this collection by gathering 41 term rewrite systems, split into four distinct categories: unconditional term rewrite systems (in which no rewrite rule has Boolean premises), conditional term rewrite systems (in which some rewrite rules have Boolean premises), rewriting modulo axioms (in which rewriting relies on certain axioms, such as commutativity and/or associativity), and rewriting modulo strategies (in which rewriting is context-sensitive, guided by local strategies). Several REC1 benchmarks were derived from generic benchmarks parameterized by variables (e.g., the parameter of a function computing the factorial of a natural number, the length of a list to be sorted, etc.) by giving particular values to these variables. Following the terminology used for the Model Checking Contest [20], we distinguish between models, which are generic benchmarks, and instances, which are benchmarks derived from generic benchmarks by giving actual values to parameters; the remaining benchmarks, which are not parameterized, are counted both as models and instances.
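
The third category deserves a brief illustration. An axiom such as commutativity, plus(x, y) = plus(y, x), cannot simply be oriented into a rewrite rule, since applying it repeatedly would never terminate; engines supporting this category therefore match rule patterns modulo the axiom. The following Haskell fragment is a minimal sketch of this idea (illustrative only; engines such as Maude implement far more general associative-commutative matching):

    data Term = Var String | Con Int | Plus Term Term
      deriving Show

    -- The rule plus(0, x) -> x, applied modulo commutativity: besides the
    -- syntactic match, the arguments may also be matched in swapped order,
    -- so that plus(t, 0) rewrites to t as well.
    rewriteZero :: Term -> Maybe Term
    rewriteZero (Plus (Con 0) x) = Just x   -- syntactic match
    rewriteZero (Plus x (Con 0)) = Just x   -- match modulo commutativity
    rewriteZero _                = Nothing

The fourth category does not change how rules match but restricts where they may be applied: local strategies designate, for each operator, which argument positions may be evaluated.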

REC2 [15] brought a significant evolution. In REC1, each benchmark was specified in the input language of each tool, which was feasible only because the number of tools was small. REC2 instead introduced a common language for expressing its benchmarks, which we name REC-2008 and which was inspired by the TPDB language used at that time by the Termination Competition (the Confluence Competition uses a similar language). Several tools were adapted to accept this new language REC-2008 as input; for the other tools, translation was done manually.

REC3 [14] continued in the same vein as REC2, while increasing the number of instances. REC3 also tried to expand the scope of the competition by introducing a separate collection of benchmarks intended for program transformation and expressed in an imperative language named TIL; however, this initiative saw no follow-up.

REC4 [17], in order to address a larger set of specification and programming languages, introduced a new language REC-2017, derived from REC-2008 with additional restrictions ensuring that benchmarks are deterministic (hence confluent), terminating, and free from equations between constructors. Consequently, the 3rd and 4th categories (rewriting modulo equations and rewriting modulo strategies) were removed, and the 1st and 2nd categories (unconditional and conditional rewriting) were merged into a single one, as most languages do not make such a distinction. The remaining REC-2008 benchmarks were upgraded to the REC-2017 language, and many new, significantly more complex benchmarks were added to the collection. To provide an objective comparison, scripts were developed to translate REC-2017 specifications into the input languages of all tools under assessment.
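
These translation scripts are not reproduced here; the following Haskell fragment is a hypothetical sketch of their general shape (the Term and Rule types and the printer are illustrative assumptions, not the actual REC4 code). The idea is to parse each benchmark once into an abstract syntax and then pretty-print it into the concrete syntax of every tool under assessment:

    import Data.List (intercalate)

    -- Abstract syntax for rewrite rules (hypothetical, for illustration).
    data Term = Var String | App String [Term]
    data Rule = Rule Term Term        -- left-hand side, right-hand side

    -- Render a term in prefix notation with parentheses.
    render :: Term -> String
    render (Var v)    = v
    render (App f []) = f
    render (App f ts) = f ++ "(" ++ intercalate ", " (map render ts) ++ ")"

    -- One such printer per target language; here, a Maude-like equation
    -- (the target syntax is likewise hedged, for illustration only).
    toMaudeEq :: Rule -> String
    toMaudeEq (Rule l r) = "eq " ++ render l ++ " = " ++ render r ++ " ."

For instance, toMaudeEq (Rule (App "plus" [App "0" [], Var "y"]) (Var "y")) yields the string "eq plus(0, y) = y .".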

Table 2 gives a quantitative overview of the evolution of the REC benchmark collection; each cell of the form “(m) n” denotes m models and n instances.
Table 2. Benchmarks considered in the Rewrite Engines Competitions

    category                              REC1             REC2       REC3       REC4
    source language                       tool-specific    REC-2008   REC-2008   REC-2017
    unconditional term rewrite systems    (5) 7            (5) 12     (7) 26     (19) 43
    conditional term rewrite systems      (9) 25           (8) 18     (6) 17     (24) 42
    rewriting modulo equations            (4) 9            (4) 6      (4) 6      (0) 0
    rewriting modulo strategies           (0) 0            (1) 1      (1) 3      (0) 0
    total                                 (18) 41          (18) 37    (18) 52    (43) 85

5 Conclusion

Term rewriting is a fundamental topic with many applications, as illustrated by the multiplicity of term-rewriting implementations in compilers and interpreters.

The Rewrite Engines Competitions (REC), whose evolution has been reviewed in the present article, stimulate research interest in this field. One main lesson to be retained from these competitions is that the performance of term rewriting differs significantly across implementations: there is room for enhancement, and, following the latest REC competition (2018), three developer teams have already reported plans to improve their tools to take the REC results into account.

Future REC competitions should address at least two points: (i) more languages should be assessed, inviting recent tools into the competition and keeping in mind that some tools may disappear if they are no longer maintained; (ii) more benchmarks should be considered, which will require a dedicated effort to develop new ones, given the lack of large, computationally intensive term rewrite systems freely available on the Web and the subtle semantic differences between the various flavours of term rewrite systems.

Footnotes

  1. These are available from http://rec.gforge.inria.fr.

References

  1. Baader, F., Nipkow, T.: Term Rewriting and All That. Cambridge University Press, Cambridge (1998)
  2. Balland, E., Brauner, P., Kopetz, R., Moreau, P.-E., Reilles, A.: Tom: Piggybacking rewriting on Java. In: Baader, F. (ed.) RTA 2007. LNCS, vol. 4533, pp. 36–47. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73449-9_5
  3. Bezem, M., Klop, J., de Vrijer, R., Terese (group) (eds.): Term Rewriting Systems. Cambridge Tracts in Theoretical Computer Science, vol. 55. Cambridge University Press, Cambridge (2003)
  4. Borovanský, P., Kirchner, C., Kirchner, H., Moreau, P.E., Ringeissen, C.: An overview of ELAN. Electron. Notes Theor. Comput. Sci. 15, 55–70 (1998)
  5. van den Bos, J., Hills, M., Klint, P., van der Storm, T., Vinju, J.J.: Rascal: from algebraic specification to meta-programming. In: Durán, F., Rusu, V. (eds.) Proceedings of the 2nd International Workshop on Algebraic Methods in Model-based Software Engineering (AMMSE 2011), Zurich, Switzerland. Electronic Proceedings in Theoretical Computer Science, vol. 56, pp. 15–32, June 2011
  6. van den Brand, M., Heering, J., Klint, P., Olivier, P.A.: Compiling language definitions: the ASF+SDF compiler. ACM Trans. Program. Lang. Syst. 24(4), 334–368 (2002)
  7. Bravenboer, M., Kalleberg, K.T., Vermaas, R., Visser, E.: Stratego/XT 0.17 – a language and toolset for program transformation. Sci. Comput. Program. 72(1–2), 52–70 (2008)
  8. Champelovier, D., Clerc, X., Garavel, H., Guerte, Y., McKinty, C., Powazny, V., Lang, F., Serwe, W., Smeding, G.: Reference Manual of the LNT to LOTOS Translator (Version 6.6). INRIA, Grenoble, France, February 2017
  9. Clavel, M., Durán, F., Eker, S., Escobar, S., Lincoln, P., Martí-Oliet, N., Meseguer, J., Talcott, C.L.: Maude Manual (Version 2.7.1), July 2016
  10. Cordy, J.R.: The TXL source transformation language. Sci. Comput. Program. 61(3), 190–210 (2006)
  11. Denker, G., Talcott, C.L., Rosu, G., van den Brand, M., Eker, S., Serbanuta, T.F.: Rewriting logic systems. Electron. Notes Theor. Comput. Sci. 176(4), 233–247 (2007)
  12. Diaconescu, R., Futatsugi, K.: CafeOBJ Report – The Language, Proof Techniques, and Methodologies for Object-Oriented Algebraic Specification. AMAST Series in Computing, vol. 6. World Scientific (1998)
  13. Doroshenko, A.E., Shevchenko, R.: A rewriting framework for rule-based programming dynamic applications. Fundam. Inform. 72(1–3), 95–108 (2006)
  14. Durán, F., Roldán, M., Bach, J.C., Balland, E., van den Brand, M., Cordy, J.R., Eker, S., Engelen, L., de Jonge, M., Kalleberg, K.T., Kats, L.C.L., Moreau, P.E., Visser, E.: The third Rewrite Engines Competition. In: Ölveczky, P.C. (ed.) WRLA 2010. LNCS, vol. 6381, pp. 243–261. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-16310-4_16
  15. Durán, F., Roldán, M., Balland, E., van den Brand, M., Eker, S., Kalleberg, K.T., Kats, L.C.L., Moreau, P.E., Schevchenko, R., Visser, E.: The second Rewrite Engines Competition. Electron. Notes Theor. Comput. Sci. 238(3), 281–291 (2009)
  16. Garavel, H., Lang, F., Mateescu, R., Serwe, W.: CADP 2011: a toolbox for the construction and analysis of distributed processes. Int. J. Softw. Tools Technol. Transf. (STTT) 15(2), 89–107 (2013)
  17. Garavel, H., Tabikh, M.-A., Arrada, I.-S.: Benchmarking implementations of term rewriting and pattern matching in algebraic, functional, and object-oriented languages – The 4th Rewrite Engines Competition. In: Rusu, V. (ed.) WRLA 2018. LNCS, vol. 11152, pp. 1–25. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99840-4_1
  18. Groote, J., Mousavi, M.: Modeling and Analysis of Communicating Systems. The MIT Press, Cambridge (2014)
  19. ISO/IEC: LOTOS – A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour. International Standard 8807, International Organization for Standardization – Information Processing Systems – Open Systems Interconnection, Geneva, September 1989
  20. Kordon, F., Garavel, H., Hillah, L.M., Paviot-Adet, E., Jezequel, L., Rodríguez, C., Hulin-Hubard, F.: MCC’2015 – the fifth Model Checking Contest. In: Koutny, M., Desel, J., Kleijn, J. (eds.) Transactions on Petri Nets and Other Models of Concurrency XI. LNCS, vol. 9930, pp. 262–273. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53401-4_12
  21. Leroy, X., Doligez, D., Frisch, A., Garrigue, J., Rémy, D., Vouillon, J.: The OCaml System Release 4.04 – Documentation and User’s Manual. INRIA, Paris, France, March 2016
  22. Marlow, S. (ed.): Haskell 2010 Language Report, April 2010
  23. Milner, R., Tofte, M., Harper, R., MacQueen, D.: The Definition of Standard ML (Revised). MIT Press, Cambridge (1997)
  24. Odersky, M., Altherr, P., Cremet, V., Dubochet, G., Emir, B., Haller, P., Micheloud, S., Mihaylov, N., Moors, A., Rytz, L., Schinz, M., Stenman, E., Zenger, M.: The Scala Language Specification – Version 2.11. Programming Methods Laboratory, EPFL, Switzerland, March 2016
  25. Pepper, P., Lorenzen, F. (eds.): The Programming Language Opal – 6th Corrected Edition. Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany, October 2012
  26. Plasmeijer, R., van Eekelen, M., van Groningen, J.: Clean Version 2.2 Language Report. Department of Software Technology, University of Nijmegen, The Netherlands, December 2011

Copyright information

© The Author(s) 2019

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Universidad de Málaga, Málaga, Spain
  2. Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LIG, Grenoble, France
