1 Introduction

Logical frameworks make it easier to mechanize formal systems and proofs about them by providing a single meta-language with abstractions and primitives for common and recurring concepts, like variables and assumptions in proofs. This can have a major impact on the effort and cost of mechanization. By factoring out and abstracting over low-level details, it reduces the time it takes to mechanize formal systems, avoids errors in manipulating low-level operations, and makes the mechanizations themselves easier to maintain. It can also make an enormous difference when it comes to proof checking and constructing meta-theoretic proofs, as we focus on the essential aspect of a proof without getting bogged down in the quagmire of bureaucratic details.

The contextual logical framework [20, 21], an extension of the logical framework LF [14], is designed to support a broad range of common features that are needed for mechanizations of formal systems. To model variables, assumptions and derivations, programmers can take advantage of higher-order abstract syntax (HOAS) trees; a context of assumptions together with properties about uniqueness of assumptions can be represented abstractly using first-class contexts and context variables [21]; single and simultaneous substitutions together with their equational theory are supported via first-class substitutions [7, 8]; finally, derivation trees that depend on a context of assumptions can be precisely described via contextual objects [20]. This last aspect is particularly important. By encapsulating and representing derivation trees together with their surrounding context of assumptions, we can analyze and manipulate these rich syntactic structures via pattern matching, and can construct (co)inductive proofs by writing recursive programs about them [6, 24]. This leads to a modular and robust design where we cleanly separate the representation of formal systems and derivations from the (co)inductive reasoning about them.

Substructural frameworks such as the linear logical framework LLF [9] provide additional abstractions to elegantly model the behaviour of imperative operations such as updating and deallocating memory [12, 30] and concurrent computation (see for example session types [5]). However, it has been very challenging to mechanize proofs about LLF specifications as we must manage mixed contexts of unrestricted and linear assumptions. When constructing a derivation tree, we must often split the linear resources and distribute them to the premises relying on a context join operation, written as \(\varPsi = \varPsi _1 \bowtie \varPsi _2\). This operation should be commutative and associative. Unrestricted assumptions present in \(\varPsi \) should be preserved in both contexts \(\varPsi _1\) and \(\varPsi _2\). The mix of unrestricted and restricted assumptions leads to an intricate equational theory of contexts that often stands in the way of mechanizing linear or separation logics in proof assistants and has spurred the development of specialized tactics [2, 16].

Our main contribution is the design of Lincx (read: “lynx”), a contextual linear logical framework with first-class contexts that may contain both intuitionistic and linear assumptions. On the one hand our work extends the linear logical framework LLF with support for first-class linear contexts together with an equational theory of context joins, contextual objects and contextual types; on the other we can view Lincx as a generalization of contextual LF to model not only unrestricted but also linear assumptions. Lincx hence allows us to abstractly represent syntax trees that depend on a mixed context of linear and unrestricted assumptions, and can serve as a foundation for mechanizing the meta-theory of stateful systems where we implement (co)inductive proofs about linear contextual objects by pattern matching following the methodology outlined by Cave and Pientka [6] and Thibodeau et al. [29]. Our main technical contributions are:

(1) A bi-directional decidable type system that characterizes only canonical forms of our linear LF objects. Consequently, exotic terms that do not represent legal objects of our object language are ruled out. It is an inherent property of our design that bound variables cannot escape their scope, and no separate reasoning about scope is required. To achieve this we rely on hereditary substitution to guarantee that normal forms are preserved. Equality of two contextual linear LF objects then reduces to syntactic equality (modulo \(\alpha \)-renaming).

(2) Definition of first-class (linear) contexts together with an equational theory of context joins. A context in Lincx may contain both unrestricted and linear assumptions. This not only allows for a uniform representation of contexts, but also leads to a uniform representation of simultaneous substitutions. Context variables are indexed and their indices are freely built from elements of an infinite, countable set through a context join operation (\(\bowtie \)) that is associative, commutative and has a neutral element. This allows a canonical representation of contexts and context joins. In particular, we can consider contexts equivalent modulo associativity and commutativity. This substantially simplifies the meta-theory of Lincx and also directly gives rise to a clean implementation of context joins which we exploit in our mechanization of the meta-theoretic properties of Lincx.

(3) Mechanization of Lincx together with its meta-theory in the proof assistant Beluga [23]. Our development takes advantage of higher-order abstract syntax to model binding structures compactly. We only model linearity constraints separately. We have mechanized our bi-directional type-theoretic foundation together with our equational theory of contexts. In particular, we mechanized all the key properties of our equational theory of context joins and the substitution properties our theory satisfies.

We believe that Lincx is a significant step towards modelling (linear) derivation trees as well-scoped syntactic structures that we can analyze and manipulate via case-analysis and implementing (co)inductive proofs as (co)recursive programs. As it treats contexts, where both unrestricted and linear assumptions live, abstractly and factors out the equational theory of context joins, it eliminates the need for users to explicitly state basic mathematical definitions and lemmas and build up the basic necessary infrastructure. This makes the task easier and lowers the costs and effort required to mechanize properties about imperative and concurrent computations.

2 Motivating Examples

To illustrate how we envision using (linear) contextual objects and (linear) contexts, we implement two program transformations on object languages that exploit linearity. We first represent our object languages in Lincx and then write recursive programs that analyze the syntactic structure of these objects by pattern matching. This highlights the role that contexts and context joins play.

2.1 Example: Code Simplification

To illustrate the challenges that contexts pose in the linear setting, we implement a program that translates linear Mini-ML expressions that feature let-expressions into a linear core lambda calculus. We define the linear Mini-ML using the linear type ml and our linear core lambda calculus using the linear type lin as our target language. We introduce a linear LF type together with its constructors using the keyword Linear LF.

figure a

We use the linear implication to describe the linear function space and we model variable bindings that arise in abstractions and let-expressions using higher-order abstract syntax, as is common in logical frameworks. This encoding technique exploits the function space provided by LF to model variables. In linear LF it also ensures that bound variables are used only once.

Our goal is to implement a simple translation of Mini-ML expressions to the core linear lambda calculus by eliminating all let-expressions and transforming them into function applications. We thus need to traverse Mini-ML expressions recursively. As we go under an abstraction or a let-expression, our sub-expression will not, however, remain closed. We therefore model a Mini-ML expression together with its surrounding context in which it is meaningful. Our function trans takes a Mini-ML expression in a context , written as , and returns a corresponding expression in the linear lambda calculus in a context , an object of type . More precisely, there exists such a corresponding context . Due to linearity, the context of the result of translating a Mini-ML term has the same length as the original context. This invariant is however not explicitly tracked.

We first define the structure of such contexts using context schema declarations. The tag l ensures that any declaration of type ml in a context of schema must be linear. Similarly, any declaration of type lin in a context of schema must be linear.

figure b

To characterize the result of this translation, we define a recursive type:

figure c

By writing round parentheses in we indicate that we do not pass \(\delta \) explicitly to the constructor Return, but it can always be reconstructed. It is merely an annotation declaring the schema of .

Fig. 1. Translation of linear ML-expressions to a linear core language

We now define a recursive function trans using the keyword rec (see Fig. 1). First, let us highlight some high-level principles and concepts that we use. We write to describe an expression N that is meaningful in the context . For example, denotes a term of type ml in the context where is a context variable that describes contexts abstractly. We call M a meta-variable. It stands for an ml term that may depend on the context . In general, all meta-variables are associated with a postponed substitution, written or . We usually omit the substitution if it is the identity substitution. One substitution that frequently arises in practice is the empty substitution, written as [], which maps from the empty context to an unrestricted context . It hence acts as a weakening substitution.

Our simplification is implemented by pattern matching on objects and specifying constraints on contexts. In the variable case, since we have a linear context, we require that x be the only variable in the context. In the lambda case we write for linear application and linear abstraction. We expect the type of M to be inferred as , since we interpret every pattern variable to depend on all of its surrounding context unless otherwise specified. We now recursively translate M in the extended context , unpack the result and rebuild the equivalent linear term. Note that we pattern match on the result of translating M by writing . However, we do not necessarily know that the output context is of the same length as the input context and hence necessarily has the shape , as we do not track this invariant explicitly. To write a covering program we would need to return an error if we encountered , i.e. a closed term where is empty. We omit this case here.

The third and fourth cases are the most interesting ones, as we must split the context. When we analyze for example , then M has some type and N has some type where . We specify these type annotations and context constraints explicitly. Note that we overload the \(\bowtie \) symbol in this example: when it occurs as a subscript it is part of the name, while when we write \(\gamma _1 \bowtie \gamma _2\) it refers to the operation on contexts. Then we can simply recursively translate M and N and rebuild the final result where we explicitly state . We proceed similarly to translate recursively every let-expression into a function application.

Type checking verifies that a given object is well-typed modulo context joins. This is non-trivial. Consider for example where . Clearly, we should also be able to type check such an example if the user instead wrote . Hence we want our underlying type theory to reason about context constraints modulo associativity and commutativity.

As the astute reader will have noticed, we only allow one context variable in every context, i.e. writing is illegal. Furthermore, we have deliberately chosen the subscripts for our context variables to emphasize their encoding in our underlying theory. Note that all context variables that belong to the same tree of context splits have the same name, but differ in their subscripts. The context variables and are called leaf-level context variables. The context variable is their direct parent and sits at the root of this tree. One can think of the tree of context joins as an abstraction of the typing derivation. To emphasize this idea, let us consider the following deeply nested pattern: where , , and , and where we again encode the splitting of in its subscript. Our underlying equational theory of context joins treats as equivalent to or as it takes into account commutativity and associativity. However, it may require us to generate a new intermediate node and eliminate intermediate nodes (such as ).

Fig. 2. Context joins

Our encoding of context variables is hence crucial to allow the rearrangement of context constraints, but also to define what it means to instantiate a given context variable such as with a concrete context . If the concrete context also contains unrestricted assumptions, then instantiating it will have a global effect, as unrestricted assumptions are shared among all nodes in this tree of context joins. This latter complication could possibly be avoided if we separated the context of intuitionistic assumptions from the context of linear assumptions. However, this kind of separation between intuitionistic and linear assumptions is not trivial in the dependently typed setting, because linear assumptions may depend on intuitionistic assumptions.

This design of context variables and capturing their dependency is essential to Lincx and to the smooth extension of contextual types to the linear setting. As the leaf-level context variables uniquely describe a context characterized by a tree of context joins, we only track the leaf-level context variables as assumptions while type checking an object, but justify the validity of context variables that occur as interior nodes through the leaf-level variables. We want to emphasize that this kind of encoding of context variables does not need to be exposed to programmers.

2.2 Example: CPS Translation

As a second example, we implement the translation of programs into continuation passing style following Danvy and Filinski [11]. Concretely, we closely follow the existing implementation of a type-preserving CPS translation in Beluga by Belanger et al. [1], but enforce that continuations are used linearly, an idea from Berdine et al. [3]. Although context splits do not arise in this example, as we only have one linear variable (standing for the continuation) in our context, we include it to showcase the mix and interplay of intuitionistic and linear function spaces in encoding program transformations.

Our source language is a simple language consisting of natural numbers, functions, applications and let-expressions. We only model well-typed expressions by defining a type source that is indexed by types tp.

figure i

In our target language we distinguish between expressions, characterized by the type exp and values, defined by the type value. Continuations take values as their argument and return an exp. We ensure that each continuation itself is used exactly once by abstracting exp over the linear function space.

figure j

We can now define our source and value contexts as unrestricted contexts by marking the schema element with the tag u.

figure k

To guarantee that the resulting expression is well-typed, we define a context relation to relate the source context to the value context (see Fig. 3). Notice that we explicitly state that the type S of a source and target expression is closed; it does not depend on or . To distinguish between objects that depend on their surrounding context and objects that do not, we associate every index and pattern variable with a substitution (the identity substitution by default); if we want to state that a given variable is closed, we associate it with the empty substitution [].

We can now define the translation itself (see Fig. 3). The function cpse takes in a context relation and a source term of type source S[] that depends on context . It then returns the corresponding expression of type exp, depending on context extended by a continuation from value S to exp. The fact that the continuation is used only once in exp is enforced by declaring it linear in the context. The translation proceeds by pattern matching on the source term. We concentrate here on the interesting cases.

Fig. 3. CPS translation

Parameter Variable. If we encounter a variable from the context , written as #p, we look up the corresponding variable #q in the target context by using the context relation and we pass it to the continuation k. We omit the definition of the lookup function here, as it is straightforward. We use where we believe that the omitted object can reasonably be inferred. Finally, we note that k #q is well-typed in the context , as k is well-typed in the context that only contains the declaration and #q is well-typed in the context .

Constant z. We first retrieve the target context to build the final expression by pattern matching on the context relation r. Then we pass kz to the continuation k in the context . Note that an application k kz is well-typed in , as kz is well-typed in , i.e. its unrestricted part.

Lambda Abstraction. To convert functions, we extend the context and the context relation r and convert the term M recursively in the extended context to obtain the target expression P. We then pass to the continuation k the value

Application. Finally, let us consider the source term app M N. We translate both M and N recursively to produce the target terms P and Q respectively. We then substitute for the continuation variable k2 in Q a continuation that consumes the local argument of an application. From this we build a continuation expecting the function to which the local argument is applied, and substitute it for k1 in P, producing a well-typed expression provided a continuation for the resulting type S is supplied.

We take advantage of our built-in substitution here to reduce any administrative redexes. The term that we substitute for references to k2 in Q will be \(\beta \)-reduced wherever that k2 appears in a function call position, such as the function calls introduced in the variable case. We hence reduce administrative redexes using the built-in (linear) LF application.
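The effect can be illustrated with a toy hereditary-substitution sketch in Python (hypothetical and untyped; all names and the term encoding are our own, and we assume all bound names are distinct so that capture avoidance can be ignored): substituting a λ-abstraction for a variable that sits in function position immediately contracts the redex it creates, which is how administrative redexes disappear.

```python
# Toy sketch of hereditary substitution on untyped terms, assuming all
# bound names are distinct (no capture avoidance). Terms:
#   ('var', x) | ('lam', x, body) | ('app', fn, arg)

def hsubst(term, x, s):
    """Substitute s for x in term, contracting any redex this creates."""
    tag = term[0]
    if tag == 'var':
        return s if term[1] == x else term
    if tag == 'lam':
        y, body = term[1], term[2]
        return term if y == x else ('lam', y, hsubst(body, x, s))
    # application: substitute in both parts, then contract if the
    # function position became a lambda (an administrative redex)
    fn = hsubst(term[1], x, s)
    arg = hsubst(term[2], x, s)
    if fn[0] == 'lam':
        return hsubst(fn[2], fn[1], arg)
    return ('app', fn, arg)
```

For instance, substituting the continuation λv. f v for k2 in the application k2 w yields f w directly, with no leftover β-redex.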

3 LINCX: A Linear Logical Framework with First-Class Contexts

Throughout this section we gradually introduce Lincx, a contextual linear logical framework with first-class contexts (i.e. context variables) that generalizes the linear logical framework LLF [9] and contextual LF [6]. Figure 4 presents both contextual linear LF (see Sect. 3.1) and its meta-language (see Sect. 3.6).

Fig. 4. Contextual linear LF with first-class contexts

3.1 Syntax of Contextual Linear LF

Lincx allows for linear types, written \(A \multimap B\), and dependent types \(\varPi x {:} A.B\) where x is unrestricted in B. We follow recent presentations where we only describe canonical LF objects using hereditary substitution.

As usual, our framework supports constants, (linear) functions, and (linear) applications. We only consider objects in \(\eta \)-long \(\beta \)-normal form, as these are the only meaningful terms in a logical framework. While the grammar characterizes objects in \(\beta \)-normal form, the bi-directional typing rules will also ensure that objects are \(\eta \)-long. Normal canonical terms are either intuitionistic lambda abstractions, linear lambda abstractions, or neutral atomic terms. We define (linear) applications as neutral atomic terms using a spine representation [10], as it makes the termination of hereditary substitution easier to establish. For example, instead of \(x\;M_1 \ldots M_n\), we write \(x \cdot M_1;~\ldots ;~ M_n;~ \epsilon \). The three possible variants of a spine head are: a variable x, a constant c or a parameter variable closure \(p[\sigma ]\).
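As a toy illustration of the spine representation (a hypothetical Python sketch; the tuple-based term encoding is our own, not the paper's syntax), left-nested applications can be flattened into a head paired with its spine of arguments:

```python
# Hypothetical sketch: flattening left-nested applications
# ((x M1) M2) ... Mn into the spine form (x, [M1, ..., Mn]).
# Terms are tuples ('app', fn, arg); anything else is a head.

def to_spine(term):
    """Return (head, spine) for a possibly nested application."""
    spine = []
    while isinstance(term, tuple) and term[0] == 'app':
        _, fn, arg = term
        spine.append(arg)
        term = fn
    spine.reverse()  # innermost argument was collected last
    return (term, spine)
```

The spine ends implicitly in \(\epsilon \) (here, the end of the list); exposing the head in one step is what makes the termination argument for hereditary substitution easier.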

Our framework contains ordinary bound variables x which may refer to a variable declaration in a context \(\varPsi \) or may be bound by either the unrestricted or linear lambda-abstraction, or by the dependent type \(\varPi x {:} A.B\). Similarly to contextual LF, Lincx also allows two kinds of contextual variables as terms. First, the meta-variable u of type \((\varPsi \vdash P)\) stands for a general LF object of atomic type P and uses the variables declared in \(\varPsi \). Second, the parameter variable p of type \((\varPsi \vdash \#A)\) stands for a variable object of type A from the context \(\varPsi \). These contextual variables are associated with a postponed substitution \(\sigma \) representing a closure. The intention is to apply \(\sigma \) as soon as we know what u (or p resp.) stands for.

The system has one mixed context \(\varPsi \) containing both intuitionistic and linear assumptions: \(x {:} A\) is an intuitionistic assumption in the context (also called unrestricted assumption), \(x \hat{:} A\) represents a linear assumption and \(x \check{:} A\) stands for its dual, an unavailable assumption. It is worth noting that we use \(\mathbin {\widehat{~}}\) throughout the system description to indicate a linear object – be it term, variable, name etc. Similarly, \(\check{\,}\) always denotes an unavailable resource.

In the simultaneous substitution \(\sigma \), we do not make the domain explicit. Rather, we think of a substitution together with its domain \(\varPsi \); the i-th element in \(\sigma \) corresponds to the i-th declaration in \(\varPsi \). The expression \(\mathbin {\textsf {id}_{\psi }}\) denotes the identity substitution with domain \(\psi _m\) for some index m; we write \(\cdot \) for the empty substitution. We build substitutions using normal terms M. We must however be careful: note that a variable x is only a normal term if it is of base type. As we push a substitution \(\sigma \) through a \(\lambda \)-abstraction \(\lambda x.M\), we need to extend \(\sigma \) with x. The resulting substitution \(\sigma , x\) might not be well-formed, since x might not be of base type and, in fact, we do not know its type. This is taken care of in our definition of substitution, based on contextual LF [7]. As we substitute and replace a context variable with a concrete context, we unfold and generate an (\(\eta \)-expanded) identity substitution for a given context \(\varPsi \).

3.2 Contexts and Context Joins

Since linearity introduces context splitting, context maintenance is crucial in any linear system. When we allow for first-class contexts, as we do in Lincx, it becomes much harder: we now need to ensure that, upon instantiation of the context variables, we do not accidentally join two contexts sharing a linear variable. To enforce this in Lincx, we allow for at most one (indexed) context variable per context and use indices to abstractly describe splitting. This lets us generalize the standard equational theory for contexts based on context joins to include context variables.

As mentioned above, contexts in Lincx are mixed. Besides linear and intuitionistic assumptions, we allow for unavailable assumptions following the approach of Schack-Nielsen and Schürmann [27], in order to maintain symmetry when splitting a context: if \(\varPsi = \varPsi _1 \bowtie \varPsi _2\), then \(\varPsi _1\) and \(\varPsi _2\) both contain all the variables declared in \(\varPsi \); however, if \(\varPsi _1\) contains a linear assumption \(x \hat{:} A\), \(\varPsi _2\) will contain its unavailable counterpart \(x \check{:} A\) (and vice-versa).

To account for context splitting in the presence of context variables, we index the latter. The indices are freely built from elements of an infinite, countable set \(\mathcal {I}\), through a join operation (\(\bowtie \)). It is associative and commutative, with \(\epsilon \) as its neutral element. In other words, \((\mathcal {I}^{*}, \bowtie , \epsilon )\) is a (partial) free commutative monoid over \(\mathcal {I}\). For our presentation it is important that no element of the monoid is invertible, that is if \(m \bowtie n = \epsilon \) then \(m = n = \epsilon \). In the process of joining contexts, it is crucial to ensure that each linear variable is used only once: we do not allow a join of \(\varPsi , x \hat{:} A\) with \(\varPhi , x \hat{:} A\). To express the fact that indices m and n share no elements of \(\mathcal {I}\) and hence the join of \(\psi _m\) with \(\psi _n\) is meaningful, we use the notation \(m \bot n\). In fact we will overload \(\bowtie \), changing it into a partial operation \(m \bowtie n\) that fails when \(m \not \!\!\bot n\). This is because we want the result of joining two context variables to continue being a correct context upon instantiation. We will come back to this point in Sect. 3.6, when discussing meta-substitution for context variables.

To give more intuition: our formalization implements indices using binary numbers, where \(\mathcal {I}\) contains the powers of 2, \(\bowtie \) is binary \(OR \), and \(\epsilon = 0\). \(m \bot n\) holds when m and n use different powers of 2 in their binary representations. Alternatively, we can simply think of an index m as a set of elements of \(\mathcal {I}\), with \(\bowtie \) being \(\cup \) on sets that share no elements.
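To make the binary encoding concrete, here is a minimal Python sketch (illustrative only; the function names are our own, not part of the formal development) of the index monoid, with indices as bitmasks and \(\bowtie \) as bitwise OR restricted to disjoint indices:

```python
# Minimal sketch of the index monoid under the binary encoding:
# elements of the carrier set I are powers of two, epsilon is 0,
# and the join is bitwise OR on disjoint bitmasks.

EPSILON = 0  # neutral element of the join

def disjoint(m, n):
    """m ⊥ n: the two indices share no elements of I (no common bits)."""
    return m & n == 0

def join(m, n):
    """Partial operation m ⋈ n, defined only when m ⊥ n."""
    if not disjoint(m, n):
        raise ValueError("indices overlap: join undefined")
    return m | n
```

Associativity and commutativity are inherited from bitwise OR, and no element is invertible: since OR never clears bits, join(m, n) == EPSILON forces m == n == EPSILON.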

The only context variables tracked in the meta-context \(\varDelta \) are the leaf-level context variables \(\psi _i\). We require that these use elements of the carrier set \(i \in \mathcal {I}\) as indices. To construct context variables for use in contexts, we combine leaf-level context variables using \(\bowtie \) on indices. Consider again the tree describing the context joins (see Fig. 2). In this example, we have the leaf-level context variables \(\gamma _1\), \(\gamma _{21}\), and \(\gamma _{22}\). These are the only context variables we track in the meta-context \(\varDelta \). Using a binary encoding we would use the subscripts 100, 010 and 001 instead of 1, 21, and 22.

The rules for constructing a well-formed context (Fig. 5) describe four possible initial cases of context construction. First, the empty context, written simply as \(\cdot \), is well-formed. Next, there are two ways in which a context denoted by a context variable \(\psi _i\) can be well-formed: if the context variable \(\psi _i\) is declared in the meta-context \(\varDelta \), then it is well-formed and describes a leaf variable; to guarantee that context variables describing intermediate nodes in our context tree are also well-formed, we have a composition rule that allows joining two well-formed context variables using the \(\bowtie \) operation on indices, where the restriction we place on \(\bowtie \) ensures that they share no leaf-level variables. \(\psi _{\epsilon }\) forms a well-formed context as long as some context variable \(\psi _i\) is declared in \(\varDelta \); this abstraction allows us to describe the intuitionistic variables of a context. Finally, the last case, for context extensions, is straightforward.

Fig. 5. Well-formed contexts

In general we write \(\varGamma \) for contexts that do not start with a context variable and \(\varPsi ,\varGamma \) for the extension of context \(\varPsi \) by the variable declarations of \(\varGamma \).

When defining our inference rules, we will often need to access the intuitionistic part of a context. Much like in linear LF [9], we introduce the function \(\overline{\varPsi }\) which is defined as follows:

figure n

Note that this function does not remove any variable declarations from \(\varPsi \); it simply makes them unavailable. Further, when applied to a context variable, it drops all the indices, indicating access to only the shared part of the context variable. After we instantiate \(\psi _m\) with a concrete context, we apply the \(\overline{\,\cdot \,}\) operation; extracting the intuitionistic part of a context is hence simply postponed.

Further, we define notation \(\mathsf {unr}(\varPsi )\) to denote an unrestricted context, i.e. a context that only contains unrestricted assumptions; while \(\overline{\varPsi }\) drops all linear assumptions, \(\mathsf {unr}(\varPsi )\) simply verifies that \(\varPsi \) is a purely intuitionistic context. In other words, \(\mathsf {unr}(\varPsi )\) holds if and only if \(\overline{\varPsi } = \varPsi \). We omit here its (straightforward) judgmental definition.
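As a toy model (a hypothetical Python sketch; contexts are represented as lists of (name, flag) pairs and context variables are not modelled), the two operations can be sketched as:

```python
# Toy sketch: ground contexts as lists of (name, flag), with flags
# 'unr' (unrestricted), 'lin' (linear), 'unav' (unavailable).
# Illustrative only; the representation is our own.

def bar(psi):
    """Ψ̄: keep every declaration, but make linear ones unavailable."""
    return [(x, 'unav' if f == 'lin' else f) for (x, f) in psi]

def unr(psi):
    """unr(Ψ): Ψ contains no linear assumptions, i.e. Ψ̄ = Ψ."""
    return bar(psi) == psi
```

Note that bar keeps the context's length fixed, matching the requirement that no declarations are removed, and that unr is exactly the fixed-point test for bar.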

Fig. 6. Joining contexts

The rules for joining contexts (see Fig. 6) follow the approach presented by Schack-Nielsen in his PhD dissertation [26], but are generalized to take into account context variables. Because of the monoid structure of context variable indices, the description can be quite concise while still preserving the desired properties of this operation. For instance the expected property \(\varPsi = \varPsi \bowtie \overline{\varPsi }\) follows, on the context variable level, from \(\epsilon \) being the neutral element of \(\bowtie \). Indeed, for any \(\psi _m\), we have that \(\psi _m = \psi _m \bowtie \psi _{\epsilon }\).

It is also important to note that, thanks to the determinism of \(\bowtie \), context joins are unique: if \(\varPsi = \varPsi _1 \bowtie \varPsi _2\) and \(\varPhi = \varPsi _1 \bowtie \varPsi _2\), then \(\varPsi = \varPhi \). On the other hand, context splitting is non-deterministic: given a context \(\varPsi \) we have numerous options for splitting it into \(\varPsi _1\) and \(\varPsi _2\), since each linear variable can go to either of the components.
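To illustrate the non-determinism of splitting, the following hypothetical Python sketch (ground contexts only, represented as lists of (name, flag) pairs; the names are our own) enumerates all ways to split a context, with each linear variable going to exactly one side:

```python
# Toy sketch: enumerate all splits Ψ = Ψ1 ⋈ Ψ2 of a ground context.
# Flags: 'unr' (unrestricted), 'lin' (linear), 'unav' (unavailable).
from itertools import product

def splits(psi):
    """Yield all pairs (Ψ1, Ψ2) whose join is Ψ."""
    choices = []
    for x, f in psi:
        if f == 'lin':
            # a linear variable is available on exactly one side;
            # the other side gets its unavailable dual
            choices.append([((x, 'lin'), (x, 'unav')),
                            ((x, 'unav'), (x, 'lin'))])
        else:
            # 'unr' and 'unav' declarations are shared by both sides
            choices.append([((x, f), (x, f))])
    for combo in product(*choices):
        yield ([l for l, _ in combo], [r for _, r in combo])
```

A context with k linear variables thus has 2^k splits, while any given pair of components joins back to a unique context.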

We finish this section by describing the equational theory of context joins. We expect joining contexts to be a commutative and associative operation, and the unrestricted parts of contexts in the join should be equal. Further, it is always possible to extend a valid join with a ground unrestricted context, and \(\overline{\varPsi }\) can always be joined with \(\varPsi \) without changing the result.

Lemma 1

(Theory of context joins). 

  1. (Commutativity) If \( \varPsi = \varPsi _1 \bowtie \varPsi _2\) then \( \varPsi = \varPsi _2 \bowtie \varPsi _1\).

  2. (Associativity\(\,_1\)) If \( \varPsi = \varPsi _1 \bowtie \varPsi _2\) and \( \varPsi _1 = \varPsi _{11} \bowtie \varPsi _{12}\) then there exists a context \(\varPsi _0\) s.t. \( \varPsi = \varPsi _{11} \bowtie \varPsi _{0}\) and \( \varPsi _0 = \varPsi _{12} \bowtie \varPsi _{2}\).

  3. (Associativity\(\,_2\)) If \( \varPsi = \varPsi _1 \bowtie \varPsi _2\) and \( \varPsi _2 = \varPsi _{21} \bowtie \varPsi _{22}\) then there exists a context \(\varPsi _0\) s.t. \( \varPsi _0 = \varPsi _1 \bowtie \varPsi _{21}\) and \( \varPsi = \varPsi _0 \bowtie \varPsi _{22}\).

  4. If \( \varPsi = \varPsi _1 \bowtie \varPsi _2\) then \(\overline{\varPsi } = \overline{\varPsi _1} = \overline{\varPsi _2}\).

  5. If \(\mathsf {unr}(\varGamma )\) and \( \varPsi = \varPsi _1 \bowtie \varPsi _2\) then \( \varPsi , \varGamma = \varPsi _1, \varGamma \bowtie \varPsi _2, \varGamma \).

  6. For any \(\varPsi \), \(\varPsi = \varPsi \bowtie \overline{\varPsi }\).
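A self-contained toy model in Python (hypothetical; ground contexts only, represented as lists of (name, flag) pairs, with no context variables) lets us sanity-check some of these properties:

```python
# Toy sketch of joining ground contexts (no context variables).
# Flags: 'unr' (unrestricted), 'lin' (linear), 'unav' (unavailable).
# Both sides must declare the same variables in the same order, and a
# linear assumption must be available on exactly one side.

def join_ctx(psi1, psi2):
    """Ψ1 ⋈ Ψ2, or raise ValueError if the join is undefined."""
    if [x for x, _ in psi1] != [x for x, _ in psi2]:
        raise ValueError("contexts declare different variables")
    out = []
    for (x, f1), (_, f2) in zip(psi1, psi2):
        if f1 == f2 == 'lin':
            raise ValueError("linear variable %s on both sides" % x)
        if f1 == f2:                       # unr/unr or unav/unav
            out.append((x, f1))
        elif {f1, f2} == {'lin', 'unav'}:  # exactly one side uses x
            out.append((x, 'lin'))
        else:
            raise ValueError("unrestricted flags disagree on %s" % x)
    return out

def bar(psi):
    """Ψ̄: make every linear assumption unavailable."""
    return [(x, 'unav' if f == 'lin' else f) for (x, f) in psi]
```

In this model, commutativity, property 4 (equal unrestricted parts) and property 6 (\(\varPsi = \varPsi \bowtie \overline{\varPsi }\)) hold by inspection of the per-declaration cases.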

We will need these properties to prove lemmas about typing and substitution, specifically for the cases that call for specific context joins.

3.3 Typing for Terms and Substitutions

We now describe the bi-directional typing rules for Lincx terms (see Fig. 7). All typing judgments have access to the meta-context \(\varDelta \), the context \(\varPsi \), and a fixed well-typed signature \(\varSigma \) in which we store constants c together with their types and kinds. Lincx objects may depend on variables declared in the context \(\varPsi \) and on a fixed meta-context \(\varDelta \), which contains contextual variables such as meta-variables u, parameter variables p, and context variables. Although the rules are bi-directional, they do not give a direct algorithm, as we need to split a context \(\varPsi \) into contexts \(\varPsi _1\) and \(\varPsi _2\) such that \(\varPsi = \varPsi _1 \bowtie \varPsi _2\) (see for example the rule for checking \(H \cdot S\) against a base type P). This operation is in itself non-deterministic; however, since our system is linear, there is only one split that makes the components (for example H and S in \(H \cdot S\)) typecheck.

Fig. 7. Typing rules for terms

The typing rules presented in Fig. 7 are, perhaps unsurprisingly, a fusion of contextual LF and linear LF. As in contextual LF, the typing for meta-variable closures and parameter variable closures is straightforward. A meta-variable \(u : (\varPsi \vdash P)\) represents an open LF object (a “hole” in a term). As mentioned earlier, it carries a postponed substitution \(\sigma \), which is applied as soon as u is made concrete. Similarly, a parameter variable \(p : (\varPsi \vdash \# A)\) represents an LF variable – either an unrestricted or a linear one.

As in linear LF, we have two lambda abstraction rules (one introducing intuitionistic, the other linear assumptions) and two corresponding variable cases. Moreover, we ensure that types only depend on the unrestricted part of a context when checking that two types are equal. As we rely on hereditary substitutions, this equality check ends up being syntactic equality. Similarly, when we consider a spine \(M\,; S\) and check it against the dependent type \(\varPi x {:} A.B\), we make sure that M has type A in the unrestricted context before continuing to check the spine S against \([M/x]_{A}B\). When we encounter a spine \(\mathbin {M \,\hat{;}\, S}\) and check it against the linear type \(A \multimap B\) in the context \(\varPsi \), we must find a split \(\varPsi = \varPsi _1 \bowtie \varPsi _2\), check that the term M has type A in the context \(\varPsi _1\), and check the remaining spine S against B to synthesize a type P.

Fig. 8. Typing rules for substitutions

Finally, we consider the typing rules for substitutions, presented in Fig. 8. We exercise care in making sure that the range context in the base cases, i.e. where the substitution is empty or the identity, is unrestricted. This guarantees weakening and contraction for unrestricted contexts.

The substitution \(\sigma , M\) is well-typed with domain \(\varPhi , x {:} A\) and range \(\varPsi \) if \(\sigma \) is a substitution from \(\varPhi \) to the context \(\varPsi \) and, in addition, M has type \([\sigma ]_{\varPhi }A\) in the unrestricted context \(\overline{\varPsi }\). The substitution \(\sigma , M\) is well-typed with domain \(\varPhi , x \hat{:} A\) and range \(\varPsi \) if there exists a context split \(\varPsi = \varPsi _1 \bowtie \varPsi _2\) s.t. \(\sigma \) is a substitution with domain \(\varPhi \) and range \(\varPsi _1\) and M is a well-typed term in the context \(\varPsi _2\). The substitution \(\sigma , M\) is well-typed with domain \(\varPhi , x \check{:} A\) and range \(\varPsi \) if \(\sigma \) is a substitution from \(\varPhi \) to \(\varPsi \) and M is a well-typed term in some context \(\varPsi '\) with \(\overline{\varPsi } = \overline{\varPsi '}\). This last rule, extending the substitution domain by an unavailable variable, is perhaps a little surprising. Intuitively, we might want to simply skip the unavailable variable in the substitution. This would, however, mean that we have to perform not only context splitting but also substitution splitting when defining the operation of simultaneous substitution. An alternative is to substitute an arbitrary term M for this unavailable variable, as the typing rules ensure it will never actually occur in the term in which we substitute. When establishing termination of type-checking, it is then important that M type checks in a context that can be generated from the one we already have. We ensure this with the side condition \(\overline{\varPsi }=\overline{\varPsi '}\). By enforcing that the unrestricted parts of \(\varPsi \) and \(\varPsi '\) are equal, we limit the choices for \(\varPsi '\) to deciding which linear variables to keep (linear) and which to drop (unavailable), and to choosing the index of the context variable.

When considering an identity substitution \(\mathbin {\textsf {id}_{\psi }}\), we allow for some ambiguity: we can use any \(\psi _m\) for both the domain and range of \(\mathbin {\textsf {id}_{\psi }}\). Upon meta-substitution, all instantiations of \(\psi _m\) will have the same names and types of variables; the only thing differentiating them will be their status (intuitionistic, linear or unavailable). Since substitutions do not store information about the status of variables they substitute for (this information is available only in the domain and range), the constructed identity substitution will be the same regardless of the initial choice of \(\psi _m\) – it will however have a different type.

The observation above has a more general consequence, allowing us to avoid substitution splits when defining the operation of hereditary substitution: if a substitution in Lincx transforms context \(\varPhi \) to context \(\varPsi \), it does so also for their unrestricted fragments.

Lemma 2

If \(\varDelta ; \varPsi \vdash \sigma \mathrel {\,\Leftarrow \,}\varPhi \) then \(\varDelta ; \overline{\varPsi }\vdash \sigma \mathrel {\,\Leftarrow \,}\overline{\varPhi }\).

3.4 Hereditary Substitution

Next we characterize the operation of hereditary substitution, which allows us to consider only normal forms in our grammar and typing rules, making the decidability of type-checking easy to establish.

As usual, we annotate hereditary substitutions with an approximation of the type of the term we substitute for to guarantee termination.

figure o

We then define the dependency erasure operator \({(-)}^{-}\) as follows:

figure p

We will sometimes tacitly apply the dependency erasure operator \({(-)}^{-}\) in the following definitions. Hereditary single substitution for Lincx is standard and closely follows [7], since linearity does not induce any complications. When executing the current substitution would create redexes, we proceed by hereditarily performing another substitution. This reduction operation is defined as:

figure q
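To give some operational intuition, the following Python sketch (our own untyped model – the names, tuple encoding and example are ours, and we assume bound variables are already renamed apart) implements hereditary single substitution together with reduce on spine-form terms:

```python
# Terms in canonical spine form:
#   ("lam", x, M)         lambda abstraction
#   ("root", h, spine)    head h applied to a list of arguments (the spine)
# Type approximations: "o" (base) or ("=>", a, b).

def subst(M, x, alpha, N):
    """Hereditarily substitute M for x (of type approximation alpha) in N."""
    if N[0] == "lam":
        _, y, body = N                       # assume y != x (renamed apart)
        return ("lam", y, subst(M, x, alpha, body))
    _, h, spine = N
    spine = [subst(M, x, alpha, Ni) for Ni in spine]
    if h == x:
        return reduce_(M, alpha, spine)      # substituting at the head creates a redex
    return ("root", h, spine)

def reduce_(M, alpha, spine):
    """reduce(M : alpha, S): hereditarily contract the redexes created by S."""
    if not spine:
        return M
    assert M[0] == "lam" and alpha[0] == "=>", "fails on ill-typed input"
    _, x, body = M
    _, a, b = alpha
    return reduce_(subst(spine[0], x, a, body), b, spine[1:])

# [lam y. g y / f]_{o => o} (f x)  hereditarily reduces to  g x
eta_g = ("lam", "y", ("root", "g", [("root", "y", [])]))
result = subst(eta_g, "f", ("=>", "o", "o"),
               ("root", "f", [("root", "x", [])]))
```

Note how termination mirrors Theorem 1: each recursive call of reduce_ is annotated with a strictly smaller type approximation.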

Termination can be readily established:

Theorem 1

(Termination of hereditary single substitution).

The hereditary substitutions \([M/x]_{\alpha }(N)\) and \(\mathsf {reduce}(M : \alpha , S)\) terminate, either by failing or by successfully producing a result.

The following theorem provides typing for the hereditary substitution. We use J to stand for any of the forms of judgments defined above.

Theorem 2

(Hereditary single substitution property).

  1.

    If \(\varDelta ; \overline{\varPsi } \vdash M \mathrel {\,\Leftarrow \,}A\) and \(\varDelta ; \varPsi , x {:} A \vdash J\) then \(\varDelta ; \varPsi \vdash [M/x]_{A} J \).

  2.

    If \(\varDelta ; \varPsi _1 \vdash M \mathrel {\,\Leftarrow \,}A\), \(\varDelta ; \varPsi _2, x \hat{:} A \vdash J\) and \(\varPsi = \varPsi _1 \bowtie \varPsi _2\) then \(\varDelta ; \varPsi \vdash [M/x]_{A}J\).

  3.

    If \(\varDelta ; \varPsi _1 \vdash M \mathrel {\,\Leftarrow \,}A\), \(\varDelta ; \varPsi _2 \vdash S > A \mathrel {\,\Rightarrow \,}B\), \( \varPsi = \varPsi _1 \bowtie \varPsi _2\) and \(\mathsf {reduce}(M :A^{-}, S) = M'\) then \(\varDelta ; \varPsi \vdash M' \mathrel {\,\Leftarrow \,}B\).

We can easily generalize hereditary substitution to simultaneous substitution. We focus here on simultaneous substitution in canonical terms (see Fig. 9). Hereditary simultaneous substitution relies on a lookup function that is defined below. Note that \((\sigma ,M)_{\varPsi , x \check{:} A} (x) = \bot \), since we assume x to be unavailable in the domain of \(\sigma \).

Fig. 9. Simultaneous substitution

figure r

Unlike many previous formulations of contextual LF, we do not allow substitutions to be directly extended with variables. Instead, following Cave and Pientka’s more recent approach [7], we require that substitutions be extended with \(\eta \)-long terms, thus guaranteeing unique normal forms for substitutions. For this reason, we maintain a list of variable names and statuses which are not to be changed, \(\mathbin {\widetilde{\varPhi }}\) in \([\sigma ]_{\varPsi }^{\mathbin {\widetilde{\varPhi }}}\). This list gets extended every time we pass through a lambda expression. We use it when substituting in \(y \cdot S\) – if \(y \in \mathbin {\widetilde{\varPhi }}\) or \(\mathbin {\widehat{y}} \in \mathbin {\widetilde{\varPhi }}\) we simply leave the head unchanged. It is important to preserve not only the name of the variable, but also its status (linear, intuitionistic or unavailable), since we sometimes have to perform a split on \(\mathbin {\widetilde{\varPhi }}\). Such a split works precisely like one on complete contexts, since types play no role in context splitting.

As simultaneous substitution is a transformation of contexts, it is perhaps not surprising that it becomes more complex in the presence of context splitting. Consider for instance the case where we push the substitution \(\sigma \) through an expression \(p[\tau ] \cdot S\). While \(\sigma \) has domain \(\varPsi \) (and ignores variables from \(\mathbin {\widetilde{\varPhi }}\)) and \(p[\tau ] \cdot S\) is well-typed in \((\varPsi , \varPhi )\), the closure \(p[\tau ]\) is well-typed in a context \((\varPsi _1, \varPhi _1)\) and the spine S is well-typed in a context \((\varPsi _2,\varPhi _2)\) where \(\varPsi = \varPsi _1 \bowtie \varPsi _2\) and \(\varPhi = \varPhi _1 \bowtie \varPhi _2\). As a consequence, \([\sigma ]_\varPsi ^{\mathbin {\widetilde{\varPhi }}} \tau \) and \([\sigma ]_\varPsi ^{\mathbin {\widetilde{\varPhi }}} S\) would be ill-typed; however, \([\sigma ]_{\varPsi _1}^{\mathbin {\widetilde{\varPhi _1}}} \tau \) and \([\sigma ]_{\varPsi _2}^{\mathbin {\widetilde{\varPhi _2}}} S\) are well-typed. Notice that it is only the domain of the substitution that we need to split, not the substitution itself.

Similarly to the case for hereditary single substitution, the theorem below provides typing for simultaneous substitution.

Theorem 3

(Simultaneous substitution property).

If \(\varDelta ; \varPsi \vdash J\) and \(\varDelta ; \varPhi \vdash \sigma \mathrel {\,\Leftarrow \,}\varPsi \) then \(\varDelta ; \varPhi \vdash [\sigma ]_{\varPsi }J\).

3.5 Decidability of Type Checking in Contextual Linear LF

In order to establish a decidability result for type checking, we observe that the typing judgments are syntax directed. Further, when a context split is necessary (e.g. when checking \(\varDelta ; \varPsi \vdash \sigma , M \mathrel {\,\Leftarrow \,}\varPhi , x \hat{:} A\)), it is possible to enumerate all the possible correct splits (all \(\varPsi _1\), \(\varPsi _2\) such that \(\varPsi = \varPsi _1 \bowtie \varPsi _2\)). For exactly one of them it will hold that \(\varDelta ; \varPsi _1 \vdash \sigma \mathrel {\,\Leftarrow \,}\varPhi \) and \(\varDelta ; \varPsi _2 \vdash M \mathrel {\,\Leftarrow \,}[\sigma ]_{\overline{\varPhi }} A\). Finally, in the \(\varDelta ; \varPsi \vdash \sigma , M \mathrel {\,\Leftarrow \,}\varPhi , x \check{:} A\) case, thanks to the explicit mention of all variables (including unavailable ones), we can enumerate all possible contexts \(\varPsi '\) that are well-formed under \(\varDelta \) and satisfy \(\overline{\varPsi } = \overline{\varPsi '}\).
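The enumeration of candidate splits is finite because each linear declaration must go to exactly one side of the join. A minimal Python sketch of this enumeration (our own illustration, with contexts modelled as lists of (name, status) pairs; all names are ours):

```python
from itertools import product

def splits(ctx):
    """Enumerate all (psi1, psi2) with psi = psi1 ⋈ psi2 for a concrete context.
    Each linear variable is linear on one side and unavailable on the other;
    unrestricted and unavailable declarations are shared by both sides."""
    linear = [i for i, (_, s) in enumerate(ctx) if s == "lin"]
    for choice in product((0, 1), repeat=len(linear)):
        side = dict(zip(linear, choice))
        psi1, psi2 = [], []
        for i, (x, s) in enumerate(ctx):
            if s == "lin":
                psi1.append((x, "lin" if side[i] == 0 else "unav"))
                psi2.append((x, "lin" if side[i] == 1 else "unav"))
            else:
                psi1.append((x, s))
                psi2.append((x, s))
        yield psi1, psi2

ctx = [("x", "unr"), ("y", "lin"), ("z", "lin")]
all_splits = list(splits(ctx))
# 2 linear variables => 2^2 = 4 candidate splits to try during type checking
assert len(all_splits) == 4
```

A type checker would try each candidate in turn; by linearity, at most one of them makes both premises check.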

Theorem 4

(Decidability of type checking). Type checking is decidable.

3.6 LINCX’s Meta-Language

To use contextual linear LF as an index language in Beluga, we must lift Lincx objects to meta-objects and meta-types and define the meta-substitution operation. We base our presentation on the one for contextual LF [6].

Figure 4 presents the meta-language of Lincx. Meta-objects are either contextual objects or contexts. The former may serve as instantiations for parameter variables \(p : (\varPsi \vdash \# A)\) or meta-variables \(u : (\varPsi \vdash P)\). These objects are written \(\mathbin {\widetilde{\varPsi }}.R\) where \(\mathbin {\widetilde{\varPsi }}\) denotes a list of variables obtained by dropping all the type information from the declaration, but retaining the information about variable status (intuitionistic, linear or unavailable).

figure s

Contexts as meta-objects are used to instantiate context variables \(\psi _i : G\). When constructing them we must exercise caution, as we need to ensure that no linear variable is used in two contexts that are, at any point, joined. At the same time, instantiations for context variables differing only in the index (\(\psi _i\) and \(\psi _j\)) have to use precisely the same variable names, and their unrestricted fragments have to be equal. It is also important to ensure that the constructed context is of the correct schema G. Schemas describe the possible shapes of contexts, and each schema element can be either linear (\(\lambda (\overrightarrow{x_i {:} A_i}).\mathbin {\widehat{A}}\)) or intuitionistic (\(\lambda (\overrightarrow{x_i {:} A_i}).A\)). This can be extended to also allow combinations of linear and intuitionistic schema elements.

Fig. 10. Well-formed meta-contexts

We now give the rules for a well-formed meta-context \(\varDelta \) (see Fig. 10). Well-formedness is defined on the structure of \(\varDelta \) and is mostly straightforward. As usual, we assume the names we choose are fresh. The noteworthy case arises when we extend \(\varDelta \) with a context variable \(\psi _i\). Because all context variables \(\psi _j\) describe parts of the same context, we require their schemas to be the same. This side condition (\(\star \)) can be formally stated as: \(\forall j .\psi _j \in \mathsf {dom}(\varDelta )\rightarrow \psi _j : G \in \varDelta \). Moreover, to avoid manually ensuring that the indices of context variables do not overlap, we require that leaf context variables use elements of the carrier set \(i \in \mathcal {I}\) (i.e. they are formed without using the \(\bowtie \) operation).

Typing of meta-terms is straightforward and follows precisely the schema presented in previous work.

Fig. 11. Typing rules for meta-substitutions

Because of the interdependencies when substituting for context variables, we diverge slightly from standard presentations of typing of meta-substitutions.

First, we do not consider single meta-substitutions at all, as they would be limited to parameter and meta-variables. In the general case it is impossible to meaningfully substitute for only one context variable, as this would break the invariant that all instantiations of context variables share variable names and the intuitionistic part of the context.

Second, the typing rules for the simultaneous meta-substitution (see Fig. 11) are specialized in the case of substituting for a context variable. When extending \(\varTheta \) with an instantiation \(\varPsi _i\) for a context variable \(\psi _i : G\), we first verify that context \(\varPsi _i\) has the required schema G. We also have to check that \(\varPsi _i\) can be joined with any other instantiation \(\varPsi _j\) for context variable \(\psi _j\) already present in \(\varTheta \) (that is, \(\varPsi _i ~\bot _\psi ~ \varTheta \)). This is enough to ensure the desired properties of meta-substitution for context variables.

We can now define the simultaneous meta-substitution. The operation itself is straightforward, as linearity does not complicate things on the meta-level. What is slightly more involved is the variable lookup function.

figure t

On parameter and meta-variables it simply returns the correct meta-object, to which the simultaneous substitution from the corresponding closure is then applied. The lookup is a bit more complicated for context variables, since \(\varTheta \) only contains substitutions for leaf context variables \(\psi _i\). For arbitrary \(\psi _m\) we must therefore deconstruct the index \( m = i_1 \bowtie \cdots \bowtie i_k\) and return \(\varTheta _\varDelta (\psi _{i_1}) \bowtie \cdots \bowtie \varTheta _\varDelta (\psi _{i_k})\). Finally, for \(\psi _\epsilon \) we simply have to find any \(\varPsi /\psi _i\) in \(\varTheta \) and return \(\overline{\varPsi }\) – the typing rules for \(\varTheta \) ensure that the choice of \(\psi _i\) is irrelevant, as the unrestricted part of the substituted context is shared.
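The lookup for context variables can be sketched as follows (our own Python illustration, with all names ours): indices are modelled as sets of leaf names (the empty set plays the role of \(\epsilon \)), and a context instantiation is abstracted to the set of variables it holds linearly, so the shared unrestricted fragment becomes the empty set and the join becomes disjoint union.

```python
def lookup(theta, index):
    """Resolve a context variable psi_index against theta, which stores
    instantiations only for leaf context variables. A composite index
    i1 ⋈ ... ⋈ ik is deconstructed and the leaf instantiations are joined."""
    if not index:
        # psi_epsilon: the unrestricted fragment, which all instantiations
        # share; in this abstraction it holds no linear variables.
        return frozenset()
    linear = frozenset()
    for leaf in index:
        inst = theta[leaf]
        # The join is undefined if a linear variable occurs in two leaves.
        assert linear.isdisjoint(inst), "join undefined: linear overlap"
        linear |= inst
    return linear

theta = {"i1": frozenset({"y"}), "i2": frozenset({"z"})}
assert lookup(theta, frozenset({"i1", "i2"})) == frozenset({"y", "z"})
assert lookup(theta, frozenset()) == frozenset()
```

The disjointness assertion corresponds to the condition \(\varPsi _i ~\bot _\psi ~ \varTheta \) enforced by the typing rules for \(\varTheta \).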

Theorem 5

(Simultaneous meta-substitution property).

If \(\varDelta \vdash \varTheta \mathrel {\,\Leftarrow \,}\varDelta '\) and \(\varDelta ';\varPsi \vdash J\), then \(\varDelta ; \mathbin {\llbracket \varTheta \rrbracket }_{\varDelta '} \varPsi \vdash \mathbin {\llbracket \varTheta \rrbracket }_{\varDelta '} J\).

3.7 Writing Programs About LINCX Objects

We sketch here why Lincx is a suitable index language for writing programs and proofs. In [29], Thibodeau et al. describe several requirements for plugging an index language into the (co)inductive foundation for writing programs and proofs about them. They fall into three different classes. We will briefly touch on each one.

First, it requires that the index domain satisfies meta-substitution properties, which we prove for Lincx. Second, comparing two objects should be decidable. We satisfy this criterion, since we only characterize \(\beta \eta \)-long canonical forms and equality reduces to syntactic equality. The third criterion is unification of index objects. While we do not describe a unification algorithm for Lincx objects, we believe it is a straightforward extension of Schack-Nielsen and Schürmann’s work [27]. Finally, we require a notion of coverage of Lincx objects, which is a straightforward extension of Pientka and Abel’s approach [22].

4 Mechanization of LINCX

We have mechanized key properties of our underlying theory in the proof assistant Beluga. In particular, we encoded the syntax and typing rules of Lincx together with the single and simultaneous hereditary substitution operations in the logical framework LF, relying on HOAS encodings to model binding. Our encoding is similar to Martens and Crary’s encoding [15] of LF in LF, but we also handle meta-variables and simultaneous substitutions. Since Beluga only intrinsically supports intuitionistic binding structures and contexts, linearity must be enforced separately. We do this through an explicit context of variable declarations, connecting each variable to a flag and a type. To model contexts with context variable indices we use a binary encoding. The implementation of Lincx in Beluga was crucial to arriving at our understanding of modelling context variables using commutative monoids.

As mentioned in Sect. 3.2, the context variable indices take context splitting into account by describing elements from a countably infinite set \(\mathcal {I}\), along with a neutral element and a join operation that is commutative and associative. We implement these indices using binary strings, where \(\epsilon \) is the empty string, and a string with a single positive bit represents a leaf-level variable. In other words, through this abstraction, every context variable in \(\varDelta \) is a binary string with a single positive bit. Schack-Nielsen [26] uses a similar encoding for managing flags for linear, unrestricted, and unavailable assumptions in concrete contexts. Our encoding lifts these ideas to modelling context variables. We then implement the \(\bowtie \) operation as a binary OR operation which fails when the two strings have a common positive bit (for instance, a join between 001 and 011 would fail). The following describes the join of M and N, forming K.

figure u
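For intuition, the join of indices can be sketched in a few lines of Python (our own illustration of the idea; the mechanization itself defines the join relationally in Beluga):

```python
def join_index(m, n):
    """Join two context-variable indices, encoded as equal-length bit strings
    (epsilon is the all-zero string). Returns None when the join is undefined,
    i.e. when both strings share a positive bit."""
    if any(a == b == "1" for a, b in zip(m, n)):
        return None                    # the same linear context used twice
    return "".join("1" if "1" in (a, b) else "0" for a, b in zip(m, n))

assert join_index("001", "010") == "011"
assert join_index("001", "011") is None   # common positive bit: join fails
```

The all-zero string acts as the neutral element, and commutativity and associativity follow from those of bitwise OR.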

We then proceed to prove commutativity, associativity and uniqueness of \(\bowtie \). Finally, we mechanized the proofs of the properties about our equational theory of context joins as total functions in Beluga. In particular, we mechanized proofs of Lemmas 1 and 2. Here we take advantage of Beluga’s first-class contexts and, in the base cases, rely on the commutativity and associativity properties of the binary encoding of context variable indices. We note that context equality is entirely syntactic and can thus be defined simply in terms of reflexivity.

Although we had to model our mixed contexts of unrestricted and linear assumptions explicitly, Beluga’s support for encoding formal systems using higher-order abstract syntax still significantly simplified our definitions of the typing rules and the hereditary substitution operation. In particular, it allowed us to elegantly model variable bindings in abstractions and \(\varPi \)-types.

Inductive properties about typing and substitution are implemented as recursive functions in Beluga. Many of the proofs in this paper become fairly tedious and complex on paper, and mechanizing Lincx therefore helps us build trust in our foundation. Given the substantial amount of time and lines of code we devote to modelling contexts and context joins, our mechanization also demonstrates the value Lincx can bring to mechanizing linear systems or, more generally, systems that work with resources.

5 Related Work

The idea of using logical framework methodology to build a specification language for linear logic dates back two decades, beginning with Cervesato and Pfenning’s linear logical framework LLF [9], which provides the \(\multimap \), & and \(\top \) operators from intuitionistic linear logic – the maximal set of connectives for which unique canonical forms exist. The idea was later expanded to the concurrent logical framework CLF [31], which uses a monad to encapsulate less well-behaved operators. The quest to design meta-logics that allow us to reason about linear logical frameworks has been marred with difficulties in the past.

In proof theory, McDowell and Miller [18, 19] and later Gacek et al. [13] propose a two-level approach to reasoning about formal systems, where we rely on a first-order sequent calculus together with inductive definitions and induction on natural numbers as a meta-reasoning language. We encode our formal system in a specification logic that is then embedded in the first-order sequent calculus, the reasoning language. The design of the two-level approach is in principle modular, and in fact McDowell’s PhD thesis [18] describes a linear specification logic. However, in this approach the context of assumptions is encoded explicitly as a list. As a consequence, we need to reason modulo the equational properties of context joins, and we may need to prove properties about the uniqueness of assumptions. Such bureaucratic reasoning then still pollutes our main proof.

In type theory, McCreight and Schürmann [17] give a tailored meta-logic \(\mathcal {L}^+_\omega \) for linear LF, which is an extension of the meta-logic for LF [28]. While \(\mathcal {L}^+_\omega \) also characterizes partial linear derivations using contextual objects that depend on a linear context, the approach does not define an equational theory on contexts and context variables. It also does not support reasoning about contextual objects modulo such an equational theory. In addition, \(\mathcal {L}^+_\omega \) does not cleanly separate the meta-theoretic (co)inductive reasoning about linear derivations from specifying and modelling the linear derivations themselves. We believe the modular design of Beluga, i.e. the clean separation of representing and modelling specifications and derivations on the one hand and reasoning about such derivations on the other, offers many advantages. In particular, it is more robust and also supports extensions to (co)inductive definitions [6, 29].

The hybrid logical framework HLF by Reed [25] is in principle capable of supporting reasoning about linear specifications. In HLF, we reason about objects that are valid at a specific world, instead of objects that are valid within a context. However, contexts and worlds seem closely connected. Most recently, Bock and Schürmann [4] propose a contextual logical framework XLF. Similarly to Lincx, it is based on contextual modal type theory with first-class contexts. However, context variables have a strong nominal flavor in their system. In particular, Bock and Schürmann allow multiple context variables in the context, and each context variable is associated with a list of variable names (and other context variable domains) from which it must be disjoint – otherwise the system is prone to repetition of linear variables upon instantiation.

On a more fundamental level, the difference between HLF and XLF on the one hand and our approach on the other is how we think about encoding meta-theoretic proofs. HLF and XLF follow the philosophy of the Twelf system and encode proofs as relations. This makes it sometimes challenging to establish that a given relation constitutes an inductive proof, and hence both systems have rarely been used to establish such meta-theoretic proofs. More importantly, the proof-theoretic strength of this approach is limited. For example, it is challenging to encode formal systems and proofs that rely on (co)inductive definitions, such as proofs by logical relations and bisimulation proofs, within the logical framework itself. We believe the modular design of separating cleanly between Lincx as a specification framework and embedding Lincx into the proof and programming language Beluga provides a simpler foundation for representing the meta-theory of linear systems. Intuitively, meta-proofs about linear systems only rely on linearity to model the linear derivations – the reasoning about these linear derivation trees is not itself linear, but remains intuitionistic.

6 Conclusion and Future Work

We have presented Lincx, a linear contextual modal logical framework with first-class contexts as a foundation to model linear systems and derivations. In particular, Lincx satisfies the necessary requirements to serve as a specification and index language for Beluga and hence provides a suitable foundation for implementing proofs about (linear) derivation trees as recursive functions. We have also mechanized the key equational properties of context joins in Beluga. This further increases our confidence in our development.

There are a number of research questions that naturally arise, which we plan to pursue in the future. First, we plan to extend Lincx with additional linear connectives such as \(\top \) and \( A \& B\). These connectives are present, for example, in [9]. We omitted them here to concentrate on modelling context joins and their equational theory, but we believe it is straightforward to add them.

Dealing with first-class contexts in the presence of additive operators is more challenging, as they may break canonicity. We plan to follow the approach of CLF [31], enclosing them in a monad to control their behaviour. Adding additive operators would, for example, allow us to model the meta-theory of session type systems [5] and reason about concurrent computation. Further, we plan to add first-class substitution variables [7] to Lincx. This would allow us to abstractly describe relations between contexts. This seems particularly important as we allow richer schema definitions that model structured sequences.

Last but not least, we would like to implement Lincx as a specification language for Beluga to enable reasoning about linear specifications in practice.