SecCSL: Security Concurrent Separation Logic
Abstract
We present SecCSL, a concurrent separation logic for proving expressive, data-dependent information flow security properties of low-level programs. SecCSL is considerably more expressive, while being simpler, than recent compositional information flow logics that cannot reason about pointers, arrays, etc. To capture security concerns, SecCSL adopts a relational semantics for its assertions. At the same time it inherits the structure of traditional concurrent separation logics; thus SecCSL reasoning can be automated via symbolic execution. We demonstrate this by implementing SecC, an automatic verifier for a subset of the C programming language, which we apply to a range of benchmarks.
1 Introduction
Software verification successes abound, whether via interactive proof or via automatic program verifiers. While the former has yielded individual, deeply verified software artifacts [21, 24, 25] primarily by researchers, the latter appears to be having a growing impact on industrial software engineering [11, 36, 39].
At the same time, recent work has heralded major advancements in program logics for reasoning about secure information flow [23, 33, 34]—i.e. whether programs properly protect their secrets—yielding the first general program logics and proofs of information flow security for nontrivial concurrent programs [34]. Yet so far, such logics have remained confined to interactive proof assistants, making them practically inaccessible to industrial developers.
This is not especially surprising. The Covern logic [34], for example, pays for its generality with respect to expressive security policies in terms of complexity. Worse, these logics reason only over very simple toy programming languages, which lack support even for pointers, arrays, and structures. Their complexity, we argue, hinders proof automation and makes scaling these logics up to real-world languages impractical. How, therefore, can we leverage the power of existing automatic deductive verification approaches for security proofs?
In this paper we present Security Concurrent Separation Logic (SecCSL), which achieves an unprecedented combination of simplicity, power, and ease of automation by capturing core concepts such as data-dependent variable sensitivity [27, 31, 50], and shared invariants on sensitive memory [34] in the familiar style of Concurrent Separation Logic (CSL) [38], as exemplified in Sect. 2.
Prior work [14, 20] has noted the promise of separation logic for reasoning about information flow, yet to date that promise remains unrealised. Indeed, the only two prior encodings of information flow concepts into separation logics of which we are aware overlook crucial features like concurrency [14], and lack the ability to separately specify the sensitivity of values and of memory locations, as we explain in Sect. 2. The logic of [20] lacks soundness arguments altogether, while that of [14] fails to satisfy basic properties needed for automation (see the discussion following Proposition 1).
Designing a logic with the right combination of features, with the right semantics, is therefore nontrivial. To manage this, SecCSL assertions have a relational interpretation [6, 49] over a standard heap model (Sect. 3). This allows one to canonically encode information flow concepts while maintaining the approach and structure of traditional CSL proofs. To do so we adapt existing proof techniques for the soundness of CSL [46] into a compositional information flow security property (Sect. 4) that, like SecCSL itself, is simple and powerful. We have mechanized the soundness of SecCSL in Isabelle/HOL [37].
To demonstrate SecCSL’s ease of use and capacity for automation, we implemented the prototype tool SecC (Sect. 5). We target C because it dominates low-level security-critical code. SecC automates SecCSL reasoning via symbolic execution, in the style of contemporary Separation Logic program verifiers like VeriFast [22], Viper [30], and Infer [10]. SecC correctly analyzes well-known benchmark problems (collected in [17]) within a few milliseconds; and we verify a variant of the CDDC case study [5] from the Covern project. Our Isabelle theories, the open-source prototype tool SecC, and examples are available online at https://covern.org/secc [18].
2 An Overview of SecCSL
2.1 Specifying Information Flow Control in SecCSL
Consider the program in Fig. 1. It maintains a global pointer rec to a shared record, protected by the lock mutex. The is_classified field of the record identifies the confidentiality of the record’s data: when is_classified is true, the value stored in the data field is confidential, and otherwise it is safe to release publicly. The left thread outputs the data in the record whenever it is public by writing to the (memory-mapped) output device register pointer OUTPUT_REG (here also protected by mutex). The right thread updates the record, ensuring its content is not confidential, here by clearing its data.
Suppose assigning a value d to the OUTPUT_REG register causes d to be output to a publicly visible location. Reasoning that the example is secure then requires capturing that (1) the data field of the record pointed to by rec is confidential precisely when the record’s is_classified field says it is, and (2) the data sink OUTPUT_REG should never have confidential data written to it. Therefore the example only ever writes non-confidential data into OUTPUT_REG.
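For reference, the program of Fig. 1 can be sketched in C as follows. This is a minimal, runnable rendition: the record and lock names follow the paper’s description, the loops are bounded here for illustration, and OUTPUT_REG is modelled as a plain global rather than a device register.

```c
#include <pthread.h>
#include <stdbool.h>

/* Sketch of Fig. 1: a shared record protected by a mutex. */
struct record { bool is_classified; int data; };

static struct record   rec_storage = { true, 42 };
static struct record  *rec = &rec_storage;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static volatile int    OUTPUT_REG;   /* stand-in for the device register */

/* Left thread: publish rec->data only when it is not classified. */
static void *output_thread(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&mutex);
        if (!rec->is_classified)
            OUTPUT_REG = rec->data;  /* SecCSL: the stored value must be low */
        pthread_mutex_unlock(&mutex);
    }
    return 0;
}

/* Right thread: declassify the record; data must be cleared *before*
 * is_classified is set to false, so that the lock invariant is
 * re-established when mutex is released. */
static void *update_thread(void *arg) {
    (void)arg;
    pthread_mutex_lock(&mutex);
    rec->data = 0;
    rec->is_classified = false;
    pthread_mutex_unlock(&mutex);
    return 0;
}
```

In the full program the two functions run concurrently, e.g. via pthread_create; the verifier’s obligation is that every store to OUTPUT_REG writes low-sensitivity data.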
2.2 Reasoning in SecCSL
When acquiring a lock one gets to assume that the lock’s invariant holds [38]. Subsequently, when releasing the lock one must prove that the invariant has been re-established. For example, when reasoning about the code of the left thread in Fig. 1, upon acquiring the mutex, SecCSL adds formula (4) to the intermediate assertion, which allows proving that the loop body is secure. When reasoning about the right thread, one must prove that the invariant has been re-established when it releases the mutex. This is the reason e.g. that the right thread must clear the data field after setting is_classified to false.
Reasoning in SecCSL proceeds forward over the program text according to the rules in Fig. 4. When execution forks, as in Fig. 1, one reasons over each thread individually. For Fig. 1, SecCSL requires proving that the guard of the if-condition is \(\texttt {low}\), i.e. that the program is not branching on a secret (rule If in Fig. 4), which would correspond to a timing channel, see Sect. 2.3 below. This follows from the part \(c \mathbin {::}\texttt {low}\) of invariant (4). Secondly, after the write to OUTPUT_REG, SecCSL requires that the expression being written to the location OUTPUT_REG has sensitivity \(\texttt {low}\) (rule Store in Fig. 4). This follows from \(d \mathbin {::}(c \mathrel {?} \texttt {high}: \texttt {low})\) in the invariant, which simplifies to \(d \mathbin {::}\texttt {low}\) given the guard \(c \equiv \texttt {false}\) of the if-statement. Finally, when the right thread releases mutex, invariant (4) holds for the updated contents of \(\texttt {rec}\) (rule Unlock in Fig. 4).
2.3 Security Intuition and Informal Security Property
But what does security mean in SecCSL? A SecCSL judgement \(\ell _A \vdash \{ P \}~c~\{ Q \}\) additionally implies that the program c does not leak any sensitive information to potential attackers during its execution.
The attacker security level \(\ell _A\) in (3) represents an upper bound on the parts of the program’s memory that a potential, passive attacker is assumed to be able to observe before, during, and after the program’s execution. Intuitively this encompasses all memory locations whose sensitivity is \(\sqsubseteq \ell _A\). Which memory locations have sensitivity \(\sqsubseteq \ell _A\) is defined by the location-sensitivity assertions in the precondition P and the lock invariants: a memory location \( loc \) is visible to the \(\ell _A\)-attacker iff P or a lock invariant contains some \({e_p} \mapsto _{{e_l}} {e_v}\) and in the program’s initial state \({e_p}\) evaluates to \( loc \) and \({e_l}\) evaluates to some label \(\ell \) such that \(\ell \sqsubseteq \ell _A\) (see Fig. 3).
Which data is sensitive and should not be leaked to the \(\ell _A\)-attacker is defined by the value-sensitivity assertions in P and the lock invariants: an expression e is sensitive when P or a lock invariant contains some \(e \mathbin {::}{{e_l}}\) and in the program’s initial state \({e_l}\) evaluates to some \(\ell \) with \(\ell \not \sqsubseteq \ell _A\). Security, then, requires that in all intermediate states of the program’s execution no sensitive data (as defined by value-sensitivity assertions) can be inferred via the attacker-observable memory (as defined by location-sensitivity assertions).
SecCSL proves a compositional security property that formalises this intuition (see Definition 3). Since the property needs to be compositional with regard to concurrent execution, the resulting security property is timing-sensitive, meaning that not only must the program never reveal sensitive data into attacker-observable memory locations, but the times at which it updates these locations cannot depend on sensitive data. It is well-known that timing-insensitive security properties are not compositional under standard scheduling models [34, 48]. For this reason SecCSL forbids programs from branching on sensitive values. We believe that this restriction could in principle be relaxed in the future via established techniques [28, 29].
SecCSL’s top-level soundness (Sect. 4) formalises the above intuitive definition of security in the style of traditional noninterference [19], which compares two program executions with respect to the observations that can be made by an attacker. SecCSL adopts a relational interpretation for the assertions P and Q, and the lock invariants, in which they are evaluated against pairs of execution states. This relational semantics directly expresses the comparison needed for noninterference. As a result, most of the complexities related to SecCSL’s soundness are confined to the semantic level, whereas the calculus retains its similarity to standard separation logic and hence its simplicity.
Under this relational semantics (see Fig. 2 in Sect. 3), when a pair of states satisfies an assertion P, the two states agree on the values of all non-sensitive expressions as defined by P (Lemma 1). Noninterference is then stated as Theorem 2: program c with precondition P is secure against the \(\ell _A\)-attacker if, whenever it is executed twice from two initial states jointly satisfying P and the lock invariants (and so agreeing on the values of all data assumed to be initially observable to the \(\ell _A\)-attacker), in all intermediate pairs of states arrived at after running each execution for the same number of steps, the resulting states again agree on that initially \(\ell _A\)-visible memory. This definition is timing-sensitive, as it compares executions that take the same number of steps.
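The shape of this noninterference statement can be illustrated by a toy, brute-force check. Both example programs and all names here are hypothetical: low models the attacker-observable input/output and high the secret.

```c
#include <stdbool.h>

/* Toy, brute-force rendition of noninterference: run a tiny "program"
 * twice from states that agree on the low (attacker-visible) input, and
 * require the attacker-visible output to agree as well. */
static int secure_prog(int low, int high) { (void)high; return low * 2; }
static int leaky_prog (int low, int high) { return low + (high & 1); }

/* For every pair of initial states agreeing on low, outputs must agree. */
static bool noninterferent(int (*prog)(int, int)) {
    for (int low = 0; low < 8; low++)
        for (int h1 = 0; h1 < 8; h1++)
            for (int h2 = 0; h2 < 8; h2++)
                if (prog(low, h1) != prog(low, h2))
                    return false;
    return true;
}
```

SecCSL of course establishes such agreement deductively, for heaps rather than scalar outputs, and at every intermediate step of the two executions rather than only at the end.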
3 The Logic SecCSL
3.1 Assertions
The standard expression semantics \(\llbracket e \rrbracket _{s}\) evaluates e over a store s, which assigns values to variables x as s(x). The interpretation \(f^\mathcal {A}\) of a function symbol f is a function, given statically by a logical structure \(\mathcal {A}\). Specifically, \(\sqsubseteq ^\mathcal {A}\) is the semantic ordering of the security lattice. We write \(s \models \phi \) if \(\llbracket \phi \rrbracket _s = true \).
To capture strong security properties, we require a declarative specification of which heap locations are considered visible to the \(\ell \)-attacker, when assertion P holds in some (initial) state (see Sect. 2.3). We define this set in Fig. 3, denoted \(\mathrm {lows}_\ell (P,s)\) for initial store s. Note that, by design, the definition does not give a useful result for an existential like \(\exists x.\ x \mapsto _{{e_l}} {e_v}\). This mirrors the usual difficulty of defining footprints for non-precise separation logic assertions [26, 40, 41]. This restriction is not an issue in practice, as location-sensitivity assertions \({e_p} \mapsto _{{e_l}} {e_v}\) are intended to describe the static regions of memory (data sinks) visible to the attacker, for which existential quantification over variables free in \({e_p}\) or \({e_l}\) is not necessary. A generalization to all precise predicates should be possible.
3.2 Entailments
Although implication between spatial formulas is not part of the assertion language, entailments \(P {\mathop {\implies }\limits ^{\ell }} Q\) between assertions still play a role in SecCSL’s Hoare-style consequence rule (Conseq in Fig. 4). We discuss entailment now as it sheds useful light on some consequences of SecCSL’s relational semantics.
Definition 1 (Secure Entailment)

\(P {\mathop {\implies }\limits ^{\ell }} Q\) holds iff

1. \((s,h), (s',h') \models _\ell P\) implies \((s,h), (s',h') \models _\ell Q\), for all s, h and \(s',h'\); and

2. \(\mathrm {lows}_\ell (P,s) \subseteq \mathrm {lows}_\ell (Q,s)\), for all s.
The security level \(\ell \) is used not just in the evaluation of the assertions but also to preserve the \(\ell \)attacker visible locations of P in Q. This reflects the intuition that P is stronger than Q, and so Q should make fewer assumptions than P on the limitations of an attacker’s observational powers.
Proposition 1
Entailment (14) in Proposition 1 shows that sensitivity of values is compatible with equality. This property fails in the security separation logic of [14], where labels are part of the semantics of expressions but are not compared by equality. The second property (15) captures the intuition that less-sensitive data can always be used in contexts where more-sensitive data is expected (but not vice versa). Recall that \({e_l}'\) here is an expression (abusing notation, we let a semantic level \(\ell \) stand for some expression that denotes it). The additional condition \({e_l}' \mathbin {::}\ell \) guarantees that this expression denotes a meaningful security level, i.e. evaluates identically in both states (cf. (7)). Property (16) encodes that constants do not depend on any state; again the security level expression \({e_l}\) must be meaningful, but trivially \(c \mathbin {::}\ell \) when \(\ell \) is constant, too. Value sensitivity is congruent with function application (17). This is not surprising, as functions map arguments that are equal in both states to equal results. Yet, as with (14) above, this property fails in [14], where security labels are attached to values. Note that the reverse entailment is false (e.g. for the constant function \(\lambda x.\, c\)).
Via (18), when \({e_p} \mapsto _{{e_l}} {e_v}\) holds, it follows that both the location \({e_p}\) and the value \({e_v}\) adhere to the level \({e_l}\), cf. (9). Note that the antecedent \({e_p} \mapsto _{{e_l}} {e_v}\) is repeated in the consequent to ensure that the set of \(\ell \)-attacker visible locations is preserved. Conditional assertions can be resolved when the test is definite, provided that P and Q describe the same set of public locations (19), and symmetrically for \(\lnot \phi \). Finally, separating conjunction is monotone wrt. entailment (20).
3.3 Proof System
4 Security Definition and Soundness
The soundness theorem for SecCSL guarantees that if some triple \(\ell \vdash \{ P \}~c~\{ Q \}\) is derived using the rules of Fig. 4, then: all executions of c started in a state satisfying precondition P are memory safe, partially correct with respect to postcondition Q, and moreover secure with respect to the sensitivity of values as denoted by P and Q and at all times respect the sensitivity of locations as denoted by P (see Sect. 2.3). Proof outlines are relegated to Appendix B. All results have been mechanised in Isabelle/HOL [37] and are available at [18].
The top-level security property of SecCSL is a noninterference condition [19]. Noninterference as a security property specifies, roughly, that for any pair of executions that start in states agreeing on the values of all attacker-observable inputs, the resulting executions are indistinguishable from the attacker’s point of view, i.e. all attacker-visible observations agree. In SecCSL, what is “attacker-observable” depends on the attacker level \(\ell \). The “inputs” are the expressions e, and the attacker-visible inputs are those expressions e whose sensitivity is given by \(e \mathbin {::}{\ell '}\) judgements in the precondition P for which \(\ell ' \sqsubseteq \ell \). The attacker-visible observations are the contents of all memory locations in \(\mathrm {lows}_\ell (P,s)\), for initial store s and precondition P. Thus we define when two heaps are indistinguishable to the \(\ell \)-attacker.
Definition 2
(\(\ell \)-Equivalence). Two heaps coincide on a set of locations A, written \(h \equiv _A h'\), iff for all \(a \in A\), \(a \in \mathrm {dom}(h) \cap \mathrm {dom}(h')\) and \(h(a) = h'(a)\). Two heaps h and \(h'\) are \(\ell \)-equivalent wrt. store s and assertion P, if \(h \equiv _A h'\) for \(A = \mathrm {lows}_\ell (P,s)\).
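Definition 2 is directly executable on a toy heap model. The following sketch (all names illustrative) checks \(h \equiv _A h'\) for finite heaps represented as arrays, with an allocation bitmap standing in for \(\mathrm{dom}(h)\):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy heap model: locations are array indices 0..HEAP_SIZE-1. */
#define HEAP_SIZE 8

struct heap {
    bool allocated[HEAP_SIZE];   /* a is in dom(h)  */
    int  value[HEAP_SIZE];       /* h(a)            */
};

/* h coincides with h2 on the location set A (of size n) iff every a in A
 * is allocated in both heaps and holds equal values. */
static bool heaps_coincide(const struct heap *h, const struct heap *h2,
                           const size_t *A, size_t n) {
    for (size_t i = 0; i < n; i++) {
        size_t a = A[i];
        if (a >= HEAP_SIZE) return false;            /* outside the model    */
        if (!h->allocated[a] || !h2->allocated[a])   /* a in dom(h)∩dom(h')  */
            return false;
        if (h->value[a] != h2->value[a])             /* h(a) = h'(a)         */
            return false;
    }
    return true;
}
```

In the logic, A is instantiated with \(\mathrm {lows}_\ell (P,s)\) to obtain \(\ell \)-equivalence.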
Then, the \(\ell \)-validity of an assertion P in the relational semantics witnesses \(\ell \)-equivalence between the corresponding heaps.
Lemma 1
If \((s,h), (s',h') \models _\ell P\), then \(h \equiv _A h'\) for \(A = \mathrm {lows}_\ell (P,s)\).
Furthermore, if \((s,h), (s',h') \models _\ell P\), then \(\mathrm {lows}_\ell (P,s) = \mathrm {lows}_\ell (P,s')\), since the security levels in labeled points-to predicates must coincide on s and \(s'\), cf. (9).
Semantics. Semantic configurations, denoted by k in the following, are one of three kinds: \((\mathbf {run}\ c,L,s,h)\) denotes a command c in a state s, h where L is a set of locks that are currently not held by any thread and can be acquired by c; \((\mathbf {stop}\ L,s,h)\) similarly denotes a final state s, h with residual locks L, and \(\mathbf {abort}\) results from invalid heap access.
The single-step relation \((\mathbf {run}\ c,L,s,h) {\mathop {\longrightarrow }\limits ^{\sigma }} k\) takes running configurations to successors k with respect to a schedule \(\sigma \) that resolves the nondeterminism of parallel composition. The schedule \(\sigma \) is a list of actions: the action \(\langle \tau \rangle \) represents the execution of atomic commands and the evaluation of conditionals; the actions \(\langle 1\rangle \) and \(\langle 2\rangle \) respectively denote the execution of the left- and right-hand sides of a parallel composition for a single step, and so define a deterministic scheduling discipline reminiscent of separation kernels [32]. For example, \((\mathbf {run}~c_1~\parallel ~c_2,L,s,h) {\mathop {\longrightarrow }\limits ^{\langle 1 \rangle \cdot \sigma }} (\mathbf {run}\ c'_1 \parallel c_2,L',s',h')\) if \((\mathbf {run}\ c_1,L,s,h) {\mathop {\longrightarrow }\limits ^{\sigma }} (\mathbf {run}\ c'_1,L',s',h')\). Configurations \((\mathbf {run}\ \texttt {lock}\ l,L,s,h)\) can only be scheduled if \(l \in L\) (symmetrically for \(\texttt {unlock}\)) and otherwise block without a possible step.
Executions \(k_1 \xrightarrow {~\sigma _1 \cdots \sigma _n~}^*~k_{n+1}\) chain several steps \(k_i {\mathop {\longrightarrow }\limits ^{\sigma _i}} k_{i+1}\) by accumulating the schedule. We are considering partial correctness only, thus the schedule is always finite and so are all executions. The rules for program steps are otherwise standard and can be found in Appendix A.
Compositional Security. To prove that SecCSL establishes its top-level noninterference condition, we first define a compositional security condition that provides the central characterization of security for a command c with respect to precondition P and postcondition Q. That central, compositional property we denote \(\mathrm {secure}_\ell ^n(P, c, Q)\) and formalize below in Definition 3. It ensures that the first n steps (or fewer if the program terminates before that) are safe and preserve \(\ell \)-equivalence of the heap locations specified initially in P, but in a way that is compositional across multiple execution steps, across multiple threads of execution, and across different parts of the heap. It is somewhat akin to, although more precise than, prior characterizations based on strong low bisimulation [16, 45].
Moreover, each \(P_{i+1} \mathrel {\star }\mathrm {invs}(L_{i+1})\) is required to preserve the sensitivity of all \(\ell \)-visible heap locations of \(P_{i} \mathrel {\star }\mathrm {invs}(L_i)\), i.e. so that \(\mathrm {lows}_\ell (P_{i} \mathrel {\star }\mathrm {invs}(L_i), s_i) \subseteq \mathrm {lows}_\ell (P_{i+1} \mathrel {\star }\mathrm {invs}(L_{i+1}), s_{i+1})\). If the program terminates at some step \(m \le n\), then \(P_{m+1} = Q\), ensuring the postcondition holds when the executions terminate. Lastly, neither execution is allowed to reach an \(\mathbf {abort}\) configuration.
If the initial state satisfies \(P_1 \mathrel {\star }F \mathrel {\star }\mathrm {invs}(L_1)\) then (22) holds throughout the entire execution, and establishes the endtoend property that any final state indeed satisfies the postcondition and that \(\mathrm {lows}_\ell (P_1 \mathrel {\star }\mathrm {invs}(L_1), s_1) \subseteq \mathrm {lows}_\ell (P_i \mathrel {\star }\mathrm {invs}(L_i), s_i)\) with respect to the initially specified low locations.
The property \(\mathrm {secure}_\ell ^n(P, c, Q)\) is defined recursively to match the steps of the lockstep execution of the program.
Definition 3 (Security)

\(\mathrm {secure}_\ell ^0(P_1, c_1, Q)\) always holds.

\(\mathrm {secure}_\ell ^{n+1}(P_1, c_1, Q)\) holds iff for all pairs of states \((s_1,h_1)\), \((s_1',h_1')\), frames F, and sets of locks \(L_1\) such that \((s_1,h_1), (s_1',h_1') \models _\ell P_1 \mathrel {\star }F \mathrel {\star }\mathrm {invs}(L_1)\), and given two steps \((\mathbf {run}\ c_1,L_1,s_1,h_1) {\mathop {\longrightarrow }\limits ^{\sigma }} k\) and \((\mathbf {run}\ c_1,L_1,s_1',h_1') {\mathop {\longrightarrow }\limits ^{\sigma }} k'\), there exists an assertion \(P_2\) and a set of locks \(L_2\) such that either

\(k = (\mathbf {stop}\ L_2,s_2,h_2)\) and \(k' = (\mathbf {stop}\ L_2,s'_2,h'_2)\) and \(P_2 = Q\), or

\(k = (\mathbf {run}\ c_2,L_2,s_2,h_2)\) and \(k' = (\mathbf {run}\ c_2,L_2,s'_2,h'_2)\) with \(\mathrm {secure}_\ell ^{n}(P_2, c_2, Q)\),

where in both cases \((s_2,h_2), (s_2',h_2') \models _\ell P_2 \mathrel {\star }F \mathrel {\star }\mathrm {invs}(L_2)\) and \(\mathrm {lows}_\ell (P_{1} \mathrel {\star }\mathrm {invs}(L_1), s_1) \subseteq \mathrm {lows}_\ell (P_2 \mathrel {\star }\mathrm {invs}(L_2), s_2)\).
Two further side conditions are imposed, ensuring all mutable shared state lies in the heap (cf. Sect. 3): \(c_1\) does not modify variables occurring in \(\mathrm {invs}(L_1)\) and F (which guarantees that both remain intact), and the free variables of \(P_2\) may only be those already present in \(P_1\), \(c_1\), or in some lock invariant (which guarantees that \(P_2\) remains stable against concurrent assignments). Note that each step can pick a different frame F, as required for the soundness of rule Par.
Lemma 2
\(\ell \vdash \{ P \}~c~\{ Q \}\) implies \(\mathrm {secure}_\ell ^n(P,c,Q)\) for every \(n \ge 0\).
Safety, Correctness and Noninterference. Execution safety and correctness with respect to pre- and postconditions follow straightforwardly from Lemma 2.
Corollary 1 (Safety)
Given initial states \((s_1,h_1), (s'_1,h'_1) \models _\ell P \mathrel {\star }\mathrm {invs}(L_1)\) and two executions of a command c under the same schedule reaching configurations k and \(k'\) respectively, \(\ell \vdash \{ P \}~c~\{ Q \}\) implies \(k \not = \mathbf {abort}\wedge k' \not = \mathbf {abort}\).
Theorem 1 (Correctness)
The top-level noninterference property also follows from Lemma 2 via Lemma 1. For brevity, we state the noninterference property directly in the theorem:
Theorem 2 (Noninterference)
Given a command c and initial states \((s_1,h_1), (s'_1,h'_1) \models _\ell P \mathrel {\star }\mathrm {invs}(L_1)\), \(\ell \vdash \{ P \}~c~\{ Q \}\) implies \(h_i \equiv _A h'_i\), where \(A = \mathrm {lows}_\ell (P \mathrel {\star }\mathrm {invs}(L_1),s_1)\), for all pairs of heaps \(h_i\) and \(h'_i\) arising from executing the same schedule from each initial state.
5 SecC: Automating SecCSL
To demonstrate the ease by which SecCSL can be automated, we develop the prototype tool SecC, available at [18]. It implements the logic from Sect. 3 for a subset of C. SecC is currently used to explore reasoning about example programs with interesting security policies. Thus its engineering has focused on features related to security reasoning (e.g. deciding when conditions \(e \mathbin {::}{e_l}\) are entailed) rather than reasoning about complex data structures.
Symbolic Execution. SecC automates SecCSL through symbolic execution, as pioneered for SL in [7]. Similarly to VeriFast’s algorithm in [22], the verifier computes the strongest postcondition of a command c when executed in a symbolic state, yielding a set of possible final symbolic states. Each such state \(\sigma = (\rho , s, \underline{P})\) maintains a path condition \(\rho \) of relational formulas (from procedure contracts, invariants, and the evaluation of conditionals) and a symbolic heap described by a list \(\underline{P} = (P_1 \star \cdots \star P_n)\) of atomic spatial assertions (pointsto and instances of defined predicates). The symbolic store s maps program variables to pure expressions, where s(e) denotes substituting s into e. As an example, when \(P_i = s({e_p}) \mapsto v\) is part of the symbolic heap, a load \(x := {e_p}\) in \(\sigma \) can be executed to yield the updated state \((\rho , s(x := v), \underline{P})\) where x is mapped to v.
To find the \(P_i\) we match the lefthand sides of pointsto predicates. Similarly, matching is used during checking of entailments \(\rho _1 \wedge \underline{P} {\mathop {\implies }\limits ^{\ell }} \exists \ \underline{x}.\ \rho _2 \wedge \underline{Q}\), where the conclusion is normalized to prenex form. The entailment is reduced to a nonspatial problem by incrementally computing a substitution \(\tau \) for the existentials \(\underline{x}\), removing pairs \(P_i = \tau (Q_j)\) in the process, as justified by (20) (see also “subtraction rules” in [7, Sec. 4]).
Finally, the remaining relational problem \(\rho _1 \Rightarrow \rho _2\) without spatial connectives can be encoded into a first-order problem [17], by duplicating the pure formulas in terms of fresh variables that represent the second state, and by the syntactic equivalent of (7). The resulting verification condition is discharged with Z3 [15]. This translation is semantically complete. For example, consider Fig. 4 from Prabawa et al. [43]: it has a conditional if(b == b) ..., whose check \((b = b) \mathbin {::}\texttt {low}\), translated to \((b = b) = (b' = b')\) by SecC, holds independently of b’s sensitivity.
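The if(b == b) example can be checked by brute force over the values of b in the two executions (a toy rendition of the encoding; b2 plays the role of the primed copy \(b'\)):

```c
#include <stdbool.h>

/* The relational encoding duplicates pure formulas over fresh primed
 * variables: e :: low becomes e = e', where e' renames every variable.
 * For the check (b == b) :: low, the verification condition is
 * (b == b) == (b' == b'), valid for every b, b'. */
static bool vc_b_eq_b_low(void) {
    for (int b = 0; b <= 1; b++)
        for (int b2 = 0; b2 <= 1; b2++)
            if ((b == b) != (b2 == b2))
                return false;
    return true;
}

/* By contrast, b :: low itself translates to b == b', which fails when
 * the two executions disagree on b. */
static bool vc_b_low(void) {
    for (int b = 0; b <= 1; b++)
        for (int b2 = 0; b2 <= 1; b2++)
            if (b != b2)
                return false;
    return true;
}
```

SecC of course delegates such validity checks to Z3 rather than enumerating states.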
Features. In addition to the logic from Sect. 3, SecC supports procedure-modular verification with pre/postconditions as usual; and it supports user-defined spatial predicates. While some issues of the C source language, such as integer overflow, are not addressed (yet), those that directly impact information flow security are taken into account. Specifically, the short-circuit boolean operators \(\texttt{\&\&}\) and \(\texttt{||}\), and the ternary operator \(\_ \ {?}\ \_ \ {\texttt {:}}\ \_\), count as branching points, and as such the left-hand side (resp. the test) must not depend on sensitive data, similarly to the conditions of \(\texttt {if}\) statements and \(\texttt {while}\) loops.
A direct benefit of the integration of security levels into the assertion language is that it becomes possible to specify the sensitivity of data passed to library and operating system functions. For example, the execution time of malloc(len) would depend on the value of len, which can thus be required to satisfy \(\texttt {len} \mathbin {::}\texttt {low}\) by annotating its function header with an appropriate precondition, using SecC ’s requires annotation. Likewise, SecC can reason about limited forms of declassification, in which external functions are trusted to safely release otherwise sensitive data, by giving them appropriate pre/postconditions. For example, a password hashing library function prototype might be annotated with a postcondition asserting its result is \(\texttt {low}\), via SecC ’s ensures annotation.
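Such contracts might look as follows. This is a sketch only: the requires/ensures annotations are rendered as comments since the concrete annotation syntax is not shown here, and both function names and bodies are illustrative stand-ins (djb2 substitutes for a real password hash).

```c
#include <stdlib.h>

/* requires: len :: low
 * -- malloc's execution time may depend on len, so len must be public. */
void *alloc_buffer(size_t len) {
    return malloc(len);
}

/* requires: secret :: high
 * ensures:  result :: low
 * -- trusted declassification: the hash, not the password itself, is
 *    deemed safe to release. */
unsigned hash_password(const char *secret) {
    unsigned h = 5381;                       /* djb2, as a stand-in */
    while (*secret)
        h = h * 33 + (unsigned char)*secret++;
    return h;
}
```

A call site then discharges the precondition (e.g. by passing a public constant length) and may treat the returned hash as low.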
Examples and Case Study. SecC proves Fig. 1 secure, and correctly flags buggy variants as insecure, e.g. where the test in thread 1 is reversed, or when thread 2 does not clear the data field upon setting is_classified to false. SecC also correctly analyzes the 7 examples from [17] that are supported by the logic and tool (each in \(\sim \)10 ms). All examples are available at [18].
To compare SecC and SecCSL against the recent Covern logic [34], we took a nontrivial example program that Murray et al. verified in Covern, manually translated it to C, and verified it automatically using SecC. The original program, written in Covern’s tiny While language embedded in Isabelle/HOL, models the software functionality of a simplified implementation of the Cross Domain Desktop Compositor (CDDC) [5]. The CDDC is a device that facilitates interactions with multiple PCs, each of which runs applications at differing sensitivity, from a single keyboard, mouse and display. Its multithreaded software handles routing of keyboard input to the appropriate PC and switching between the PCs via mouse gestures. Verifying the C translation required adding SecCSL annotations for procedure pre/postconditions and loop invariants. The C translation including those annotations is \(\sim \)250 lines in length. The present, unoptimised, implementation of SecC verifies the resulting artifact in \(\sim \)5 s. In contrast, the Covern proof of this example requires \(\sim \)600 lines of Isabelle/HOL definitions/specification, plus \(\sim \)550 lines of Isabelle proof script.
6 Related Work
There has been much work targeting type systems and program logics for concurrent information flow. Karbyshev et al. [23] provide an excellent overview. Here we concentrate on work whose ideas are most closely related to SecCSL.
Costanzo and Shao [14] propose a sequential separation logic for reasoning about information flow. Unlike SecCSL, theirs does not distinguish value from location sensitivity. Their separation logic assertions have a fairly standard (non-relational) semantics, at the price of having a security-aware language semantics that propagates security labels attached to values in the store and heap. As mentioned in Sect. 3.2, this has the unfortunate side-effect of breaking intuitive properties about sensitivity assertions. We conjecture that the absence of such properties would make their logic harder to automate than SecCSL, which SecC demonstrates is feasible. SecCSL avoids the aforementioned drawbacks by adopting a relational assertion semantics.
Gruetter and Murray [20] propose a security separation logic in Coq [8] for Verifiable C, the C subset of the Verified Software Toolchain [2, 3]. However they provide no soundness proof for its rules and its feasibility to automate is unclear.
Two recent compositional logics for concurrent information flow are the Covern logic [34] and the type and effect system of Karbyshev et al. [23]. Both borrow ideas from separation logic. However, unlike SecCSL, neither is defined for languages with pointers, arrays etc.
Like SecCSL, Covern proves a timingsensitive security property. Location sensitivity is defined statically by valuedependent predicates, and value sensitivity is tracked by a dependent security typing context \(\Gamma \) [35], relative to a Hoare logic predicate P over the entire shared memory. In Covern locks carry nonrelational invariants. In contrast, SecCSL unifies these elements together into separation logic assertions with a relational semantics. Doing so leads to a much simpler logic, amenable to automation, while supporting pointers, etc.
On the other hand, Karbyshev et al. [23] prove a timing-insensitive security property, but rely on primitives to interact with the scheduler to prevent leaks via scheduling decisions. Unlike SecCSL, which assumes a deterministic scheduling discipline, Karbyshev et al. support a wider class of scheduling policies. Their system tracks resource ownership and transfer between threads at synchronisation points, similarly to CSLs. Their resources include labelled scheduler resources that account for scheduler interaction, including when scheduling decisions become tainted by secret data, something that cannot occur in SecCSL's deterministic scheduling model.
Prior logics for sequential languages, e.g. [1, 4], have also adopted separation logic ideas to reason locally about memory, combining them with relational assertions similar to SecCSL's \(e \mathbin {::}{e_l}\) assertions. For instance, the agreement assertions \(\mathsf {A}(e)\) of [4] coincide with SecCSL's \(e \mathbin {::}\texttt {low}\). Unlike SecCSL, some of these logics support languages with explicit declassification actions [4].
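To make the correspondence concrete, the following sketch gives a relational reading of sensitivity assertions. It is an approximation for illustration only, not SecCSL's exact definition (which also accounts for the heap and partial expression evaluation); \(\mathcal {A}\) denotes the attacker's security level and s, s' the two related stores.

```latex
% Hedged sketch: a pair of stores (s, s') models e :: e_l iff, whenever
% the level denoted by e_l is observable by the attacker, the two runs
% agree on the value of e:
\[
  (s, s') \models e \mathbin{::} e_l
  \quad\text{iff}\quad
  \llbracket e_l \rrbracket_{s} \sqsubseteq \mathcal{A}
  \;\implies\;
  \llbracket e \rrbracket_{s} = \llbracket e \rrbracket_{s'}
\]
```

Instantiating \(e_l = \texttt {low}\) makes the premise trivially true, leaving exactly the two-run agreement \(\llbracket e \rrbracket_{s} = \llbracket e \rrbracket_{s'}\) expressed by \(\mathsf {A}(e)\) in [4].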
Self-composition is another technique to exploit existing verification infrastructure for proofs of general hyperproperties [13], including but not limited to noninterference. Eilers et al. [17] present such an approach for Viper, which supports an assertion language similar to that of separation logic. It does not support public heap locations (which are information sources and sinks at the same time), although sinks can be modelled via preconditions of procedures. A similar approach is implemented in Frama-C [9]. Neither [9] nor [17] supports concurrency, and it remains unclear how self-composition could avoid an exponential blow-up from concurrent interleavings, which SecCSL avoids.
The soundness proof for SecCSL follows the general structure of Vafeiadis' proof for CSL [46], which is also mechanised in Isabelle/HOL. There is, however, a technical difference: his analogue of Definition 3, a recursive predicate called \(\mathrm {safe}_n(c,s,h,Q)\), refers to a semantic initial state s, h, whereas we propagate only a syntactic assertion (22). Our formulation has the benefit that some of the technical reasoning in the soundness proof is easier to automate. Its drawback is the need to impose technical side-conditions on the free variables of the frame F and the intermediate assertions \(P_i\).
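For readers unfamiliar with [46], the rough shape of such a safety predicate is as follows; this is a hedged paraphrase that omits resource invariants and permission accounting, not Vafeiadis' exact definition:

```latex
% safe_n unfolds by induction on the step bound n:
\begin{align*}
  \mathrm{safe}_0(c,s,h,Q) &\;\text{holds always;} \\
  \mathrm{safe}_{n+1}(c,s,h,Q) &\;\text{iff}\;
    \bigl(c = \mathtt{skip} \implies (s,h) \models Q\bigr), \\
  &\quad c \text{ does not abort from } (s,h), \text{ and} \\
  &\quad \text{every step } (c,s,h) \to (c',s',h')
         \text{ satisfies } \mathrm{safe}_n(c',s',h',Q).
\end{align*}
```

The contrast in the text is that clauses of this form are stated against a fixed semantic state \((s,h)\), whereas SecCSL's Definition 3 threads a syntactic precondition through the recursion instead.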
7 Conclusion
We presented SecCSL, a concurrent separation logic for proving expressive, data-dependent information flow properties of programs. SecCSL is considerably simpler than contemporary logics, yet handles features like pointers, arrays, etc., which are out of their scope. It inherits the structure of traditional concurrent separation logics, and so, like those logics, can be automated via symbolic execution [10, 22, 30]. To demonstrate this, we implemented SecC, an automatic verifier for expressive information flow security for a subset of the C language.
Separation logic has proved to be a remarkably powerful vehicle for reasoning about programs, with applications to weak-memory concurrency [47], program synthesis [42], and many other domains. With SecCSL, we hope that the same possibilities will in future be opened to verified information flow security.
Acknowledgement
We thank the anonymous reviewers for their careful and detailed comments that helped significantly to clarify the discussion of finer points.
This research was sponsored by the Department of the Navy, Office of Naval Research, under award #N629091812049. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Office of Naval Research.
References
1. Amtoft, T., Bandhakavi, S., Banerjee, A.: A logic for information flow in object-oriented programs. In: Proceedings of Principles of Programming Languages (POPL), pp. 91–102. ACM (2006)
2. Appel, A.W., et al.: Program Logics for Certified Compilers. Cambridge University Press, New York (2014)
3. Appel, A.W., et al.: The Verified Software Toolchain (2017). https://github.com/PrincetonUniversity/VST
4. Banerjee, A., Naumann, D.A., Rosenberg, S.: Expressive declassification policies and modular static enforcement. In: Proceedings of Symposium on Security and Privacy (S&P), pp. 339–353. IEEE (2008)
5. Beaumont, M., McCarthy, J., Murray, T.: The cross domain desktop compositor: using hardware-based video compositing for a multi-level secure user interface. In: Annual Computer Security Applications Conference (ACSAC), pp. 533–545. ACM (2016)
6. Benton, N.: Simple relational correctness proofs for static analyses and program transformations. In: Proceedings of Principles of Programming Languages (POPL), pp. 14–25. ACM (2004)
7. Berdine, J., Calcagno, C., O'Hearn, P.W.: Symbolic execution with separation logic. In: Yi, K. (ed.) APLAS 2005. LNCS, vol. 3780, pp. 52–68. Springer, Heidelberg (2005). https://doi.org/10.1007/11575467_5
8. Bertot, Y., Castéran, P.: Interactive Theorem Proving and Program Development. Coq'Art: The Calculus of Inductive Constructions. Texts in Theoretical Computer Science. An EATCS Series. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-662-07964-5
9. Blatter, L., Kosmatov, N., Le Gall, P., Prevosto, V., Petiot, G.: Static and dynamic verification of relational properties on self-composed C code. In: Dubois, C., Wolff, B. (eds.) TAP 2018. LNCS, vol. 10889, pp. 44–62. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-92994-1_3
10. Calcagno, C., Distefano, D.: Infer: an automatic program verifier for memory safety of C programs. In: Bobaru, M., Havelund, K., Holzmann, G.J., Joshi, R. (eds.) NFM 2011. LNCS, vol. 6617, pp. 459–465. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20398-5_33
11. Calcagno, C., et al.: Moving fast with software verification. In: Havelund, K., Holzmann, G., Joshi, R. (eds.) NFM 2015. LNCS, vol. 9058, pp. 3–11. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17524-9_1
12. Chlipala, A.: Formal Reasoning About Programs (2016)
13. Clarkson, M.R., Schneider, F.B.: Hyperproperties. In: Proceedings of Computer Security Foundations Symposium (CSF), pp. 51–65 (2008)
14. Costanzo, D., Shao, Z.: A separation logic for enforcing declarative information flow control policies. In: Abadi, M., Kremer, S. (eds.) POST 2014. LNCS, vol. 8414, pp. 179–198. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54792-8_10
15. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
16. Del Tedesco, F., Sands, D., Russo, A.: Fault-resilient noninterference. In: Proceedings of Computer Security Foundations Symposium (CSF), pp. 401–416. IEEE (2016)
17. Eilers, M., Müller, P., Hitz, S.: Modular product programs. In: Ahmed, A. (ed.) ESOP 2018. LNCS, vol. 10801, pp. 502–529. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89884-1_18
18. Ernst, G., Murray, T.: SecC tool description and Isabelle theories for SecCSL (2019). https://covern.org/secc
19. Goguen, J., Meseguer, J.: Security policies and security models. In: Proceedings of Symposium on Security and Privacy (S&P), Oakland, California, USA, pp. 11–20, April 1982
20. Gruetter, S., Murray, T.: Short paper: towards information flow reasoning about real-world C code. In: Proceedings of Workshop on Programming Languages and Analysis for Security (PLAS), pp. 43–48. ACM (2017)
21. Gu, R., et al.: CertiKOS: an extensible architecture for building certified concurrent OS kernels. In: Proceedings of USENIX Symposium on Operating Systems Design and Implementation (OSDI), November 2016
22. Jacobs, B., Smans, J., Philippaerts, P., Vogels, F., Penninckx, W., Piessens, F.: VeriFast: a powerful, sound, predictable, fast verifier for C and Java. In: Bobaru, M., Havelund, K., Holzmann, G.J., Joshi, R. (eds.) NFM 2011. LNCS, vol. 6617, pp. 41–55. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20398-5_4
23. Karbyshev, A., Svendsen, K., Askarov, A., Birkedal, L.: Compositional noninterference for concurrent programs via separation and framing. In: Bauer, L., Küsters, R. (eds.) POST 2018. LNCS, vol. 10804, pp. 53–78. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89722-6_3
24. Klein, G., et al.: Comprehensive formal verification of an OS microkernel. ACM Trans. Comput. Syst. 32(1), 2:1–2:70 (2014)
25. Leroy, X.: Formal verification of a realistic compiler. Commun. ACM 52(7), 107–115 (2009)
26. Löding, C., Madhusudan, P., Murali, A., Peña, L.: A first order logic with frames. http://madhu.cs.illinois.edu/FOFrameLogic.pdf
27. Lourenço, L., Caires, L.: Dependent information flow types. In: Proceedings of Principles of Programming Languages (POPL), Mumbai, India, pp. 317–328, January 2015
28. Mantel, H., Sands, D.: Controlled declassification based on intransitive noninterference. In: Chin, W.N. (ed.) APLAS 2004. LNCS, vol. 3302, pp. 129–145. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30477-7_9
29. Mantel, H., Sands, D., Sudbrock, H.: Assumptions and guarantees for compositional noninterference. In: Proceedings of Computer Security Foundations Symposium (CSF), Cernay-la-Ville, France, pp. 218–232, June 2011
30. Müller, P., Schwerhoff, M., Summers, A.J.: Viper: a verification infrastructure for permission-based reasoning. In: Jobstmann, B., Leino, K.R.M. (eds.) VMCAI 2016. LNCS, vol. 9583, pp. 41–62. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49122-5_2
31. Murray, T.: Short paper: on high-assurance information-flow-secure programming languages. In: Proceedings of Workshop on Programming Languages and Analysis for Security (PLAS), pp. 43–48 (2015)
32. Murray, T., et al.: seL4: from general purpose to a proof of information flow enforcement. In: Proceedings of Symposium on Security and Privacy (S&P), San Francisco, CA, pp. 415–429, May 2013
33. Murray, T., Sabelfeld, A., Bauer, L.: Special issue on verified information flow security. J. Comput. Secur. 25(4–5), 319–321 (2017)
34. Murray, T., Sison, R., Engelhardt, K.: COVERN: a logic for compositional verification of information flow control. In: Proceedings of European Symposium on Security and Privacy (EuroS&P), London, United Kingdom, April 2018
35. Murray, T., Sison, R., Pierzchalski, E., Rizkallah, C.: Compositional verification and refinement of concurrent value-dependent noninterference. In: Proceedings of Computer Security Foundations Symposium (CSF), pp. 417–431, June 2016
36. Newcombe, C., Rath, T., Zhang, F., Munteanu, B., Brooker, M., Deardeuff, M.: How Amazon Web Services uses formal methods. Commun. ACM 58(4), 66–73 (2015)
37. Nipkow, T., Wenzel, M., Paulson, L.C. (eds.): Isabelle/HOL - A Proof Assistant for Higher-Order Logic. LNCS, vol. 2283. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45949-9
38. O'Hearn, P.W.: Resources, concurrency and local reasoning. In: Gardner, P., Yoshida, N. (eds.) CONCUR 2004. LNCS, vol. 3170, pp. 49–67. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28644-8_4
39. O'Hearn, P.W.: Continuous reasoning: scaling the impact of formal methods. In: Proceedings of Logic in Computer Science (LICS), pp. 13–25. ACM (2018)
40. O'Hearn, P.W., Yang, H., Reynolds, J.C.: Separation and information hiding. ACM Trans. Program. Lang. Syst. (TOPLAS) 31(3), 11 (2009)
41. Piskac, R., Wies, T., Zufferey, D.: Automating separation logic using SMT. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 773–789. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_54
42. Polikarpova, N., Sergey, I.: Structuring the synthesis of heap-manipulating programs. Proc. ACM Program. Lang. 3(POPL), 72 (2019)
43. Prabawa, A., Al Ameen, M.F., Lee, B., Chin, W.N.: A logical system for modular information flow verification. In: Verification, Model Checking, and Abstract Interpretation. LNCS, vol. 10747, pp. 430–451. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-73721-8_20
44. Reynolds, J.C.: Separation logic: a logic for shared mutable data structures. In: Proceedings of Logic in Computer Science (LICS), pp. 55–74. IEEE (2002)
45. Sabelfeld, A., Sands, D.: Probabilistic noninterference for multithreaded programs. In: Proceedings of Computer Security Foundations Workshop (CSFW), pp. 200–214. IEEE (2000)
46. Vafeiadis, V.: Concurrent separation logic and operational semantics. In: Proceedings of Mathematical Foundations of Programming Semantics (MFPS), pp. 335–351 (2011)
47. Vafeiadis, V., Narayan, C.: Relaxed separation logic: a program logic for C11 concurrency. In: Proceedings of Object-Oriented Programming Systems, Languages & Applications (OOPSLA), pp. 867–884. ACM (2013)
48. Volpano, D., Smith, G.: Probabilistic noninterference in a concurrent language. J. Comput. Secur. 7(2,3), 231–253 (1999)
49. Yang, H.: Relational separation logic. Theor. Comput. Sci. 375(1–3), 308–334 (2007)
50. Zheng, L., Myers, A.C.: Dynamic security labels and static information flow control. Int. J. Inf. Secur. 6(2–3), 67–84 (2007)
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.