1 Introduction

Runtime Verification (RV) is a lightweight verification technique that checks whether a system satisfies a correctness property by analysing the current execution of the system [20, 29], expressed as a trace of execution events. Using the additional information obtained at runtime, the technique can often mitigate state explosion problems typically associated with more traditional verification techniques. At the same time, limiting the verification analysis to the current execution trace hinders the expressiveness of RV when compared to more exhaustive approaches. In fact, there are correctness properties that cannot be satisfactorily verified at runtime (e.g. the finiteness of the trace considered up to the current execution point prohibits the verification of liveness properties). For this reason, RV is often used as part of a multi-pronged approach towards ensuring system correctness [5, 6, 8, 14, 15, 25], complementing other verification techniques such as model checking, testing and type checking.

In order to attain an effective verification strategy consisting of multiple verification techniques that include RV, it is crucial to understand the expressive power of each technique: one can then determine how to best decompose the verification burden into subtasks that can then be assigned to the most appropriate verification technique. Monitorability concerns itself with identifying the properties that are analysable by RV. In [21, 22] (and subsequently in [2]), the problem of monitorability was studied for properties expressed in a variant of the modal \(\mu \)-calculus [26] called \(\mu \mathrm{HML}\) [28]. The choice of the logic was motivated by the fact that it can embed widely used logics such as CTL and LTL, and by the fact that it is agnostic of the underlying verification method used—this leads to better separation of concerns and guarantees a good level of generality for the results obtained. The main result in [2, 21, 22] is the identification of a monitorable syntactic subset of the logic \(\mu \mathrm{HML}\) (i.e., a set of logical formulas for which monitors carrying out the necessary runtime analysis exist) that is shown to be maximally expressive (i.e., any property that is monitorable in the logic may be expressed in terms of this syntactic subset). We are unaware of other maximality results of this kind in the context of RV.

In this work we strive towards extending the monitorability limits identified in [2, 21, 22] for \(\mu \mathrm{HML}\). Particularly, for any logic or specification language, monitorability is a function of the underlying monitoring setup. In [2, 21, 22], the framework assumes a classical monitoring setup, whereby a (single) monitor incrementally analyses an ordered trace of events describing the computation steps that were executed by the system. A key observation made by this paper is that, in general, execution traces need not be limited to the reporting of events that happened. For instance, they may describe events that could not have happened at specific points in the execution of a system. Alternatively, they may also include descriptions for depth-bounded trees of computations that were possible at specific points in an execution. We conjecture that there are instances where this additional information can be feasibly encoded in a trace, either dynamically or by way of a pre-processing phase (based, e.g., on the examination of logs of previous system executions, or on the full static checking of sub-components making up the system). More importantly, this additional information could, in principle, permit the verification of more properties at runtime.

The contribution of this paper is a study of how the aforementioned augmented monitoring setups may affect the monitorability of \(\mu \mathrm{HML}\), potentially extending the maximality limits identified in [2, 21, 22]. More concretely:

  1. We show how these aspects can be expressed and studied in a general monitoring framework with (abstract) conditions, Theorems 3 and 4 resp. in Sects. 3 and 5.

  2. We instantiate the general framework with trace conditions that describe the inability to perform actions, amounting to refusals [31], Propositions 1 and 5.

  3. We also instantiate the framework with conditions describing finite execution graphs, amounting to the recursion-free fragment of the logic [24], Propositions 2 and 3.

  4. Finally, we instantiate the framework with trace conditions that record information from previous monitored runs of the system, Proposition 4. This, in turn, leads us to a notion of alternating monitoring that allows monitors to aggregate information over monitored runs. We show that this extends the monitorable fragment of our logic in a natural and significant way.

The remainder of the paper is structured as follows. After outlining the necessary preliminaries in Sect. 2, we develop our parameterized monitoring framework with conditions in Sect. 3 for a monitoring setup that allows monitors to observe both silent and external actions of systems. The two condition instantiations for this strong setting are presented in Sect. 4. In Sect. 5 we extend the parameterized monitoring framework with conditions to a weak monitoring setup that abstracts from internal moves, followed by two instantiations similar to those presented in Sect. 4. Section 6 concludes by discussing related and future work.

2 Background

Labelled Transition Systems. We assume a set of external actions Act and a distinguished silent action \(\tau \). We let \(\alpha \) range over \(\textsc {Act} \) and \(\mu \) over \(\textsc {Act} \cup \{\tau \}\). A Labelled Transition System (LTS) on \(\textsc {Act} \) is a triple

$$\begin{aligned} L = \langle P,\textsc {Act},\rightarrow _L \rangle , \end{aligned}$$

where P is a nonempty set of system states referred to as processes \(p,q,\ldots \), and \(\rightarrow _L\ \subseteq P \times (\textsc {Act} \cup \{\tau \}) \times P\) is a transition relation. We write \(p \xrightarrow {\mu }_L q\) instead of \((p,\mu ,q) \in \ \rightarrow _L\). By \(p \xrightarrow {\mu }_L\) we mean that there is some q such that \(p \xrightarrow {\mu }_L q\). We use \(p \xRightarrow {\mu }_L q\) to mean that, in L, p can derive q using a single \(\mu \) action and any number of silent actions, i.e., \(p \;(\xrightarrow {\tau }_L)^{*} \cdot \xrightarrow {\mu }_L \cdot \;(\xrightarrow {\tau }_L)^{*}\; q\). We distinguish between (general) traces \(s=\mu _1\mu _2\dots \mu _r\in (\textsc {Act} \cup \{\tau \})^*\) and external traces \(t=\alpha _1\alpha _2\dots \alpha _r\in \textsc {Act}^*\). For a general trace \(s=\mu _1\mu _2\dots \mu _r\in (\textsc {Act} \cup \{\tau \})^*\), \(p \xrightarrow {s}_L q\) means \(p \xrightarrow {\mu _1}_L \cdots \xrightarrow {\mu _r}_L q\); and for an external trace \(t=\alpha _1\alpha _2\dots \alpha _r\in \textsc {Act}^*\), \(p \xRightarrow {t}_L q\) means \(p \xRightarrow {\alpha _1}_L \cdots \xRightarrow {\alpha _r}_L q\) when \(r\ge 1\), and \(p \;(\xrightarrow {\tau }_L)^{*}\; q\) when \(t = \varepsilon \) is the empty trace. We occasionally omit the subscript L when it is clear from the context.

Example 1

The (standard) regular fragment of CCS [30] with grammar:

$$\begin{aligned} p, q \in \textsc {Proc}\; {:}{:}{=}\ \textsf {nil}\ \mid \ \mu {.}p\ \mid \ p + q\ \mid \ \textsf {rec}~x{.}p\ \mid \ x \end{aligned}$$

where \(x,y,z,\ldots \) are from some countably infinite set of variables Var, and the transition relation defined by the (standard) rules

$$\begin{aligned} \frac{}{\mu {.}p \xrightarrow {\mu } p}\;\textsc {Act}\qquad \frac{p \xrightarrow {\mu } p'}{p + q \xrightarrow {\mu } p'}\;\textsc {SelL}\qquad \frac{q \xrightarrow {\mu } q'}{p + q \xrightarrow {\mu } q'}\;\textsc {SelR}\qquad \frac{p[\textsf {rec}~x{.}p/x] \xrightarrow {\mu } p'}{\textsf {rec}~x{.}p \xrightarrow {\mu } p'}\;\textsc {Rec} \end{aligned}$$

constitutes the LTS \(\langle \textsc {Proc},\textsc {Act},\rightarrow \rangle \). We often use the CCS notation above to describe processes. \(\blacksquare \)
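
To fix intuitions, the following Python sketch (ours, purely illustrative and not part of the paper; the dictionary encoding and all process and action names are assumptions) represents a small LTS and computes the strong steps and the weak steps of the form "any number of silent steps, a single external action, any number of silent steps" defined above.

```python
TAU = "tau"

# Illustrative LTS for the CCS-like processes p = a.nil + tau.nil and q = tau.a.nil
TRANS = {
    ("p", "a"): {"nil"},
    ("p", TAU): {"nil"},
    ("q", TAU): {"a.nil"},
    ("a.nil", "a"): {"nil"},
}

def step(p, mu):
    """Strong transitions: all q such that p --mu--> q."""
    return TRANS.get((p, mu), set())

def tau_closure(states):
    """All processes reachable from 'states' via zero or more silent steps."""
    seen, todo = set(states), list(states)
    while todo:
        p = todo.pop()
        for q in step(p, TAU):
            if q not in seen:
                seen.add(q)
                todo.append(q)
    return seen

def weak_step(p, alpha):
    """Weak transitions p ==alpha==> q, i.e. tau* . alpha . tau*."""
    after_alpha = set()
    for r in tau_closure({p}):
        after_alpha |= step(r, alpha)
    return tau_closure(after_alpha)

print(weak_step("p", "a"))   # {'nil'}
print(weak_step("q", "a"))   # {'nil'}: q reaches a.nil silently, then performs a
```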

Specification Logic. Properties about the behaviour of processes may be specified via the logic \(\mu \mathrm{HML}\) [4, 28], a reformulation of the modal \(\mu \)-calculus [26].

Definition 1

\(\mu \mathrm{HML}\) formulae on \(\textsc {Act} \) are defined by the grammar:

$$\begin{aligned} \varphi , \psi \in \mu \mathrm {HML}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \varphi \vee \psi \ \mid \ \varphi \wedge \psi \ \mid \ \langle \mu \rangle \varphi \ \mid \ [ \mu ]\varphi \ \mid \ \textsf {min}~X{.}\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

where \(X,Y,Z,\ldots \) come from a countably infinite set of logical variables \(\textsc {LVar}\). For a given LTS \(L = \langle P,\textsc {Act},\rightarrow \rangle \), an environment \(\rho \) is a function \(\rho : \textsc {LVar} \rightarrow 2^{P}\). Given an environment \(\rho \), \(X \in \textsc {LVar}\), and \(S \subseteq P\), \(\rho [X \mapsto S]\) denotes the environment where \(\rho [X \mapsto S](X) = S\) and \(\rho [X \mapsto S](Y) = \rho (Y)\), for all \(Y \ne X\). The semantics of a \(\mu \mathrm{HML}\) formula \(\varphi \) over an LTS L relative to an environment \(\rho \), denoted as \([\![\varphi , \rho ]\!]_L\), is defined as follows:

$$\begin{aligned} [\![{\textsf {tt},\rho }]\!]_L&= P&[\![{\textsf {ff},\rho }]\!]_L&= \emptyset&[\![{X,\rho }]\!]_L&= \rho (X)\\ [\![{\varphi \wedge \psi ,\rho }]\!]_L&= [\![{\varphi ,\rho }]\!]_L \cap [\![{\psi ,\rho }]\!]_L&[\![{\varphi \vee \psi ,\rho }]\!]_L&= [\![{\varphi ,\rho }]\!]_L \cup [\![{\psi ,\rho }]\!]_L\\ [\![{[ \mu ]\varphi ,\rho }]\!]_L&= \{ p \mid \forall q.\ p \xrightarrow {\mu }_L q \text { implies } q \in [\![{\varphi ,\rho }]\!]_L \}\\ [\![{\langle \mu \rangle \varphi ,\rho }]\!]_L&= \{ p \mid \exists q.\ p \xrightarrow {\mu }_L q \text { and } q \in [\![{\varphi ,\rho }]\!]_L \}\\ [\![{\textsf {max}~X{.}\varphi ,\rho }]\!]_L&= \bigcup \{ S \mid S \subseteq [\![{\varphi ,\rho [X \mapsto S]}]\!]_L \}\\ [\![{\textsf {min}~X{.}\varphi ,\rho }]\!]_L&= \bigcap \{ S \mid [\![{\varphi ,\rho [X \mapsto S]}]\!]_L \subseteq S \} \end{aligned}$$

Formulas \(\varphi \) and \(\psi \) are equivalent, denoted as \(\varphi \equiv \psi \), when \([\![{\varphi ,\rho }]\!]_{L} = [\![{\psi ,\rho }]\!]_{L}\) for every environment \(\rho \) and LTS L. We often consider closed formulae and simply write \([\![{\varphi }]\!]_{L}\) for \([\![{\varphi ,\rho }]\!]_{L}\) when the semantics of \(\varphi \) is independent of \(\rho \). \(\blacksquare \)
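
As a sanity check of Definition 1, the following sketch (ours; it assumes a finite LTS, encodes formulas as nested tuples, and is not the paper's tooling) computes the semantic function directly, obtaining the fixed points by iteration on the finite powerset lattice.

```python
PROCS = frozenset({"p", "q", "nil"})
TRANS = {("p", "a"): {"q"}, ("q", "b"): {"p"}}   # illustrative finite LTS

def step(p, mu):
    return TRANS.get((p, mu), set())

def sem(phi, env=None):
    """[[phi, env]]: the set of processes of PROCS satisfying phi.
    Formulas are nested tuples, e.g. ('box', 'a', ('ff',)) for [a]ff."""
    env = env or {}
    kind = phi[0]
    if kind == "tt":  return set(PROCS)
    if kind == "ff":  return set()
    if kind == "and": return sem(phi[1], env) & sem(phi[2], env)
    if kind == "or":  return sem(phi[1], env) | sem(phi[2], env)
    if kind == "box": return {p for p in PROCS if step(p, phi[1]) <= sem(phi[2], env)}
    if kind == "dia": return {p for p in PROCS if step(p, phi[1]) & sem(phi[2], env)}
    if kind == "var": return set(env[phi[1]])
    if kind in ("max", "min"):          # fixed points by iteration on a finite lattice
        X, body = phi[1], phi[2]
        S = set(PROCS) if kind == "max" else set()
        while True:
            T = sem(body, {**env, X: S})
            if T == S:
                return S
            S = T
    raise ValueError(phi)

# max X.([a]X /\ [b]ff): "no b-step is ever possible along sequences of a-steps"
phi = ("max", "X", ("and", ("box", "a", ("var", "X")), ("box", "b", ("ff",))))
print(sem(phi))   # {'nil'}: p reaches q via a, and q offers b
```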

The logic \(\mu \mathrm{HML}\) is very expressive. It is also agnostic of the technique to be employed for verification. The property of monitorability, however, fundamentally relies on the monitoring setup considered.

Monitoring Systems. A monitoring setup on \(\textsc {Act} \) is a triple \( \langle M, I, L \rangle , \) where L is a system LTS on \(\textsc {Act} \), M is a monitor LTS on \(\textsc {Act} \), and I is the instrumentation describing how to compose L and M into an LTS, denoted by I(M, L), on \(\textsc {Act} \). We call the pair (M, I) a monitoring system on \(\textsc {Act} \). For \(M = \langle \textsc {Mon}, \textsc {Act},\rightarrow _M \rangle \), \(\textsc {Mon}\) is the set of monitor states (ranged over by m) and \(\rightarrow _M\) is the monitor semantics described in terms of the behavioural state transitions a monitor takes when it analyses trace events \(\mu \in \textsc {Act} \cup \{\tau \}\). The states of the composite LTS I(M, L) are written as \(m \triangleleft p\), where m is a monitor state and p is a system state; the monitored-system transition relation is denoted here by \(\rightarrow _{I(M,L)}\). We present our results with a focus on rejection monitors, i.e., monitors with a designated rejection state no, and hence safety fragments of the logic \(\mu \mathrm{HML}\). However, our results and arguments apply dually to acceptance monitors (with a designated acceptance state yes) and co-safety properties; see [21, 22] for details.

Definition 2

Fix a monitoring setup \(\langle M,I,L \rangle \) on \(\textsc {Act} \) and let m be a monitor state of M and \(\varphi \) a closed formula of \({\mu \text {HML}}\) on \(\textsc {Act} \). We say that m (M, I)-rejects (or simply rejects, if M and I are evident) a process p in L, written as \({\mathbf {rej}}_{\langle M,I,L \rangle }(m,p)\), when there are a process q in L and a trace \(s \in (\textsc {Act} \cup \{\tau \})^*\) such that \(m \triangleleft p \xrightarrow {s}_{I(M,L)} \textsf {no}\triangleleft q\). We say that m (M, I)-monitors for \(\varphi \) on L whenever

$$\begin{aligned} \text {for each process } p \text { of } L,\ {\mathbf {rej}}_{\langle M,I,L \rangle }(m, p) \text { if and only if } p \notin [\![{\varphi }]\!]_{L}. \end{aligned}$$

(Subscripts are omitted when they are clear from the context.) Finally, m (M, I)-monitors for \(\varphi \) when m (M, I)-monitors for \(\varphi \) on L for every LTS L on \(\textsc {Act} \). The monitoring system (M, I) is often omitted when evident. \(\blacksquare \)

We define monitorability for \(\mu \text {HML}\) in terms of monitoring systems (M, I).

Definition 3

Fix a monitoring system (M, I) and a fragment \(\varLambda \) of \(\mu \text {HML}\). We say that (M, I) rejection-monitors for \(\varLambda \) whenever:

  • For all closed \(\varphi \in \varLambda \), there exists an m from M that (M, I)-monitors for \(\varphi \).

  • For all m of M, there exists a closed \(\varphi \in \varLambda \) that is (M, I)-monitored by m. \(\blacksquare \)

We note that if a monitoring system and a fragment \(\varLambda \) of \({\mu \text {HML}}\) satisfy the conditions of Definition 3, then \(\varLambda \) is the largest fragment of \({\mu \text {HML}}\) that is monitored by the monitoring system. Stated otherwise, any other logic fragment \(\varLambda '\) that satisfies the conditions of Definition 3 must be as expressive as \(\varLambda \), i.e., \(\forall \varphi ' \in \varLambda '\cdot \exists \varphi \in \varLambda \cdot \varphi \equiv \varphi '\) and vice versa. Definition 3 can be dually given for acceptance-monitorability, when considering acceptance monitors. We next review two monitoring systems that respectively rejection-monitor for two different fragments of \(\mu \mathrm{HML}\). We omit the corresponding monitoring systems for acceptance monitors, which monitor for the dual fragments of \(\mu \mathrm{HML}\).

The Basic Monitoring Setup. The following monitoring system, presented in [2], does not distinguish between silent actions and external actions.

Definition 4

A basic monitor on \(\textsc {Act} \) is defined by the grammar:

$$\begin{aligned} m, n \in \textsc {Mon}_b\; {:}{:}{=}\ \textsf {no}\ \mid \ \textsf {end}\ \mid \ \mu {.}m\ \mid \ m + n\ \mid \ \textsf {rec}~x{.}m\ \mid \ x \end{aligned}$$

where x comes from a countably infinite set of monitor variables. Constant \(\textsf {no}\) denotes the rejection verdict state whereas \(\textsf {end}\) denotes the inconclusive verdict state. The basic monitor LTS \(M_b\) is the one whose states are the closed monitors of \(\textsc {Mon}_b\) and whose transition relation is defined by the (standard) rules in Table 1 (we elide the symmetric rule for \(m+n\)). \(\blacksquare \)

Note that by rule mVrd in Table 1, verdicts are irrevocable and monitors can only describe suffix-closed behaviour.

Definition 5

Given a system LTS L and a monitor LTS M that agree on \(\textsc {Act} \), the basic instrumentation LTS, denoted by \(I_b(M,L)\), is defined by the rules iMon and iTer in Table 1. (We do not consider rule iAbs for now.) \(\blacksquare \)

Instrumentation often relegates monitors to a passive role, whereby a monitored system transitions only when the system itself can. In rule iMon, when the system produces a trace event \(\mu \) that the monitor is able to analyse (and transition from m to n), the constituent components of a monitored system \(m\triangleleft p\) move in lockstep. Conversely, when the system produces an event \(\mu \) that the monitor is unable to analyse, the monitored system still executes, according to iTer, but the monitor transitions to the inconclusive state, where it remains for the rest of the computation.
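
The lockstep behaviour just described can be pictured with the following sketch (ours; the monitor and system transition tables are assumptions, and the entries for the verdict no mimic rule mVrd, which lets verdicts persist).

```python
END, NO = "end", "no"

def instrument_step(m, p, mu, mon_step, sys_step):
    """Successor configurations of m <| p on event mu (rules iMon and iTer)."""
    succs = []
    for p2 in sys_step(p, mu):            # the system must be able to move
        nexts = mon_step(m, mu)
        if nexts:                         # iMon: monitor and system move in lockstep
            succs += [(m2, p2) for m2 in nexts]
        else:                             # iTer: the monitor gives up, the system goes on
            succs.append((END, p2))
    return succs

# Assumed monitor for [a][b]ff: reject after observing the trace a.b
def mon_step(m, mu):
    table = {("m0", "a"): {"m1"}, ("m1", "b"): {NO}, (NO, "a"): {NO}, (NO, "b"): {NO}}
    return table.get((m, mu), set())

def sys_step(p, mu):
    table = {("p", "a"): {"q"}, ("q", "b"): {"p"}}
    return table.get((p, mu), set())

cfgs = [("m0", "p")]
for event in ["a", "b"]:
    cfgs = [c for (m, p) in cfgs for c in instrument_step(m, p, event, mon_step, sys_step)]
print(cfgs)   # [('no', 'p')]: the monitor rejects after the trace a.b
```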

Table 1. Behaviour and instrumentation rules for monitored systems (\(v {\in } \{\textsf {end}, \textsf {no}\}\)).

We refer to the pair \((M_b,I_b)\) from Definitions 4 and 5 as the basic monitoring system. For each system LTS L that agrees with the basic monitoring system on \(\textsc {Act} \), we can show a correspondence between the respective monitoring setup \(\langle M_b, I_b, L \rangle \) and the following syntactic subset of \(\mu \mathrm{HML}\).

Definition 6

The safety \(\mu \mathrm{HML}\) is defined by the grammar:

$$\begin{aligned} \varphi , \psi \in \mathrm {sHML}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \varphi \wedge \psi \ \mid \ [ \mu ]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

Theorem 1

([2]). The basic monitoring system \((M_b,I_b)\) monitors for the logical fragment sHML.    \(\square \)

The proof of Theorem 1 relies on a monitor synthesis and a formula synthesis function. The monitor synthesis function, \((\!|{-}|\!) : \mathrm {sHML}\rightarrow \textsc {Mon}_b\), is defined on the structure of the input formula and assumes a bijective mapping between formula variables and monitor recursion variables:

$$\begin{aligned} (\!|{\textsf {tt}}|\!)&= \textsf {end}&(\!|{\textsf {ff}}|\!)&= \textsf {no}&(\!|{X}|\!)&= x\\ (\!|{[ \mu ]\varphi }|\!)&= {\left\{ \begin{array}{ll} \textsf {end}&{} \text {if } (\!|{\varphi }|\!) = \textsf {end}\\ \mu {.}(\!|{\varphi }|\!)&{} \text {otherwise} \end{array}\right. }&(\!|{\textsf {max}~X{.}\varphi }|\!)&= {\left\{ \begin{array}{ll} \textsf {end}&{} \text {if } (\!|{\varphi }|\!) = \textsf {end}\\ \textsf {rec}~x{.}(\!|{\varphi }|\!)&{} \text {otherwise} \end{array}\right. }\\ (\!|{\varphi _1 \wedge \varphi _2}|\!)&= {\left\{ \begin{array}{ll} (\!|{\varphi _2}|\!)&{} \text {if } (\!|{\varphi _1}|\!) = \textsf {end}\\ (\!|{\varphi _1}|\!)&{} \text {if } (\!|{\varphi _2}|\!) = \textsf {end}\\ (\!|{\varphi _1}|\!) + (\!|{\varphi _2}|\!)&{} \text {otherwise} \end{array}\right. } \end{aligned}$$

The case analyses in the above synthesis procedure handle some of the redundancies that may be present in formula specifications. For instance, it turns out that \(\textsf {max}~X{.}[\mu ]\textsf {tt}\equiv \textsf {tt}\) and, accordingly, \((\!|{\textsf {max}~X{.}[\mu ]\textsf {tt}}|\!) = \textsf {end}\). The formula synthesis function is defined analogously (see [2, 22] for more details).
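
The sketch below mirrors the shape of such a synthesis function; the precise case analysis shown is our own simplification rather than the definition of [2, 22], and the tuple encodings of formulas and monitors are assumptions.

```python
END, NO = ("end",), ("no",)

def synth(phi):
    """Map an sHML formula (nested tuples) to a rejection monitor (nested tuples);
    the 'end' cases absorb redundant subformulas such as max X.[mu]tt."""
    kind = phi[0]
    if kind == "tt":  return END
    if kind == "ff":  return NO
    if kind == "box":                      # [mu]phi  ->  mu.m
        m = synth(phi[2])
        return END if m == END else ("act", phi[1], m)
    if kind == "and":                      # phi1 /\ phi2  ->  m1 + m2
        m1, m2 = synth(phi[1]), synth(phi[2])
        if m1 == END: return m2
        if m2 == END: return m1
        return ("choice", m1, m2)
    if kind == "max":                      # max X.phi  ->  rec x.m
        m = synth(phi[2])
        return END if m == END else ("rec", phi[1], m)
    if kind == "var": return ("mvar", phi[1])
    raise ValueError(phi)

print(synth(("max", "X", ("box", "a", ("tt",)))))   # ('end',): max X.[a]tt is equivalent to tt
phi = ("max", "X", ("and", ("box", "a", ("var", "X")), ("box", "b", ("ff",))))
print(synth(phi))
# ('rec', 'X', ('choice', ('act', 'a', ('mvar', 'X')), ('act', 'b', ('no',))))
```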

Monitoring for External Actions. The results obtained in [21, 22] can be expressed and recovered within our more general framework. We can express a weak version of the modalities employed in [3, 21, 22] as follows:

$$\begin{aligned}{}[[ \mu ]]\varphi&\equiv \textsf {max}~X{.}( [ \tau ] X \wedge [ \mu ]\textsf {max}~Y{.}( \varphi \wedge [ \tau ] Y ) ) \ \text { and } \\ \langle \langle \mu \rangle \rangle \varphi&\equiv \textsf {min}~X{.}( \langle \tau \rangle X \vee \langle \mu \rangle \textsf {min}~Y{.}( \varphi \vee \langle \tau \rangle Y ) ). \end{aligned}$$

Definition 7

Weak safety \(\mu \mathrm{HML}\), presented in [21, 22], is defined by the grammar:

$$\begin{aligned} \varphi , \psi \in \mathrm {WsHML}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \varphi \wedge \psi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

Definition 8

The set \(\textsc {Mon}_e\) of external monitors on \(\textsc {Act} \) contains all the basic monitors that do not use the silent action \(\tau \). The corresponding external monitor LTS \(M_e\) is defined similarly to \(M_b\), but with the closed monitors in \(\textsc {Mon}_e\) as its states. External instrumentation, denoted by \(I_e\), is defined by the three rules iMon, iTer and iAbs in Table 1, where in the case of iMon and iTer, action \(\mu \) is substituted by the external action \(\alpha \). We refer to the pair \((M_e,I_e)\) as the external monitoring system, amounting to the setup in [21, 22]. \(\blacksquare \)

Theorem 2

([22]). The external monitoring system \((M_e,I_e)\) rejection-monitors for the logical fragment WsHML.   \(\square \)

3 Monitors that Detect Conditions

Given a set of processes P, a pair (C, r) is a condition framework when C is a non-empty set of conditions and \(r:C\rightarrow 2^P\) is a valuation function. We assume a fixed condition framework (C, r) and we extend the syntax and semantics of \(\mu \mathrm{HML}\) so that for every condition \(c \in C\), both c and \(\lnot c\) are formulas and for every LTS L on the set of processes P, \([\![{c}]\!]= r(c)\) and \([\![{\lnot c}]\!]= P \setminus r(c)\). We call the extended logic \(\mu \mathrm{HML}^{(C,r)}\). Since, in all the instances we consider, r is easily inferred from C, it is often omitted and we simply write C instead of (C, r) and \(\mu \mathrm{HML}^{(C,r)}\) as \(\mu \mathrm{HML}^{C}\). We say that process p satisfies c when \(p \in [\![{c}]\!]\). We assume that C is closed under negation, meaning that for every \(c \in C\), there is some \(c'\in C\) such that \([\![{c'}]\!]= [\![{\lnot c}]\!]\). Conditions represent certain properties of processes that the instrumentation is able to report.

We extend the syntax of monitors, so that if m is a monitor and c a condition, then c.m is a monitor. The idea is that if c.m detects that the process satisfies c, then it can transition to m.

Definition 9

A basic C-monitor on \(\textsc {Act} \) is defined by the grammar:

$$\begin{aligned} m, n \in \textsc {Mon}^C_b\; {:}{:}{=}\ \textsf {no}\ \mid \ \textsf {end}\ \mid \ \mu {.}m\ \mid \ c{.}m\ \mid \ m + n\ \mid \ \textsf {rec}~x{.}m\ \mid \ x \end{aligned}$$

where x comes from a countably infinite set of monitor variables and \(c \in C\). Basic C-monitor behaviour is defined as in Table 1, but allowing \(\mu \) to range over \(\textsc {Act} \cup C \cup \{\tau \}\). We call the resulting monitor LTS \(M_b^C\). \(\blacksquare \)

A monitor detects the satisfaction of condition c when the monitored system has transitioned to a process that satisfies c. To express this intuition, we add rule iCon to the instrumentation rules of Table 1:

$$\begin{aligned} \frac{m \xrightarrow {c}_M n \qquad p \in [\![{c}]\!]_L}{m \triangleleft p \xrightarrow {\tau }_{I(M,L)} n \triangleleft p}\;\textsc {iCon} \end{aligned}$$

We call the resulting instrumentation \(I^C_b\). We observe that the resulting monitor setup is transparent with respect to external actions: an external trace of the monitored system results in exactly the same external trace of the instrumentation LTS. The general traces are not preserved, however, as the rule iCon may introduce additional silent transitions in the instrumentation trace. We argue that this is an expected consequence of the instrumentation verifying the conditions of C. C-monitors monitor for \(\mathrm{sHML}^{C}\):

Definition 10

The strong safety fragment of \(\mu \mathrm{HML}^C\) is defined as:

$$\begin{aligned} \varphi , \psi \in \mathrm {sHML}^C\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \lnot c \vee \varphi \ \mid \ \varphi \wedge \psi \ \mid \ [ \mu ]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

where \(c \in C\). We note that \(\lnot c \vee \varphi \) can be viewed as an implication \(c \rightarrow \varphi \) asserting that if c holds, then \(\varphi \) must also hold. \(\blacksquare \)

It is immediate to see that \(\mathrm{sHML}^C\) is a fragment of \({\mu \text {HML}}^{C}\) and when \(C \subseteq {\mu \text {HML}}\), it is also a fragment of \(\mu \text {HML}\). Finally, if C is closed under negation, then \(\lnot c \vee \varphi \) can be rewritten as \(c' \vee \varphi \), where \([\![{c'}]\!]= [\![{\lnot c}]\!]\), and in the following we often take advantage of this equivalence to simplify the syntax of \(\mathrm{sHML}^C\).

Theorem 3

The monitoring system \((M_b^C,I_b^C)\) monitors for \(\mathrm{sHML}^C\).    \(\square \)

We note that Theorem 3 implies that \(\mathrm{sHML}^{C}\) is the largest monitorable fragment of \({\mu \text {HML}}^{C}\), relative to C.
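
The following sketch (ours; the condition name, the process p and the helper sat are assumptions) illustrates rule iCon: a condition-prefixed monitor advances with a silent step of the monitored system once the current process is found to satisfy the condition.

```python
TAU = "tau"

def icon_step(monitor, process, sat):
    """Rule iCon (sketch): if monitor = c.m and the process satisfies c,
    the monitored system performs a silent step to m <| process."""
    if monitor[0] == "cond":
        _, c, m = monitor
        if sat(c, process):
            return (TAU, (m, process))
    return None            # iCon does not apply

# Assumed condition "[a]ff": the current process cannot perform an a-step.
SYS = {("p", "b"): {"q"}}                      # p offers b but not a
def sat(c, p):
    return c == "[a]ff" and not SYS.get((p, "a"))

# The C-monitor ([a]ff).b.no: once the system is seen to be unable to do a, reject on b.
monitor = ("cond", "[a]ff", ("act", "b", ("no",)))
print(icon_step(monitor, "p", sat))   # ('tau', (('act', 'b', ('no',)), 'p'))
```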

4 Instantiations

We consider two possible instantiations for parameter C in the framework presented in Sect. 3. Since each of these instantiations consists of a fragment from the logic \(\mu \mathrm{HML}\) itself, they both show how monitorability for \(\mu \mathrm{HML}\) can be extended when using certain augmented traces.

4.1 The Inability to Perform an Action

The monitoring framework of [2, 22] (used also in other works such as [18, 19]) is based on the idea that, while a system is executing, it performs discrete computational steps called events (actions) that are recorded and relayed to the monitor for analysis. Based on the analysed events, the monitor then transitions from state to state. One may, however, also consider instrumentations that record a system’s inability to perform a certain action. Examples of this arise naturally in situations where an external entity unsuccessfully requests actions from a system, or whenever the instrumentation is able to report system stability (i.e., the inability to perform internal actions). For instance, such observations were considered in [1, 31] in the context of testing preorders.

In our setting, a process is unable to perform action \(\mu \) exactly when it satisfies \([ \mu ]\textsf {ff}\). For monitors that are able to detect the inability or failure of a process to perform actions, we set \(F_\textsc {Act} = \{ [ \mu ]\textsf {ff}\mid \mu \in \textsc {Act} \cup \{\tau \} \}\) as the set of conditions. By Theorem 3, the resulting maximal monitorable fragment of \(\mu \mathrm{HML}\) is given by the grammar:

$$\begin{aligned} \varphi , \psi \in \mathrm {sHML}^{F_\textsc {Act}}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \langle \mu \rangle \textsf {tt}\vee \varphi \ \mid \ \varphi \wedge \psi \ \mid \ [ \mu ]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

We note that \(\mu \mathrm{HML}\) is closed under negation and, in particular, \( \lnot [ \mu ]\textsf {ff}\equiv \langle \mu \rangle \textsf {tt}\).

Proposition 1

The monitoring system \((M_b^{F_\textsc {Act}}\!,I_b^{F_\textsc {Act}}\!)\) monitors for the logical fragment \(\mathrm{sHML}^{F_\textsc {Act}}\).   \(\square \)

A special case of interest are monitors that can detect process stability, i.e., processes satisfying \([ \tau ]\textsf {ff}\). Such monitors monitor for \(\mathrm{sHML}^{\{[ \tau ]\textsf {ff}\}}\), namely sHML from Definition 6 extended with formulas of the form \(\langle \tau \rangle \textsf {tt}\vee \varphi \).

4.2 Depth-Bounded Static Analysis

In multi-pronged approaches using a combination of verification techniques, one could statically verify parts of a program (from specific execution points) with respect to certain behavioural properties using techniques such as Bounded Model Checking [11] and Partial Model Checking [7]. Typical examples arise in component-based software using modules, objects or agents that can be verified in isolation. This pre-computed verification can then be recorded as annotations to a component and subsequently reported by the instrumentation as part of the execution trace. This strategy would certainly be feasible for depth-bounded static analysis for which the original logic HML [24]—the recursion-free fragment of \(\mu \mathrm{HML}\) given below—is an ideal fit.

$$\begin{aligned} \eta , \chi \in \mathrm {HML}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \eta \vee \chi \ \mid \ \eta \wedge \chi \ \mid \ \langle \mu \rangle \eta \ \mid \ [ \mu ]\eta \end{aligned}$$

Again, HML is closed under negation [4]. If we allow monitors to detect the satisfaction of these kinds of conditions, then, according to Theorem 3, the maximal fragment of \(\mu \mathrm{HML}\) that we can monitor for, with HML as a condition framework, is \(\mathrm{sHML}^{\mathrm{HML}}\), defined by the following grammar:

$$\begin{aligned} \varphi , \psi \in \mathrm {sHML}^{\mathrm {HML}}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \eta \vee \varphi \ \mid \ \varphi \wedge \psi \ \mid \ [ \mu ]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

where \(\eta \in \mathrm{HML}\). Another way to describe \({\text {sHML}}^{\mathrm{HML}}\) is as the \(\mu \mathrm{HML}\) fragment that includes all formulas whereby for every subformula of the form \(\varphi \vee \psi \), at most one of the constituent subformulas \(\varphi ,\psi \) uses recursion.

Proposition 2

The monitoring system \((M_b^{\mathrm{HML}},I_b^{\mathrm{HML}})\) monitors for the logical fragment \(\mathrm{sHML}^{\mathrm{HML}}\).    \(\square \)

Instead of HML, we can alternatively use the fragment \(\mathrm{HML}^d\) of HML that only allows formulas with modal nesting depth at most d. Since the complexity of checking HML formulas depends directly on this modal depth, there are cases where the overhead of checking such formulas is low enough for them to be adequately checked at runtime instead of statically.
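
The following sketch (ours; the LTS and formula are illustrative) shows how a recursion-free HML formula could be checked against a process and how its modal depth, the parameter d above, is measured.

```python
TRANS = {("p", "a"): {"q"}, ("q", "b"): {"r"}}   # illustrative finite LTS

def step(p, mu):
    return TRANS.get((p, mu), set())

def check(p, eta):
    """Does process p satisfy the recursion-free HML formula eta (nested tuples)?"""
    kind = eta[0]
    if kind == "tt":  return True
    if kind == "ff":  return False
    if kind == "and": return check(p, eta[1]) and check(p, eta[2])
    if kind == "or":  return check(p, eta[1]) or check(p, eta[2])
    if kind == "box": return all(check(q, eta[2]) for q in step(p, eta[1]))
    if kind == "dia": return any(check(q, eta[2]) for q in step(p, eta[1]))
    raise ValueError(eta)

def depth(eta):
    """Modal depth of eta: the parameter that drives the checking cost."""
    kind = eta[0]
    if kind in ("tt", "ff"):  return 0
    if kind in ("and", "or"): return max(depth(eta[1]), depth(eta[2]))
    return 1 + depth(eta[2])            # box / dia

eta = ("box", "a", ("dia", "b", ("tt",)))   # [a]<b>tt, modal depth 2
print(depth(eta), check("p", eta))          # 2 True
```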

5 Extending External Monitorability

We explore the impact of considering traces that encode conditions from Sect. 3 on the monitorability of the weak version of the logic used in [21, 22]:

$$\begin{aligned} \varphi , \psi \in \mathrm {W}{\mu \mathrm {HML}}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \varphi \vee \psi \ \mid \ \varphi \wedge \psi \ \mid \ \langle \langle \alpha \rangle \rangle \varphi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {min}~X{.}\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

This version of the logic abstracts away from internal moves performed by the system—note that the weak modality formulas are restricted to external actions \(\alpha \) as opposed to the general ones, \(\mu \). The semantics follows that presented in Sect. 2, but can alternatively be given a more direct inductive definition, e.g.

$$\begin{aligned} [\![{[[ \alpha ]]\varphi ,\rho }]\!]_{L} = \{ p \mid \forall q.\ p \xRightarrow {\alpha }_{L} q \text { implies } q \in [\![{\varphi ,\rho }]\!]_{L} \}. \end{aligned}$$

The main aim of this section is to extend the maximally-expressive monitorable subset of \({\mu \text {HML}}\) that was identified in [21, 22] using the framework developed in Sect. 3.

5.1 External Monitoring with Conditions

We define the external monitoring system with conditions similarly to Sect. 3. The syntax of Definition 8 is extended so that, for any instance of C, if m is a monitor and c a condition from C, then c.m is a monitor.

Definition 11

An external C-monitor on \(\textsc {Act} \) is defined by the grammar:

$$\begin{aligned} m, n \in \textsc {Mon}^C_e\; {:}{:}{=}\ \textsf {no}\ \mid \ \textsf {end}\ \mid \ \alpha {.}m\ \mid \ c{.}m\ \mid \ m + n\ \mid \ \textsf {rec}~x{.}m\ \mid \ x \end{aligned}$$

where \(c \in C\). C-monitor behaviour is defined as in Table 1, but extending rule mAct to condition prefixes that generate condition actions (i.e., \(\mu \) ranges over \(\textsc {Act} \cup C\)). We call the resulting monitor LTS \(M^C_e\).

For the instrumentation relation called \(I^C_e\), we consider the rules iMon, iTer from Table 1 for external actions \(\alpha \) instead of the general action \(\mu \), rule iAbs from the same table, and rule iCon from Sect. 3. \(\blacksquare \)

Note that the monitoring system \((M^C_e,I^C_e)\) may be used to detect \(\tau \)-transitions implicitly—we conjecture that this cannot be avoided in general. Consider two conflicting conditions \(c_1\) and \(c_2\), i.e., \([\![{c_1}]\!]{\cap } [\![{c_2}]\!]{=} \emptyset \). Definition 11 permits monitors of the form \(c_1{.}c_2{.}m\) that encode the fact that state m can only be reached when the system under scrutiny performs a non-empty sequence of \(\tau \)-moves to transition from a state satisfying \(c_1\) to another state satisfying \(c_2\). This, in some sense, is also related to obscure silent action monitoring studied in [2].

We identify the grammar for the maximally-expressive monitorable syntactic subset of the logic W\(\mu \mathrm{HML}\). It uses the formula \([[ \varepsilon ]]\varphi \) defined as:

$$\begin{aligned}{}[[ \varepsilon ]]\varphi \equiv \textsf {max}~X . (\varphi \wedge [ \tau ] X) \end{aligned}$$

The modality \([[ \varepsilon ]]\varphi \) quantifies universally over the set of processes that can be reached from a given one via any number of silent steps. Together with its dual \(\langle \langle \varepsilon \rangle \rangle \varphi \) modality, \([[ \varepsilon ]]\varphi \) is used in the modal characterisation of weak bisimilarity [30, 34], in which \(\tau \) transitions from one process may be matched by a (possibly empty) sequence of \(\tau \) transitions from another.

Definition 12

The weak safety fragment of W\(\mu \mathrm{HML}\) with C is defined as:

$$\begin{aligned} \varphi , \psi \in \mathrm {WsHML}^C\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ [[ \varepsilon ]](\lnot c \vee \varphi )\ \mid \ \varphi \wedge \psi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

where \(c \in C\). \(\blacksquare \)

Theorem 4

The monitoring system \((M^C_e,I^C_e)\) monitors for \(\mathrm{WsHML}^{C}\).   \(\square \)

We highlight the need to insulate the appearance of the implication \(\lnot c \vee \varphi \) from internal system behaviour by using the modality \([[ \varepsilon ]]\) in Definition 12. For conditions that are invariant under \(\tau \)-transitions, this modality is not required but it cannot be eliminated otherwise; we revisit this point in Example 2.

5.2 Instantiating External Monitors with Conditions

We consider three different instantiations to our parametric external monitoring system of Sect. 5.1.

Recursion-Free Formulas. The weak version of HML, denoted by \(w\mathrm{HML}\), is the recursion-free fragment of W\(\mu \mathrm{HML}\). Similarly to what was argued earlier in Sect. 4.2, it is an appropriate set of conditions to instantiate set C in \(\mathrm{WsHML}^{C}\), and the maximal monitorable fragment of \(\mathrm{W}{\mu \mathrm{HML}}\) with conditions from \(w\mathrm{HML}\) is \(\mathrm{WsHML}^{w\mathrm{HML}}\), defined by the following grammar, where \(\eta \in w\mathrm{HML}\):

$$\begin{aligned} \varphi , \psi \in \mathrm {WsHML}^{w\mathrm {HML}}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ [[ \varepsilon ]](\eta \vee \varphi )\ \mid \ \varphi \wedge \psi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

Proposition 3

The monitoring system \((M_e^{w\mathrm{HML}},I_e^{w\mathrm{HML}})\) monitors for the logical fragment \(\mathrm{WsHML}^{w\mathrm{HML}}\).    \(\square \)

An important observation (that is perhaps surprising) is that \(\mathrm{WsHML}^{w\mathrm{HML}}\) is not a fragment of \(\mathrm{W}{\mu \mathrm{HML}}\), as the following example demonstrates.

Example 2

Although for any (closed) \(\mathrm{WsHML}\) formula \(\varphi \) we have the logical equivalence \([[ \varepsilon ]]\varphi \equiv \varphi \) (notice that the monitor for \(\varphi \) that is guaranteed by Theorem 2 also monitors for \([[ \varepsilon ]]\varphi \)), this logical equivalence does not hold for a formula \(\varphi \) from \(\mathrm{W}{\mu \mathrm{HML}}\). Consider the formula \(\varphi _\epsilon \) below that may be expressed using a formula from \(\mathrm{WsHML}^{w\mathrm{HML}}\):

$$\varphi _\epsilon =[[ \varepsilon ]]\langle \langle \alpha \rangle \rangle \textsf {tt}\equiv [[ \varepsilon ]](\langle \langle \alpha \rangle \rangle \textsf {tt}\vee \textsf {ff}) \in \mathrm{WsHML}^{w\mathrm{HML}}.$$

Formula \(\varphi _\epsilon \) is not equivalent to \(\langle \langle \alpha \rangle \rangle \textsf {tt}\) (e.g. the process \(\alpha {.}\textsf {nil}+ \tau {.}\textsf {nil}\) satisfies \(\langle \langle \alpha \rangle \rangle \textsf {tt}\), but not \(\varphi _\epsilon \)) meaning that \([[ \varepsilon ]]\) plays a discerning role in the context of \(\mathrm{W}{\mu \mathrm{HML}}\). Furthermore, \(\varphi _\epsilon \) holds for process \(\tau {.}\alpha {.}\textsf {nil}\), but not for \(\alpha {.}\textsf {nil}+ \tau {.}\textsf {nil}\), even though these two processes cannot be distinguished by any \(\mathrm{W}{\mu \mathrm{HML}}\) formula. In fact, it turns out that they are bisimilar with respect to weak external transitions and this bisimulation characterises the satisfaction of W\(\mu \mathrm{HML}\) formulas [24]. Thus, there is no formula in W\(\mu \mathrm{HML}\) that is equivalent to \(\varphi _\epsilon \). \(\blacksquare \)
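
The claims of Example 2 can be replayed on a finite LTS with the following sketch (ours; action a stands for \(\alpha \), and the string process names are assumptions), which computes the satisfaction set of [[eps]]<<a>>tt by fixed-point iteration.

```python
TAU = "tau"
PROCS = {"a.nil+tau.nil", "tau.a.nil", "a.nil", "nil"}
TRANS = {
    ("a.nil+tau.nil", "a"): {"nil"},
    ("a.nil+tau.nil", TAU): {"nil"},
    ("tau.a.nil", TAU): {"a.nil"},
    ("a.nil", "a"): {"nil"},
}

def step(p, mu):
    return TRANS.get((p, mu), set())

def tau_closure(states):
    seen, todo = set(states), list(states)
    while todo:
        p = todo.pop()
        for q in step(p, TAU):
            if q not in seen:
                seen.add(q)
                todo.append(q)
    return seen

# [[<<a>>tt]]: processes that can weakly perform an a-step
weak_a = {p for p in PROCS if any(step(r, "a") for r in tau_closure({p}))}

# [[eps]]phi = max X.(phi /\ [tau]X), computed by fixed-point iteration from the top
S = set(PROCS)
while True:
    T = {p for p in weak_a if step(p, TAU) <= S}
    if T == S:
        break
    S = T

print("tau.a.nil" in S, "a.nil+tau.nil" in S)   # True False, matching Example 2
```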

Previous Runs and Alternating Monitoring. A monitoring system could reuse information from previous system runs, perhaps recorded as execution logs, and whenever (sub)traces can be associated with specific states of the system, these can also be used as an instantiation for our parametric framework. More concretely, in [21, 22] it is shown that traces can be used to characterise the violation of \(\mathrm{WsHML}\) formulas, or the satisfaction of formulas from the dual fragment, \(\mathrm{WcHML}\), defined below.

Definition 13

The co-safety W\(\mu \mathrm{HML}\) is defined by the grammar:

$$\begin{aligned} \varphi , \psi \in \mathrm {WcHML}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \varphi \vee \psi \ \mid \ \langle \langle \alpha \rangle \rangle \varphi \ \mid \ \textsf {min}~X{.}\varphi \ \mid \ X \end{aligned}$$

The witnessed rejection and acceptance traces can in turn be used as part of an augmented trace for an instantiation for C to obtain the monitorable dual logics \(\mathrm{WsHML}^\mathrm{WcHML}\) and \(\mathrm{WcHML}^\mathrm{WsHML}\) that alternate between rejection monitoring and acceptance monitoring. The logic \(\mathrm{WsHML}^\mathrm{WcHML}\) is defined by the following grammar, where \(\theta \in \mathrm{WsHML}\):

$$\begin{aligned} \varphi , \psi \in \mathrm {WsHML}^{\mathrm {WcHML}}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ [[ \varepsilon ]](\theta \vee \varphi )\ \mid \ \varphi \wedge \psi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

and \(\mathrm{WcHML}^\mathrm{WsHML}\) is defined by the following grammar, where \(\chi \in \mathrm{WcHML}\):

$$\begin{aligned} \varphi , \psi \in \mathrm {WcHML}^{\mathrm {WsHML}}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \langle \langle \varepsilon \rangle \rangle (\chi \wedge \varphi )\ \mid \ \varphi \vee \psi \ \mid \ \langle \langle \alpha \rangle \rangle \varphi \ \mid \ \textsf {min}~X{.}\varphi \ \mid \ X \end{aligned}$$

Proposition 4

The monitoring system \((M_e^{\mathrm{WcHML}},I_e^{\mathrm{WcHML}})\) rejection-monitors for the logical fragment \(\mathrm{WsHML}^{\mathrm{WcHML}}\).    \(\square \)

One should observe that in this case, \(\mathrm{WsHML}^\mathrm{WcHML}\) is a fragment of \(\mathrm{W}{\mu \mathrm{HML}}\), in contrast to the previous instantiation \(\mathrm{WsHML}^{w\mathrm{HML}}\) from Sect. 5.2.

Lemma 1

For every \([[ \varepsilon ]](\eta \vee \varphi ) \in \mathrm{WsHML}^\mathrm{WcHML}\) (where \(\eta \in \mathrm{WsHML}\)), we have \([[ \varepsilon ]](\eta \vee \varphi ) \equiv \eta \vee \varphi \).    \(\square \)

Corollary 1

For every formula in \(\mathrm{WsHML}^\mathrm{WcHML}\), there is a logically equivalent formula in \(\mathrm{W}{\mu \mathrm{HML}}\).    \(\square \)

This entails that \(\mathrm{WsHML}^\mathrm{WcHML}\) can be reformulated using the following, simpler, grammar (here \(\eta \in \mathrm{WsHML}\)) which is clearly a fragment of \(\mathrm{W}{\mu \mathrm{HML}}\):

$$\begin{aligned} \varphi , \psi \; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \eta \vee \varphi \ \mid \ \varphi \wedge \psi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

If the monitoring system can use such information from previous runs, there is no reason to limit this information to just one previous run. If the instrumentation mechanism can record up to i prior runs, the monitorable logic may be described as \(\mathrm{WsHML}^{i+1}\), defined inductively in the following way:

  • \(\mathrm{WsHML}^1 = \mathrm{WsHML}\) and \(\mathrm{WcHML}^1 = \mathrm{WcHML}\); and

  • \(\mathrm{WsHML}^{i+1} = \mathrm{WsHML}^{\mathrm{WcHML}^i}\) and \(\mathrm{WcHML}^{i+1} = \mathrm{WcHML}^{\mathrm{WsHML}^i}\).

Whenever this setup can be extended to unlimited prior runs, the resulting rejection-monitorable fragment would be \(\mathrm{WsHML}^\omega = \bigcup _i \mathrm{WsHML}^i\), which is also described by the following grammar:

$$\begin{aligned} \varphi , \psi \in \mathrm {WsHML}^\omega \;&{:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \chi \vee \varphi \ \mid \ \varphi \wedge \psi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X\\ \chi , \zeta \in \mathrm {WcHML}^\omega \;&{:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \varphi \wedge \chi \ \mid \ \chi \vee \zeta \ \mid \ \langle \langle \alpha \rangle \rangle \chi \ \mid \ \textsf {min}~X{.}\chi \ \mid \ X \end{aligned}$$

\(\mathrm{WsHML}^\omega \) is a non-trivial extension of \(\mathrm{WsHML}\) which is still within \(\mathrm{W}{\mu \mathrm{HML}}\).

Failure to Execute an Action and Refusals. In Subsect. 4.1, we instantiated the condition set C as the set of formulas from \({\mu \text {HML}}\) that assert the inability of a process to perform an action. These formulas are of the form \([ \alpha ]\textsf {ff}\). We recast this approach in the setting of weak monitorability. In this setting, where the monitoring system and the specification formulas ignore any silent transitions, the inability of a process to perform an \(\alpha \)-transition acquires a different meaning from the one used for the basic system. In particular, we consider a stronger version of these conditions that incorporates stability; this makes them invariant over \(\tau \)-transitions. We say that p refuses \(\alpha \) when p can perform neither a \(\tau \)-transition nor an \(\alpha \)-transition. In [31], a very similar notion is used for refusal testing (see also [1]). Thus, much in line with [31], we use the following definition.

Definition 14

A process p of an LTS L refuses action \(\alpha \in \textsc {Act}\), written \(p \ \texttt {ref}\ \alpha \), when p has no \(\tau \)-transition and no \(\alpha \)-transition in L, i.e., when p satisfies \([ \tau ] \textsf {ff}\wedge [ \alpha ]\textsf {ff}\). The set of conditions that corresponds to refusals is thus \(R_\textsc {Act} = \{ [ \tau ] \textsf {ff}\wedge [ \alpha ]\textsf {ff}\mid \alpha \in \textsc {Act} \}.\) \(\blacksquare \)

According to Theorem 4, the largest fragment of \(\mu \mathrm{HML}\) that we can monitor for, using monitors that can detect refusals, is \(\mathrm{WsHML}^{R_\textsc {Act}}\), given by the following grammar:

$$\begin{aligned} \varphi , \psi \in \mathrm {WsHML}^{R_\textsc {Act}}\; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ [[ \varepsilon ]](\langle \tau \rangle \textsf {tt}\vee \langle \alpha \rangle \textsf {tt}\vee \varphi )\ \mid \ \varphi \wedge \psi \ \mid \ [[ \alpha ]]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

Again, \(\langle \tau \rangle \textsf {tt}\vee \langle \alpha \rangle \textsf {tt}\vee \varphi \) is best read as the implication \(([ \tau ] \textsf {ff}\wedge [ \alpha ]\textsf {ff})\rightarrow \varphi \): if the process is stable and cannot perform an \(\alpha \)-transition, then \(\varphi \) must hold.
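The refusal conditions themselves are straightforward to evaluate on a finite LTS, as in the following sketch (ours; the LTS is an assumption).

```python
TAU = "tau"
TRANS = {("p", "b"): {"q"}, ("q", TAU): {"q'"}}   # illustrative LTS

def step(p, mu):
    return TRANS.get((p, mu), set())

def refuses(p, alpha):
    """p ref alpha  iff  p satisfies [tau]ff /\ [alpha]ff (stable and unable to do alpha)."""
    return not step(p, TAU) and not step(p, alpha)

print(refuses("p", "a"), refuses("p", "b"), refuses("q", "a"))   # True False False
```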

Proposition 5

The monitoring system \((M_e^{R_\textsc {Act}},I_e^{R_\textsc {Act}})\) monitors for the logical fragment \(\mathrm{WsHML}^{R_\textsc {Act}}\).    \(\square \)

Example 3

Consider the formula

$$\varphi _s = [[ \varepsilon ]](\langle \tau \rangle \textsf {tt}\vee \langle \alpha \rangle \textsf {tt}\vee [[ \beta ]] \textsf {ff}) \in \mathrm{WsHML}^{R_\textsc {Act}}.$$

Formula \(\varphi _s\) claims that at every stable state that the system can reach, if action \(\alpha \) is impossible, then action \(\beta \) should also be impossible. We can see that \(\varphi _s\) is true for \(\tau {.}\textsf {nil}+ \beta {.}\textsf {nil}\), but not for \(\beta {.}\textsf {nil}\). However, the two processes cannot be distinguished by \(\mathrm{W}{\mu \mathrm{HML}}\), as they have the same weak external transitions. Therefore, \(\mathrm{WsHML}^{R_\textsc {Act}}\) is not a fragment of \(\mathrm{W}{\mu \mathrm{HML}}\)—but, as we have seen, it is a fragment of \({\mu \text {HML}}\). Indeed, the subformula \(\langle \tau \rangle \textsf {tt}\), which asserts that the process can perform a silent transition, is clearly not expressible in \(\mathrm{W}{\mu \mathrm{HML}}\). \(\blacksquare \)

Example 4

Let us consider an LTS \(L_0\) of stable processes—that is, \(L_0\) is an LTS without any silent transitions. \(L_0\) offers a simplified setting in which to cast our observations. In this case, the \([[ \varepsilon ]]\), \([ \tau ]\), and \(\langle \tau \rangle \) modalities can be eliminated from our formulas, and weak modalities are equivalent to strong modalities. This allows us to simplify the grammar for \(\mathrm{WsHML}^{F_\textsc {Act}}\) as follows:

$$\begin{aligned} \varphi , \psi \; {:}{:}{=}\ \textsf {tt}\ \mid \ \textsf {ff}\ \mid \ \langle \alpha \rangle \textsf {tt}\vee \varphi \ \mid \ \varphi \wedge \psi \ \mid \ [ \alpha ]\varphi \ \mid \ \textsf {max}~X{.}\varphi \ \mid \ X \end{aligned}$$

Perhaps unsurprisingly, this grammar yields the same formulas as the restriction of the grammar of Subsect. 4.1 to external actions. An instance of a specification that can be formalized in this fragment is the following. Consider a simple server-client system, where the client can request a resource, which is represented by action \(\texttt {rq}\), and the server may give a positive response, represented by action \(\texttt {rs}\), after which it needs to allocate said resource to the client, represented by action \(\texttt {al}\). A reasonable specification for the server is that if it is impossible at the moment to provide a resource, then it should not give a positive response to the client. In the above simplification of \(\mathrm{WsHML}^{F_\textsc {Act}}\), this specification can be formalized as \([ \texttt {rq} ](\langle \texttt {al} \rangle \textsf {tt}\vee [ \texttt {rs} ] \textsf {ff})\). If the LTS includes silent transitions, the corresponding specification would be written as

$$\begin{aligned} \varphi _r = [ \texttt {rq} ][[ \varepsilon ]](\langle \tau \rangle \textsf {tt}\vee \langle \texttt {al} \rangle \textsf {tt}\vee [[ \texttt {rs} ]] \textsf {ff}). \end{aligned}$$

In other words, after a request, if the server cannot provide a resource and it is stable—so, there is no possibility that after some time the resource will be available—then the server should not give a positive response to the client. \(\blacksquare \)
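
A possible runtime check for the simplified server specification is sketched below (ours; the trace format, in which condition reports appear as tagged entries alongside ordinary events, is a simplification of the instrumentation of Sect. 5.1, and in the weak setting the reported condition would be the corresponding refusal rather than the plain inability): the monitor rejects exactly when a request is followed by a report that al is impossible and then by a positive response.

```python
def monitor(trace):
    """Sketch of a rejection monitor for [rq](<al>tt \/ [rs]ff)."""
    # Assumed transition table of the monitor rq.([al]ff).rs.no
    table = {("start", ("act", "rq")): "requested",
             ("requested", ("cond", "[al]ff")): "refused",
             ("refused", ("act", "rs")): "no"}
    state = "start"
    for event in trace:
        if state in ("no", "end"):
            break                                  # verdicts are irrevocable (rule mVrd)
        state = table.get((state, event), "end")   # otherwise give up, as in rule iTer
    return state

good = [("act", "rq"), ("act", "rs"), ("act", "al")]        # no inability reported
bad  = [("act", "rq"), ("cond", "[al]ff"), ("act", "rs")]   # inability, then a response
print(monitor(good), monitor(bad))   # end no
```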

6 Conclusions

In order to devise effective verification strategies that straddle the pre- and post-deployment phases of software production, one needs to understand better the monitorability aspects of the correctness properties that are to be verified. We have presented a general framework that allows us to determine maximal monitorable fragments of an expressive logic that is agnostic of the verification technique employed, namely \(\mu \mathrm{HML}\). By way of a number of instantiations, we also show how the framework can be used to reason about the monitorability induced by various forms of augmented traces. Our next immediate concern is to validate the proposed instantiations empirically by constructing monitoring systems and tools that are based on these results, as we did already for the original monitorability results of [21, 22] in [9, 10, 12].

Related Work. Monitorability for \(\mu \mathrm{HML}\) was first examined in [21, 22]. This work introduced the external monitoring system and identified \(\mathrm{WsHML}\) as the largest monitorable fragment of \(\mu \mathrm{HML}\), with respect to that system. The ensuing work in [2] focused on monitoring setups that can distinguish silent actions to a varying degree, and introduced the basic monitoring system, showing analogous monitorability results for \(\mu \text {HML}\).

Monitorability has also been examined for languages defined over traces, such as LTL. Pnueli and Zaks in [32] define a notion of monitorability over traces, although they do not attempt maximal monitorability results. Diekert and Leucker revisited monitorability from a topological perspective in [16]. Falcone et al. in [17] extended the work in [32] to incorporate enforcement and introduced a notion of monitorability on traces that is parameterized with respect to a truth domain, which corresponds to our separation into acceptance- and rejection-monitorable properties. In [13], the authors use a monitoring system that can generate derivations of satisfied formulas from a fragment of LTL. However, they do not argue that this fragment is maximal in any sense. There is a significant body of work on synthesizing monitors from LTL formulas, e.g. [13, 23, 33, 35], and it would be worth investigating whether our general techniques for monitor synthesis can be applied effectively in these cases.

Phillips introduced refusal testing in [31] as a way to extend the capabilities of testing (see [18] for a discussion on how our monitoring setup relates to testing preorders). The meaning of refusals in [31] is very close to the one in Definition 14 and it is interesting to note how Phillips’ use of tests for refusal formulas is similar to our monitoring mechanisms for refusals. Abramsky [1] uses refusals in the context of a much more powerful testing machinery, in order to identify the kind of testing power that is required for distinguishing non-bisimilar processes.

The decomposition of the verification burden across verification techniques, or across iterations of alternating monitoring runs as presented in Sect. 5, can be seen as a method for quotienting. In [7] Andersen studies quotienting of the specification logics discussed in this paper to reduce the state-space during model checking and thus increase its efficiency (see also [27] for a more recent treatment). The techniques used rely heavily on the model’s concurrency constructs and may produce formulas that are larger in size than the original, but which can be checked against a smaller component of the model. In multi-pronged approaches to verification one would expect to encounter similar difficulties occasionally.