Synthesizing adaptive test strategies from temporal logic specifications
Abstract
Constructing good test cases is difficult and time-consuming, especially if the system under test is still under development and its exact behavior is not yet fixed. We propose a new approach to compute test strategies for reactive systems from a given temporal logic specification using formal methods. The computed strategies are guaranteed to reveal certain simple faults in every realization of the specification and for every behavior of the uncontrollable part of the system’s environment. The proposed approach supports different assumptions on occurrences of faults (ranging from a single transient fault to a persistent fault) and by default aims at unveiling the weakest one. We argue that such tests are also sensitive for more complex bugs. Since the specification may not define the system behavior completely, we use reactive synthesis algorithms with partial information. The computed strategies are adaptive test strategies that react to behavior at runtime. We work out the underlying theory of adaptive test strategy synthesis and present experiments for a safety-critical component of a real-world satellite system. We demonstrate that our approach can be applied to industrial specifications and that the synthesized test strategies are capable of detecting bugs that are hard to detect with random testing.
Keywords
Automatic test case generation · System testing · Specification testing · Adaptive tests · Synthesis · Reactive systems · Mutation testing

1 Introduction
Model checking [12, 48] is an algorithmic approach to prove that a model of a system adheres to its specification. However, model checking cannot always be applied effectively to obtain confidence in the correctness of a system. Possible reasons include scalability issues, third-party IP components for which no code or detailed model is available, or a high effort for building system models that are sufficiently precise. Moreover, model checking cannot verify the final and “live” product but only an (abstracted) model.
Testing is a natural alternative to complement formal methods like model checking, and automatic test case generation helps keep the effort acceptable. Black-box testing techniques, where tests are derived from a specification rather than the implementation, are particularly attractive: first, tests can be computed before the implementation phase starts, and thus guide the development. Second, the same tests can be reused across different realizations of a given specification. Third, a specification is usually much simpler than its implementation, which gives a scalability advantage. At the same time, the specification focuses on critical functional aspects that require thorough testing. Fault-based techniques [29] are particularly appealing, where the computed tests are guaranteed to reveal all faults in a certain fault class—after all, the foremost goal in testing is to detect bugs.
Methods to derive tests from declarative requirements (see, e.g., [25]) are scarce. One issue in this setting is controllability: the requirements leave plenty of implementation freedom, so they cannot be used to fully predict the system behavior for all given inputs. Consequently, test cases have to be adaptive, i.e., able to react to observed behavior at runtime, rather than being fixed input sequences. This is particularly true for reactive systems that continuously interact with their environment. Existing methods often work around this complication by requiring a deterministic system model as additional input [24]. Even a probabilistic model fixes the behavior in a way not necessarily required by the specification.
In previous work, we presented a fault-based approach to compute adaptive test strategies for reactive systems [10]. This approach generates tests that enforce certain coverage goals for every implementation of a provided specification. The generated tests can be used across realizations of the specification that differ not only in implementation details but also in their observable behavior. This is, e.g., useful for standards and protocols that are implemented by multiple vendors or for systems under development, where the exact behavior is not yet fixed.
Figure 1 outlines the assumed testing setup and shows how the approach for synthesizing adaptive test strategies (illustrated in black) can be integrated in an existing testing flow.
The user provides a specification \(\varphi \), which describes the requirements of the system under test (SUT), and additionally a fault model \(\delta \), which defines the coverage goal in terms of a class of faults for which the tests shall cause a specification violation. Both the specification and the coverage goal are expressed in Linear Temporal Logic (LTL) [46]. By default, our approach supports the detection of transient and permanent faults and distinguishes four fault occurrence frequencies: faults that occur at least (1) once, (2) repeatedly, (3) from some point on, or (4) permanently. Besides the four default fault occurrence frequencies, a user can also provide a custom frequency using LTL. Our approach then automatically synthesizes a test strategy to reveal a fault for the lowest frequency possible. Such a test strategy is guaranteed to cause a specification violation if the fault occurs at the defined frequency (or any higher frequency) and the test is executed long enough. Although test oracles can be synthesized from the specification \(\varphi \), in this paper, we do not explicitly consider test oracle synthesis, but assume that the oracles are available or manually generated for the test strategies.
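To make the four default frequencies concrete, assume for illustration a fresh atomic proposition \(\mathit{fault}\) that holds exactly in the steps in which the fault is active (this encoding is only a sketch; the fault models used later are defined via the auxiliary signal \(o_i'\)). The frequencies then correspond to standard LTL patterns:

```latex
\begin{align*}
\text{(1) at least once:}      &\quad \mathsf{F}\,\mathit{fault}\\
\text{(2) repeatedly:}         &\quad \mathsf{G}\,\mathsf{F}\,\mathit{fault}\\
\text{(3) from some point on:} &\quad \mathsf{F}\,\mathsf{G}\,\mathit{fault}\\
\text{(4) permanently:}        &\quad \mathsf{G}\,\mathit{fault}
\end{align*}
```

These patterns are ordered by entailment, \(\mathsf{G}\,\mathit{fault} \Rightarrow \mathsf{F}\,\mathsf{G}\,\mathit{fault} \Rightarrow \mathsf{G}\,\mathsf{F}\,\mathit{fault} \Rightarrow \mathsf{F}\,\mathit{fault}\), which is why a strategy that reveals faults at a lower frequency also covers all higher frequencies.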
This article makes the following contributions:

An approach to compute adaptive test strategies for reactive systems from temporal specifications that provide implementation freedom. The tests are guaranteed to reveal certain bugs for every realization of the specification.
The underlying theory is considered in detail, i.e., we show that the approach is sound and complete for many interesting cases and provide additional solutions for other cases that may arise in practice.
A proof-of-concept tool, called PARTY-Strategy,^{2} that is capable of generating multiple different test strategies, implemented on top of the synthesis tool PARTY [31].
A postprocessing procedure to generalize a test strategy by eliminating input constraints not necessary to guarantee a coverage goal.
A case study with a safety-critical software component of a real-world satellite system developed at the German Aerospace Center (DLR). We specify the system in LTL, synthesize test strategies, and evaluate the generated adaptive test strategies using code coverage and mutation coverage metrics. Our synthesized test strategies increase both the mutation coverage and the code coverage of random test cases by activating behaviors that require complex input sequences that are unlikely to be produced by random testing.
2 Motivating example
The specification \(\varphi \) of the traffic light controller in our motivating example consists of the following properties:

1. The traffic lights must never be green simultaneously.
2. If a car is waiting at the farm road, f eventually turns \(\mathsf {true}\).
3. If no car is waiting at the farm road, h eventually becomes \(\mathsf {true}\).
4. A picture is taken if a car on the farm road makes a fast start.
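Using the signals that appear in the discussion below (\(\mathsf {c}\) for a waiting car, \(\mathsf {f}\) and \(\mathsf {h}\) for the farm-road and highway lights, \(\mathsf {p}\) for the picture), the first three properties could be formalized roughly as follows. This is an illustrative sketch rather than the exact specification; Property 4 is omitted because its formalization depends on how a fast start is sensed.

```latex
\begin{align*}
\text{(1)}\quad & \mathsf{G}\,\lnot(\mathsf{f} \wedge \mathsf{h})\\
\text{(2)}\quad & \mathsf{G}\,(\mathsf{c} \rightarrow \mathsf{F}\,\mathsf{f})\\
\text{(3)}\quad & \mathsf{G}\,(\lnot\mathsf{c} \rightarrow \mathsf{F}\,\mathsf{h})
\end{align*}
```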
Enforcing test objectives To mitigate scalability issues, we compute test cases directly from the specification \(\varphi \). Note that \(\varphi \) focuses on the desired properties only and allows for plenty of implementation freedom. Our goal is to compute tests that enforce certain coverage objectives independent of this implementation freedom. Some uncertainties about the SUT behavior may actually be rooted in uncontrollable environment aspects (such as weather conditions) rather than implementation freedom inside the system, but for our testing approach, this makes no difference. We can force the farm road’s traffic light to turn green (\(\mathsf {f}\) = \(\mathsf {true}\)) by relying on a correct implementation of Property 2 and setting \(\mathsf {c}\) = \(\mathsf {true}\). Depending on how the system is implemented, \(\mathsf {f}\) = \(\mathsf {true}\) might also be achieved by setting \(\mathsf {c}\) = \(\mathsf {false}\) all the time, but this is not guaranteed.
Adaptive test strategies Certain test goals may not be enforceable with a static input sequence. In our example, for \(\mathsf {p}\) to become \(\mathsf {true}\), a car must make a fast start. Yet, the specification does not prescribe the exact point in time when the traffic light turns green. We thus synthesize adaptive test strategies that choose the next input based on the previous inputs and outputs and can therefore take advantage of situational possibilities by exploiting previous system behavior.
Coverage objectives We follow a fault-centered approach to define the test objectives to enforce. The user defines a class of (potentially transient) faults. Our approach then computes adaptive test strategies (in the form of state machines) that detect these faults. For a permanent stuck-at-0 fault at signal \(\mathsf {p}\), our approach could produce the test strategy \(\mathcal {T}_1\) from the previous paragraph: for any correct implementation of \(\varphi \), the strategy enforces \(\mathsf {p}\) becoming \(\mathsf {true}\) at least once. Thus, a faulty version where \(\mathsf {p}\) is always \(\mathsf {false}\) necessarily violates the specification, which can be detected [6] during test strategy execution. The test strategy \(\mathcal {T}_2\), as shown on the right of Fig. 3, is even more powerful since it also reveals stuck-at-0 faults for \(\mathsf {p}\) that occur not always but only from some point in time onwards. The difference to \(\mathcal {T}_1\) is mainly in the bold transition, which makes \(\mathcal {T}_2\) enforce \(\mathsf {p}\) = \(\mathsf {true}\) infinitely often rather than only once. Our approach distinguishes four fault occurrence frequencies (a fault occurs at least once, infinitely often, from some point on, or always) and synthesizes test strategies for the lowest one for which this is possible.
3 Background and related work
Fault-based testing Fault-based test case generation methods that use the concept of mutation testing [29] seed simple faults into a system implementation (or model) and compute tests that uncover these faults. Two hypotheses support the value of such tests. The Competent Programmer Hypothesis [1, 16] states that implementations are mostly close to correct. The Coupling Effect [16, 41] states that tests that detect simple faults are also sensitive to more complex faults. Our approach also relies on these hypotheses. However, in contrast to most existing work that considers permanent faults and deterministic system descriptions that define behavior unambiguously, our approach can deal with transient faults and focuses on uncovering faults in every implementation of a given LTL [46] specification (and all behaviors of the uncontrollable part of the system’s environment).
Adaptive tests If the behavior of the system or the uncontrollable part of the environment is not fully specified, tests may have to react to observed behavior at runtime to achieve their goals. Many testing theories and test case generation algorithms for specifications given as labelled transition systems have been developed. Tretmans [49], for instance, proposed a testing theory analogous to the theory of testing equivalence and preorder for labelled transition systems under the assumption that an implementation communicates with its environment via inputs and outputs. Adaptive tests have been studied by Hierons [28] from a theoretical perspective, relying on fairness assumptions (every nondeterministic behavior is exhibited when trying often enough) or probabilities. Petrenko et al. compute adaptive tests for trace inclusion [43, 44, 45] or equivalence [35, 42, 44] from a specification given as a nondeterministic finite state machine, also relying on fairness assumptions. Our work makes no such assumptions but considers the SUT to be fully antagonistic. Aichernig et al. [2] present a method to compute adaptive tests from (nondeterministic) UML state machines. Starting from an initial state, a trace to a goal state, i.e., the state that shall be covered by the resulting test case, is searched for every possible system behavior, issuing inconclusive verdicts only if the goal state is not reachable any more. Our approach uses reactive synthesis to enforce reaching the testing goal for all implementations if this is possible.
Testing as a game Yannakakis [52] points out that testing reactive systems can be seen as a game between two players: the tester providing inputs and trying to reveal faults, and the SUT providing outputs and trying to hide faults. The tester can only observe outputs and thus has partial information about the SUT. The goal is to find a strategy for the tester that wins against every SUT. The underlying complexities are studied by Alur et al. [3] in detail. Our work builds upon reactive synthesis [47] (with partial information [33]), which can also be seen as a game. However, we go far beyond the basic idea. We combine the game concept with user-defined fault models, work out the underlying theory, optimize the fault sensitivity in the temporal domain, and present a realization and experiments for LTL [46]. Nachmanson et al. [40] synthesize game strategies as tests for nondeterministic software models, but their approach is not fault-based and focuses on simple reachability goals. A variant of their approach considers the SUT to behave probabilistically with known probabilities [40]. The same model is also used in [8]. Test strategies for reachability goals are also considered by David et al. [13] for timed automata.
Vacuity detection Several approaches [5, 7, 34] aim at finding cases where a temporal specification is trivially satisfied (e.g., because the left side of an implication is false). Good tests avoid such vacuities to challenge the SUT. The method by Beer et al. [7] can produce witnesses that satisfy the specification nonvacuously, which can serve as tests. Our approach avoids vacuities by requiring that certain faulty SUTs violate the specification.
Testing with a model checker Model checkers can be utilized to compute tests from temporal specifications [25]. The method by Fraser and Ammann [22] ensures that properties are not vacuously satisfied and that faults propagate to observable property violations (using finite-trace semantics for LTL). Tan et al. [50] also define and apply a coverage metric based on vacuity for LTL. Ammann et al. [4] create tests from CTL [12] specifications using model mutations. All these methods assume that a deterministic system model is available in addition to the specification. Fraser and Wotawa [23] also consider nondeterministic models, but issue inconclusive verdicts if the system deviates from the behavior foreseen in the test case. In contrast, we search for test strategies that achieve their goal for every realization of the specification. Boroday et al. [11] aim for a similar guarantee (calling it strong test cases) using a model checker, but do not consider adaptive test cases, and use a finite state machine as a specification.
Synthesis of test strategies Bounded synthesis [21] aims at finding a system implementation with a minimal number of states. Symbolic procedures based on binary decision diagrams [18] and satisfiability solving [31] exist. In our setting, we do not synthesize an implementation of the system but an adaptive test strategy, i.e., a controller that mimics the system’s environment to enforce a certain test goal. In contrast to a complete implementation of the controller, we strive to find a partial implementation that assigns values only to those signals that necessarily contribute to reaching the test goal. Other signals can be kept nondeterministic and either chosen during execution of the test strategy or randomized. We use a postprocessing procedure that eliminates assignments from the test strategy and invokes a model checker to verify that the test goal is still enforced. This postprocessing step is conceptually similar to procedures that aim at counterexample simplification [30] and don’t-care identification in test patterns [38]. Jin et al. [30] separate a counterexample trace into forced segments that unavoidably progress towards the specification violation and free segments that, if avoided, might have prevented the specification violation. Our postprocessing step is similar, but instead of counterexamples, adaptive test strategies are postprocessed. Miyase and Kajihara [38] present an approach to identify don’t cares in test patterns of combinational circuits. In contrast to combinational circuits, we deal with reactive systems. Instead of postprocessing a complete test strategy, a partial test strategy can be synthesized directly by modifying a synthesis procedure to compute minimum satisfying assignments [17]. Although feasible, modifying a synthesis procedure requires considerable effort. Our postprocessing procedure uses the synthesis procedure in a plug-and-play fashion and does not require manual changes to it.
4 Preliminaries and notation
Traces We want to test reactive systems that have a finite set \(I=\{i_1,\ldots ,i_m\}\) of Boolean inputs and a finite set \(O=\{o_1,\ldots ,o_n\}\) of Boolean outputs. The input alphabet is \(\Sigma _I=2^I\), the output alphabet is \(\Sigma _O=2^O\), and \(\Sigma =2^{I\cup O}\). An infinite word \({\overline{\sigma }}\) over \(\Sigma \) is an (execution) trace and the set \(\Sigma ^\omega \) is the set of all infinite words over \(\Sigma \).
Linear temporal logic The semantics of an LTL [46] formula over the atomic propositions \(I\cup O\) is defined on traces \({\overline{\sigma }}= \sigma _0 \sigma _1 \sigma _2 \ldots \in \Sigma ^\omega \) as follows:

\(\sigma _0 \sigma _1 \sigma _2 \ldots \models p\) iff \(p \in \sigma _0\),
\({\overline{\sigma }}\models \lnot \varphi \) iff \({\overline{\sigma }}\not \models \varphi \),
\({\overline{\sigma }}\models \varphi _1 \vee \varphi _2\) iff \({\overline{\sigma }}\models \varphi _1\) or \({\overline{\sigma }}\models \varphi _2\),
\(\sigma _0 \sigma _1 \sigma _2 \ldots \models {{\,\mathrm{\mathsf {X}}\,}}\varphi \) iff \(\sigma _1 \sigma _2 \ldots \models \varphi \), and
\(\sigma _0 \sigma _1 \ldots \models \varphi _1 \mathbin {\mathsf {U}}\varphi _2\) iff \(\exists j \ge 0 {{\,\mathrm{\mathbin {.}}\,}}\sigma _j \sigma _{j+1} \ldots \models \varphi _2 \wedge \forall 0 \le k < j {{\,\mathrm{\mathbin {.}}\,}}\sigma _k \sigma _{k+1} \ldots \models \varphi _1\).
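The remaining temporal operators are derived in the usual way:

```latex
\mathsf{F}\,\varphi \;\equiv\; \mathsf{true} \mathbin{\mathsf{U}} \varphi
\qquad\text{and}\qquad
\mathsf{G}\,\varphi \;\equiv\; \lnot\,\mathsf{F}\,\lnot\varphi ,
```

so \(\mathsf{F}\,\varphi \) (eventually) holds iff \(\varphi \) holds at some position of the trace, and \(\mathsf{G}\,\varphi \) (globally) holds iff \(\varphi \) holds at every position.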
Mealy machines We use Mealy machines to model the reactive system under test. A Mealy machine is a tuple \(\mathcal {S}= (Q, q_0, \Sigma _I, \Sigma _O, \delta , \lambda )\), where Q is a finite set of states, \(q_0\in Q\) is the initial state, \(\delta : Q \times \Sigma _I\rightarrow Q\) is a total transition function, and \(\lambda : Q \times \Sigma _I\rightarrow \Sigma _O\) is a total output function. Given the input trace \({\overline{\sigma _I}}= x_0 x_1 \ldots \in \Sigma _I^\omega \), \(\mathcal {S}\) produces the output trace \({\overline{\sigma _O}}= \mathcal {S}({\overline{\sigma _I}}) = \lambda (q_0, x_0) \lambda (q_1, x_1) \ldots \in \Sigma _O^\omega \), where \(q_{i+1} = \delta (q_i, x_i)\) for all \(i \ge 0\). That is, in every time step i, the Mealy machine reads the input letter \(x_i\in \Sigma _I\), responds with an output letter \(\lambda (q_i, x_i) \in \Sigma _O\), and updates its state to \(q_{i+1} = \delta (q_i, x_i)\). A Mealy machine can directly model synchronous hardware designs, but also other systems with inputs and outputs evolving in discrete time steps. We write \({\mathsf {Mealy}}(I,O)\) for the set of all Mealy machines with inputs \(I\) and outputs \(O\).
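As a minimal illustration of this definition (not taken from the paper), a dict-encoded Mealy machine can be executed on a finite prefix of an input trace as follows; the one-state machine below, which copies input i to output o, is a hypothetical example:

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, List, Tuple

# A letter of the alphabet is the set of signals that are true in this step.
Letter = FrozenSet[str]

@dataclass
class Mealy:
    q0: str                                # initial state
    delta: Dict[Tuple[str, Letter], str]   # total transition function
    lam: Dict[Tuple[str, Letter], Letter]  # total output function

    def run(self, inputs: List[Letter]) -> List[Letter]:
        """Output trace for a finite prefix of an input trace."""
        q, out = self.q0, []
        for x in inputs:
            out.append(self.lam[(q, x)])   # Mealy: output depends on q AND x
            q = self.delta[(q, x)]
        return out

EMPTY, I1, O1 = frozenset(), frozenset({"i"}), frozenset({"o"})

# One-state machine that copies input i to output o in the same step.
copy = Mealy(
    q0="q0",
    delta={("q0", EMPTY): "q0", ("q0", I1): "q0"},
    lam={("q0", EMPTY): EMPTY, ("q0", I1): O1},
)
```

For the input prefix \(\emptyset \,\{i\}\,\{i\}\), `copy.run` yields \(\emptyset \,\{o\}\,\{o\}\), matching the same-step response of a Mealy machine.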
Moore machines We use Moore machines to describe test strategies. A Moore machine is a special Mealy machine with \(\forall q\in Q{{\,\mathrm{\mathbin {.}}\,}}\forall x,x'\in \Sigma _I{{\,\mathrm{\mathbin {.}}\,}}\lambda (q,x) = \lambda (q,x')\). That is, \(\lambda (q,x)\) is insensitive to x, i.e., becomes a function \(\lambda : Q \rightarrow \Sigma _O\). This means that the input \(x_i\) at step i can affect the next state \(q_{i+1}\) and thus the next output \(\lambda (q_{i+1})\) but not the current output \(\lambda (q_i)\). We write \({\mathsf {Moore}}(I,O)\) for the set of all Moore machines with inputs \(I\) and outputs \(O\).
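The Moore condition (λ insensitive to the current input) can be checked directly on a dict-encoded output function. The helper below is illustrative and not part of the paper:

```python
from itertools import combinations
from typing import Dict, FrozenSet, Iterable, List, Tuple

Letter = FrozenSet[str]

def powerset(signals: Iterable[str]) -> List[Letter]:
    """All letters of the alphabet 2^signals."""
    s = list(signals)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_moore(states: Iterable[str], inputs: Iterable[str],
             lam: Dict[Tuple[str, Letter], Letter]) -> bool:
    """True iff lambda(q, x) = lambda(q, x') for every state q and inputs x, x'."""
    letters = powerset(inputs)
    return all(len({lam[(q, x)] for x in letters}) == 1 for q in states)

EMPTY, I1, O1 = frozenset(), frozenset({"i"}), frozenset({"o"})
lam_moore = {("q0", EMPTY): O1, ("q0", I1): O1}     # insensitive to the input
lam_mealy = {("q0", EMPTY): EMPTY, ("q0", I1): O1}  # output depends on i
```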
Composition Given Mealy machines \(\mathcal {S}_1 = (Q_1, q_{0,1}, 2^I, 2^{O_1}, \delta _1, \lambda _1) \in {\mathsf {Mealy}}(I,O_1)\) and \(\mathcal {S}_2 = (Q_2, q_{0,2}, 2^{I\cup O_1}, 2^{O_2}, \delta _2, \lambda _2) \in {\mathsf {Mealy}}(I\cup O_1, O_2)\), we write \(\mathcal {S}= \mathcal {S}_1 \circ \mathcal {S}_2\) for their sequential composition \(\mathcal {S}= (Q_1 \times Q_2, (q_{0,1}, q_{0,2}), 2^I, 2^{O_1 \cup O_2},\)\( \delta , \lambda )\), where \(\mathcal {S}\in {\mathsf {Mealy}}(I,O_1\cup O_2)\) with \(\delta \bigl ((q_1, q_2), x\bigr ) = \bigl (\delta _1(q_1,x), \delta _2(q_2, x \cup \lambda _1(q_1,x))\bigr )\) and \(\lambda \bigl ((q_1, q_2), x\bigr ) = \lambda _1(q_1,x) \cup \lambda _2\bigl (q_2,x \cup \lambda _1(q_1,x)\bigr )\). Note that \(x \in 2^I\).
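The δ and λ of the composition can be transcribed directly into code. The following sketch (hypothetical helper, dict-encoded machines as above) builds \(\mathcal {S}_1 \circ \mathcal {S}_2\), where \(\mathcal {S}_2\) reads \(\mathcal {S}_1\)'s inputs together with \(\mathcal {S}_1\)'s outputs:

```python
from typing import Dict, FrozenSet, Tuple

Letter = FrozenSet[str]

def compose(d1, l1, d2, l2):
    """Sequential composition S1 ∘ S2 over product states Q1 x Q2.

    d1/l1 encode delta_1/lambda_1 of S1 in Mealy(I, O1);
    d2/l2 encode delta_2/lambda_2 of S2 in Mealy(I ∪ O1, O2).
    """
    states2 = {q for (q, _) in d2}
    delta, lam = {}, {}
    for (q1, x), q1_next in d1.items():
        y1 = l1[(q1, x)]
        x2 = x | y1                       # input letter seen by S2
        for q2 in states2:
            delta[((q1, q2), x)] = (q1_next, d2[(q2, x2)])
            lam[((q1, q2), x)] = y1 | l2[(q2, x2)]
    return delta, lam

EMPTY, I1 = frozenset(), frozenset({"i"})
A, B = frozenset({"a"}), frozenset({"b"})

# S1 copies i to a; S2 raises b whenever it sees a.
d1 = {("q0", EMPTY): "q0", ("q0", I1): "q0"}
l1 = {("q0", EMPTY): EMPTY, ("q0", I1): A}
d2 = {("p0", x): "p0" for x in [EMPTY, I1, A, I1 | A]}
l2 = {("p0", EMPTY): EMPTY, ("p0", I1): EMPTY, ("p0", A): B, ("p0", I1 | A): B}

delta, lam = compose(d1, l1, d2, l2)
```

On input \(\{i\}\), the composition emits \(\{a, b\}\): S1 contributes a and S2, seeing a in the same step, contributes b.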
Systems and test strategies A reactive system \(\mathcal {S}\) is a Mealy machine. An (adaptive) test strategy is a Moore machine \(\mathcal {T}= (T, t_0, \Sigma _O, \Sigma _I, \Delta , \Lambda )\) with input and output alphabet swapped. That is, \(\mathcal {T}\) produces values for input signals and reacts to values of output signals. A test strategy \(\mathcal {T}\) can be run on a system \(\mathcal {S}\) as follows. In every time step i (starting with \(i=0\)), \(\mathcal {T}\) first computes the next input \(x_i=\Lambda (t_i)\). Then, the system computes the output \(y_i = \lambda (q_i, x_i)\). Finally, both machines compute their next state \(t_{i+1} = \Delta (t_i, y_i)\) and \(q_{i+1} = \delta (q_i, x_i)\). We write \({\overline{\sigma }}(\mathcal {T},\mathcal {S}) = (x_0 \cup y_0) (x_1 \cup y_1) \ldots \in \Sigma ^\omega \) for the resulting execution trace. If \(\mathcal {T}= (T, t_0, 2^{O'}, \Sigma _I, \Delta , \Lambda ) \in {\mathsf {Moore}}(O', I)\) can observe only a subset \(O'\subseteq O\) of the outputs, we define \({\overline{\sigma }}(\mathcal {T},\mathcal {S})\) with \(t_{i+1} = \Delta (t_i, y_i \cap O')\). A test suite is a set \(\text {TS}\subseteq {\mathsf {Moore}}(O,I)\) of adaptive test strategies.
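The closed-loop execution of a test strategy against a system can be sketched as follows (a finite-prefix, dict-encoded illustration with hypothetical machines, not code from the paper); the tester commits its input first, as required by the Moore semantics:

```python
from typing import Dict, FrozenSet, List, Tuple

Letter = FrozenSet[str]
EMPTY: Letter = frozenset()

def run_closed_loop(
    strat_q0: str,
    strat_out: Dict[str, Letter],               # Lambda: tester state -> input for SUT
    strat_next: Dict[Tuple[str, Letter], str],  # Delta: (tester state, SUT output) -> state
    sut_q0: str,
    sut_lam: Dict[Tuple[str, Letter], Letter],  # lambda of the Mealy SUT
    sut_delta: Dict[Tuple[str, Letter], str],   # delta of the Mealy SUT
    steps: int,
) -> List[Letter]:
    """Finite prefix of the execution trace x_0 ∪ y_0, x_1 ∪ y_1, ..."""
    t, q, trace = strat_q0, sut_q0, []
    for _ in range(steps):
        x = strat_out[t]          # tester commits its input first (Moore)
        y = sut_lam[(q, x)]       # SUT responds in the same step (Mealy)
        trace.append(x | y)
        t = strat_next[(t, y)]    # both machines advance
        q = sut_delta[(q, x)]
    return trace

# Tester that always raises i; SUT that raises o one step after first seeing i.
I1, O1 = frozenset({"i"}), frozenset({"o"})
trace = run_closed_loop(
    strat_q0="t0",
    strat_out={"t0": I1},
    strat_next={("t0", EMPTY): "t0", ("t0", O1): "t0"},
    sut_q0="q_lo",
    sut_lam={("q_lo", I1): EMPTY, ("q_lo", EMPTY): EMPTY,
             ("q_hi", I1): O1, ("q_hi", EMPTY): O1},
    sut_delta={("q_lo", I1): "q_hi", ("q_lo", EMPTY): "q_lo",
               ("q_hi", I1): "q_hi", ("q_hi", EMPTY): "q_hi"},
    steps=3,
)
```

The resulting trace is \(\{i\}\,\{i,o\}\,\{i,o\}\): the SUT's one-step delay is visible in the first letter.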
Realizability A Mealy machine \(\mathcal {S}\in {\mathsf {Mealy}}(I,O)\) realizes an LTL formula \(\varphi \), written \(\mathcal {S}\models \varphi \), if \(\forall \mathcal {M}\in {\mathsf {Moore}}(O, I) {{\,\mathrm{\mathbin {.}}\,}}{\overline{\sigma }}(\mathcal {M},\mathcal {S}) \models \varphi \). An LTL formula \(\varphi \) is Mealy-realizable if there exists a Mealy machine that realizes it. A Moore machine \(\mathcal {M}\in {\mathsf {Moore}}(I, O)\) realizes \(\varphi \), written \(\mathcal {M}\models \varphi \), if \(\forall \mathcal {S}\in {\mathsf {Mealy}}(O, I) {{\,\mathrm{\mathbin {.}}\,}}{\overline{\sigma }}(\mathcal {M},\mathcal {S}) \models \varphi \). A model checking procedure checks whether a given Mealy (Moore) machine \(\mathcal {S}\) (\(\mathcal {M}\)) realizes an LTL specification \(\varphi \) and returns \(\mathsf {true}\) iff \(\mathcal {S}\models \varphi \) (\(\mathcal {M}\models \varphi \)) holds. We denote the call of a model checking procedure by \({\mathsf {modelcheck}}\bigl (\mathcal {S},\varphi \bigr )\) (\({\mathsf {modelcheck}}\bigl (\mathcal {M},\varphi \bigr )\)).
Reactive synthesis We use reactive synthesis [47] to compute test strategies. A reactive (Moore, LTL) synthesis procedure takes as input a set \(I\) of Boolean inputs, a set \(O\) of Boolean outputs, and an LTL specification \(\varphi \) over these signals. It produces a Moore machine \(\mathcal {M}\in {\mathsf {Moore}}(I, O)\) that realizes \(\varphi \), or the message unrealizable if no such Moore machine exists. We denote this computation by \(\mathcal {M}= {\mathsf {synt}}(I, O, \varphi )\).
Synthesis with partial information [33] is defined similarly, but this problem takes a subset \(I' \subseteq I\) of the inputs as an additional input. As output, the synthesis procedure produces a Moore machine \(\mathcal {M}' = {\mathsf {synt}}_p(I, O, \varphi , I')\) with \(\mathcal {M}' \in {\mathsf {Moore}}(I', O)\) that realizes \(\varphi \) while only observing the inputs \(I'\), or the message unrealizable if no such Moore machine exists. We assume that both synthesis procedures, \({\mathsf {synt}}\) and \({\mathsf {synt}}_p\), can be called incrementally with an additional parameter \(\Theta \), where \(\Theta \) denotes a set of Moore machines. The incremental synthesis procedures \(\mathcal {M}= {\mathsf {synt}}(I, O, \varphi , \Theta )\) and \(\mathcal {M}' = {\mathsf {synt}}_p(I, O, \varphi , I', \Theta )\) compute Moore machines \(\mathcal {M}\) and \(\mathcal {M}^\prime \), respectively, as before but with the additional constraint that \(\mathcal {M}, \mathcal {M}^\prime \not \in \Theta \).
In synthesis, we often use assumptions \(A\) and guarantees \(G\). The assumptions are meant to state the requirements on the environment under which the guarantees should be met by the synthesized system. Technically, we synthesize a system \(\mathcal {M}\) that fulfills the specification \(A \rightarrow G\). Obviously, whenever the environment violates the assumptions, the implication is trivially satisfied and the behavior of the system is irrelevant.
For the purposes of this paper, we take synthesis as a black box. We will not describe the technical details of synthesis here but rather refer the interested reader to [9] for details.
Fault versus failure A Mealy machine \(\mathcal {S}\in {\mathsf {Mealy}}(I,O)\) is faulty with respect to an LTL formula \(\varphi \) (the specification) iff \(\mathcal {S}\not \models \varphi \), i.e., \(\exists \mathcal {M}\in {\mathsf {Moore}}(O, I) {{\,\mathrm{\mathbin {.}}\,}}{\overline{\sigma }}(\mathcal {M},\mathcal {S}) \not \models \varphi \). We call a trace \({\overline{\sigma }}(\mathcal {M},\mathcal {S})\) that uncovers a faulty behavior of \(\mathcal {S}\) a failure, and a deviation between \(\mathcal {S}\) and any correct realization \(\mathcal {S}^\prime \), i.e., \(\mathcal {S}^\prime \models \varphi \), a fault. For a fixed faulty \(\mathcal {S}\), there are multiple correct \(\mathcal {S}^\prime \) that realize \(\varphi \), and thus a fault in \(\mathcal {S}\) can be characterized in multiple different ways. As a simplification, we assume that in practice every faulty \(\mathcal {S}\) is close to a correct \(\mathcal {S}^\prime \) and deviates only by a simple fault. In the next section, we will show how this idea can be leveraged to determine test suites independent of the implementation and the concrete fault manifestation.
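To make the fault/failure distinction concrete, here is a trace-level sketch (names and encoding are hypothetical; the paper models the fault itself as a Mealy machine \(F\) composed with a correct \(\mathcal {S}'\)) of a permanent stuck-at-0 fault on output o. The correct value is kept on an auxiliary signal o' so that a fault model over o and o', such as \(\mathsf{F}(o \leftrightarrow \lnot o')\), can be evaluated on the trace:

```python
from typing import FrozenSet, List

Letter = FrozenSet[str]

def run_mealy(q0, delta, lam, inputs: List[Letter]) -> List[Letter]:
    """Output trace of a dict-encoded Mealy machine on a finite input prefix."""
    q, out = q0, []
    for x in inputs:
        out.append(lam[(q, x)])
        q = delta[(q, x)]
    return out

def stuck_at_0(outputs: List[Letter], sig: str) -> List[Letter]:
    """Permanent stuck-at-0 mutant on `sig`: the observable signal is forced
    to false in every step, while the correct value survives as sig'."""
    mutated = []
    for y in outputs:
        y2 = set(y)
        if sig in y2:
            y2.discard(sig)        # observable output forced to 0
            y2.add(sig + "'")      # correct value kept for the fault model
        mutated.append(frozenset(y2))
    return mutated

EMPTY, I1, O1 = frozenset(), frozenset({"i"}), frozenset({"o"})
delta = {("q0", EMPTY): "q0", ("q0", I1): "q0"}
lam = {("q0", EMPTY): EMPTY, ("q0", I1): O1}   # correct behavior: o = i

correct = run_mealy("q0", delta, lam, [EMPTY, I1, I1])
faulty = stuck_at_0(correct, "o")              # o never true, o' marks the fault
```

On the faulty trace, o and o' differ in every step where the correct run raised o, so the fault model \(\mathsf{F}(o \leftrightarrow \lnot o')\) is satisfied while the observable behavior violates o = i.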
5 Synthesis of adaptive test strategies
This section presents our blackbox testing approach for synthesizing adaptive test strategies for reactive systems specified in LTL. First, we elaborate on the coverage objective we aim to achieve. Then we present our strategy synthesis algorithm. Finally, we discuss extensions and variants of the algorithm.
5.1 Coverage objective for test strategy computation
Definition 1
That is, for every output \(o_i\), every system \(\mathcal {S}'\) that realizes \(\varphi \), and every fault \(F\) that satisfies the fault model \(\delta \), \(\text {TS}\) must contain a test strategy \(\mathcal {T}\) that reveals the fault by causing a specification violation (Fig. 4). Note that the test strategies \(\mathcal {T}\in \text {TS}\subseteq {\mathsf {Moore}}(O,I)\) cannot observe the signal \(o_i'\). The reason is that this signal \(o_i'\) does not exist in the real system implementation(s) on which we run our tests—it was only introduced to define our coverage objective.
There can be an unbounded number of system realizations \(\mathcal {S}'\) and faults \(F\). Computing a separate test strategy for each combination is thus not a viable option. We rather strive for computing only one test strategy per output variable.
Theorem 1
Proof
Example 1
Consider a system with input \(I=\{i\}\), output \(O=\{o\}\), and specification \(\varphi = \bigl ( {{\,\mathrm{\mathsf {G}}\,}}(i \rightarrow {{\,\mathrm{\mathsf {G}}\,}}i) \wedge {{\,\mathrm{\mathsf {F}}\,}}i \bigr ) \rightarrow \bigl ( {{\,\mathrm{\mathsf {G}}\,}}(o \rightarrow {{\,\mathrm{\mathsf {G}}\,}}o) \wedge {{\,\mathrm{\mathsf {F}}\,}}o \wedge {{\,\mathrm{\mathsf {G}}\,}}(i \vee \lnot o) \bigr )\). The left side of the implication assumes that the input i is set to \(\mathsf {true}\) at some point, after which i remains \(\mathsf {true}\). The right side requires the same for the output o. In addition, o must not be raised while i is still \(\mathsf {false}\). This specification is realizable (e.g., by always setting \(o=i\)). The test suite \(\text {TS}= \{\mathcal {T}_5\}\) with \(\mathcal {T}_5\) shown in Fig. 5 is universally complete with respect to fault model \(\delta = {{\,\mathrm{\mathsf {F}}\,}}(o \leftrightarrow \lnot o')\), which requires the output to flip at least once: as long as i is \(\mathsf {false}\), any correct system implementation \(\mathcal {S}'\) must keep the output \(o'=\mathsf {false}\). Eventually, the fault \(F\) must flip the output o to \(\mathsf {true}\). When this happens, i is set to \(\mathsf {true}\) by \(\mathcal {T}_5\) so that the resulting trace \({\overline{\sigma }}(\mathcal {T}_5, \mathcal {S}' \circ F)\) violates \(\varphi \). Still, Eq. 6 is \(\mathsf {false}\).^{5} Strategy \(\mathcal {T}_5\) does not satisfy Eq. 6 because for the system \(\mathcal {S}\in {\mathsf {Mealy}}(\{i\},\{o,o'\})\) that sets \(o'=\mathsf {true}\) and \(o=\mathsf {false}\) in all time steps, we have \({\overline{\sigma }}(\mathcal {T}_5, \mathcal {S}) \models \bigl (\varphi [o_i\leftarrow o_i'] \wedge \delta \wedge \varphi \bigr )\). The reason is that i stays \(\mathsf {false}\), so \(\varphi [o_i\leftarrow o_i']\) and \(\varphi \) are vacuously satisfied by \({\overline{\sigma }}(\mathcal {T}_5, \mathcal {S})\).
The formula \(\delta \) is satisfied because \(o \leftrightarrow \lnot o'\) holds in all time steps. Thus, \(\mathcal {S}\) is a counterexample to \(\mathcal {T}_5\) satisfying Eq. 6. Similar counterstrategies exist for all other test strategies.
The fact that Eq. 6 is not a necessary condition for a universally complete test suite to exist is somewhat surprising, especially in the light of the following two lemmas. Based on these lemmas, the subsequent propositions will show that Eq. 6 is both sufficient and necessary (i.e., one test per output is enough) for many interesting cases.
The following lemma, which is based on the determinacy of complete-information games, states that the following two conditions are equivalent: (1) there is a single test strategy that shows a fault in any implementation and (2) for any implementation there is a strategy that shows the fault. This means that in certain settings, a single test strategy suffices to find a fault.
Lemma 1
For every LTL specification \(\psi \) over some inputs \(I\) and outputs \(O\), we have that \(\exists \mathcal {T}\in {\textsf {Moore}}(O, I) {{\,\mathrm{\mathbin {.}}\,}}\forall \mathcal {S}\in {\textsf {Mealy}}(I, O) {{\,\mathrm{\mathbin {.}}\,}}{\overline{\sigma }}(\mathcal {T}, \mathcal {S}) \models \psi \) holds if and only if \(\forall \mathcal {S}\in {\textsf {Mealy}}(I, O) {{\,\mathrm{\mathbin {.}}\,}}\exists \mathcal {T}\in {\textsf {Moore}}(O, I) {{\,\mathrm{\mathbin {.}}\,}}{\overline{\sigma }}(\mathcal {T}, \mathcal {S}) \models \psi \) holds.
Proof
The second lemma is again limited to perfect information. It states that the following two conditions are equivalent: (1) for any system that fulfills an assumption A, there is a test strategy that elicits behavior satisfying a guarantee G and (2) for any system there is a test strategy that elicits behavior satisfying the LTL property \(A \rightarrow G\). This lemma implies that in the case of complete information, an LTL synthesis tool suffices.
Lemma 2
Proof
Yet, in our setting, test strategies \(\mathcal {T}\in {\mathsf {Moore}}(O,I)\) have incomplete information about the system \(\mathcal {S}\in {\mathsf {Mealy}}(I,O\cup \{o_i'\})\) because they cannot observe \(o_i'\). Still, \(\mathcal {T}\) must enforce \((\varphi [o_i\leftarrow o_i'] \wedge \delta ) \rightarrow \lnot \varphi ,\) which refers to this hidden signal. Thus, Lemmas 1 and 2 cannot be applied to Eq. 6 in general. However, in cases where there is (effectively) no hidden information, the lemmas can be used to prove that Eq. 6 is both a necessary and a sufficient condition for a universally complete test suite to exist. The following propositions show that this holds for many cases of practical interest.
The intuitive reason is that \(\varphi [o_i\leftarrow o_i']\) can be rewritten to \(\varphi [o_i\leftarrow \psi ]\) in Eq. 6, which eliminates the hidden signal such that Lemmas 1 and 2 can be applied.
Proposition 1
Given a fault model of the form \(\delta = {{\,\mathrm{\mathsf {G}}\,}}(o_i' \leftrightarrow \psi )\), where \(\psi \) is an LTL formula over \(I\) and \(O\), a universally complete test suite \(\text {TS}\subseteq {\textsf {Moore}}(O,I)\) with respect to \(\delta ,I,O\), and \(\varphi \) exists if and only if Eq. 6 holds.
Proof
Proposition 1 entails that computing one test strategy per output \(o_i\in O\) is enough for fault models such as permanent bit flips (defined by \(\delta = {{\,\mathrm{\mathsf {G}}\,}}(o_i' \leftrightarrow \lnot o_i)\)).
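The rewriting of \(\varphi [o_i\leftarrow o_i']\) into \(\varphi [o_i\leftarrow \psi ]\) that underlies Proposition 1 is a purely syntactic substitution. A minimal sketch (Python; the tuple encoding of LTL formulas and the toy formula are our own illustration, not the paper's data structures):

```python
# LTL formulas as nested tuples: ('ap', name), ('not', f), ('and', f, g),
# ('G', f), and so on.  substitute replaces every occurrence of an
# atomic proposition by another formula.

def substitute(formula, name, replacement):
    if formula[0] == 'ap':
        return replacement if formula[1] == name else formula
    return (formula[0],) + tuple(substitute(f, name, replacement)
                                 for f in formula[1:])

# For the permanent bit-flip fault delta = G(o' <-> !o), replacing o'
# by psi = !o eliminates the hidden signal from the formula.
phi_primed = ('G', ('and', ('ap', "o'"), ('ap', 'i')))   # toy formula over o', i
psi = ('not', ('ap', 'o'))
print(substitute(phi_primed, "o'", psi))
# ('G', ('and', ('not', ('ap', 'o')), ('ap', 'i')))
```

After the substitution no subformula mentions \(o_i'\) anymore, which is exactly the precondition for applying Lemmas 1 and 2.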
Proposition 2
If the fault model \(\delta \) does not reference \(o_i'\), a universally complete test suite \(\text {TS}\subseteq {\textsf {Moore}}(O,I)\) with respect to \(\delta ,I,O\), and \(\varphi \) exists iff Eq. 6 holds.
Proof
We show that Eq. 6 holds if and only if Eq. 7 holds. The remaining steps have already been proven for Theorem 1.
Lemma 3
Proof
Direction \(\Leftarrow \) is obvious because Eq. 6 contains stronger assumptions (and \(\forall \mathcal {S}\in {\mathsf {Mealy}}(I,O)\) can be changed to \(\forall \mathcal {S}\in {\mathsf {Mealy}}(I,O\cup \{o_i'\})\) in Eq. 11 because \(\delta \rightarrow \lnot \varphi \) does not contain \(o_i'\)).
Proof
Direction \(\Rightarrow \) is obvious because Eq. 11 is equivalent to Eq. 6 (Lemma 3) and Eq. 6 implies Eq. 7 (see proof of Theorem 1).
Direction \(\Leftarrow \): we show that Eq. 11 being \(\mathsf {false}\) contradicts Eq. 7 being \(\mathsf {true}\). Equation 11 being \(\mathsf {false}\) implies Eq. 16 (see above). As Eq. 16 holds for all \(\mathcal {T}\in {\mathsf {Moore}}(O\cup \{o_i'\},I)\), and thus in particular for all \(\mathcal {T}\in {\mathsf {Moore}}(O,I)\), Eq. 7 cannot hold.
Thus, the assumption that \(\mathcal {S}'\) realizes \(\varphi \) can be dropped from Eq. 5 if the fault model does not reference \(o_i'\). Correspondingly, \({\overline{\sigma }}(\mathcal {T}, \mathcal {S}) \models \bigl ((\varphi [o_i\leftarrow o_i'] \wedge \delta ) \rightarrow \lnot \varphi \bigr )\) simplifies to \({\overline{\sigma }}(\mathcal {T}, \mathcal {S}) \models (\delta \rightarrow \lnot \varphi )\) in Eq. 6. Since \(o_i'\) is now gone, Lemmas 1 and 2 apply. In general, this assumption is needed to prevent a faulty system \(\mathcal {S}' \circ F\) from compensating the fault \(F\) such that \({\overline{\sigma }}(\mathcal {T}, \mathcal {S}' \circ F) \models \varphi \). E.g., for \(I=\emptyset \), \(O=\{o\}\), \(\varphi ={{\,\mathrm{\mathsf {G}}\,}}o\) and \(\delta = {{\,\mathrm{\mathsf {G}}\,}}(o \leftrightarrow \lnot o')\), Eq. 5 would be \(\mathsf {false}\) without this assumption because there exists an \(\mathcal {S}'\) that always sets \(o'=\mathsf {false}\), in which case \(\mathcal {S}' \circ F\) has o correctly set to \(\mathsf {true}\). However, if \(\delta \) does not reference \(o'\), such fault compensation is not possible.
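The role of the composition \(\mathcal {S}' \circ F\) can be pictured with executable stubs: a fault is a filter that rewrites one output of a correct implementation. The sketch below (Python; the dict-based step-function encoding and the toy specification "always copy the input" are our own illustrative choices, not the paper's formalism) injects a stuck-at-0 fault:

```python
# A system is modeled as a step function: inputs dict -> outputs dict.
# A fault F rewrites one output; composing S' with F yields the SUT S.

def correct_system(inp):
    """A correct realization of a toy spec G(i -> o): copy the input."""
    return {'o': inp['i']}

def stuck_at_0(out, signal='o'):
    """Permanent stuck-at-0 fault on `signal` (fault kind G(!o))."""
    faulty = dict(out)
    faulty[signal] = False
    return faulty

def compose(system, fault):
    """The composition S = S' o F: the fault filters the system's outputs."""
    return lambda inp: fault(system(inp))

sut = compose(correct_system, stuck_at_0)
trace = [sut({'i': True}) for _ in range(3)]
print(trace)  # [{'o': False}, {'o': False}, {'o': False}]
```

Raising the input forces the correct part to raise its output, so the injected fault becomes visible at the observable output.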
Proposition 2 applies to permanent or transient stuck-at-0 or stuck-at-1 faults (e.g., \(\delta ={{\,\mathrm{\mathsf {F}}\,}}\lnot o_i\) or \(\delta ={{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}o_i\)), but also to faults where \(o_i\) keeps its previous value (e.g., \(\delta ={{\,\mathrm{\mathsf {F}}\,}}(o_i\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}o_i)\)) or takes the value of a different input or output (e.g., \(\delta ={{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}(o_i \leftrightarrow i_3)\)). Together with Proposition 1, it shows that computing one test strategy per output is enough for many interesting fault models. Finally, even if neither Proposition 1 nor Proposition 2 applies, computing one test strategy per output may still suffice for the concrete \(\varphi \) and \(\delta \) at hand. In the next section, we thus rely on Eq. 6 to compute one test strategy per output in order to obtain universally complete test suites.
5.2 Test strategy computation
Fault frequency \({{\,\mathrm{\mathsf {G}}\,}}\) means that the fault is permanent.
Frequency \({{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}\) means that the fault occurs from some time step i on permanently. Yet, we do not make any assumptions about the precise value of i.
Frequency \({{\,\mathrm{\mathsf {G}}\,}}\! {{\,\mathrm{\mathsf {F}}\,}}\) states that the fault strikes infinitely often, but not when exactly.
Frequency \({{\,\mathrm{\mathsf {F}}\,}}\) means that the fault occurs at least once.
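On ultimately periodic (lasso-shaped) traces, these four frequencies admit simple finite checks: \({{\,\mathrm{\mathsf {G}}\,}}\kappa \) holds iff \(\kappa \) holds at every step, \({{\,\mathrm{\mathsf {F}}\,}}\!{{\,\mathrm{\mathsf {G}}\,}}\kappa \) iff it holds at every step of the loop, \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\kappa \) iff it holds at some step of the loop, and \({{\,\mathrm{\mathsf {F}}\,}}\kappa \) iff it holds at some step. A sketch of this characterization (Python; the stem/loop encoding of a trace as Boolean fault-occurrence flags is our own):

```python
# A lasso trace is a finite stem followed by an infinitely repeated loop.
# Each list entry records whether the fault kind kappa holds at that step.

def frequencies(stem, loop):
    """Evaluate G kappa, FG kappa, GF kappa, F kappa on stem + loop^omega."""
    return {
        'G':  all(stem) and all(loop),   # fault is permanent
        'FG': all(loop),                 # fault permanent from some step on
        'GF': any(loop),                 # fault strikes infinitely often
        'F':  any(stem) or any(loop),    # fault occurs at least once
    }

# A fault that occurs once in the stem and never in the loop is only
# a single transient fault: it satisfies F but none of the others.
print(frequencies([False, True], [False]))
# {'G': False, 'FG': False, 'GF': False, 'F': True}
```

The chain of implications \({{\,\mathrm{\mathsf {G}}\,}}\Rightarrow {{\,\mathrm{\mathsf {F}}\,}}\!{{\,\mathrm{\mathsf {G}}\,}}\Rightarrow {{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\Rightarrow {{\,\mathrm{\mathsf {F}}\,}}\) mentioned for the sanity checks below is immediate from these characterizations.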
Sanity checks Note that our coverage goal in Eq. 5 is vacuously satisfied by any test suite if \(\varphi \) or \(\delta \) is unrealizable. The reason is that the test suite must reveal every fault F realizing \(\delta \) for every system \(\mathcal {S}'\) realizing \(\varphi \). If there is no such fault or system, this is trivial. As a sanity check, we thus test the (Mealy) realizability of \(\varphi \) and \({{\,\mathrm{\mathsf {G}}\,}}\kappa \) before starting Algorithm 1 (because if \({{\,\mathrm{\mathsf {G}}\,}}\kappa \) is realizable, then so are \({{\,\mathrm{\mathsf {F}}\,}}\!{{\,\mathrm{\mathsf {G}}\,}}\kappa \), \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\kappa \) and \({{\,\mathrm{\mathsf {F}}\,}}\kappa \)).
Handling unrealizability If, for some output, Line 3 of Algorithm 2 returns unrealizable for the highest fault frequency \(\mathsf {frq}={{\,\mathrm{\mathsf {G}}\,}}\), we print a warning and suggest that the user examine these cases manually. There are two possible reasons for unrealizability. First, due to limited observability, we do not find a test strategy although one exists (see Example 1). Second, no test strategy exists because there is some \(\mathcal {S}'\) realizing \(\varphi \) and some fault \(F\) realizing \(\delta \) such that the composition \(\mathcal {S}= \mathcal {S}' \circ F\) (see Fig. 4) is correct, i.e., \(\mathcal {S}\) realizes \(\varphi \). In other words, for some realization, adding the fault may result in an equivalent mutant in the sense that the specification is still satisfied. For example, in case of a stuck-at-0 fault model, there may exist a realization of the specification that has the considered output \(o_i\in O\) fixed to \(\mathsf {false}\). Such a high degree of underspecification is at least suspicious and may indicate unintended vacuities [7] in the specification \(\varphi \), which should be investigated manually. If Proposition 1 or 2 applies, or if \({\mathsf {synt}}\bigl (O\cup \{o_i'\}, I, \bigl (\varphi [o_i\leftarrow o_i'] \wedge {{\,\mathrm{\mathsf {G}}\,}}(\kappa )\bigr ) \rightarrow \lnot \varphi ,\Theta \bigr )\) returns \(\textsf {unrealizable}\,\), we can be sure that the second reason applies. Then, we can even compute additional diagnostic information in the form of two Mealy machines \(\mathcal {S}'\) and \(F\) (by synthesizing some Mealy machine \(\mathcal {S}\) and splitting it into \(\mathcal {S}'\) and \(F\) by stripping off different outputs). The user can then try to find inputs for \(\mathcal {S}'\circ F\) such that the resulting trace violates the specification. Failing to do so, the user will understand why no test strategy exists (see also [32]).
If the specification is as intended but no test strategy exists, we could use “collaborative” strategies. Among such strategies, we can choose one that requires as little collaboration from the adversary as necessary [19, 20]. In our setting, this means that we weaken the requirement that we find the fault regardless of the implementation of the system but rather require that we find it for maximal classes of implementations. This is not unusual in testing, which is typically explorative and does not make the guarantees that we attempt to give. For instance, if the specification is \({{\,\mathrm{\mathsf {G}}\,}}(r \rightarrow {{\,\mathrm{\mathsf {F}}\,}}g)\) with input r and output g and the fault model is \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\lnot g\), then there is no test strategy that finds this fault for all implementations. Yet, an input sequence in which r is always \(\mathsf {true}\) is a better test sequence than one in which r is always \(\mathsf {false}\), because the former strategy will find the fault in some implementations, whereas the latter will not find the fault in any implementation. We leave the extension to collaborative strategies to future work.
Complexity Both \({\mathsf {synt}}_p(O, I, \psi , O',\Theta )\) and \({\mathsf {synt}}(O, I, \psi ,\Theta )\) are 2EXPTIME-complete in \(|\psi |\) [33], so the execution time of Algorithm 2, and consequently also of Algorithm 1, is at most doubly exponential in \(|\varphi | + |\kappa |\).
Theorem 2
For a system with inputs \(I\), outputs \(O\), and LTL specification \(\varphi \) over \(I\cup O\), if the fault kind \(\kappa \) is of the form \(\kappa =\psi \) or \(\kappa = (o_i' \leftrightarrow \psi )\), where \(\psi \) is an LTL formula over \(I\) and \(O\), \(\textsc {SyntLtlTest}(I, O, \varphi , \kappa )\) will return a universally complete test suite with respect to the fault model \(\delta ={{\,\mathrm{\mathsf {G}}\,}}(\kappa )\) if such a test suite exists.
Proof
Since \({{\,\mathrm{\mathsf {G}}\,}}(\kappa )\) implies \(\mathsf {frq}(\kappa )\) for all \(\mathsf {frq}\in \{{{\,\mathrm{\mathsf {F}}\,}}, {{\,\mathrm{\mathsf {G}}\,}}\! {{\,\mathrm{\mathsf {F}}\,}}, {{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}, {{\,\mathrm{\mathsf {G}}\,}}\}\), Theorem 1 and the guarantees of \({\mathsf {synt}}_p\) entail that the resulting test suite \(\text {TS}\) is universally complete with respect to \(\delta ={{\,\mathrm{\mathsf {G}}\,}}(\kappa )\) if \(|\text {TS}|=|O|\), i.e., if SyntLtlTest found a strategy for every output. It remains to be shown that \(|\text {TS}|=|O|\) for \(\kappa =\psi \) or \(\kappa = (o_i' \leftrightarrow \psi )\) if a universally complete test suite for \(\delta ={{\,\mathrm{\mathsf {G}}\,}}(\kappa )\) exists: either Proposition 1 or Proposition 2 states that Eq. 6 holds with \(\delta ={{\,\mathrm{\mathsf {G}}\,}}(\kappa )\). Thus, \({\mathsf {synt}}_p\) cannot return \(\textsf {unrealizable}\,\) in SyntLtlIterate with \(\mathsf {frq}= {{\,\mathrm{\mathsf {G}}\,}}\), so \(|\text {TS}|\) must be equal to \(|O|\) in this case.
Theorem 2 states that SyntLtlTest is not only sound but also complete for many interesting fault models such as stuck-at faults or permanent bit-flips. For \(\kappa =\psi \), Theorem 2 can even be strengthened to hold for all \(\delta =\mathsf {frq}(\kappa )\) with \(\mathsf {frq}\in \{{{\,\mathrm{\mathsf {F}}\,}}, {{\,\mathrm{\mathsf {G}}\,}}\! {{\,\mathrm{\mathsf {F}}\,}}, {{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}, {{\,\mathrm{\mathsf {G}}\,}}\}\).
5.3 Extensions and variants
A test suite computed by SyntLtlTest for a specification \(\varphi \) and fault model \(\delta \) is universally complete: it detects all faults with respect to \(\varphi \) and \(\delta \), independent of the implementation and the concrete fault manifestation, provided that the fault manifests at one of the observable outputs as illustrated in Fig. 4.
In this section, we discuss some alternatives and extensions of our approach to improve fault coverage and performance.
User-specified fault frequencies Besides the four fault frequencies (\({{\,\mathrm{\mathsf {G}}\,}}\), \({{\,\mathrm{\mathsf {F}}\,}}\!{{\,\mathrm{\mathsf {G}}\,}}\), \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\), and \({{\,\mathrm{\mathsf {F}}\,}}\)), other fault frequencies (with different precedences) may be of interest, e.g., if a specific time step is of special relevance. Algorithm 2 supports full LTL, so the procedure can be extended by replacing Line 2 with "for each \(\mathsf {frq}\) from \(\mathsf {Frq}\) in this order", where \(\mathsf {Frq}\) is an additional parameter provided by the user.
Faults at inputs In the fault model in the previous section, we only consider faults at the outputs. However, SUTs that behave as if they had read a faulty input can be considered as well (by changing Line 3 in Algorithm 1 to "for each \(o\in I\cup O\) do").
Faults within a SUT If a fault manifests as a conditional fault in a system implementation, a universally complete \(\text {TS}\) may not be able to uncover the fault (see Example 2).
Example 2
Consider a system with input \(I=\{i\}\), output \(O=\{o\}\), and specification \(\varphi = {{\,\mathrm{\mathsf {G}}\,}}((i \leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\lnot i) \rightarrow {{\,\mathrm{\mathsf {X}}\,}}o)\). The specification enforces o to be set to \(\mathsf {true}\) whenever input i alternates between \(\mathsf {true}\) and \(\mathsf {false}\) in consecutive time steps. Consider a stuck-at-0 fault \(\delta = {{\,\mathrm{\mathsf {G}}\,}}\! {{\,\mathrm{\mathsf {F}}\,}}\lnot o\) at the output o. The test suite \(\text {TS}= \{\mathcal {T}_6\}\) with the test strategy \(\mathcal {T}_6\) illustrated in Fig. 7 (left) is universally complete with respect to \(\delta \). The test strategy \(\mathcal {T}_6\) flips input i in every time step and thus forces the system to set \(o = \mathsf {true}\) in the second time step. Now consider the concrete, faulty implementation of \(\varphi \) shown in Fig. 7 (right). The test strategy \(\mathcal {T}_6\), when executed, first follows the bold edge and then remains forever in the same state. As a consequence, the fault in the system implementation, i.e., o stuck at 0, is not uncovered. To uncover the fault, i has to be set to \(\mathsf {false}\) in the initial state.
Faults within a system implementation can be considered by computing more than one test strategy for a given test objective. We extend Algorithm 1 to generate a bounded number b of test strategies by setting \(\Theta = \text {TS}\) in Line 4 and enclosing the line in a while-loop that uses an additional integer variable c to count the number of test strategies generated per output \(o_i\). The while-loop terminates if no new test strategy can be generated or if c becomes equal to b. Note that this approach is correct in the sense that all computed test strategies are universally complete with respect to the fault model \(\mathsf {frq}(\kappa )\); however, in many cases it is more efficient to determine the lowest fault frequency first in Line 4 of Algorithm 2 and then generate multiple test strategies with the same (or a higher) frequency by enclosing Line 3 in the while-loop.
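The bounded while-loop extension just described can be sketched as follows (Python pseudocode in our own notation; `synt` is a stand-in for the synthesis call of Line 4, assumed to return `None` on unrealizability):

```python
def strategies_per_output(output, b, synt):
    """Generate up to b test strategies for one output.

    `synt(output, excluded)` abstracts the synthesis call of Algorithm 1;
    passing the strategies found so far as Theta excludes them, so every
    call yields a genuinely new strategy or None if none is left.
    """
    ts, c = [], 0
    while c < b:
        strategy = synt(output, ts)       # Theta := TS excludes known ones
        if strategy is None:              # no new strategy can be generated
            break
        ts.append(strategy)
        c += 1
    return ts

# Toy synthesis stub that can produce two distinct strategies per output:
def toy_synt(output, excluded):
    for s in (f'{output}:T1', f'{output}:T2'):
        if s not in excluded:
            return s
    return None

print(strategies_per_output('o1', 5, toy_synt))  # ['o1:T1', 'o1:T2']
```

The loop terminates either because the bound b is reached or because the synthesizer cannot produce a strategy outside \(\Theta \), mirroring the two termination conditions stated above.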
Example 3
Consider a system with inputs \(I=\{r_1, r_2\}\) and outputs \(O=\{g_1, g_2\}\), which implements the specification of a two-input arbiter \(\varphi = {{\,\mathrm{\mathsf {G}}\,}}(r_1 \rightarrow {{\,\mathrm{\mathsf {F}}\,}}g_1) \wedge {{\,\mathrm{\mathsf {G}}\,}}(r_2 \rightarrow {{\,\mathrm{\mathsf {F}}\,}}g_2) \wedge {{\,\mathrm{\mathsf {G}}\,}}( \lnot g_1 \vee \lnot g_2)\), i.e., every request \(r_i\) shall eventually be granted by setting \(g_i\) to \(\mathsf {true}\), and there shall never be two grants at the same time. A valid test strategy \(\mathcal {T}_7\) that tests for a stuck-at-0 fault of signal \(g_1\) from some point in time onwards may simply set \(r_1=\mathsf {true}\) and \(r_2=\mathsf {false}\) all the time (see Fig. 8). Since the request is raised in every time step, the system is forced to grant it eventually by setting \(g_1 = \mathsf {true}\). Another valid test strategy \(\mathcal {T}_8\) sets \(r_1=\mathsf {true}\) and \(r_2=\mathsf {true}\) all the time (see Fig. 8). Now the system has to grant both requests eventually. Both \(\mathcal {T}_7\) and \(\mathcal {T}_8\) test for the defined stuck-at-0 fault of signal \(g_1\) from some point in time onwards but will likely execute different paths in the SUT. Thus, the more general strategy \(\mathcal {T}_9\) (see Fig. 8), which sets \(r_1=\mathsf {true}\) all the time but puts no restrictions on the value of \(r_2\), allows the tester to evaluate different paths in the SUT while still testing for the defined fault class.
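To make the example concrete, the constant strategies \(\mathcal {T}_7\) and \(\mathcal {T}_8\) can be executed against one possible realization of the arbiter specification. The implementation below is our own choice (a simple arbiter that serves current requests in round-robin fashion, adequate for the constant request streams used here); any correct realization would do:

```python
# Constant Moore-style input strategies for the two-input arbiter.
T7 = lambda step: {'r1': True, 'r2': False}   # request 1 only
T8 = lambda step: {'r1': True, 'r2': True}    # request both

def simple_arbiter():
    """A simple arbiter: serves current requests in round-robin fashion.
    (Adequate for the constant request streams exercised here.)"""
    turn = {'next': 'g1'}
    def step(inp):
        g1 = inp['r1'] and (turn['next'] == 'g1' or not inp['r2'])
        g2 = inp['r2'] and not g1          # mutual exclusion of grants
        if g1: turn['next'] = 'g2'
        if g2: turn['next'] = 'g1'
        return {'g1': g1, 'g2': g2}
    return step

def run(strategy, steps=6):
    arbiter = simple_arbiter()
    return [arbiter(strategy(t)) for t in range(steps)]

# Under T7, g1 is raised in every step; under T8, g1 and g2 alternate.
# Both strategies therefore make g1 true again and again, exposing a
# stuck-at-0 fault of g1 that strikes from some point in time onwards.
print([o['g1'] for o in run(T7)], [o['g1'] for o in run(T8)])
```

The two runs indeed exercise different paths of this implementation (constant grants vs. alternating grants), which is exactly the motivation for the generalized strategy \(\mathcal {T}_9\).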
The procedure in Algorithm 3 generalizes a given test strategy \(\mathcal {T}\) by systematically removing variable assignments from states and employing a model-checking procedure to ensure that the generalized test strategy still enforces the same test objective. The procedure loops in Line 2 over all states of \(\mathcal {T}\) and in Line 3 over all inputs. In Line 4, the assignment to the input \(x_i\) in a state is removed such that the corresponding variable becomes nondeterministic. If the resulting test strategy still enforces the test objective, then \(\mathcal {T}\) is replaced by its generalization. Otherwise, the change is reverted. Algorithm 3 is integrated into Algorithm 2 and applied in Line 5 to generalize each generated test strategy.
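The generalization loop of Algorithm 3 can be sketched as follows (Python; `model_check` stands in for the model-checking call, and the encoding of a strategy as a map from states to input assignments is our own simplification):

```python
def generalize(strategy, inputs, model_check):
    """Remove input assignments state by state, keeping the test objective.

    `strategy` maps each state to a dict of input assignments; deleting
    an entry makes that input nondeterministic in that state.
    `model_check(strategy)` must return True iff the strategy still
    enforces the test objective; changes that break it are reverted.
    """
    for state in strategy:                      # Line 2: loop over states
        for x in inputs:                        # Line 3: loop over inputs
            if x not in strategy[state]:
                continue
            removed = strategy[state].pop(x)    # Line 4: drop assignment
            if not model_check(strategy):
                strategy[state][x] = removed    # revert: objective lost
    return strategy

# Toy objective: the strategy must keep r1 = True in every state.
check = lambda s: all(a.get('r1', False) for a in s.values())
t = {'q0': {'r1': True, 'r2': True}, 'q1': {'r1': True, 'r2': False}}
print(generalize(t, ['r1', 'r2'], check))
# {'q0': {'r1': True}, 'q1': {'r1': True}}  -- r2 is now unconstrained
```

On the toy objective, every \(r_2\) assignment can be dropped while every \(r_1\) assignment must stay, mirroring the generalization of \(\mathcal {T}_7\) and \(\mathcal {T}_8\) into \(\mathcal {T}_9\) in Example 3.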
Note that generalizing a test strategy is a special way of computing multiple concrete test strategies, which was discussed in the previous section. However, generalization may fail when computing multiple strategies succeeds (by following different paths).
Optimization for full observability If we restrict our attention to the case without partial information, i.e., all signals are fully observable, we can employ the optimization discussed in Proposition 2 to improve the performance of test strategy generation. In Line 3 of Algorithm 2, we drop a part of the assumption and simplify the synthesis step to \(\mathcal {T}_i := {\mathsf {synt}}\bigl (O, I, \mathsf {frq}(\kappa ) \rightarrow \lnot \varphi , \Theta \bigr )\) for cases in which \(\kappa \) does not refer to a hidden signal \(o_i'\). Also, for a fault model \(\delta \) that describes a fault of kind \(\kappa = (o_i' \leftrightarrow \psi )\), where \(\psi \) is an LTL formula over \(I\) and \(O\), we can drop the part of the assumption according to Proposition 1 if \(\mathsf {frq}={{\,\mathrm{\mathsf {G}}\,}}\). This simplifies Line 3 of Algorithm 2 to \(\mathcal {T}_i := {\mathsf {synt}}\bigl (O, I, \varphi [o_i\leftarrow \psi ] \rightarrow \lnot \varphi ,\Theta \bigr )\). Moreover, these simplifications no longer require a synthesis procedure with partial information; thus, a larger set of synthesis tools is supported.
Other specification formalisms We worked out our approach for LTL, but it works for other languages if (1) the language is closed under the Boolean connectives \((\wedge , \lnot )\), (2) the desired fault models are expressible, and (3) a synthesis procedure (with partial information) is available. These prerequisites apply not only to many temporal logics but also to various kinds of automata over infinite words.
6 Case study
To evaluate our approach, we apply it in a case study on a real component of a satellite system that is currently under development. We first present the system under test and specify a version of the respective component in LTL. Using this specification, we compute a set of test strategies and evaluate them on a real implementation. Additional case studies can be found in [10].
6.1 Eu:CROPIS FDIR specification
An important task of every space and satellite system is to maintain its health state and react to failures. In modern space systems, this task is encapsulated in the Fault Detection, Isolation, and Recovery (FDIR) component, which collects information from all relevant sensors and onboard computers, analyzes and assesses the data in terms of correctness and health, and initiates recovery actions if necessary. The FDIR component is organized hierarchically in multiple levels [51] with the overall objective of maximizing the lifetime and correct operation of the system.
Eu:CROPIS FDIR In Fig. 9 we illustrate where the FDIR component for the magnetic torquers of the Eu:CROPIS onboard computing system is placed in practice, and in Fig. 10 we give a high-level overview of the FDIR component and its environment. The FDIR component regularly obtains housekeeping information from two redundantly designed control units, \(S_1\) and \(S_2\), which control the magnetic torquers of the satellite, and interacts with them via the electronic power system, EP. The control units \(S_1\) and \(S_2\) have the same functionality, but only one of them is active at any time. The other control unit serves as a backup that can be activated if necessary. The FDIR component signals the activation (or deactivation) of a control unit to the EP, which regulates the power supply.
We distinguish two types of errors, called non-critical errors and severe errors, signaled to the FDIR component via housekeeping information. In case of a non-critical error, two recovery actions are possible: either the erroneous control unit is disabled for a short time and enabled again afterwards, or the erroneous control unit is disabled and the redundant control unit is activated to take over its task. In case of a severe error, however, only the latter recovery action is allowed, i.e., the erroneous control unit has to be disabled and the redundant control unit has to be activated. If this happens more than once and the redundant control unit shows erroneous behavior as well, the FDIR component initiates a switch of the satellite into safe mode. The safe mode is a fallback satellite mode designed to give the operators on the ground the maximum amount of time to analyze and fix the problem. It is only invoked once a problem cannot be solved onboard and requires input from the operators to restore nominal operations.
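The recovery policy described in this paragraph can be summarized as a small state machine. The sketch below is our own abstraction of that prose (it is not the LTL specification given later, and it fixes one of the two allowed reactions to a non-critical error, namely switching units):

```python
class FdirSketch:
    """Abstract recovery policy: switch on errors, safe mode on repetition."""
    def __init__(self):
        self.active = 'S1'      # currently active control unit
        self.switches = 0       # switches caused by severe errors so far
        self.safemode = False

    def step(self, err_nc=False, err_s=False):
        if self.safemode or not (err_nc or err_s):
            return
        if err_s and self.switches >= 1:
            # The redundant unit failed as well: fall back to safe mode
            # and wait for ground operators to restore nominal operations.
            self.safemode = True
            return
        # Disable the erroneous unit and activate the redundant one
        # (for err_nc this is one of the two allowed recovery actions).
        self.active = 'S2' if self.active == 'S1' else 'S1'
        if err_s:
            self.switches += 1

fdir = FdirSketch()
fdir.step(err_s=True)   # severe error on S1 -> switch to S2
fdir.step(err_s=True)   # severe error again -> safe mode
print(fdir.active, fdir.safemode)  # S2 True
```

The LTL assumptions and guarantees below refine this informal picture with the precise interface signals.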
LTL specification We model the specification of the FDIR component in LTL. Let \(I_{FDIR}\) = {\(\mathtt{mode}_{\mathtt{1}}\,\), \(\mathtt{mode}_{\mathtt{2}}\,\), \(\mathtt{err}_{\mathtt{nc}}\,\), \(\mathtt{err}_{\mathtt{s}}\,\), \(\mathtt{reset}\,\)} and \(O_{FDIR}\) = {\(\mathtt{on}_{\mathtt{1}}\,\), \(\mathtt{off}_{\mathtt{1}}\,\), \(\mathtt{on}_{\mathtt{2}}\,\), \(\mathtt{off}_{\mathtt{2}}\,\), \(\mathtt{safemode}\,\)} be the Boolean variables corresponding to the input signals and the output signals of the FDIR component, respectively.
These Boolean variables are abstractions of the real hardware/software implementation. Their values are automatically extracted from the housekeeping information that is periodically collected from EP (\(\mathtt{mode}_{\mathtt{1}}\,\), \(\mathtt{mode}_{\mathtt{2}}\,\)) and \(S_1\) or \(S_2\) (\(\mathtt{err}_{\mathtt{nc}}\,\), \(\mathtt{err}_{\mathtt{s}}\,\)). The two error variables encompass multiple error conditions (e.g., communication timeouts, invalid responses, and electrical errors like overcurrent or undervoltage) which are detected by the subsystem. The \(\mathtt{reset}\,\) variable corresponds to a telecommand sent from the ground to the FDIR component. In the output direction, the values of the variables are used to generate commands that are sent to the EP or the satellite mode handling component. Additionally, we use the auxiliary Boolean variables \(O^\prime \) = {\(\mathtt{lastup}\,\), \(\mathtt{allowswitch}\,\)} to model state information at the specification level. These auxiliary variables do not correspond to real signals in the system but are used as unobservable outputs of the FDIR component. In Table 1, we summarize the Boolean variables involved in the specification and describe their meaning.
Descriptions of inputs and outputs of the FDIR component
Boolean variable  Description 
\(\mathtt{mode}_{\mathtt{1}}\,\)  \(\mathsf {true}\) iff \(S_1\) is activated 
\(\mathtt{mode}_{\mathtt{2}}\,\)  \(\mathsf {true}\) iff \(S_2\) is activated 
\(\mathtt{err}_{\mathtt{nc}}\,\)  \(\mathsf {true}\) iff a non-critical error is signaled by \(S_1\) or \(S_2\) 
\(\mathtt{err}_{\mathtt{s}}\,\)  \(\mathsf {true}\) iff a severe error is signaled by \(S_1\) or \(S_2\) 
\(\mathtt{reset}\,\)  \(\mathsf {true}\) iff the FDIR component is reset 
\(\mathtt{on}_{\mathtt{1}}\,\)  \(\mathsf {true}\) iff \(S_1\) shall be switched on 
\(\mathtt{off}_{\mathtt{1}}\,\)  \(\mathsf {true}\) iff \(S_1\) shall be switched off 
\(\mathtt{on}_{\mathtt{2}}\,\)  \(\mathsf {true}\) iff \(S_2\) shall be switched on 
\(\mathtt{off}_{\mathtt{2}}\,\)  \(\mathsf {true}\) iff \(S_2\) shall be switched off 
\(\mathtt{safemode}\,\)  \(\mathsf {true}\) iff the FDIR component initiates the safe mode of the satellite 
\(\mathtt{lastup}\,\)  \(\mathsf {true}\) if the last active system was \(S_1\) and \(\mathsf {false}\) if it was \(S_2\) 
\(\mathtt{allowswitch}\,\)  \(\mathsf {true}\) iff a switch from \(S_1\) to \(S_2\) or from \(S_2\) to \(S_1\) is allowed 
Temporal specification of the system-level FDIR component in LTL
Assumptions \(A_{1}\)–\(A_{6}\)  
\(A_1\)  \({{\,\mathrm{\mathsf {G}}\,}}((\lnot \mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{mode}_{\mathtt{1}}\,) \rightarrow \lnot \mathtt{err}_{\mathtt{nc}}\,\wedge \lnot \mathtt{err}_{\mathtt{s}}\,)\) 
\(A_2\)  \({{\,\mathrm{\mathsf {G}}\,}}(\lnot \mathtt{err}_{\mathtt{nc}}\,\vee \lnot \mathtt{err}_{\mathtt{s}}\,) \wedge {{\,\mathrm{\mathsf {G}}\,}}(\mathtt{reset}\,\rightarrow \lnot \mathtt{err}_{\mathtt{nc}}\,\wedge \lnot \mathtt{err}_{\mathtt{s}}\,)\) 
\(A_3\)  \({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{reset}\,\rightarrow {{\,\mathrm{\mathsf {X}}\,}}(\mathtt{mode}_{\mathtt{2}}\,\oplus \mathtt{mode}_{\mathtt{1}}\,))\) 
\(A_4\)  \(\begin{aligned} {{\,\mathrm{\mathsf {G}}\,}}(&(\lnot \mathtt{mode}_{\mathtt{1}}\,\wedge \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{off}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,\wedge \lnot \mathtt{reset}\,\wedge \lnot \mathtt{safemode}\,) \rightarrow \\ \quad&{{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{1}}\,\wedge (\mathtt{mode}_{\mathtt{2}}\,\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{2}}\,)) \end{aligned}\)
\(\begin{aligned} {{\,\mathrm{\mathsf {G}}\,}}(&\lnot \mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{off}_{\mathtt{1}}\,\wedge \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,\wedge \lnot \mathtt{reset}\,\wedge \lnot \mathtt{safemode}\,\rightarrow \\&{{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{2}}\,\wedge (\mathtt{mode}_{\mathtt{1}}\,\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{1}}\,)) \end{aligned}\)  
\(A_5\)  \(\begin{aligned} {{\,\mathrm{\mathsf {G}}\,}}(&\mathtt{mode}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \mathtt{off}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,\wedge \lnot \mathtt{reset}\,\wedge \lnot \mathtt{safemode}\,\rightarrow \\&{{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{mode}_{\mathtt{1}}\,\wedge (\mathtt{mode}_{\mathtt{2}}\,\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{2}}\,) ) \end{aligned}\) 
\(\begin{aligned} {{\,\mathrm{\mathsf {G}}\,}}(&\mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{off}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,\wedge \mathtt{off}_{\mathtt{2}}\,\wedge \lnot \mathtt{reset}\,\wedge \lnot \mathtt{safemode}\,\rightarrow \\&{{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{mode}_{\mathtt{2}}\,\wedge (\mathtt{mode}_{\mathtt{1}}\,\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{1}}\,) ) \end{aligned}\)  
\(A_6\)  \(\begin{aligned} {{\,\mathrm{\mathsf {G}}\,}}(&(\lnot (\lnot \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,) \wedge {{\,\mathrm{\mathsf {X}}\,}}(\lnot \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,) \wedge \\&\lnot \mathtt{reset}\,\wedge {{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{reset}\,\wedge \lnot \mathtt{safemode}\,\wedge {{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{safemode}\,) \rightarrow \\&{{\,\mathrm{\mathsf {X}}\,}}((\mathtt{mode}_{\mathtt{2}}\,\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{2}}\,) \wedge (\mathtt{mode}_{\mathtt{1}}\,\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{mode}_{\mathtt{1}}\,)) \end{aligned}\)
Guarantees \(G_{1}\)–\(G_{13}\)  
\(G_1\)  \({{\,\mathrm{\mathsf {G}}\,}}((\mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,) \rightarrow ({{\,\mathrm{\mathsf {X}}\,}}\mathtt{lastup}\,))\) 
\({{\,\mathrm{\mathsf {G}}\,}}((\lnot \mathtt{on}_{\mathtt{1}}\,\wedge \mathtt{on}_{\mathtt{2}}\,) \rightarrow ({{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{lastup}\,))\)  
\({{\,\mathrm{\mathsf {G}}\,}}( (\lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,) \rightarrow (\mathtt{lastup}\,\leftrightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{lastup}\,))\)  
\(G_2\)  \({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{on}_{\mathtt{1}}\,\rightarrow \lnot \mathtt{off}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,)\) 
\({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{off}_{\mathtt{1}}\,\rightarrow \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,)\)  
\({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{on}_{\mathtt{2}}\,\rightarrow \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{off}_{\mathtt{1}}\,\wedge \lnot \mathtt{off}_{\mathtt{2}}\,)\)  
\({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{off}_{\mathtt{2}}\,\rightarrow \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,\wedge \lnot \mathtt{off}_{\mathtt{1}}\,)\)  
\(G_3\)  \({{\,\mathrm{\mathsf {G}}\,}}(\lnot \mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{mode}_{\mathtt{1}}\,\rightarrow {{\,\mathrm{\mathsf {F}}\,}}(\mathtt{reset}\,\vee \mathtt{on}_{\mathtt{2}}\,\vee \mathtt{on}_{\mathtt{1}}\,\vee \mathtt{safemode}\,))\) 
\(G_4\)  \({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{allowswitch}\,\rightarrow \lnot \mathtt{safemode}\,)\) 
\(G_5\)  \({{\,\mathrm{\mathsf {G}}\,}}((\mathtt{mode}_{\mathtt{2}}\,\vee \mathtt{mode}_{\mathtt{1}}\,) \rightarrow \lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,)\) 
\(G_6\)  \({{\,\mathrm{\mathsf {G}}\,}}(\lnot \mathtt{allowswitch}\,\wedge \mathtt{lastup}\,\rightarrow \lnot \mathtt{on}_{\mathtt{2}}\,)\) 
\({{\,\mathrm{\mathsf {G}}\,}}(\lnot \mathtt{allowswitch}\,\wedge \lnot \mathtt{lastup}\,\rightarrow \lnot \mathtt{on}_{\mathtt{1}}\,)\)  
\(G_7\)  \({{\,\mathrm{\mathsf {G}}\,}}(\lnot \mathtt{reset}\,\wedge \mathtt{allowswitch}\,\wedge \mathtt{lastup}\,\wedge \mathtt{on}_{\mathtt{2}}\,\rightarrow {{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{allowswitch}\,)\) 
\({{\,\mathrm{\mathsf {G}}\,}}(\lnot \mathtt{reset}\,\wedge \mathtt{allowswitch}\,\wedge \lnot \mathtt{lastup}\,\wedge \mathtt{on}_{\mathtt{1}}\,\rightarrow {{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{allowswitch}\,)\)  
\(G_8\)  \({{\,\mathrm{\mathsf {G}}\,}}((\mathtt{allowswitch}\,\wedge \lnot (((\mathtt{lastup}\,\wedge \mathtt{on}_{\mathtt{2}}\,) \vee (\lnot \mathtt{lastup}\,\wedge \mathtt{on}_{\mathtt{1}}\,)))) \rightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{allowswitch}\,)\) 
\(G_9\)  \({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{reset}\,\rightarrow {{\,\mathrm{\mathsf {X}}\,}}\mathtt{allowswitch}\,)\) 
\(G_{10}\)  \({{\,\mathrm{\mathsf {G}}\,}}(\mathtt{safemode}\,\rightarrow (\lnot \mathtt{on}_{\mathtt{1}}\,\wedge \lnot \mathtt{on}_{\mathtt{2}}\,))\) 
\(G_{11}\)  \({{\,\mathrm{\mathsf {G}}\,}}( \lnot \mathtt{allowswitch}\,\wedge \lnot \mathtt{reset}\,\rightarrow {{\,\mathrm{\mathsf {X}}\,}}\lnot \mathtt{allowswitch}\,)\) 
\(G_{12}\)  \( \begin{aligned} {{\,\mathrm{\mathsf {G}}\,}}(&(\mathtt{err}_{\mathtt{s}}\,\wedge \mathtt{mode}_{\mathtt{1}}\,\wedge \lnot \mathtt{reset}\,) \rightarrow \\&{{\,\mathrm{\mathsf {F}}\,}}(\mathtt{reset}\,\vee \mathtt{safemode}\,\vee \mathtt{mode}_{\mathtt{2}}\,\vee (\mathtt{mode}_{\mathtt{1}}\,\mathbin {\mathsf {U}}(\mathtt{mode}_{\mathtt{1}}\,\wedge \lnot \mathtt{err}_{\mathtt{s}}\,)))) \end{aligned}\) 
\(\begin{aligned} {{\,\mathrm{\mathsf {G}}\,}}(&(\mathtt{err}_{\mathtt{s}}\,\wedge \mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{reset}\,) \rightarrow \\&{{\,\mathrm{\mathsf {F}}\,}}(\mathtt{reset}\,\vee \mathtt{safemode}\,\vee \mathtt{mode}_{\mathtt{1}}\,\vee (\mathtt{mode}_{\mathtt{2}}\,\mathbin {\mathsf {U}}(\mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{err}_{\mathtt{s}}\,)))) \end{aligned}\)  
\(G_{13}\)  \({{\,\mathrm{\mathsf {G}}\,}}((\mathtt{err}_{\mathtt{nc}}\,\wedge \mathtt{mode}_{\mathtt{1}}\,\wedge \lnot \mathtt{reset}\,) \rightarrow {{\,\mathrm{\mathsf {F}}\,}}(\mathtt{reset}\,\vee \mathtt{safemode}\,\vee \mathtt{mode}_{\mathtt{2}}\,\vee (\mathtt{mode}_{\mathtt{1}}\,\wedge \lnot \mathtt{err}_{\mathtt{nc}}\,)))\) 
\({{\,\mathrm{\mathsf {G}}\,}}((\mathtt{err}_{\mathtt{nc}}\,\wedge \mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{reset}\,) \rightarrow {{\,\mathrm{\mathsf {F}}\,}}(\mathtt{reset}\,\vee \mathtt{safemode}\,\vee \mathtt{mode}_{\mathtt{1}}\,\vee (\mathtt{mode}_{\mathtt{2}}\,\wedge \lnot \mathtt{err}_{\mathtt{nc}}\,)))\) 
 \(A_1\)

Whenever both systems are off, then there is no running system that can have an error. Thus, the error signals have to be low as well.
 \(A_2\)

The error signals are mutually exclusive. If the environment enforces a reset, then both error signals have to be low, because we assume that ground control has taken care of the errors.
 \(A_3\)

After a reset enforced by the environment, one of the two systems has to be running and the other has to be off.
 \(A_4\)

Whenever the FDIR component sends \(\mathtt{on}_{\mathtt{1}}\,\), we assume that in the next time step system number one is running (\(\mathtt{mode}_{\mathtt{1}}\,\)) and the state of the second system (\(\mathtt{mode}_{\mathtt{2}}\,\)) does not change. The same assumption applies analogously for \(\mathtt{on}_{\mathtt{2}}\,\).
 \(A_5\)

Whenever the FDIR component sends \(\mathtt{off}_{\mathtt{1}}\,\), we assume that in the next time step system number one is off (\(\lnot \mathtt{mode}_{\mathtt{1}}\,\)) and the state of the second system (\(\mathtt{mode}_{\mathtt{2}}\,\)) does not change. The same assumption applies analogously for \(\mathtt{off}_{\mathtt{2}}\,\).
 \(A_6\)

We assume that the environment, more specifically the electronic power unit, is not free to change the state of the systems immediately after the FDIR component stops sending messages; it has to wait for one more time step without any message from the FDIR component.
 \(G_1\)

This guarantee stores which system was last activated by the FDIR component.
 \(G_2\)

We require the signals \(\mathtt{on}_{\mathtt{1}}\,\), \(\mathtt{off}_{\mathtt{1}}\,\), \(\mathtt{on}_{\mathtt{2}}\,\) and \(\mathtt{off}_{\mathtt{2}}\,\) to be mutually exclusive, i.e., at most one of them may be high at any time step.
 \(G_3\)

Whenever both systems are off, then the FDIR component eventually requests to switch on one of the systems (\(\mathtt{on}_{\mathtt{1}}\,\), \(\mathtt{on}_{\mathtt{2}}\,\)) or activates \(\mathtt{safemode}\,\) or observes a \(\mathtt{reset}\,\).
 \(G_4\)

We restrict the FDIR component to not enter \(\mathtt{safemode}\,\) as long as the component can switch to the backup system.
 \(G_5\)

The FDIR component must not request to switch on one of the systems (\(\mathtt{on}_{\mathtt{1}}\,\), \(\mathtt{on}_{\mathtt{2}}\,\)) as long as one of the systems is running.
 \(G_6\)

Whenever the FDIR component is no longer allowed to switch to the backup system, it must not request to switch the backup system on.
 \(G_7\)

Once the FDIR component switches to the backup system, it is no longer allowed to switch again (unless the environment performs a reset, see \(G_9\)).
 \(G_8\)

As long as the FDIR component only restarts the same system it is still allowed to switch in the future.
 \(G_9\)

A \(\mathtt{reset}\,\) by the environment allows the FDIR component again to switch to the backup system if required.
 \(G_{10}\)

Whenever the FDIR component is in \(\mathtt{safemode}\,\) it must not request to switch on one of the systems (\(\mathtt{on}_{\mathtt{1}}\,\), \(\mathtt{on}_{\mathtt{2}}\,\)).
 \(G_{11}\)

Once a switch is no longer allowed and the environment does not perform a reset, the switch is also not allowed in the next time step.
 \(G_{12}\)

Whenever the FDIR component observes a server error (\(\mathtt{err}_{\mathtt{s}}\,\)), it must eventually switch to the backup system or activate \(\mathtt{safemode}\,\) unless the environment performs a \(\mathtt{reset}\,\) or the error disappears by itself (without restarting the system).
 \(G_{13}\)

Whenever the FDIR component observes a noncritical error (\(\mathtt{err}_{\mathtt{nc}}\,\)), it must eventually switch to the backup system or activate \(\mathtt{safemode}\,\) or the error disappears (restarting the currently running system is allowed).
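To make the safety guarantees above concrete, invariants such as \(G_2\) and \(G_{10}\) can be checked step-wise on a recorded execution trace. The following is our own illustrative sketch; the struct and function names are hypothetical and not taken from the Eu:CROPIS code:

```cpp
#include <vector>

// One time step of a recorded trace: outputs of the FDIR component.
struct Step {
    bool on1, on2, off1, off2;  // switch-on/off commands
    bool safemode;              // safe-mode flag
};

// G2 (mutual exclusion): at most one of on1, on2, off1, off2 is high.
bool holds_g2(const Step& s) {
    return (s.on1 + s.on2 + s.off1 + s.off2) <= 1;
}

// G10: while in safemode, no system may be switched on.
bool holds_g10(const Step& s) {
    return !s.safemode || (!s.on1 && !s.on2);
}

// A finite trace satisfies these invariants iff they hold at every step.
bool trace_ok(const std::vector<Step>& trace) {
    for (const Step& s : trace)
        if (!holds_g2(s) || !holds_g10(s)) return false;
    return true;
}
```

Liveness guarantees such as \(G_3\) cannot be decided on a finite trace prefix in the same way (cf. footnote 1), which is why such a monitor only covers the safety part of the specification.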
6.2 Experimental results
In this section, we present experimental results for generating test strategies for the LTL specification of the Eu:CROPIS FDIR component. We first analyze runtime and memory consumption of test strategy synthesis, and then evaluate the effectiveness of the generated test strategies on a concrete implementation of the FDIR component. The proposed test strategy synthesis approach, however, is a black-box testing technique: it is independent of the concrete implementation and can be applied even if no implementation is available. The synthesized test strategies do not contain a test oracle; for the experiments, we use a concrete implementation that was manually verified as the test oracle.
6.2.1 Test strategy computation
Experimental setting All experiments for computing test strategies are conducted in a virtual machine running a 64-bit Linux system, using a single core of an Intel i5 CPU clocked at 2.60 GHz. We use the synthesis procedure PARTY [31], which implements SMT-based bounded synthesis for full LTL, as a black box; hence, we call our tool PARTYStrategy.^{6}
Table 3 Results for the FDIR specification. The suffix "k" multiplies by \(10^3\)

| Fault | \(o_i\) | \(\mathsf {frq}\) | \(\mathcal {T}\) | Time (s) | Peak memory (MB) |
|---|---|---|---|---|---|
| Sa0 | \(\mathtt{on}_{\mathtt{1}}\,\) | \({{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}\) | 4 | 1.2k | 400 |
| Sa0 | \(\mathtt{off}_{\mathtt{1}}\,\) | \({{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}\) | 3 | 517 | 396 |
| Sa0 | safemode | \({{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}\) | 4 | 934 | 324 |
| Sa1 | \(\mathtt{on}_{\mathtt{1}}\,\) | \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\) | 4 | 438 | 222 |
| Sa1 | \(\mathtt{off}_{\mathtt{1}}\,\) | \({{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}\) | 4 | 753 | 378 |
| Sa1 | safemode | \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\) | 3 | 169 | 192 |
| BitFlip | \(\mathtt{on}_{\mathtt{1}}\,\) | \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\) | 4 | 26k | 3.6k |
| BitFlip | \(\mathtt{off}_{\mathtt{1}}\,\) | \({{\,\mathrm{\mathsf {F}}\,}}\! {{\,\mathrm{\mathsf {G}}\,}}\) | 4 | 98.9k | 4.3k |
| BitFlip | safemode | \({{\,\mathrm{\mathsf {G}}\,}}\!{{\,\mathrm{\mathsf {F}}\,}}\) | 3 | 13.1k | 4.3k |
In Table 3, we list the time and memory consumption for synthesizing the test strategies with our synthesis tool PARTYStrategy. The more freedom the specification leaves to implementations, the harder it becomes to compute a strategy. The search for strategies capable of detecting a bit-flip is the most difficult one, as we cannot make use of our optimization for full observability of the output signals. For all signals with a stuck-at-0 fault, and for the \(\mathtt{off}_{\mathtt{1}}\,\) signal with either of the other two faults, we are able to derive test strategies that can detect the fault if it is permanent from some point onwards. For the signals \(\mathtt{on}_{\mathtt{1}}\,\) and safemode, we are able to derive strategies for stuck-at-1 faults and bit-flips even at a lower fault frequency, i.e., we can detect those faults already if they occur infinitely often without being permanent.
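Spelled out over an infinite trace, the two fault-frequency assumptions \(\mathsf{frq}\) in Table 3 correspond to the standard LTL semantics; with \(f_j\) denoting that the fault is active in step \(j\):

```latex
\mathsf{F}\,\mathsf{G}\, f \;\equiv\; \exists i.\ \forall j \ge i.\ f_j
  \qquad \text{(fault permanent from some step $i$ onwards)}

\mathsf{G}\,\mathsf{F}\, f \;\equiv\; \forall i.\ \exists j \ge i.\ f_j
  \qquad \text{(fault occurs infinitely often)}
```

Since every trace satisfying \(\mathsf{F}\mathsf{G}\,f\) also satisfies \(\mathsf{G}\mathsf{F}\,f\), a strategy derived for the \(\mathsf{G}\mathsf{F}\) assumption covers the \(\mathsf{F}\mathsf{G}\) case as well.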
6.2.2 Test strategy evaluation
Test setting In the Eu:CROPIS satellite, the FDIR component is implemented in software in the programming language C++. The implementation of the magnetic torquer FDIR handling is not an exact realization of the specification in Table 2 but extends it by allowing commands to the EP to be lost (e.g., due to electrical faults). This is accommodated by adding timeouts for the execution of the switch-on/off commands and reissuing the commands if the timeout is triggered.
The implementation is designed with testability and portability in mind and uses an abstract interface to access other subsystems of the satellite. This allows engineers to exchange the used interface with a set of test adapters which connect to the signals generated by the test strategies. As we are only interested in the functional properties of the implementation, we can run the code on a normal Linux system, instead of the microprocessor which is used in the satellite. This gives access to all Linux based debugging and testing tools and allows us to use gcov to measure the line and branch coverage of the source code.
A time step of a test run consists of the following operations: request values for the input variables \(I_{FDIR}\) from the test strategy; feed the values to the test adapter from which they are read by the FDIR implementation; run the FDIR implementation for one cycle; extract the output values \(O_{FDIR}\) from the test adapter and feed them back to the test strategy to get new input values. For each time step, the execution trace—the values assigned to the inputs \(I_{FDIR}\) and outputs \(O_{FDIR}\) of the FDIR component—is recorded.
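The loop described above can be sketched as follows. The `Strategy` and `Sut` interfaces below are hypothetical stand-ins for the synthesized strategy and the test adapter, not the actual Eu:CROPIS harness:

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A valuation maps signal names to Boolean values.
using Valuation = std::map<std::string, bool>;
// Strategy: last outputs O_FDIR -> next inputs I_FDIR (adaptive).
using Strategy = std::function<Valuation(const Valuation&)>;
// System under test: inputs I_FDIR -> outputs O_FDIR for one cycle.
using Sut = std::function<Valuation(const Valuation&)>;

// Execute a test run of `steps` time steps and record the trace:
// each entry holds the input and output valuation of one step.
std::vector<std::pair<Valuation, Valuation>>
run_test(const Strategy& strategy, const Sut& sut, int steps) {
    std::vector<std::pair<Valuation, Valuation>> trace;
    Valuation outputs;                        // empty before the first cycle
    for (int t = 0; t < steps; ++t) {
        Valuation inputs = strategy(outputs); // request next input values
        outputs = sut(inputs);                // run implementation one cycle
        trace.emplace_back(inputs, outputs);  // record the execution trace
    }
    return trace;
}
```

The recorded trace is what the mutation analysis below compares between the original program and a mutant.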
Mutation testing Besides line and branch coverage, we apply mutation analysis to assess the effectiveness, i.e., the fault-finding abilities, of a test suite. A test suite kills a mutant program M if it contains at least one test strategy that, when executed on M and the original program P, produces a trace in which at least one output of M differs in at least one time step from the respective output of P (for the same input sequence). A mutant program M is equivalent to the original program P if M does not violate the specification. For our evaluation, we manually identify and remove equivalent mutants. The mutants are created by applying one of the following operations to a line of the source code:
 1.
Deletion of the line,
 2.
Replacement of true with false or false with true,
 3.
Replacement of == with != or != with ==, and
 4.
Replacement of && with || or || with &&
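The four line-based mutation operators can be sketched as purely textual rewrites. This is our own illustration; production mutation tools typically operate on the parsed program rather than on raw text:

```cpp
#include <string>
#include <utility>
#include <vector>

// Replace the first occurrence of `from` in `line` with `to`;
// returns true if a replacement was made.
static bool replace_first(std::string& line,
                          const std::string& from, const std::string& to) {
    std::string::size_type pos = line.find(from);
    if (pos == std::string::npos) return false;
    line.replace(pos, from.size(), to);
    return true;
}

// Apply the four mutation operators to one source line and collect
// every mutant line they produce (operator 1 yields the empty line).
std::vector<std::string> mutate_line(const std::string& line) {
    std::vector<std::string> mutants;
    mutants.push_back("");  // 1. deletion of the line
    const std::pair<const char*, const char*> swaps[] = {
        {"true", "false"}, {"false", "true"},  // 2. Boolean constants
        {"==", "!="},      {"!=", "=="},       // 3. (in)equality operators
        {"&&", "||"},      {"||", "&&"},       // 4. logical connectives
    };
    for (const auto& s : swaps) {
        std::string m = line;
        if (replace_first(m, s.first, s.second)) mutants.push_back(m);
    }
    return mutants;
}
```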
Table 4 Mutation coverage by fault model and signal when executing all four derived strategies

| Output | Sa0 (%) | Sa1 (%) | Bitflip (%) | All (%) |
|---|---|---|---|---|
| \(\mathtt{on}_{\mathtt{1}}\,\) | 67.47 | 53.01 | 8.43 | 75.90 |
| \(\mathtt{off}_{\mathtt{1}}\,\) | 13.25 | 3.61 | 16.87 | 16.87 |
| safemode | 61.45 | 13.25 | 13.25 | 16.87 |
| All | 71.08 | 55.42 | 16.87 | 78.31 |
From the 83 mutant programs that violate the specification, the synthesized adaptive test strategies are able to kill 65 (78.31%). Since these test strategies are derived from the requirements, without any implementation-specific knowledge, they are applicable to any system that claims to implement the specification. The mutation score of \(78.31\%\) suggests that the synthesized adaptive test strategies, although computed only for simple specific fault models, are also sensitive to other faults.
In Table 4, we present the mutation scores for the three signals \(\mathtt{on}_{\mathtt{1}}\,\), \(\mathtt{off}_{\mathtt{1}}\,\), and safemode and the three fault models stuck-at-0 (Sa0), stuck-at-1 (Sa1), and bit-flip (Bitflip). The last column and the last row show the mutation scores when considering all three fault models and all three signals, respectively.
Comparison with random testing We compare the code coverage and mutation score of the synthesized adaptive test strategies with random testing when executed for 0.1k, 1k, 10k, and 100k time steps; the suffix "k" multiplies by \(10^3\). We choose random values for all input signals independently, where reset is 1 with a probability of 10% and every other signal is 1 with a probability of 50%.
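A generator for this biased input distribution can be sketched as follows, assuming \(I_{FDIR}\) consists of the input signals appearing in the specification (mode1, mode2, err_s, err_nc, reset); the function name and RNG setup are our own:

```cpp
#include <map>
#include <random>
#include <string>

// Draw one random input valuation: reset is 1 with 10% probability,
// every other input signal is 1 with 50% probability.
std::map<std::string, bool> random_inputs(std::mt19937& rng) {
    std::bernoulli_distribution reset_dist(0.10);
    std::bernoulli_distribution other_dist(0.50);
    return {
        {"reset",  reset_dist(rng)},
        {"mode1",  other_dist(rng)},
        {"mode2",  other_dist(rng)},
        {"err_s",  other_dist(rng)},
        {"err_nc", other_dist(rng)},
    };
}
```

Note that such unconstrained sampling may occasionally violate the environment assumptions (e.g., mutual exclusion of the error signals), whereas the synthesized strategies respect them by construction.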
The coverage and mutation scores are listed in Table 5. Coverage was measured with gcov. The table is built as follows: the different testing approaches are shown in the columns. The columns R(0.1k), R(1k), R(10k), R(100k) refer to random testing with increasing numbers of input stimuli, and the columns S(80) and S(80) + R(10k) refer to the synthesized test strategies and the test strategies in combination with R(10k).
Table 5 Overview of coverage and mutation score by testing approach

| Metric | R(0.1k) (%) | R(1k) (%) | R(10k) (%) | R(100k) (%) | S(80) (%) | S(80) + R(10k) (%) |
|---|---|---|---|---|---|---|
| Line | 91.5 | 95.7 | 96.8 | 100.0 | 83.0 | 97.9 |
| Branch | 85.4 | 89.6 | 89.6 | 93.8 | 70.8 | 91.7 |
| Mutation | 88.0 | 92.8 | 94.0 | 98.0 | 78.3 | 97.6 |
7 Conclusion
We have presented a new approach to compute adaptive test strategies from temporal logic specifications using reactive synthesis with partial information. The computed test strategies reveal all instances of a user-defined fault class for every realization of a given specification. Thus, they do not rely on implementation details, which is important for products that are still under development or for standards that will be implemented by multiple vendors. Our approach is sound but, in general, incomplete, i.e., it may fail to find test strategies even if they exist. However, for many interesting cases, we showed that it is both sound and complete.
The worstcase complexity is doubly exponential in the specification size, but in our setting, the specifications are typically small. This also makes our approach an interesting application for reactive synthesis. Our experiments demonstrate that our approach can compute meaningful tests for specifications of industrial size and that the computed strategies are capable of detecting faults hidden in paths that are unlikely to be activated by random input sequences.
We have applied our approach in a case study to the fault detection, isolation and recovery component of the satellite Eu:CROPIS. The computed test suite, based only on three different types of faults, achieves a line coverage, branch coverage, and mutation score of 83.0%, 70.8%, and 78.3%, respectively, relying solely on information available from the specification. The approach also allows us to detect faults that require complex input sequences and are unlikely to be detected by random testing.
Current directions for future work include improving the scalability, success rate, and usability of our approach. To this end, we are investigating the use of random testing for inputs in the strategies that are not fixed to single values, and best-effort strategies [19, 20] for the case that no test strategy can guarantee triggering the fault. Another direction for future work is research on evaluating LTL properties, which are specified over infinite paths, on finite traces, to improve the evaluation process when executing the derived strategies.
Footnotes
 1.
While the semantics of LTL are defined over infinite execution traces, we can only run the tests for a finite amount of time. This can result in inconclusive verdicts [6]. We exclude this issue from the scope of this paper, relying on the user to judge when tests have been executed long enough, and on existing research on interpreting LTL over finite traces [14, 15, 27, 39].
 2.
PARTYStrategy, https://www.iaik.tugraz.at/content/research/scos/tools/.
 3.
This fault model is different from the standard fault model in mutation testing, which considers simple faults in a concrete implementation that can affect multiple outputs.
 4.
The word “complete” indicates that every considered fault is revealed at every output. The word “universal” indicates that this is achieved for every (otherwise correct) system.
 5.
This is (at least partially) confirmed by our test strategy synthesis tool: it reports that no test strategy with less than 12 states can satisfy Eq. 6.
 6.
PARTYStrategy, https://www.iaik.tugraz.at/content/research/scos/tools/.
 7.
Given that the user has decided that we have waited long enough for \(\mathtt{safemode}\,\) to become true.
Notes
Acknowledgements
Open access funding provided by Austrian Science Fund (FWF). This work was supported in part by the European Commission through the Horizon 2020 project IMMORTAL (grant no. 644905) funded under H2020-EU.2.1.1.1., the FP7 project eDAS (grant no. 608770) funded under FP7-ICT, the FP7 project STANCE (grant no. 317753) funded under FP7-ICT, and by the Austrian Science Fund (FWF) through the national research network RiSE (S11406-N23). We thank Ayrat Khalimov for helpful comments and assistance in using PARTY.
References
1. Acree AT, Budd TA, DeMillo RA, Lipton RJ, Sayward FG (1979) Mutation analysis. Technical report GIT-ICS-79/08, Georgia Institute of Technology, Atlanta, Georgia
2. Aichernig BK, Brandl H, Jöbstl E, Krenn W, Schlick R (2015) Killing strategies for model-based mutation testing. Softw Test Verif Reliab 25(8):716–748
3. Alur R, Courcoubetis C, Yannakakis M (1995) Distinguishing tests for nondeterministic and probabilistic machines. In: Leighton FT, Borodin A (eds) Proceedings of the twenty-seventh annual ACM symposium on theory of computing, 29 May–1 June 1995, Las Vegas, Nevada, USA. ACM, pp 363–372
4. Ammann P, Ding W, Xu D (2001) Using a model checker to test safety properties. In: 7th international conference on engineering of complex computer systems (ICECCS 2001), 11–13 June 2001, Skövde, Sweden. IEEE Computer Society, pp 212–221
5. Armoni R, Fix L, Flaisher A, Grumberg O, Piterman N, Tiemeyer A, Vardi MY (2003) Enhanced vacuity detection in linear temporal logic. In: Hunt WA Jr, Somenzi F (eds) Proceedings of the 15th international conference on computer aided verification, CAV 2003, Boulder, CO, USA, 8–12 July 2003, volume 2725 of lecture notes in computer science. Springer, Berlin, pp 368–380
6. Bauer A, Leucker M, Schallhart C (2011) Runtime verification for LTL and TLTL. ACM Trans Softw Eng Methodol 20(4):14:1–14:64
7. Beer I, Ben-David S, Eisner C, Rodeh Y (2001) Efficient detection of vacuity in temporal model checking. Formal Methods Syst Des 18(2):141–163
8. Blass A, Gurevich Y, Nachmanson L, Veanes M. Play to test. In: Grieskamp and Weise [26], pp 32–46
9. Bloem R, Chatterjee K, Jobstmann B (2018) Graph games and reactive synthesis. In: Clarke EM, Henzinger TA, Veith H, Bloem R (eds) Handbook of model checking. Springer, Berlin, pp 921–962
10. Bloem R, Könighofer R, Pill I, Röck F (2016) Synthesizing adaptive test strategies from temporal logic specifications. In: Piskac R, Talupur M (eds) 2016 formal methods in computer-aided design, FMCAD 2016, Mountain View, CA, USA, 3–6 Oct 2016. IEEE, pp 17–24
11. Boroday S, Petrenko A, Groz R (2007) Can a model checker generate tests for nondeterministic systems? Electr Notes Theor Comput Sci 190(2):3–19
12. Clarke EM, Emerson EA (1981) Design and synthesis of synchronization skeletons using branching-time temporal logic. In: Kozen D (ed) Logics of programs, workshop, Yorktown Heights, New York, USA, May 1981, volume 131 of lecture notes in computer science. Springer, Berlin, pp 52–71
13. David A, Larsen KG, Li S, Nielsen B (2008) A game-theoretic approach to real-time system testing. In: Sciuto D (ed) Design, automation and test in Europe, DATE 2008, Munich, Germany, March 10–14, 2008. ACM, pp 486–491
14. De Giacomo G, De Masellis R, Montali M (2014) Reasoning on LTL on finite traces: insensitivity to infiniteness. In: Brodley CE, Stone P (eds) Proceedings of the twenty-eighth AAAI conference on artificial intelligence, July 27–31, 2014, Québec City, Québec, Canada. AAAI Press, pp 1027–1033
15. De Giacomo G, Vardi MY (2013) Linear temporal logic and linear dynamic logic on finite traces. In: Rossi F (ed) IJCAI 2013, proceedings of the 23rd international joint conference on artificial intelligence, Beijing, China, August 3–9, 2013. IJCAI/AAAI, pp 854–860
16. DeMillo RA, Lipton RJ, Sayward FG (1978) Hints on test data selection: help for the practicing programmer. IEEE Comput 11(4):34–41
17. Dillig I, Dillig T, McMillan KL, Aiken A (2012) Minimum satisfying assignments for SMT. In: Madhusudan P, Seshia SA (eds) Proceedings of the 24th international conference on computer aided verification, CAV 2012, Berkeley, CA, USA, July 7–13, 2012, volume 7358 of lecture notes in computer science. Springer, pp 394–409
18. Ehlers R (2012) Symbolic bounded synthesis. Form Methods Syst Des 40(2):232–262
19. Faella M (2008) Best-effort strategies for losing states. CoRR arXiv:0811.1664
20. Faella M (2009) Admissible strategies in infinite games over graphs. In: Královic R, Niwinski D (eds) Proceedings of the 34th international symposium on mathematical foundations of computer science 2009, MFCS 2009, Novy Smokovec, High Tatras, Slovakia, August 24–28, 2009, volume 5734 of lecture notes in computer science. Springer, pp 307–318
21. Finkbeiner B, Schewe S (2013) Bounded synthesis. STTT 15(5–6):519–539
22. Fraser G, Ammann P (2008) Reachability and propagation for LTL requirements testing. In: Zhu H (ed) Proceedings of the eighth international conference on quality software, QSIC 2008, 12–13 August 2008, Oxford, UK. IEEE Computer Society, pp 189–198
23. Fraser G, Wotawa F (2007) Test-case generation and coverage analysis for nondeterministic systems using model-checkers. In: Proceedings of the second international conference on software engineering advances (ICSEA 2007), August 25–31, 2007, Cap Esterel, French Riviera, France. IEEE Computer Society, p 45
24. Fraser G, Wotawa F, Ammann P (2009) Issues in using model checkers for test case generation. J Syst Softw 82(9):1403–1418
25. Fraser G, Wotawa F, Ammann P (2009) Testing with model checkers: a survey. Softw Test Verif Reliab 19(3):215–261
26. Grieskamp W, Weise C (eds) (2006) Formal approaches to software testing, 5th international workshop, FATES 2005, Edinburgh, UK, July 11, 2005, revised selected papers, volume 3997 of lecture notes in computer science. Springer
27. Havelund K, Rosu G (2001) Monitoring programs using rewriting. In: 16th IEEE international conference on automated software engineering (ASE 2001), 26–29 November 2001, Coronado Island, San Diego, CA, USA. IEEE Computer Society, pp 135–143
28. Hierons RM (2006) Applying adaptive test cases to nondeterministic implementations. Inf Process Lett 98(2):56–60
29. Jia Y, Harman M (2011) An analysis and survey of the development of mutation testing. IEEE Trans Softw Eng 37(5):649–678
30. Jin HS, Ravi K, Somenzi F (2004) Fate and free will in error traces. STTT 6(2):102–116
31. Khalimov A, Jacobs S, Bloem R (2013) PARTY: parameterized synthesis of token rings. In: Sharygina N, Veith H (eds) Proceedings of the 25th international conference on computer aided verification, CAV 2013, Saint Petersburg, Russia, July 13–19, 2013, volume 8044 of lecture notes in computer science. Springer, pp 928–933
32. Könighofer R, Hofferek G, Bloem R (2013) Debugging formal specifications: a practical approach using model-based diagnosis and counterstrategies. STTT 15(5–6):563–583
33. Kupferman O, Vardi MY (2000) Synthesis with incomplete information. In: Barringer H, Fisher M, Gabbay D, Gough G (eds) Advances in temporal logic. Applied logic series, vol 16. Springer, Dordrecht
34. Kupferman O, Vardi MY (2003) Vacuity detection in temporal model checking. STTT 4(2):224–233
35. Luo G, von Bochmann G, Petrenko A (1994) Test selection based on communicating nondeterministic finite-state machines using a generalized Wp-method. IEEE Trans Softw Eng 20(2):149–162
36. Martin DA (1975) Borel determinacy. Ann Math 102(2):363–371
37. Mathur AP (2008) Foundations of software testing, 2nd edn. Addison-Wesley, Boston
38. Miyase K, Kajihara S (2004) XID: don't care identification of test patterns for combinational circuits. IEEE Trans CAD Integr Circuits Syst 23(2):321–326
39. Morgenstern A, Gesell M, Schneider K (2012) An asymptotically correct finite path semantics for LTL. In: Bjørner N, Voronkov A (eds) Proceedings of the 18th international conference on logic for programming, artificial intelligence, and reasoning, LPAR-18, Mérida, Venezuela, March 11–15, 2012, volume 7180 of lecture notes in computer science. Springer, pp 304–319
40. Nachmanson L, Veanes M, Schulte W, Tillmann N, Grieskamp W (2004) Optimal strategies for testing nondeterministic systems. In: Avrunin GS, Rothermel G (eds) Proceedings of the ACM/SIGSOFT international symposium on software testing and analysis, ISSTA 2004, Boston, MA, USA, July 11–14, 2004. ACM, pp 55–64
41. Offutt AJ (1992) Investigations of the software testing coupling effect. ACM Trans Softw Eng Methodol 1(1):5–20
42. Petrenko A, da Silva Simão A, Yevtushenko N (2012) Generating checking sequences for nondeterministic finite state machines. In: Antoniol G, Bertolino A, Labiche Y (eds) Fifth IEEE international conference on software testing, verification and validation, ICST 2012, Montreal, QC, Canada, April 17–21, 2012. IEEE Computer Society, pp 310–319
43. Petrenko A, Simão A (2015) Generalizing the DS-methods for testing nondeterministic FSMs. Comput J 58(7):1656–1672
44. Petrenko A, Yevtushenko N. Conformance tests as checking experiments for partial nondeterministic FSM. In: Grieskamp and Weise [26], pp 118–133
45. Petrenko A, Yevtushenko N (2014) Adaptive testing of nondeterministic systems with FSM. In: 15th international IEEE symposium on high-assurance systems engineering, HASE 2014, Miami Beach, FL, USA, January 9–11, 2014. IEEE Computer Society, pp 224–228
46. Pnueli A (1977) The temporal logic of programs. In: 18th annual symposium on foundations of computer science, Providence, Rhode Island, USA, 31 October–1 November 1977. IEEE Computer Society, pp 46–57
47. Pnueli A, Rosner R (1989) On the synthesis of a reactive module. In: Conference record of the sixteenth annual ACM symposium on principles of programming languages, Austin, Texas, USA, January 11–13, 1989. ACM Press, pp 179–190
48. Queille JP, Sifakis J (1982) Specification and verification of concurrent systems in CESAR. In: Dezani-Ciancaglini M, Montanari U (eds) Proceedings of the international symposium on programming, 5th colloquium, Torino, Italy, April 6–8, 1982, volume 137 of lecture notes in computer science. Springer, pp 337–351
49. Tretmans J (1996) Conformance testing with labelled transition systems: implementation relations and test generation. Comput Netw ISDN Syst 29(1):49–79
50. Tan L, Sokolsky O, Lee I (2004) Specification-based testing with linear temporal logic. In: Zhang D, Grégoire É, DeGroot D (eds) Proceedings of the 2004 IEEE international conference on information reuse and integration, IRI 2004, November 8–10, 2004, Las Vegas, NV, USA. IEEE Systems, Man, and Cybernetics Society, pp 493–498
51. Tipaldi M, Bruenjes B (2015) Survey on fault detection, isolation, and recovery strategies in the space domain. J Aerosp Inf Syst 12(2):235–256
52. Yannakakis M (2004) Testing, optimization, and games. In: Díaz J, Karhumäki J, Lepistö A, Sannella D (eds) Proceedings of automata, languages and programming: 31st international colloquium, ICALP 2004, Turku, Finland, July 12–16, 2004, volume 3142 of lecture notes in computer science. Springer, pp 28–45
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.